diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cadmas 11 46 A Snow Sport Helmet with Advanced Features.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cadmas 11 46 A Snow Sport Helmet with Advanced Features.md
deleted file mode 100644
index ffe94c456771c3e23275af8498a0e2d85eef4487..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cadmas 11 46 A Snow Sport Helmet with Advanced Features.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
What is Cadmas 11 46?
-
If you are interested in online assessment and comic books, you might have heard of Cadmas 11 46. But what exactly is it? Is it a software, a comic book, or something else? In this article, we will explore what Cadmas and 11 46 are, how they are related, and how they can be used for educational purposes.
-
What is Cadmas?
-
Cadmas (properly spelled Cadmus) is an online assessment platform that helps higher education providers achieve institutional goals through better assessment experiences. It is a secure online environment that facilitates an end-to-end assessment workflow, simplifying the process of implementing best-practice assessment at scale. By empowering academics and supporting students, Cadmus can be used to address some of the biggest challenges faced by universities today, such as academic integrity, student retention, remote learning, and online exams.
Cadmus has several features and benefits for both learners and educators. For learners, Cadmus provides a supportive and scaffolded assessment experience that helps them develop their academic skills and achieve better outcomes. For example, Cadmus offers:
-
-
A distraction-free writing environment that blocks access to other websites and applications while completing an assignment
-
A range of learning supports that are intelligently integrated into the writing environment, such as referencing tools, word count, feedback rubric, etc.
-
A proctor-free exam alternative that does not impose on privacy but still ensures academic integrity through various safeguards, such as plagiarism detection, keystroke analysis, etc.
-
A learning analytics dashboard that shows their progress and engagement with the assignment
-
-
For educators, Cadmus simplifies the process of designing and delivering high-quality digital assessment, consistently and at scale. For example, Cadmus offers:
-
-
A template-based approach that allows educators to create assessments that align with best practice principles and institutional standards
-
A seamless integration with learning management systems (LMS) that allows educators to manage assessments from one place
-
A real-time class-level insight that allows educators to monitor student progress and provide timely support and communication
-
A feedback and grading tool that allows educators to provide rich and constructive feedback to students
-
What are some use cases of Cadmus?
-
Cadmus can be used for a range of formative and summative, open-book written assessments and alternatives to exams. Some examples of how Cadmus can be used are:
-
-
An essay that requires students to research a topic and present their arguments in a structured way
-
A report that requires students to analyse data and provide recommendations based on evidence
-
A reflection that requires students to evaluate their own learning process and outcomes
-
A case study that requires students to apply their knowledge and skills to a real-world scenario
-
A short answer test that requires students to demonstrate their understanding of key concepts
-
-
What is 11 46?
-
11 46 is a comic book series by Castle Comics that was published between November 2020 and June 2021. It is a crime thriller that follows the lives of four strangers who are connected by a mysterious murder that took place at exactly 11:46 pm.
-
What is the plot of 11 46?
-
The plot of 11 46 revolves around four main characters who have different backgrounds and motivations. They are:
-
-
Adam Smith, a journalist who is investigating the murder case and trying to expose the truth behind it
-
Betty Jones, a waitress who witnessed the murder and is being hunted by the killers
-
Charlie Brown, a detective who is assigned to solve the murder case and catch the killers
-
Danny Lee, a hacker who is involved in the murder plot and has a hidden agenda
-
-
The story unfolds through multiple perspectives and timelines, revealing how each character is related to the murder and how their actions affect each other. The story also explores various themes and messages, such as corruption, justice, revenge, loyalty, etc.
-
What are some themes and messages of 11 46?
-
One of the main themes of 11 46 is the idea of fate versus free will. The title of the series refers to the exact time when the murder happened, suggesting that it was predetermined by some higher power or force. However, the series also shows how each character has some degree of choice and agency in their actions. The series asks questions such as:
-
-
How much control do we have over our lives?
-
How do our choices affect others?
-
How do we deal with the consequences of our choices?
-
How do we cope with uncertainty?
-
How are Cadmus and 11 46 related?
-
At first glance, Cadmus and 11 46 seem to have nothing in common. One is an online assessment platform for higher education, while the other is a comic book series for entertainment. However, upon closer examination, we can find a possible connection between them: Cadmus can be used to design and deliver assessments based on 11 46, as the next section explains.
-
-
How can Cadmus be used to assess 11 46?
-
One way to use Cadmus to assess 11 46 is to design and deliver a Cadmus assignment based on the comic book series. For example, an educator can create an assignment that requires students to:
-
-
Read the comic book series and analyse its plot, characters, themes, and messages
-
Write a critical review of the comic book series, using evidence and examples from the text
-
Use appropriate academic conventions, such as referencing, structure, language, etc.
-
-
The assignment can be aligned with the learning outcomes and assessment criteria of the course or subject. The assignment can also be tailored to suit different levels of difficulty and complexity, depending on the students' needs and abilities.
-
What are some benefits and challenges of using Cadmus for 11 46?
-
Using Cadmus for 11 46 can have some benefits and challenges for both learners and educators. Some of the benefits are:
-
-
Learners can develop their critical thinking, analytical, and writing skills by engaging with a creative and complex text
-
Learners can enjoy a more interesting and relevant assessment experience that connects to their interests and passions
-
Educators can assess learners' understanding and application of key concepts and skills in a more authentic and meaningful way
-
Educators can ensure academic integrity and quality of assessment by using Cadmus' features and safeguards
-
-
Some of the challenges are:
-
-
Learners may have difficulty accessing or reading the comic book series due to availability or cost issues
-
Learners may have different levels of familiarity or preference with the comic book genre or medium
-
Educators may have difficulty finding or creating suitable assessment tasks or rubrics that align with the comic book series
-
Educators may have to deal with potential plagiarism or cheating issues that may arise from using a popular or widely available text
-
-
Conclusion
-
In conclusion, Cadmas 11 46 is a combination of an online assessment platform and a comic book series that can be used for educational purposes. Cadmus is a platform that helps higher education providers achieve institutional goals through better assessment experiences. 11 46 is a series that follows the lives of four strangers who are connected by a mysterious murder. By using Cadmus to assess 11 46, learners and educators can enjoy some benefits, such as developing critical thinking skills, engaging with a creative text, ensuring academic integrity, etc. However, they may also face some challenges, such as accessing or reading the text, finding or creating suitable assessment tasks, dealing with plagiarism or cheating issues, etc. Therefore, it is important to consider these factors before using Cadmas 11 46 for assessment.
-
FAQs
-
Here are some frequently asked questions and answers about Cadmus and 11 46:
-
-
Where can I find Cadmus? Cadmus is an online platform that can be accessed through your LMS. You can find more information about Cadmus on their website: https://www.cadmus.io/.
-
Where can I find 11 46? 11 46 is a comic book series that was published by Castle Comics. You can find more information about 11 46 on their website: https://www.castlecomics.com/1146.
-
How much does Cadmus cost? Cadmus is free for learners and educators who use it for assessment purposes. However, Cadmus may charge a fee for institutions that want to use it for other purposes.
-
How much does 11 46 cost? 11 46 costs $3.99 per issue or $19.99 for the complete series. You can buy it online or in physical stores.
-
How long does it take to complete a Cadmus assignment? The length of a Cadmus assignment depends on the type and complexity of the task. However, most Cadmus assignments take between one to three hours to complete.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FSX - Maddog 2008 Professional Cracked by Komu Everything You Need to Know About the Legendary MD-80 Add-on.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FSX - Maddog 2008 Professional Cracked by Komu Everything You Need to Know About the Legendary MD-80 Add-on.md
deleted file mode 100644
index 06a69fa948f0baf85c943ab2095a97f010a806d2..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FSX - Maddog 2008 Professional Cracked by Komu Everything You Need to Know About the Legendary MD-80 Add-on.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
FSX - Maddog 2008 Professional cracked by Komu: A Review
-
If you are a fan of flight simulation games, you might have heard of FSX - Maddog 2008 Professional, a popular add-on for Microsoft Flight Simulator X that lets you fly the Leonardo Maddog, a realistic and complex simulation of the McDonnell Douglas MD-80 aircraft. But did you know that there is a way to get this add-on for free, thanks to a crack made by a user named Komu? In this article, we will review FSX - Maddog 2008 Professional cracked by Komu, a download that claims to unlock all the features and benefits of the original add-on without paying a dime. We will also show you how to install and use it, as well as the pros and cons of using this crack. Finally, we will suggest some alternatives to this crack in case you are looking for other options.
-
What is FSX - Maddog 2008 Professional?
-
FSX - Maddog 2008 Professional is an add-on for Microsoft Flight Simulator X that was released in 2008 by Leonardo Software House, a company that specializes in developing flight simulation software. This add-on is a highly detailed and accurate simulation of the McDonnell Douglas MD-80 aircraft, also known as the Maddog, a twin-engine, medium-range jet airliner that was widely used by many airlines around the world from the 1980s to the 2000s.
This add-on offers many features and benefits for flight simulation enthusiasts, such as:
-
-
A realistic and fully functional cockpit with custom gauges, systems, sounds, animations, and lighting.
-
A comprehensive flight management system (FMS) with navigation, performance, fuel, and route planning functions.
-
A realistic flight model with accurate aerodynamics, engine performance, fuel consumption, and weight and balance calculations.
-
A custom load manager that allows you to configure the payload, fuel, passengers, and cargo of your aircraft.
-
A failure simulation system that lets you experience various malfunctions and emergencies during your flight.
-
A weather radar that displays precipitation, turbulence, windshear, and storm cells.
-
A traffic collision avoidance system (TCAS) that warns you of potential conflicts with other aircraft.
-
A ground proximity warning system (GPWS) that alerts you of terrain hazards.
-
A custom sound set that reproduces the engine noise, cockpit sounds, environmental sounds, and voice alerts of the real aircraft.
-
A variety of liveries that represent different airlines that operated the MD-80 aircraft.
-
-
FSX - Maddog 2008 Professional is widely regarded as one of the best add-ons for FSX in terms of realism, complexity, and immersion. However, it also comes with a price tag of $59.99 USD (as of May 2023), which might be too expensive for some users who want to enjoy this add-on without breaking the bank.
-
-
What is Komu's crack?
-
Komu's crack is a download that claims to bypass the activation process of FSX - Maddog 2008 Professional and allow users to use it for free. It was created by a user named Komu, who uploaded it to various torrent sites in 2010. According to Komu's description, the crack does not modify any registry entries or other files of the original add-on; it simply replaces the original .dll file with a cracked one that disables the activation check. He also claims that his crack does not affect any features or functions of the add-on, and that it works with any version of FSX.
-
Komu's crack has been downloaded by thousands of users who wanted to try FSX - Maddog 2008 Professional without paying for it. Some users have reported that the crack works as advertised and that they have not encountered any problems or issues with it. However, other users have reported that the crack does not work at all or that it causes various errors or crashes during their flights. Moreover, some users have expressed ethical concerns about using this crack, as it violates the intellectual property rights of Leonardo Software House and deprives them of their deserved revenue.
-
How to install and use FSX - Maddog 2008 Professional cracked by Komu?
-
If you want to install and use FSX - Maddog 2008 Professional cracked by Komu, you will need to follow these steps:
-
-
Download FSX - Maddog 2008 Professional cracked by Komu from one of the torrent sites where it is available. You will need a torrent client such as uTorrent or BitTorrent to do this.
-
Extract the downloaded file using a program such as WinRAR or 7-Zip. You will get a folder named "Maddog Pro" that contains two files: "maddog pro fsx.exe" and "maddog pro fsx crack by komu.dll".
-
Run "maddog pro fsx.exe" and follow the installation instructions. You will need to specify the location of your FSX folder during the installation process.
-
Copy "maddog pro fsx crack by komu.dll" and paste it into your FSX folder. You will need to overwrite the original .dll file with the same name.
-
Launch FSX and select "Fly The Maddog" from the menu. You should be able to use FSX - Maddog 2008 Professional without any activation prompts or restrictions.
-
-
Note: These steps are based on Komu's instructions and user feedback. We do not endorse or recommend using this crack or any other illegal downloads. Use them at your own risk.
-
Pros and cons of FSX - Maddog 2008 Professional cracked by Komu
-
FSX - Maddog 2008 Professional cracked by Komu has some pros and cons that you should consider before using it:
-
Pros:
-
-
You can use FSX - Maddog 2008 Professional for free without paying $59.99 USD for it.
-
You can enjoy all the features and benefits of FSX - Maddog 2008 Professional, such as realistic cockpit, systems, sounds, flight model, FMS, weather radar, TCAS, GPWS, failures simulation system etc.
-
You can fly one of the most complex and immersive simulations of the MD-80 aircraft in FSX.
-
You can choose from various liveries that represent different airlines that operated the MD-80 aircraft.
-
-
Cons:
-
-
You are violating Leonardo Software House's intellectual property rights and depriving them of their deserved revenue.
-
You are risking legal consequences if Leonardo Software House decides to take action against illegal downloads.
-
You are exposing your computer to potential viruses or malware that might be hidden in the download file or torrent site.
-
You are compromising your flight simulation experience if the crack causes errors or crashes during your flights.
-
You are missing out on updates or support from Leonardo Software House if they release new versions or patches for FSX - Maddog 2008 Professional.
-
You are limiting your options if you want to try other add-ons or cracks for FSX that might be incompatible with Komu's crack.
-
-
Alternatives to FSX - Maddog 2008 Professional cracked by Komu
-
If you are looking for alternatives to <
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Clarion Enterprise Edition 6.0 64 Bit UPD.md b/spaces/1gistliPinn/ChatGPT4/Examples/Clarion Enterprise Edition 6.0 64 Bit UPD.md
deleted file mode 100644
index a601095a40b21f76fb0042153c84061c7ea4606a..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Clarion Enterprise Edition 6.0 64 Bit UPD.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
If you are not certain which budget hotel is best for you, take into account your budget as well as the purpose of your trip. The more affordable hotels may not be suitable for your needs, so you should also consider what other options are available in the area. Hotels in less populated areas tend to be less expensive, but they are also farther from popular attractions.
You'll be very satisfied with the hotel's service. The Clarion Express was very nice: the lobby had free wireless internet, and the rooms had a fridge and a coffeemaker. The walk to the hotel from downtown was fast and easy, even though I had to take the train to get to Clarion. The hotel was easy to get into, and the staff were friendly. A very nice choice.
-
The Clarion Express hotel is a top choice for a budget hotel with a great location. Enjoy our complimentary cooked-to-order breakfast each morning before you head out exploring. We offer free wireless internet, free local calls, and 32" LCD HD TVs with free cable in every room. Clarion University is less than three miles away and easy to reach by train. Our city welcomes business travelers, so Clarion Express is an ideal choice for travelers seeking a modern downtown hotel with the amenities and location of a big-city hotel at a reasonable price.
-
Clarion City Inn & Suites in downtown Harrisburg offers 100 rooms with complimentary internet access. Non-smoking rooms include microwaves, refrigerators, hair dryers, and coffee/tea makers. This Harrisburg hotel has both seasonal and indoor pools. Parking is free, and a complimentary breakfast is served daily.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/8 Ball Pool Long Line Tool APK The Ultimate Guide for Android Users.md b/spaces/1phancelerku/anime-remove-background/8 Ball Pool Long Line Tool APK The Ultimate Guide for Android Users.md
deleted file mode 100644
index 06de35fb9c25bbde3a5cf0daddfb3ead9a40a625..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/8 Ball Pool Long Line Tool APK The Ultimate Guide for Android Users.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
8 Ball Pool Long Line Tool APK: A Guide for Beginners
-
If you are a fan of billiards games, you might have heard of 8 Ball Pool, one of the most popular and addictive online pool games in the world. But did you know that there is a way to enhance your gaming experience and improve your skills with a simple tool? In this article, we will introduce you to 8 Ball Pool Long Line Tool APK, a modded version of the game that allows you to have longer aiming lines and more accurate shots. We will also show you how to download and install it on your Android device, and share some tips and tricks to win in 8 Ball Pool.
-
What is 8 Ball Pool and How to Play It?
-
8 Ball Pool is a game developed by Miniclip that simulates the real-life pool game of the same name. You can play it online with millions of players from around the world, or offline with your friends. You can also participate in tournaments, win trophies, and collect coins and cash to buy better cues and enter higher-stakes tables.
8 Ball Pool is played with a cue ball and fifteen object balls, numbered 1 through 15. Balls 1–7 are solid colors and commonly referred to as “low balls”, and balls 9–15 are striped and commonly referred to as “high balls.” One player must pocket balls of solid colors, while the other player must pocket the striped balls. The player who pockets their entire group and then legally pockets the 8-ball wins the game.
-
The Rules of 8 Ball Pool
-
For the break shot to be legal, the breaker (with the base of the cue ball placed anywhere behind the head string) must either pocket a number ball or drive at least four (4) number balls to one or more rails. No ball is called, and the cue ball is not required to hit any particular object ball first. If the breaker fails to make the legal break requirement, the balls will be re-racked and the opponent shall have the option of breaking or requesting the offending player to break again.
-
If any numbered ball is pocketed on a legal break, the breaking player continues their inning. If the breaker makes a legal break but commits a foul, the game continues with the opponent having ball in hand anywhere behind the head string, but they must shoot at an object ball beyond the head string (outside of the “kitchen”) or it is a foul.
-
If the breaker pockets the 8-ball on a legal break shot, they win the game unless they also scratch (pocket or drive off the table) the cue ball, in which case they lose. If any other object ball leaves the table on a legal break shot, it is spotted on its original position before shooting player plays their next shot.
-
During normal play, each player remains at the table until they fail to legally pocket a ball of their group or commit a foul. If a player pockets any ball on a legal shot except for their own group or an opponent’s group (if playing an open table), they continue their inning. If they pocket their own group and an opponent’s group on one shot (if playing an open table), they continue their inning but must declare which group they are playing before their next shot.
-
If a player pockets any ball on a foul shot, it remains pocketed, except for the cue ball, which is returned behind the head string or spotted if it leaves the table. If a player pockets the 8-ball on a legal shot, they win the game unless they also scratch, in which case they lose. If a player pockets the 8-ball on an illegal shot, they lose the game.
-
-
A foul occurs when a player fails to hit their own group of balls first, fails to hit any ball at all, scratches the cue ball, drives any ball off the table, touches any ball with their hand or cue, or violates any other rule of the game. When a foul is committed, the opponent gets ball in hand anywhere on the table. However, if the cue ball is behind the head string and an object ball is outside of the head string, the player must shoot an object ball outside of the head string or it is a foul.
-
What is 8 Ball Pool Long Line Tool APK and How to Download It?
-
8 Ball Pool Long Line Tool APK is a modified version of the original 8 Ball Pool game that gives you some extra advantages over your opponents. It is not an official app from Miniclip, but a third-party app that you can download and install on your Android device for free.
-
The Features of 8 Ball Pool Long Line Tool APK
-
Some of the features that 8 Ball Pool Long Line Tool APK offers are:
-
-
Longer aiming lines: You can see the trajectory of your shots more clearly and accurately, which helps you to aim better and avoid mistakes.
-
No root required: You don't need to root your device to use this app, which means you don't have to risk damaging your device or voiding your warranty.
-
Anti-ban protection: You can use this app without worrying about getting banned by Miniclip, as it has a built-in anti-ban system that prevents detection.
-
Easy to use: You don't need any special skills or knowledge to use this app, as it has a simple and user-friendly interface that guides you through the process.
-
-
The Benefits of 8 Ball Pool Long Line Tool APK
-
Some of the benefits that 8 Ball Pool Long Line Tool APK provides are:
-
-
More fun and enjoyment: You can have more fun and enjoyment playing 8 Ball Pool with this app, as you can make more impressive shots and win more games.
-
More coins and cash: You can earn more coins and cash by winning more games with this app, which allows you to buy better cues and enter higher-stakes tables.
-
More confidence and skill: You can improve your confidence and skill in playing 8 Ball Pool with this app, as you can learn from your mistakes and practice your techniques.
-
-
The Installation Process of 8 Ball Pool Long Line Tool APK
-
To install 8 Ball Pool Long Line Tool APK on your Android device, you need to follow these steps:
-
-
Download the APK file from a trusted source. You can search for it online or use this link: .
-
Enable unknown sources on your device. Go to Settings > Security > Unknown Sources and toggle it on.
-
Locate the downloaded APK file on your device and tap on it to start the installation.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the app and enjoy playing 8 Ball Pool with longer aiming lines.
-
-
What are Some Tips and Tricks to Win in 8 Ball Pool?
-
Besides using 8 Ball Pool Long Line Tool APK, there are some other tips and tricks that you can apply to win in 8 Ball Pool. Here are some of them:
-
Choose Your Tables Wisely
-
When you play online, you can choose from different tables with different entry fees and prizes. The higher the entry fee, the higher the prize, but also the higher the risk. If you are a beginner, you should start with lower-level tables and work your way up gradually. Don't play on tables that are too expensive for your budget or skill level, as you might lose more than you gain.
-
Buy a Better Cue
-
A cue is one of the most important factors that affect your performance in 8 Ball Pool. A better cue can give you more power, spin, aim, and time. You can buy cues with coins or cash in the game shop, or win them in tournaments or surprise boxes. You can also upgrade your cues with coins to improve their attributes. A good cue can make a big difference in your game, so don't hesitate to invest in one.
-
Use a Little English
-
English is a term that refers to the amount of spin you put on the cue ball when you hit it. By using English, you can control the direction and speed of the cue ball after it hits an object ball or a rail. You can use English to avoid scratches, make difficult shots, or set up your next shot. To use English, you need to hit the cue ball on the left or right side, rather than the center. You can also adjust the power and angle of your shot to achieve the desired effect.
-
Shoot Faster
-
One of the challenges of playing online is that you have a limited time to make your shot. If you take too long, you might lose your turn or even the game. To avoid this, you should try to shoot faster and more confidently. You can do this by planning your shots ahead, using 8 Ball Pool Long Line Tool APK to aim better, and practicing your skills offline. Shooting faster can also put pressure on your opponent and make them nervous or impatient.
-
Extend Your Aim
-
Another way to improve your accuracy and precision in 8 Ball Pool is to extend your aim beyond the object ball. This means that you should visualize where you want the cue ball to go after it hits the object ball, and align your cue accordingly. This can help you to avoid scratches, position your cue ball better, and make more complex shots. You can also use 8 Ball Pool Long Line Tool APK to see the extended aiming lines and adjust your shots accordingly.
-
Conclusion
-
8 Ball Pool is a fun and exciting game that can keep you entertained for hours. However, if you want to take your game to the next level, you might want to try 8 Ball Pool Long Line Tool APK, a modded version of the game that gives you longer aiming lines and more accurate shots. You can download and install it on your Android device for free and enjoy playing 8 Ball Pool with an edge over your opponents. You can also use some tips and tricks to win in 8 Ball Pool, such as choosing your tables wisely, buying a better cue, using a little English, shooting faster, and extending your aim. With these tools and techniques, you can become a master of 8 Ball Pool in no time.
-
FAQs
-
Here are some frequently asked questions about 8 Ball Pool Long Line Tool APK:
-
-
Is 8 Ball Pool Long Line Tool APK safe to use?
-
Yes, 8 Ball Pool Long Line Tool APK is safe to use as long as you download it from a trusted source and follow the installation instructions carefully. It has an anti-ban system that prevents detection by Miniclip, so you don't have to worry about getting banned or losing your account.
-
Is 8 Ball Pool Long Line Tool APK compatible with all devices?
-
No, 8 Ball Pool Long Line Tool APK is only compatible with Android devices that have Android 4.1 or higher versions. It is not compatible with iOS devices or other platforms.
-
Can I play online with 8 Ball Pool Long Line Tool APK?
-
Yes, you can play online with 8 Ball Pool Long Line Tool APK as long as you have a stable internet connection and a valid Miniclip account. You can play with other players who are using the same app or the original game.
-
Can I update 8 Ball Pool Long Line Tool APK?
-
No, you cannot update 8 Ball Pool Long Line Tool APK as it is not an official app from Miniclip. If you update it, you might lose the modded features or encounter errors. You should always check for new versions of the app from the source where you downloaded it.
-
Can I use 8 Ball Pool Long Line Tool APK with other mods or hacks?
-
No, you should not use 8 Ball Pool Long Line Tool APK with other mods or hacks as they might interfere with each other or cause problems. You should only use one mod or hack at a time for optimal performance and safety.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download FIFA 20 Mod APK with OBB Data - Enjoy Realistic Football Experience on Your Phone.md b/spaces/1phancelerku/anime-remove-background/Download FIFA 20 Mod APK with OBB Data - Enjoy Realistic Football Experience on Your Phone.md
deleted file mode 100644
index db3dc834075d5877e78c0476cee6ce6fb5d30c4d..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download FIFA 20 Mod APK with OBB Data - Enjoy Realistic Football Experience on Your Phone.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
Download APK FIFA 20: How to Install and Play the Latest Version of the Popular Soccer Game on Your Android Device
-
If you are a fan of soccer games, you have probably heard of FIFA 20, the latest installment of the popular FIFA series by Electronic Arts. FIFA 20 is a realistic and immersive soccer simulation game that lets you experience the thrill of playing with your favorite teams and players in various modes and competitions. Whether you want to play solo or with friends, offline or online, FIFA 20 has something for everyone.
But what if you don't have a console or a PC to play FIFA 20? Don't worry, you can still enjoy this amazing game on your Android device. All you need is to download and install the FIFA 20 APK and OBB data files, which are modified versions of the original game that can run on Android devices without any issues. In this article, we will show you how to do that, as well as give you some tips and tricks to play FIFA 20 like a pro.
-
What are the features and benefits of FIFA 20
-
FIFA 20 is not just another soccer game. It is a game that offers you a lot of features and benefits that make it stand out from other games in the genre. Here are some of them:
-
-
Stunning graphics and sound: FIFA 20 boasts of high-quality graphics and sound that make you feel like you are watching a real soccer match. The players, stadiums, crowds, kits, balls, and animations are all detailed and realistic. The commentary, sound effects, and music are also immersive and dynamic.
-
Realistic gameplay and physics: FIFA 20 uses a sophisticated gameplay engine that simulates the physics and mechanics of soccer in a realistic way. The players move, dribble, pass, shoot, tackle, and react according to their attributes, skills, and situations. The ball also behaves realistically, bouncing, spinning, curving, and swerving according to its speed, direction, and contact.
-
Various modes and competitions: FIFA 20 offers you a variety of modes and competitions to choose from, depending on your preference and mood. You can play quick matches, tournaments, leagues, career mode, Ultimate Team mode, and Volta mode, as well as a wide range of online modes, including seasons, friendlies, co-op seasons, draft, squad battles, Champions League, World Cup, Pro Clubs, Division Rivals, and FUT Champions.
-
Ultimate team mode: This is one of the most popular modes in FIFA 20. It allows you to create your own dream team by collecting and trading players from different leagues and nations. You can customize your team's formation, tactics, kits, badges, stadiums, managers, chemistry styles, consumables, etc. You can also compete with other players' teams in various online modes.
-
Volta mode: This is a new mode in FIFA 20 that brings back the street soccer style of previous FIFA games. It allows you to play in small -sided matches with different rules and settings. You can play in various locations around the world, such as rooftops, cages, courts, etc. You can also customize your avatar's appearance, clothing, accessories, tattoos, etc.
-
-
How to download and install FIFA 20 APK and OBB data on your Android device
-
Now that you know the features and benefits of FIFA 20, you might be wondering how to download and install it on your Android device. Well, it's not as hard as you might think. Just follow these simple steps:
-
Step 1: Enable unknown sources on your device
-
Before you can install any APK file on your device, you need to enable the option to allow unknown sources. This will let you install apps that are not from the Google Play Store. To do this, go to your device's settings, then security, then unknown sources. Toggle the switch to enable it.
-
-
Step 2: Download the FIFA 20 APK and OBB files from a trusted source
-
The next step is to download the FIFA 20 APK and OBB files from a trusted source. There are many websites that offer these files, but be careful not to download from shady or malicious ones. You can use this link to download the files safely and securely. The APK file is about 30 MB, while the OBB file is about 1.5 GB.
-
Step 3: Install the APK file and extract the OBB file to the right folder
-
After downloading the files, you need to install the APK file and extract the OBB file to the right folder. To do this, locate the APK file in your device's file manager and tap on it to install it. Then, use a file extractor app like ZArchiver to extract the OBB file. You will get a folder named com.ea.gp.fifaworld. Move this folder to Android/OBB in your device's internal storage.
-
Step 4: Launch the game and enjoy
-
The final step is to launch the game and enjoy. To do this, go to your app drawer and tap on the FIFA 20 icon. The game will start and ask you to verify your data. Just tap on OK and wait for a few seconds. The game will then load and take you to the main menu. You can now choose your mode and start playing.
-
What are the tips and tricks to play FIFA 20 like a pro
-
FIFA 20 is a fun and challenging game that requires skill and strategy to master. If you want to play like a pro, you need to know some tips and tricks that will help you improve your performance and win more matches. Here are some of them:
-
Customize your controls and settings
-
One of the first things you should do is customize your controls and settings according to your preference and comfort. You can do this by going to settings, then controls, then customize controls. You can choose between classic or casual controls, adjust the sensitivity and size of the buttons, enable or disable auto-switching, auto-sprint, auto-shoot, etc.
-
Choose your game mode and difficulty level
-
The next thing you should do is choose your game mode and difficulty level according to your skill and goal. You can do this by going to play, then select mode. You can choose between quick match, tournament, league, career mode, ultimate team mode, volta mode, etc. You can also choose from beginner, amateur, semi-pro, professional, world class, legendary, or ultimate difficulty levels.
-
Master the skills and tactics
-
The most important thing you should do is master the skills and tactics that will help you win more matches. You can do this by practicing in training mode or playing against AI opponents. You should learn how to dribble, pass, shoot, tackle, cross, head, defend, etc. You should also learn how to use different tactics, such as formation, style, mentality, instructions, etc.
-
Build your ultimate team and manage your players
-
If you are playing ultimate team mode, you should build your ultimate team and manage your players effectively. You can do this by collecting and trading players from different leagues and nations. You should aim for high-rated players with good chemistry and attributes. You should also manage your players' fitness, morale, contracts, injuries, etc.
-
Participate in online tournaments and events
-
If you want to challenge yourself and compete with other players, you should participate in online tournaments and events. You can do this by going to play online, then select mode. You can choose from online seasons, friendlies, co-op seasons, draft mode, squad battles, Champions League, World Cup, Pro Clubs, Division Rivals, and FUT Champions, among others. You can win rewards and trophies by playing and winning these modes.
-
Conclusion
-
FIFA 20 is a fantastic soccer game that you can download and play on your Android device. It offers you a lot of features and benefits that make it one of the best games in the genre. It also gives you some tips and tricks that will help you play like a pro. So what are you waiting for? Download APK FIFA 20 now and enjoy the ultimate soccer experience.
-
FAQs
-
Here are some frequently asked questions about FIFA 20:
-
-
Q: Is FIFA 20 free to download and play?
-A: Yes, FIFA 20 is free to download and play on your Android device. However, some features and modes may require in-app purchases or subscriptions.
-
Q: Is FIFA 20 compatible with my device?
-A: FIFA 20 is compatible with most Android devices that have at least 2 GB of RAM and 4 GB of free storage space. However, some devices may experience performance issues or crashes due to hardware limitations.
-
Q: Is FIFA 20 safe to download and install?
-A: Yes, FIFA 20 is safe to download and install on your device. However, you should always download it from a trusted source and scan it for viruses or malware before installing it.
-
Q: How can I update FIFA 20 to the latest version?
-A: You can update FIFA 20 to the latest version by downloading and installing the latest APK and OBB files from the same source you downloaded them from. You should also delete the old files before installing the new ones.
-
Q: How can I contact the developers or support team of FIFA 20?
-A: You can contact the developers or support team of FIFA 20 by visiting their official website or social media pages. You can also email them at help@ea.com or call them at +1-866-543-5435.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download and Play FINAL FANTASY XIII on Android - Cloud Game with TV Integration Support.md b/spaces/1phancelerku/anime-remove-background/Download and Play FINAL FANTASY XIII on Android - Cloud Game with TV Integration Support.md
deleted file mode 100644
index 7b4f447cf2ffc9df32131b7b92d146f2806e8bd5..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download and Play FINAL FANTASY XIII on Android - Cloud Game with TV Integration Support.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
Final Fantasy XIII APK Full Download: How to Play the Epic JRPG on Your Android Device
-
Are you a fan of Final Fantasy, one of the most popular and influential JRPG series of all time? If so, you might be interested in playing Final Fantasy XIII, the thirteenth installment of the main series, on your Android device. In this article, we will show you how to download Final Fantasy XIII APK full version and enjoy the epic adventure on your smartphone or tablet. We will also share some tips and tricks to enhance your gaming experience. Let's get started!
-
Introduction
-
What is Final Fantasy XIII?
-
Final Fantasy XIII is a role-playing game developed and published by Square Enix in 2009. It is set in a futuristic universe where two opposing worlds, Cocoon and Pulse, are locked in conflict. The game follows the story of six characters who are branded as traitors by Cocoon's government and must fight against their fate. The game features a fast-paced combat system, stunning graphics, and a rich soundtrack. It received critical acclaim and sold over seven million copies worldwide.
Why play Final Fantasy XIII on your Android device?
-
Playing Final Fantasy XIII on your Android device has many benefits. First of all, you can enjoy the game anytime and anywhere, without being tied to a console or a PC. If you use the cloud service described below, you can also save space on your device, as you don't need to download a large game file or install anything. Moreover, you can take advantage of the touch screen, gyroscope, and other features of your device to enhance your gameplay. Finally, you can connect your device to a TV or a monitor and play on a bigger screen.
-
How to download Final Fantasy XIII APK
-
Option 1: Use the official cloud game service from Square Enix
-
The easiest and safest way to play Final Fantasy XIII on your Android device is to use the official cloud game service from Square Enix. This service allows you to stream high-definition games over a Wi-Fi connection, without downloading or installing anything. Here are the steps to follow:
-
Step 1: Download the FINAL FANTASY XIII app from APKCombo
-
The first step is to download the FINAL FANTASY XIII app from APKCombo, a website that provides free APK files for Android apps and games. You can use this link to access the app page and click on the "Download APK" button. The app size is about 12 MB and it requires Android 5.0 or higher.
-
Step 2: Launch the app and sign up for the cloud game service
-
The next step is to launch the app and sign up for the cloud game service. You will need to create an account with your email address and password, or log in with your existing Square Enix account. You will also need to agree to the terms of service and privacy policy.
-
Step 3: Enjoy the free trial and purchase the license if you like it
-
The final step is to enjoy the free trial and purchase the license if you like it. You can play the first 30 minutes of the game for free, and then decide whether to buy the full game for $15.99. You can also choose to pay $5.99 per month and access other cloud games from Square Enix, such as Final Fantasy VII and Final Fantasy VIII.
-
Option 2: Use an unofficial source from the Internet Archive
-
If you don't want to use the official cloud game service from Square Enix, you can try another option: use an unofficial source from the Internet Archive. The Internet Archive is a non-profit organization that preserves digital content, such as books, music, videos, and games. You can find a copy of Final Fantasy XIII for PC on their website and play it on your Android device with an emulator or a streaming app. However, this option is not recommended, as it may be illegal, unsafe, or unstable. Here are the steps to follow:
-
Step 1: Download the final fantasy xiii file from the Internet Archive
-
The first step is to download the final fantasy xiii file from the Internet Archive. You can use this link to access the file page and click on the "DOWNLOAD OPTIONS" button. You will see several formats available, such as ISO, ZIP, or TORRENT. The file size is about 13 GB and it requires a PC with Windows XP or higher.
-
Step 2: Extract the file and install the game on your PC
-
The next step is to extract the file and install the game on your PC. You will need a software like WinRAR or 7-Zip to unzip the file and get the game folder. Then, you will need to run the setup.exe file and follow the instructions to install the game on your PC. You may also need to install some additional components, such as DirectX or Visual C++.
-
-
Step 3: Use an emulator or a streaming app to play the game on your Android device
-
The final step is to use an emulator or a streaming app to play the game on your Android device. An emulator is software that mimics the behavior of another device, such as a PC or a console. A streaming app is software that streams games from your PC to your Android device over a Wi-Fi connection. Some examples of emulators are ExaGear RPG and Wine, and some examples of streaming apps are Steam Link and Moonlight. You will need to configure these apps according to your preferences and requirements.
-
Tips and tricks for playing Final Fantasy XIII on your Android device
-
Adjust the settings to optimize the performance and battery life
-
One of the challenges of playing Final Fantasy XIII on your Android device is to optimize the performance and battery life of your device. Depending on your device model and specifications, you may experience lagging, crashing, overheating, or draining issues. To avoid these problems, you can adjust some settings in your device or in your app. For example, you can lower the resolution, brightness, volume, or frame rate of your device or app. You can also close other apps running in the background, turn off notifications, or activate airplane mode.
-
Use a controller or a keyboard for better control and comfort
-
Another challenge of playing Final Fantasy XIII on your Android device is to control the game with touch screen gestures. While this may be convenient for some players, others may find it difficult, uncomfortable, or inaccurate. To improve your control and comfort, you can use a controller or a keyboard instead of touch screen gestures. You can connect your controller or keyboard to your device via Bluetooth, USB, or Wi-Fi. You can also customize your controller or keyboard layout according to your preferences.
-
Save your progress frequently and back up your data online
-
The last challenge of playing Final Fantasy XIII on your Android device is to save your progress frequently and back up your data online. Unlike playing on a console or a PC, playing on an Android device may expose you to risks of losing your data due to various reasons, such as deleting the app by mistake, running out of storage space, resetting your device, or losing your device. To prevent these scenarios from happening, you should save your progress frequently in different slots and back up your data online using cloud services like Google Drive or Dropbox.
-
Conclusion
-
Summary of the main points
-
In conclusion, playing Final Fantasy XIII on your Android device is possible and enjoyable if you follow a few simple steps and tips. You can download the Final Fantasy XIII APK either through the official cloud gaming service from Square Enix or from an unofficial source such as the Internet Archive. You can also adjust your settings, use a controller or a keyboard, save your progress frequently, and back up your data online to optimize your gaming experience. Final Fantasy XIII is a great game that deserves to be played on any device you want.
-
Call to action and invitation to comment
-
If you are ready to play Final Fantasy XIII on your Android device, don't hesitate to download the APK file and follow the instructions in this article. You will be amazed by the quality and the fun of this game. And if you have any questions, comments, or feedback, feel free to leave them below. We would love to hear from you and help you with any issues you may encounter. Happy gaming!
-
FAQs
-
Here are some frequently asked questions about playing Final Fantasy XIII on your Android device:
-
-
Is Final Fantasy XIII APK safe to download?
-
Yes, Final Fantasy XIII APK is safe to download if you use the official cloud game service from Square Enix or a reputable website like APKCombo. However, if you use an unofficial source from the Internet Archive, you may encounter some risks, such as viruses, malware, or legal issues. Therefore, we recommend that you use the official option or scan the file with an antivirus before installing it.
-
How much data does Final Fantasy XIII APK use?
-
Final Fantasy XIII APK uses a lot of data, as it streams high-definition games over a Wi-Fi connection. The exact amount of data depends on various factors, such as the resolution, frame rate, and duration of your gameplay. However, according to some estimates, streaming a game can use up to 3 GB of data per hour. Therefore, we suggest that you use a Wi-Fi connection with unlimited data or a high data plan when playing Final Fantasy XIII APK.
-
Can I play Final Fantasy XIII APK offline?
-
No, you cannot play Final Fantasy XIII APK offline, as it requires a constant internet connection to stream the game from the cloud server. If you lose your connection or have a weak signal, you may experience interruptions, lagging, or disconnection. Therefore, we advise that you play Final Fantasy XIII APK in a place with a stable and strong Wi-Fi connection.
-
Can I play Final Fantasy XIII APK with friends?
-
Yes, you can play Final Fantasy XIII APK with friends, as it supports online multiplayer mode. You can join other players from around the world and cooperate or compete with them in various missions and battles. You can also chat with them using voice or text messages. To play Final Fantasy XIII APK with friends, you will need to create or join a party in the game menu and invite or accept other players.
-
Can I transfer my save data from Final Fantasy XIII APK to another device?
-
Yes, you can transfer your save data from Final Fantasy XIII APK to another device, as long as you use the same account and service. For example, if you use the official cloud game service from Square Enix, you can access your save data from any device that supports the service, such as another Android device, an iOS device, or a PC. However, if you use an unofficial source from the Internet Archive, you may not be able to transfer your save data easily.
-
-
-
\ No newline at end of file
diff --git a/spaces/221091lstwcm/textgenerator/app.py b/spaces/221091lstwcm/textgenerator/app.py
deleted file mode 100644
index 67e6bf13da457d066b6ea7b6a5728599f00b5ebb..0000000000000000000000000000000000000000
--- a/spaces/221091lstwcm/textgenerator/app.py
+++ /dev/null
@@ -1,11 +0,0 @@
-#libraries
-import gradio as gr
-from gradio.mix import Parallel
-
-#load three text-generation models from the Hugging Face Hub
-model1=gr.Interface.load("huggingface/gpt2")
-model2=gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-model3=gr.Interface.load("huggingface/distilgpt2")
-
-#run the three models side by side on the same prompt
-Parallel(model1, model2, model3).launch()
\ No newline at end of file
diff --git a/spaces/232labs/VToonify/vtoonify/model/stylegan/op/__init__.py b/spaces/232labs/VToonify/vtoonify/model/stylegan/op/__init__.py
deleted file mode 100644
index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000
--- a/spaces/232labs/VToonify/vtoonify/model/stylegan/op/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .fused_act import FusedLeakyReLU, fused_leaky_relu
-from .upfirdn2d import upfirdn2d
diff --git a/spaces/4Taps/SadTalker/src/audio2exp_models/audio2exp.py b/spaces/4Taps/SadTalker/src/audio2exp_models/audio2exp.py
deleted file mode 100644
index 5f6e6b77b0ceb2089539caa440f7106c7b1e8aa2..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/audio2exp_models/audio2exp.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from tqdm import tqdm
-import torch
-from torch import nn
-
-
-class Audio2Exp(nn.Module):
- def __init__(self, netG, cfg, device, prepare_training_loss=False):
- super(Audio2Exp, self).__init__()
- self.cfg = cfg
- self.device = device
- self.netG = netG.to(device)
-
- def test(self, batch):
-
- mel_input = batch['indiv_mels'] # bs T 1 80 16
- bs = mel_input.shape[0]
- T = mel_input.shape[1]
-
- exp_coeff_pred = []
-
- for i in tqdm(range(0, T, 10),'audio2exp:'): # every 10 frames
-
- current_mel_input = mel_input[:,i:i+10]
-
- ref = batch['ref'][:, :, :64].repeat((1,current_mel_input.shape[1],1)) #bs T 64
- ratio = batch['ratio_gt'][:, i:i+10] #bs T
-
- audiox = current_mel_input.view(-1, 1, 80, 16) # bs*T 1 80 16
-
- curr_exp_coeff_pred = self.netG(audiox, ref, ratio) # bs T 64
-
- exp_coeff_pred += [curr_exp_coeff_pred]
-
- # BS x T x 64
- results_dict = {
- 'exp_coeff_pred': torch.cat(exp_coeff_pred, axis=1)
- }
- return results_dict
-
-
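
A minimal usage sketch for the `Audio2Exp` wrapper above (not part of the original file). It assumes the class definition above is importable; `DummyNetG` is a hypothetical stand-in that only reproduces the `(bs, T_chunk, 64)` output shape of the real generator, so the point is the expected batch layout rather than real predictions.

```python
import torch
from torch import nn

class DummyNetG(nn.Module):
    """Hypothetical generator: returns zero expression coefficients of the right shape."""
    def forward(self, audiox, ref, ratio):
        bs, t = ratio.shape              # audiox: (bs*t, 1, 80, 16), ref: (bs, t, 64)
        return torch.zeros(bs, t, 64)

bs, T = 1, 25                            # batch size 1, 25 audio frames
batch = {
    'indiv_mels': torch.randn(bs, T, 1, 80, 16),  # per-frame mel windows
    'ref': torch.randn(bs, 1, 70),                # reference coeffs; only the first 64 are used
    'ratio_gt': torch.rand(bs, T),                # per-frame blink ratio
}

model = Audio2Exp(DummyNetG(), cfg=None, device='cpu')
out = model.test(batch)
print(out['exp_coeff_pred'].shape)       # torch.Size([1, 25, 64])
```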
diff --git a/spaces/801artistry/RVC801/demucs/utils.py b/spaces/801artistry/RVC801/demucs/utils.py
deleted file mode 100644
index 4364184059b1afe3c8379c77793a8e76dccf9699..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/demucs/utils.py
+++ /dev/null
@@ -1,323 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import errno
-import functools
-import hashlib
-import inspect
-import io
-import os
-import random
-import socket
-import tempfile
-import warnings
-import zlib
-from contextlib import contextmanager
-
-from diffq import UniformQuantizer, DiffQuantizer
-import torch as th
-import tqdm
-from torch import distributed
-from torch.nn import functional as F
-
-
-def center_trim(tensor, reference):
- """
- Center trim `tensor` with respect to `reference`, along the last dimension.
- `reference` can also be a number, representing the length to trim to.
- If the size difference != 0 mod 2, the extra sample is removed on the right side.
- """
- if hasattr(reference, "size"):
- reference = reference.size(-1)
- delta = tensor.size(-1) - reference
- if delta < 0:
- raise ValueError("tensor must be larger than reference. " f"Delta is {delta}.")
- if delta:
- tensor = tensor[..., delta // 2:-(delta - delta // 2)]
- return tensor
-
-
-def average_metric(metric, count=1.):
- """
- Average `metric` which should be a float across all hosts. `count` should be
- the weight for this particular host (i.e. number of examples).
- """
- metric = th.tensor([count, count * metric], dtype=th.float32, device='cuda')
- distributed.all_reduce(metric, op=distributed.ReduceOp.SUM)
- return metric[1].item() / metric[0].item()
-
-
-def free_port(host='', low=20000, high=40000):
- """
- Return a port number that is most likely free.
- This could suffer from a race condition although
- it should be quite rare.
- """
- sock = socket.socket()
- while True:
- port = random.randint(low, high)
- try:
- sock.bind((host, port))
- except OSError as error:
- if error.errno == errno.EADDRINUSE:
- continue
- raise
- return port
-
-
-def sizeof_fmt(num, suffix='B'):
- """
- Given `num` bytes, return human readable size.
- Taken from https://stackoverflow.com/a/1094933
- """
- for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']:
- if abs(num) < 1024.0:
- return "%3.1f%s%s" % (num, unit, suffix)
- num /= 1024.0
- return "%.1f%s%s" % (num, 'Yi', suffix)
-
-
-def human_seconds(seconds, display='.2f'):
- """
- Given `seconds` seconds, return human readable duration.
- """
- value = seconds * 1e6
- ratios = [1e3, 1e3, 60, 60, 24]
- names = ['us', 'ms', 's', 'min', 'hrs', 'days']
- last = names.pop(0)
- for name, ratio in zip(names, ratios):
- if value / ratio < 0.3:
- break
- value /= ratio
- last = name
- return f"{format(value, display)} {last}"
-
-
-class TensorChunk:
- def __init__(self, tensor, offset=0, length=None):
- total_length = tensor.shape[-1]
- assert offset >= 0
- assert offset < total_length
-
- if length is None:
- length = total_length - offset
- else:
- length = min(total_length - offset, length)
-
- self.tensor = tensor
- self.offset = offset
- self.length = length
- self.device = tensor.device
-
- @property
- def shape(self):
- shape = list(self.tensor.shape)
- shape[-1] = self.length
- return shape
-
- def padded(self, target_length):
- delta = target_length - self.length
- total_length = self.tensor.shape[-1]
- assert delta >= 0
-
- start = self.offset - delta // 2
- end = start + target_length
-
- correct_start = max(0, start)
- correct_end = min(total_length, end)
-
- pad_left = correct_start - start
- pad_right = end - correct_end
-
- out = F.pad(self.tensor[..., correct_start:correct_end], (pad_left, pad_right))
- assert out.shape[-1] == target_length
- return out
-
-
-def tensor_chunk(tensor_or_chunk):
- if isinstance(tensor_or_chunk, TensorChunk):
- return tensor_or_chunk
- else:
- assert isinstance(tensor_or_chunk, th.Tensor)
- return TensorChunk(tensor_or_chunk)
-
-
-def apply_model(model, mix, shifts=None, split=False,
- overlap=0.25, transition_power=1., progress=False):
- """
- Apply model to a given mixture.
-
- Args:
-        shifts (int): if > 0, will shift in time `mix` by a random amount between 0 and 0.5 sec
-            and apply the opposite shift to the output. This is repeated `shifts` times and
-            all predictions are averaged. This effectively makes the model time equivariant
-            and improves SDR by up to 0.2 points.
-        split (bool): if True, the input will be broken down into 8-second extracts
-            and predictions will be performed individually on each and concatenated.
-            Useful for models with a large memory footprint like Tasnet.
- progress (bool): if True, show a progress bar (requires split=True)
- """
- assert transition_power >= 1, "transition_power < 1 leads to weird behavior."
- device = mix.device
- channels, length = mix.shape
- if split:
- out = th.zeros(len(model.sources), channels, length, device=device)
- sum_weight = th.zeros(length, device=device)
- segment = model.segment_length
- stride = int((1 - overlap) * segment)
- offsets = range(0, length, stride)
- scale = stride / model.samplerate
- if progress:
- offsets = tqdm.tqdm(offsets, unit_scale=scale, ncols=120, unit='seconds')
- # We start from a triangle shaped weight, with maximal weight in the middle
- # of the segment. Then we normalize and take to the power `transition_power`.
- # Large values of transition power will lead to sharper transitions.
- weight = th.cat([th.arange(1, segment // 2 + 1),
- th.arange(segment - segment // 2, 0, -1)]).to(device)
- assert len(weight) == segment
- # If the overlap < 50%, this will translate to linear transition when
- # transition_power is 1.
- weight = (weight / weight.max())**transition_power
- for offset in offsets:
- chunk = TensorChunk(mix, offset, segment)
- chunk_out = apply_model(model, chunk, shifts=shifts)
- chunk_length = chunk_out.shape[-1]
- out[..., offset:offset + segment] += weight[:chunk_length] * chunk_out
- sum_weight[offset:offset + segment] += weight[:chunk_length]
- offset += segment
- assert sum_weight.min() > 0
- out /= sum_weight
- return out
- elif shifts:
- max_shift = int(0.5 * model.samplerate)
- mix = tensor_chunk(mix)
- padded_mix = mix.padded(length + 2 * max_shift)
- out = 0
- for _ in range(shifts):
- offset = random.randint(0, max_shift)
- shifted = TensorChunk(padded_mix, offset, length + max_shift - offset)
- shifted_out = apply_model(model, shifted)
- out += shifted_out[..., max_shift - offset:]
- out /= shifts
- return out
- else:
- valid_length = model.valid_length(length)
- mix = tensor_chunk(mix)
- padded_mix = mix.padded(valid_length)
- with th.no_grad():
- out = model(padded_mix.unsqueeze(0))[0]
- return center_trim(out, length)
-
-
-@contextmanager
-def temp_filenames(count, delete=True):
- names = []
- try:
- for _ in range(count):
- names.append(tempfile.NamedTemporaryFile(delete=False).name)
- yield names
- finally:
- if delete:
- for name in names:
- os.unlink(name)
-
-
-def get_quantizer(model, args, optimizer=None):
- quantizer = None
- if args.diffq:
- quantizer = DiffQuantizer(
- model, min_size=args.q_min_size, group_size=8)
- if optimizer is not None:
- quantizer.setup_optimizer(optimizer)
- elif args.qat:
- quantizer = UniformQuantizer(
- model, bits=args.qat, min_size=args.q_min_size)
- return quantizer
-
-
-def load_model(path, strict=False):
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- load_from = path
- package = th.load(load_from, 'cpu')
-
- klass = package["klass"]
- args = package["args"]
- kwargs = package["kwargs"]
-
- if strict:
- model = klass(*args, **kwargs)
- else:
- sig = inspect.signature(klass)
- for key in list(kwargs):
- if key not in sig.parameters:
-                warnings.warn("Dropping nonexistent parameter " + key)
- del kwargs[key]
- model = klass(*args, **kwargs)
-
- state = package["state"]
- training_args = package["training_args"]
- quantizer = get_quantizer(model, training_args)
-
- set_state(model, quantizer, state)
- return model
-
-
-def get_state(model, quantizer):
- if quantizer is None:
- state = {k: p.data.to('cpu') for k, p in model.state_dict().items()}
- else:
- state = quantizer.get_quantized_state()
- buf = io.BytesIO()
- th.save(state, buf)
- state = {'compressed': zlib.compress(buf.getvalue())}
- return state
-
-
-def set_state(model, quantizer, state):
- if quantizer is None:
- model.load_state_dict(state)
- else:
- buf = io.BytesIO(zlib.decompress(state["compressed"]))
- state = th.load(buf, "cpu")
- quantizer.restore_quantized_state(state)
-
- return state
-
-
-def save_state(state, path):
- buf = io.BytesIO()
- th.save(state, buf)
- sig = hashlib.sha256(buf.getvalue()).hexdigest()[:8]
-
- path = path.parent / (path.stem + "-" + sig + path.suffix)
- path.write_bytes(buf.getvalue())
-
-
-def save_model(model, quantizer, training_args, path):
- args, kwargs = model._init_args_kwargs
- klass = model.__class__
-
- state = get_state(model, quantizer)
-
- save_to = path
- package = {
- 'klass': klass,
- 'args': args,
- 'kwargs': kwargs,
- 'state': state,
- 'training_args': training_args,
- }
- th.save(package, save_to)
-
-
-def capture_init(init):
- @functools.wraps(init)
- def __init__(self, *args, **kwargs):
- self._init_args_kwargs = (args, kwargs)
- init(self, *args, **kwargs)
-
- return __init__
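
A brief usage sketch (not from the original repo) of the `apply_model` helper above. `DummySeparator` is a hypothetical stand-in that exposes the attributes the function relies on (`sources`, `samplerate`, `segment_length`, `valid_length`) and simply copies the mixture into each stem, so only the shapes and the overlap-add bookkeeping are exercised.

```python
import torch as th
from torch import nn

class DummySeparator(nn.Module):
    """Hypothetical model: echoes the input mixture into four identical stems."""
    sources = ['drums', 'bass', 'other', 'vocals']
    samplerate = 44100
    segment_length = 4 * 44100            # 4-second segments for the split code path

    def valid_length(self, length):
        return length                      # a real model pads to a multiple of its stride

    def forward(self, mix):                # mix: (batch, channels, time)
        return mix.unsqueeze(1).repeat(1, len(self.sources), 1, 1)

mix = th.randn(2, 10 * 44100)              # 10 seconds of stereo audio
stems = apply_model(DummySeparator(), mix, split=True, overlap=0.25)
print(stems.shape)                         # torch.Size([4, 2, 441000])
```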
diff --git a/spaces/801artistry/RVC801/infer/lib/infer_pack/transforms.py b/spaces/801artistry/RVC801/infer/lib/infer_pack/transforms.py
deleted file mode 100644
index 6f30b7177d17fc61a4173c21b4233172a890be58..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/infer/lib/infer_pack/transforms.py
+++ /dev/null
@@ -1,207 +0,0 @@
-import numpy as np
-import torch
-from torch.nn import functional as F
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
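
A small self-check sketch (not part of the original file) showing how `piecewise_rational_quadratic_transform` above can be exercised. The tensor shapes are assumptions that match what the code expects with `tails='linear'`: `num_bins` unnormalized widths and heights, and `num_bins - 1` interior derivatives (the tails pad the two boundary derivatives). The inverse pass should recover the input up to numerical precision.

```python
import torch

torch.manual_seed(0)
num_bins = 10
x = torch.linspace(-4.0, 4.0, 8)                  # two points fall outside the tail bound
widths = torch.randn(8, num_bins)
heights = torch.randn(8, num_bins)
derivs = torch.randn(8, num_bins - 1)             # interior knot derivatives only

y, logdet = piecewise_rational_quadratic_transform(
    x, widths, heights, derivs, inverse=False, tails="linear", tail_bound=3.0)
x_rec, inv_logdet = piecewise_rational_quadratic_transform(
    y, widths, heights, derivs, inverse=True, tails="linear", tail_bound=3.0)

print(torch.allclose(x, x_rec, atol=1e-4))        # True: the spline is analytically invertible
print(torch.allclose(logdet, -inv_logdet, atol=1e-4))
```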
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/docs/README.ko.han.md b/spaces/AI-Hobbyist/Hoyo-RVC/docs/README.ko.han.md
deleted file mode 100644
index 2b6bbff02cfbe61fdbccb81d069285eb513c160b..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/docs/README.ko.han.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
-------
-[**Changelog**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md)
-
-[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md))
-
-> Check out the [demo video](https://www.bilibili.com/video/BV1pm4y1z7Gm/)!
-
-> Real-time voice conversion with RVC: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
-
-> The base model was trained on roughly 50 hours of the high-quality, open-source VCTK dataset, so there are no copyright concerns; feel free to use it.
-
-> We plan to continue training on high-quality, copyright-free songs.
-
-## Introduction
-This repo has the following features:
-+ Uses top-1 retrieval to replace the input source's features with features from the training-set timbre, preventing timbre leakage;
-+ Trains quickly even on relatively weak GPUs;
-+ Produces good results from small amounts of data (at least 10 minutes of low-noise speech is recommended);
-+ Supports timbre blending through model fusion (ckpt processing tab -> ckpt merge);
-+ Easy-to-use WebUI;
-+ Fast separation of vocals and accompaniment with the UVR5 model;
-
-## Preparing the environment
-We recommend installing dependencies with poetry.
-
-The following commands must be run in a Python 3.8 (or newer) environment:
-```bash
-# Install the main PyTorch dependencies; skip if already installed
-# See: https://pytorch.org/get-started/locally/
-pip install torch torchvision torchaudio
-
-# If you are on Windows with an Nvidia Ampere architecture GPU (RTX30xx), you may need to specify the CUDA version matching PyTorch, as noted in #21.
-#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
-
-# Install Poetry; skip if already installed
-# Reference: https://python-poetry.org/docs/#installation
-curl -sSL https://install.python-poetry.org | python3 -
-
-# Install dependencies
-poetry install
-```
-You can also install the dependencies with pip:
-
-```bash
-pip install -r requirements.txt
-```
-
-## Preparing other pretrained models
-RVC needs additional pretrained models for inference and training.
-
-They can be downloaded from the [Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/).
-
-The following is a list of the pretrained models and other files RVC requires:
-```bash
-hubert_base.pt
-
-./pretrained
-
-./uvr5_weights
-
-# On Windows you may also need this file; skip it if FFmpeg is already installed.
-ffmpeg.exe
-```
-You can then start the WebUI with the following command:
-```bash
-python infer-web.py
-```
-On Windows, you can also download and extract `RVC-beta.7z` to use RVC directly, or run `go-web.bat` to start the WebUI.
-
-## Credits
-+ [ContentVec](https://github.com/auspicious3000/contentvec/)
-+ [VITS](https://github.com/jaywalnut310/vits)
-+ [HIFIGAN](https://github.com/jik876/hifi-gan)
-+ [Gradio](https://github.com/gradio-app/gradio)
-+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
-+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
-+ [audio-slicer](https://github.com/openvpi/audio-slicer)
-## Thanks to all contributors for their efforts
-
-
-
-
-
diff --git a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/build_vocab_spacy.py b/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/build_vocab_spacy.py
deleted file mode 100644
index 84da679f79d9f36b288d7312fb4ad9dc04723b0d..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/build_vocab_spacy.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import json
-from tqdm import tqdm
-import logging
-import pickle
-from collections import Counter
-import re
-import fire
-
-class Vocabulary(object):
- """Simple vocabulary wrapper."""
- def __init__(self):
- self.word2idx = {}
- self.idx2word = {}
- self.idx = 0
-
- def add_word(self, word):
-        if word not in self.word2idx:
- self.word2idx[word] = self.idx
- self.idx2word[self.idx] = word
- self.idx += 1
-
- def __call__(self, word):
-        if word not in self.word2idx:
- return self.word2idx[""]
- return self.word2idx[word]
-
- def __len__(self):
- return len(self.word2idx)
-
-
-def build_vocab(input_json: str,
- output_json: str,
- threshold: int,
- keep_punctuation: bool,
- host_address: str,
- character_level: bool = False,
- retokenize: bool = True,
- zh: bool = True ):
-    """Build vocabulary from a caption json file, dropping all words with counts < threshold
-
-    Args:
-        input_json (string): Preprocessed json file. Structure like this:
- {
- 'audios': [
- {
- 'audio_id': 'xxx',
- 'captions': [
- {
- 'caption': 'xxx',
- 'cap_id': 'xxx'
- }
- ]
- },
- ...
- ]
- }
- threshold (int): Threshold to drop all words with counts < threshold
- keep_punctuation (bool): Includes or excludes punctuation.
-
- Returns:
- vocab (Vocab): Object with the processed vocabulary
-"""
- data = json.load(open(input_json, "r"))["audios"]
- counter = Counter()
- if retokenize:
- pretokenized = False
- else:
- pretokenized = "tokens" in data[0]["captions"][0]
-
- if zh:
- from nltk.parse.corenlp import CoreNLPParser
- from zhon.hanzi import punctuation
- if not pretokenized:
- parser = CoreNLPParser(host_address)
- for audio_idx in tqdm(range(len(data)), leave=False, ascii=True):
- for cap_idx in range(len(data[audio_idx]["captions"])):
- if pretokenized:
- tokens = data[audio_idx]["captions"][cap_idx]["tokens"].split()
- else:
- caption = data[audio_idx]["captions"][cap_idx]["caption"]
- # Remove all punctuations
- if not keep_punctuation:
- caption = re.sub("[{}]".format(punctuation), "", caption)
- if character_level:
- tokens = list(caption)
- else:
- tokens = list(parser.tokenize(caption))
- data[audio_idx]["captions"][cap_idx]["tokens"] = " ".join(tokens)
- counter.update(tokens)
- else:
- if pretokenized:
- for audio_idx in tqdm(range(len(data)), leave=False, ascii=True):
- for cap_idx in range(len(data[audio_idx]["captions"])):
- tokens = data[audio_idx]["captions"][cap_idx]["tokens"].split()
- counter.update(tokens)
- else:
- import spacy
- tokenizer = spacy.load("en_core_web_sm", disable=["parser", "ner"])
- for audio_idx in tqdm(range(len(data)), leave=False, ascii=True):
- captions = data[audio_idx]["captions"]
- for cap_idx in range(len(captions)):
- caption = captions[cap_idx]["caption"]
- doc = tokenizer(caption)
- tokens = " ".join([str(token).lower() for token in doc])
- data[audio_idx]["captions"][cap_idx]["tokens"] = tokens
- counter.update(tokens.split(" "))
-
- if not pretokenized:
- if output_json is None:
- json.dump({ "audios": data }, open(input_json, "w"),
- indent=4, ensure_ascii=not zh)
- else:
- json.dump({ "audios": data }, open(output_json, "w"),
- indent=4, ensure_ascii=not zh)
-
- words = [word for word, cnt in counter.items() if cnt >= threshold]
-
- # Create a vocab wrapper and add some special tokens.
- vocab = Vocabulary()
- vocab.add_word("")
- vocab.add_word("")
- vocab.add_word("")
- vocab.add_word("")
-
- # Add the words to the vocabulary.
- for word in words:
- vocab.add_word(word)
- return vocab
-
-def process(input_json: str,
- output_file: str,
- output_json: str = None,
- threshold: int = 1,
- keep_punctuation: bool = False,
- character_level: bool = False,
- retokenize: bool = False,
- host_address: str = "http://localhost:9000",
- zh: bool = True):
- logfmt = "%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s"
- logging.basicConfig(level=logging.INFO, format=logfmt)
- logging.info("Build Vocab")
- vocabulary = build_vocab(
- input_json=input_json, output_json=output_json, threshold=threshold,
- keep_punctuation=keep_punctuation, host_address=host_address,
- character_level=character_level, retokenize=retokenize, zh=zh)
- pickle.dump(vocabulary, open(output_file, "wb"))
- logging.info("Total vocabulary size: {}".format(len(vocabulary)))
- logging.info("Saved vocab to '{}'".format(output_file))
-
-
-if __name__ == '__main__':
- fire.Fire(process)
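
A tiny illustration (not part of the original script) of the `Vocabulary` wrapper and the threshold filtering used in `build_vocab` above. The caption strings and the special-token names are hypothetical placeholders.

```python
from collections import Counter

captions = ["a dog barks", "a dog runs", "birds sing"]     # made-up captions
counter = Counter(tok for cap in captions for tok in cap.split())

threshold = 2
words = [w for w, c in counter.items() if c >= threshold]  # keep tokens seen >= 2 times

vocab = Vocabulary()
for w in ["<pad>", "<unk>"] + words:   # special-token names are placeholders
    vocab.add_word(w)

print(len(vocab))      # 4
print(vocab("dog"))    # 3
```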
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py
deleted file mode 100644
index d5833cd4cff1c9290f9703f3470b1fc63bd752e3..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py
+++ /dev/null
@@ -1,2861 +0,0 @@
-default_scope = 'mmpose'
-default_hooks = dict(
- timer=dict(type='IterTimerHook'),
- logger=dict(type='LoggerHook', interval=50),
- param_scheduler=dict(type='ParamSchedulerHook'),
- checkpoint=dict(
- type='CheckpointHook', interval=10, save_best='PCK', rule='greater'),
- sampler_seed=dict(type='DistSamplerSeedHook'),
- visualization=dict(type='PoseVisualizationHook', enable=False))
-custom_hooks = [dict(type='SyncBuffersHook')]
-env_cfg = dict(
- cudnn_benchmark=False,
- mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
- dist_cfg=dict(backend='nccl'))
-vis_backends = [dict(type='LocalVisBackend')]
-visualizer = dict(
- type='PoseLocalVisualizer',
- vis_backends=[dict(type='LocalVisBackend'),
- dict(type='WandbVisBackend')],
- name='visualizer')
-log_processor = dict(
- type='LogProcessor', window_size=50, by_epoch=True, num_digits=6)
-log_level = 'INFO'
-load_from = None
-resume = False
-backend_args = dict(backend='local')
-train_cfg = dict(by_epoch=True, max_epochs=120, val_interval=10)
-val_cfg = dict()
-test_cfg = dict()
-colors = dict(
- sss=[255, 128, 0],
- lss=[255, 0, 128],
- sso=[128, 0, 255],
- lso=[0, 128, 255],
- vest=[0, 128, 128],
- sling=[0, 0, 128],
- shorts=[128, 128, 128],
- trousers=[128, 0, 128],
- skirt=[64, 128, 128],
- ssd=[64, 64, 128],
- lsd=[128, 64, 0],
- vd=[128, 64, 255],
- sd=[128, 64, 0])
-dataset_info = dict(
- dataset_name='deepfashion2',
- paper_info=dict(
- author=
- 'Yuying Ge and Ruimao Zhang and Lingyun Wu and Xiaogang Wang and Xiaoou Tang and Ping Luo',
- title=
- 'DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images',
- container=
- 'Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)',
- year='2019',
- homepage='https://github.com/switchablenorms/DeepFashion2'),
- keypoint_info=dict({
- 0:
- dict(name='sss_kpt1', id=0, color=[255, 128, 0], type='', swap=''),
- 1:
- dict(
- name='sss_kpt2',
- id=1,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt6'),
- 2:
- dict(
- name='sss_kpt3',
- id=2,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt5'),
- 3:
- dict(name='sss_kpt4', id=3, color=[255, 128, 0], type='', swap=''),
- 4:
- dict(
- name='sss_kpt5',
- id=4,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt3'),
- 5:
- dict(
- name='sss_kpt6',
- id=5,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt2'),
- 6:
- dict(
- name='sss_kpt7',
- id=6,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt25'),
- 7:
- dict(
- name='sss_kpt8',
- id=7,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt24'),
- 8:
- dict(
- name='sss_kpt9',
- id=8,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt23'),
- 9:
- dict(
- name='sss_kpt10',
- id=9,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt22'),
- 10:
- dict(
- name='sss_kpt11',
- id=10,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt21'),
- 11:
- dict(
- name='sss_kpt12',
- id=11,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt20'),
- 12:
- dict(
- name='sss_kpt13',
- id=12,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt19'),
- 13:
- dict(
- name='sss_kpt14',
- id=13,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt18'),
- 14:
- dict(
- name='sss_kpt15',
- id=14,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt17'),
- 15:
- dict(name='sss_kpt16', id=15, color=[255, 128, 0], type='', swap=''),
- 16:
- dict(
- name='sss_kpt17',
- id=16,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt15'),
- 17:
- dict(
- name='sss_kpt18',
- id=17,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt14'),
- 18:
- dict(
- name='sss_kpt19',
- id=18,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt13'),
- 19:
- dict(
- name='sss_kpt20',
- id=19,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt12'),
- 20:
- dict(
- name='sss_kpt21',
- id=20,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt11'),
- 21:
- dict(
- name='sss_kpt22',
- id=21,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt10'),
- 22:
- dict(
- name='sss_kpt23',
- id=22,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt9'),
- 23:
- dict(
- name='sss_kpt24',
- id=23,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt8'),
- 24:
- dict(
- name='sss_kpt25',
- id=24,
- color=[255, 128, 0],
- type='',
- swap='sss_kpt7'),
- 25:
- dict(name='lss_kpt1', id=25, color=[255, 0, 128], type='', swap=''),
- 26:
- dict(
- name='lss_kpt2',
- id=26,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt6'),
- 27:
- dict(
- name='lss_kpt3',
- id=27,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt5'),
- 28:
- dict(name='lss_kpt4', id=28, color=[255, 0, 128], type='', swap=''),
- 29:
- dict(
- name='lss_kpt5',
- id=29,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt3'),
- 30:
- dict(
- name='lss_kpt6',
- id=30,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt2'),
- 31:
- dict(
- name='lss_kpt7',
- id=31,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt33'),
- 32:
- dict(
- name='lss_kpt8',
- id=32,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt32'),
- 33:
- dict(
- name='lss_kpt9',
- id=33,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt31'),
- 34:
- dict(
- name='lss_kpt10',
- id=34,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt30'),
- 35:
- dict(
- name='lss_kpt11',
- id=35,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt29'),
- 36:
- dict(
- name='lss_kpt12',
- id=36,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt28'),
- 37:
- dict(
- name='lss_kpt13',
- id=37,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt27'),
- 38:
- dict(
- name='lss_kpt14',
- id=38,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt26'),
- 39:
- dict(
- name='lss_kpt15',
- id=39,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt25'),
- 40:
- dict(
- name='lss_kpt16',
- id=40,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt24'),
- 41:
- dict(
- name='lss_kpt17',
- id=41,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt23'),
- 42:
- dict(
- name='lss_kpt18',
- id=42,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt22'),
- 43:
- dict(
- name='lss_kpt19',
- id=43,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt21'),
- 44:
- dict(name='lss_kpt20', id=44, color=[255, 0, 128], type='', swap=''),
- 45:
- dict(
- name='lss_kpt21',
- id=45,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt19'),
- 46:
- dict(
- name='lss_kpt22',
- id=46,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt18'),
- 47:
- dict(
- name='lss_kpt23',
- id=47,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt17'),
- 48:
- dict(
- name='lss_kpt24',
- id=48,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt16'),
- 49:
- dict(
- name='lss_kpt25',
- id=49,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt15'),
- 50:
- dict(
- name='lss_kpt26',
- id=50,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt14'),
- 51:
- dict(
- name='lss_kpt27',
- id=51,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt13'),
- 52:
- dict(
- name='lss_kpt28',
- id=52,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt12'),
- 53:
- dict(
- name='lss_kpt29',
- id=53,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt11'),
- 54:
- dict(
- name='lss_kpt30',
- id=54,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt10'),
- 55:
- dict(
- name='lss_kpt31',
- id=55,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt9'),
- 56:
- dict(
- name='lss_kpt32',
- id=56,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt8'),
- 57:
- dict(
- name='lss_kpt33',
- id=57,
- color=[255, 0, 128],
- type='',
- swap='lss_kpt7'),
- 58:
- dict(name='sso_kpt1', id=58, color=[128, 0, 255], type='', swap=''),
- 59:
- dict(
- name='sso_kpt2',
- id=59,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt26'),
- 60:
- dict(
- name='sso_kpt3',
- id=60,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt5'),
- 61:
- dict(
- name='sso_kpt4',
- id=61,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt6'),
- 62:
- dict(
- name='sso_kpt5',
- id=62,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt3'),
- 63:
- dict(
- name='sso_kpt6',
- id=63,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt4'),
- 64:
- dict(
- name='sso_kpt7',
- id=64,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt25'),
- 65:
- dict(
- name='sso_kpt8',
- id=65,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt24'),
- 66:
- dict(
- name='sso_kpt9',
- id=66,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt23'),
- 67:
- dict(
- name='sso_kpt10',
- id=67,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt22'),
- 68:
- dict(
- name='sso_kpt11',
- id=68,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt21'),
- 69:
- dict(
- name='sso_kpt12',
- id=69,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt20'),
- 70:
- dict(
- name='sso_kpt13',
- id=70,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt19'),
- 71:
- dict(
- name='sso_kpt14',
- id=71,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt18'),
- 72:
- dict(
- name='sso_kpt15',
- id=72,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt17'),
- 73:
- dict(
- name='sso_kpt16',
- id=73,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt29'),
- 74:
- dict(
- name='sso_kpt17',
- id=74,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt15'),
- 75:
- dict(
- name='sso_kpt18',
- id=75,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt14'),
- 76:
- dict(
- name='sso_kpt19',
- id=76,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt13'),
- 77:
- dict(
- name='sso_kpt20',
- id=77,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt12'),
- 78:
- dict(
- name='sso_kpt21',
- id=78,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt11'),
- 79:
- dict(
- name='sso_kpt22',
- id=79,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt10'),
- 80:
- dict(
- name='sso_kpt23',
- id=80,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt9'),
- 81:
- dict(
- name='sso_kpt24',
- id=81,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt8'),
- 82:
- dict(
- name='sso_kpt25',
- id=82,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt7'),
- 83:
- dict(
- name='sso_kpt26',
- id=83,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt2'),
- 84:
- dict(
- name='sso_kpt27',
- id=84,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt30'),
- 85:
- dict(
- name='sso_kpt28',
- id=85,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt31'),
- 86:
- dict(
- name='sso_kpt29',
- id=86,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt16'),
- 87:
- dict(
- name='sso_kpt30',
- id=87,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt27'),
- 88:
- dict(
- name='sso_kpt31',
- id=88,
- color=[128, 0, 255],
- type='',
- swap='sso_kpt28'),
- 89:
- dict(name='lso_kpt1', id=89, color=[0, 128, 255], type='', swap=''),
- 90:
- dict(
- name='lso_kpt2',
- id=90,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt6'),
- 91:
- dict(
- name='lso_kpt3',
- id=91,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt5'),
- 92:
- dict(
- name='lso_kpt4',
- id=92,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt34'),
- 93:
- dict(
- name='lso_kpt5',
- id=93,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt3'),
- 94:
- dict(
- name='lso_kpt6',
- id=94,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt2'),
- 95:
- dict(
- name='lso_kpt7',
- id=95,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt33'),
- 96:
- dict(
- name='lso_kpt8',
- id=96,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt32'),
- 97:
- dict(
- name='lso_kpt9',
- id=97,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt31'),
- 98:
- dict(
- name='lso_kpt10',
- id=98,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt30'),
- 99:
- dict(
- name='lso_kpt11',
- id=99,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt29'),
- 100:
- dict(
- name='lso_kpt12',
- id=100,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt28'),
- 101:
- dict(
- name='lso_kpt13',
- id=101,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt27'),
- 102:
- dict(
- name='lso_kpt14',
- id=102,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt26'),
- 103:
- dict(
- name='lso_kpt15',
- id=103,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt25'),
- 104:
- dict(
- name='lso_kpt16',
- id=104,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt24'),
- 105:
- dict(
- name='lso_kpt17',
- id=105,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt23'),
- 106:
- dict(
- name='lso_kpt18',
- id=106,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt22'),
- 107:
- dict(
- name='lso_kpt19',
- id=107,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt21'),
- 108:
- dict(
- name='lso_kpt20',
- id=108,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt37'),
- 109:
- dict(
- name='lso_kpt21',
- id=109,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt19'),
- 110:
- dict(
- name='lso_kpt22',
- id=110,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt18'),
- 111:
- dict(
- name='lso_kpt23',
- id=111,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt17'),
- 112:
- dict(
- name='lso_kpt24',
- id=112,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt16'),
- 113:
- dict(
- name='lso_kpt25',
- id=113,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt15'),
- 114:
- dict(
- name='lso_kpt26',
- id=114,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt14'),
- 115:
- dict(
- name='lso_kpt27',
- id=115,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt13'),
- 116:
- dict(
- name='lso_kpt28',
- id=116,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt12'),
- 117:
- dict(
- name='lso_kpt29',
- id=117,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt11'),
- 118:
- dict(
- name='lso_kpt30',
- id=118,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt10'),
- 119:
- dict(
- name='lso_kpt31',
- id=119,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt9'),
- 120:
- dict(
- name='lso_kpt32',
- id=120,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt8'),
- 121:
- dict(
- name='lso_kpt33',
- id=121,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt7'),
- 122:
- dict(
- name='lso_kpt34',
- id=122,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt4'),
- 123:
- dict(
- name='lso_kpt35',
- id=123,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt38'),
- 124:
- dict(
- name='lso_kpt36',
- id=124,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt39'),
- 125:
- dict(
- name='lso_kpt37',
- id=125,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt20'),
- 126:
- dict(
- name='lso_kpt38',
- id=126,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt35'),
- 127:
- dict(
- name='lso_kpt39',
- id=127,
- color=[0, 128, 255],
- type='',
- swap='lso_kpt36'),
- 128:
- dict(name='vest_kpt1', id=128, color=[0, 128, 128], type='', swap=''),
- 129:
- dict(
- name='vest_kpt2',
- id=129,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt6'),
- 130:
- dict(
- name='vest_kpt3',
- id=130,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt5'),
- 131:
- dict(name='vest_kpt4', id=131, color=[0, 128, 128], type='', swap=''),
- 132:
- dict(
- name='vest_kpt5',
- id=132,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt3'),
- 133:
- dict(
- name='vest_kpt6',
- id=133,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt2'),
- 134:
- dict(
- name='vest_kpt7',
- id=134,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt15'),
- 135:
- dict(
- name='vest_kpt8',
- id=135,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt14'),
- 136:
- dict(
- name='vest_kpt9',
- id=136,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt13'),
- 137:
- dict(
- name='vest_kpt10',
- id=137,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt12'),
- 138:
- dict(name='vest_kpt11', id=138, color=[0, 128, 128], type='', swap=''),
- 139:
- dict(
- name='vest_kpt12',
- id=139,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt10'),
- 140:
- dict(name='vest_kpt13', id=140, color=[0, 128, 128], type='', swap=''),
- 141:
- dict(
- name='vest_kpt14',
- id=141,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt8'),
- 142:
- dict(
- name='vest_kpt15',
- id=142,
- color=[0, 128, 128],
- type='',
- swap='vest_kpt7'),
- 143:
- dict(name='sling_kpt1', id=143, color=[0, 0, 128], type='', swap=''),
- 144:
- dict(
- name='sling_kpt2',
- id=144,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt6'),
- 145:
- dict(
- name='sling_kpt3',
- id=145,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt5'),
- 146:
- dict(name='sling_kpt4', id=146, color=[0, 0, 128], type='', swap=''),
- 147:
- dict(
- name='sling_kpt5',
- id=147,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt3'),
- 148:
- dict(
- name='sling_kpt6',
- id=148,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt2'),
- 149:
- dict(
- name='sling_kpt7',
- id=149,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt15'),
- 150:
- dict(
- name='sling_kpt8',
- id=150,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt14'),
- 151:
- dict(
- name='sling_kpt9',
- id=151,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt13'),
- 152:
- dict(
- name='sling_kpt10',
- id=152,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt12'),
- 153:
- dict(name='sling_kpt11', id=153, color=[0, 0, 128], type='', swap=''),
- 154:
- dict(
- name='sling_kpt12',
- id=154,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt10'),
- 155:
- dict(
- name='sling_kpt13',
- id=155,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt9'),
- 156:
- dict(
- name='sling_kpt14',
- id=156,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt8'),
- 157:
- dict(
- name='sling_kpt15',
- id=157,
- color=[0, 0, 128],
- type='',
- swap='sling_kpt7'),
- 158:
- dict(
- name='shorts_kpt1',
- id=158,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt3'),
- 159:
- dict(
- name='shorts_kpt2',
- id=159,
- color=[128, 128, 128],
- type='',
- swap=''),
- 160:
- dict(
- name='shorts_kpt3',
- id=160,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt1'),
- 161:
- dict(
- name='shorts_kpt4',
- id=161,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt10'),
- 162:
- dict(
- name='shorts_kpt5',
- id=162,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt9'),
- 163:
- dict(
- name='shorts_kpt6',
- id=163,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt8'),
- 164:
- dict(
- name='shorts_kpt7',
- id=164,
- color=[128, 128, 128],
- type='',
- swap=''),
- 165:
- dict(
- name='shorts_kpt8',
- id=165,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt6'),
- 166:
- dict(
- name='shorts_kpt9',
- id=166,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt5'),
- 167:
- dict(
- name='shorts_kpt10',
- id=167,
- color=[128, 128, 128],
- type='',
- swap='shorts_kpt4'),
- 168:
- dict(
- name='trousers_kpt1',
- id=168,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt3'),
- 169:
- dict(
- name='trousers_kpt2',
- id=169,
- color=[128, 0, 128],
- type='',
- swap=''),
- 170:
- dict(
- name='trousers_kpt3',
- id=170,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt1'),
- 171:
- dict(
- name='trousers_kpt4',
- id=171,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt14'),
- 172:
- dict(
- name='trousers_kpt5',
- id=172,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt13'),
- 173:
- dict(
- name='trousers_kpt6',
- id=173,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt12'),
- 174:
- dict(
- name='trousers_kpt7',
- id=174,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt11'),
- 175:
- dict(
- name='trousers_kpt8',
- id=175,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt10'),
- 176:
- dict(
- name='trousers_kpt9',
- id=176,
- color=[128, 0, 128],
- type='',
- swap=''),
- 177:
- dict(
- name='trousers_kpt10',
- id=177,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt8'),
- 178:
- dict(
- name='trousers_kpt11',
- id=178,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt7'),
- 179:
- dict(
- name='trousers_kpt12',
- id=179,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt6'),
- 180:
- dict(
- name='trousers_kpt13',
- id=180,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt5'),
- 181:
- dict(
- name='trousers_kpt14',
- id=181,
- color=[128, 0, 128],
- type='',
- swap='trousers_kpt4'),
- 182:
- dict(
- name='skirt_kpt1',
- id=182,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt3'),
- 183:
- dict(
- name='skirt_kpt2', id=183, color=[64, 128, 128], type='', swap=''),
- 184:
- dict(
- name='skirt_kpt3',
- id=184,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt1'),
- 185:
- dict(
- name='skirt_kpt4',
- id=185,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt8'),
- 186:
- dict(
- name='skirt_kpt5',
- id=186,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt7'),
- 187:
- dict(
- name='skirt_kpt6', id=187, color=[64, 128, 128], type='', swap=''),
- 188:
- dict(
- name='skirt_kpt7',
- id=188,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt5'),
- 189:
- dict(
- name='skirt_kpt8',
- id=189,
- color=[64, 128, 128],
- type='',
- swap='skirt_kpt4'),
- 190:
- dict(name='ssd_kpt1', id=190, color=[64, 64, 128], type='', swap=''),
- 191:
- dict(
- name='ssd_kpt2',
- id=191,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt6'),
- 192:
- dict(
- name='ssd_kpt3',
- id=192,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt5'),
- 193:
- dict(name='ssd_kpt4', id=193, color=[64, 64, 128], type='', swap=''),
- 194:
- dict(
- name='ssd_kpt5',
- id=194,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt3'),
- 195:
- dict(
- name='ssd_kpt6',
- id=195,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt2'),
- 196:
- dict(
- name='ssd_kpt7',
- id=196,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt29'),
- 197:
- dict(
- name='ssd_kpt8',
- id=197,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt28'),
- 198:
- dict(
- name='ssd_kpt9',
- id=198,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt27'),
- 199:
- dict(
- name='ssd_kpt10',
- id=199,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt26'),
- 200:
- dict(
- name='ssd_kpt11',
- id=200,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt25'),
- 201:
- dict(
- name='ssd_kpt12',
- id=201,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt24'),
- 202:
- dict(
- name='ssd_kpt13',
- id=202,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt23'),
- 203:
- dict(
- name='ssd_kpt14',
- id=203,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt22'),
- 204:
- dict(
- name='ssd_kpt15',
- id=204,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt21'),
- 205:
- dict(
- name='ssd_kpt16',
- id=205,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt20'),
- 206:
- dict(
- name='ssd_kpt17',
- id=206,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt19'),
- 207:
- dict(name='ssd_kpt18', id=207, color=[64, 64, 128], type='', swap=''),
- 208:
- dict(
- name='ssd_kpt19',
- id=208,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt17'),
- 209:
- dict(
- name='ssd_kpt20',
- id=209,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt16'),
- 210:
- dict(
- name='ssd_kpt21',
- id=210,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt15'),
- 211:
- dict(
- name='ssd_kpt22',
- id=211,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt14'),
- 212:
- dict(
- name='ssd_kpt23',
- id=212,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt13'),
- 213:
- dict(
- name='ssd_kpt24',
- id=213,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt12'),
- 214:
- dict(
- name='ssd_kpt25',
- id=214,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt11'),
- 215:
- dict(
- name='ssd_kpt26',
- id=215,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt10'),
- 216:
- dict(
- name='ssd_kpt27',
- id=216,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt9'),
- 217:
- dict(
- name='ssd_kpt28',
- id=217,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt8'),
- 218:
- dict(
- name='ssd_kpt29',
- id=218,
- color=[64, 64, 128],
- type='',
- swap='ssd_kpt7'),
- 219:
- dict(name='lsd_kpt1', id=219, color=[128, 64, 0], type='', swap=''),
- 220:
- dict(
- name='lsd_kpt2',
- id=220,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt6'),
- 221:
- dict(
- name='lsd_kpt3',
- id=221,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt5'),
- 222:
- dict(name='lsd_kpt4', id=222, color=[128, 64, 0], type='', swap=''),
- 223:
- dict(
- name='lsd_kpt5',
- id=223,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt3'),
- 224:
- dict(
- name='lsd_kpt6',
- id=224,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt2'),
- 225:
- dict(
- name='lsd_kpt7',
- id=225,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt37'),
- 226:
- dict(
- name='lsd_kpt8',
- id=226,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt36'),
- 227:
- dict(
- name='lsd_kpt9',
- id=227,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt35'),
- 228:
- dict(
- name='lsd_kpt10',
- id=228,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt34'),
- 229:
- dict(
- name='lsd_kpt11',
- id=229,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt33'),
- 230:
- dict(
- name='lsd_kpt12',
- id=230,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt32'),
- 231:
- dict(
- name='lsd_kpt13',
- id=231,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt31'),
- 232:
- dict(
- name='lsd_kpt14',
- id=232,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt30'),
- 233:
- dict(
- name='lsd_kpt15',
- id=233,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt29'),
- 234:
- dict(
- name='lsd_kpt16',
- id=234,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt28'),
- 235:
- dict(
- name='lsd_kpt17',
- id=235,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt27'),
- 236:
- dict(
- name='lsd_kpt18',
- id=236,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt26'),
- 237:
- dict(
- name='lsd_kpt19',
- id=237,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt25'),
- 238:
- dict(
- name='lsd_kpt20',
- id=238,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt24'),
- 239:
- dict(
- name='lsd_kpt21',
- id=239,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt23'),
- 240:
- dict(name='lsd_kpt22', id=240, color=[128, 64, 0], type='', swap=''),
- 241:
- dict(
- name='lsd_kpt23',
- id=241,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt21'),
- 242:
- dict(
- name='lsd_kpt24',
- id=242,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt20'),
- 243:
- dict(
- name='lsd_kpt25',
- id=243,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt19'),
- 244:
- dict(
- name='lsd_kpt26',
- id=244,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt18'),
- 245:
- dict(
- name='lsd_kpt27',
- id=245,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt17'),
- 246:
- dict(
- name='lsd_kpt28',
- id=246,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt16'),
- 247:
- dict(
- name='lsd_kpt29',
- id=247,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt15'),
- 248:
- dict(
- name='lsd_kpt30',
- id=248,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt14'),
- 249:
- dict(
- name='lsd_kpt31',
- id=249,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt13'),
- 250:
- dict(
- name='lsd_kpt32',
- id=250,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt12'),
- 251:
- dict(
- name='lsd_kpt33',
- id=251,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt11'),
- 252:
- dict(
- name='lsd_kpt34',
- id=252,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt10'),
- 253:
- dict(
- name='lsd_kpt35',
- id=253,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt9'),
- 254:
- dict(
- name='lsd_kpt36',
- id=254,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt8'),
- 255:
- dict(
- name='lsd_kpt37',
- id=255,
- color=[128, 64, 0],
- type='',
- swap='lsd_kpt7'),
- 256:
- dict(name='vd_kpt1', id=256, color=[128, 64, 255], type='', swap=''),
- 257:
- dict(
- name='vd_kpt2',
- id=257,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt6'),
- 258:
- dict(
- name='vd_kpt3',
- id=258,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt5'),
- 259:
- dict(name='vd_kpt4', id=259, color=[128, 64, 255], type='', swap=''),
- 260:
- dict(
- name='vd_kpt5',
- id=260,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt3'),
- 261:
- dict(
- name='vd_kpt6',
- id=261,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt2'),
- 262:
- dict(
- name='vd_kpt7',
- id=262,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt19'),
- 263:
- dict(
- name='vd_kpt8',
- id=263,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt18'),
- 264:
- dict(
- name='vd_kpt9',
- id=264,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt17'),
- 265:
- dict(
- name='vd_kpt10',
- id=265,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt16'),
- 266:
- dict(
- name='vd_kpt11',
- id=266,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt15'),
- 267:
- dict(
- name='vd_kpt12',
- id=267,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt14'),
- 268:
- dict(name='vd_kpt13', id=268, color=[128, 64, 255], type='', swap=''),
- 269:
- dict(
- name='vd_kpt14',
- id=269,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt12'),
- 270:
- dict(
- name='vd_kpt15',
- id=270,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt11'),
- 271:
- dict(
- name='vd_kpt16',
- id=271,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt10'),
- 272:
- dict(
- name='vd_kpt17',
- id=272,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt9'),
- 273:
- dict(
- name='vd_kpt18',
- id=273,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt8'),
- 274:
- dict(
- name='vd_kpt19',
- id=274,
- color=[128, 64, 255],
- type='',
- swap='vd_kpt7'),
- 275:
- dict(name='sd_kpt1', id=275, color=[128, 64, 0], type='', swap=''),
- 276:
- dict(
- name='sd_kpt2',
- id=276,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt6'),
- 277:
- dict(
- name='sd_kpt3',
- id=277,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt5'),
- 278:
- dict(name='sd_kpt4', id=278, color=[128, 64, 0], type='', swap=''),
- 279:
- dict(
- name='sd_kpt5',
- id=279,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt3'),
- 280:
- dict(
- name='sd_kpt6',
- id=280,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt2'),
- 281:
- dict(
- name='sd_kpt7',
- id=281,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt19'),
- 282:
- dict(
- name='sd_kpt8',
- id=282,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt18'),
- 283:
- dict(
- name='sd_kpt9',
- id=283,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt17'),
- 284:
- dict(
- name='sd_kpt10',
- id=284,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt16'),
- 285:
- dict(
- name='sd_kpt11',
- id=285,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt15'),
- 286:
- dict(
- name='sd_kpt12',
- id=286,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt14'),
- 287:
- dict(name='sd_kpt13', id=287, color=[128, 64, 0], type='', swap=''),
- 288:
- dict(
- name='sd_kpt14',
- id=288,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt12'),
- 289:
- dict(
- name='sd_kpt15',
- id=289,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt11'),
- 290:
- dict(
- name='sd_kpt16',
- id=290,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt10'),
- 291:
- dict(
- name='sd_kpt17',
- id=291,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt9'),
- 292:
- dict(
- name='sd_kpt18',
- id=292,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt8'),
- 293:
- dict(
- name='sd_kpt19',
- id=293,
- color=[128, 64, 0],
- type='',
- swap='sd_kpt7')
- }),
- skeleton_info=dict({
- 0:
- dict(link=('sss_kpt1', 'sss_kpt2'), id=0, color=[255, 128, 0]),
- 1:
- dict(link=('sss_kpt2', 'sss_kpt7'), id=1, color=[255, 128, 0]),
- 2:
- dict(link=('sss_kpt7', 'sss_kpt8'), id=2, color=[255, 128, 0]),
- 3:
- dict(link=('sss_kpt8', 'sss_kpt9'), id=3, color=[255, 128, 0]),
- 4:
- dict(link=('sss_kpt9', 'sss_kpt10'), id=4, color=[255, 128, 0]),
- 5:
- dict(link=('sss_kpt10', 'sss_kpt11'), id=5, color=[255, 128, 0]),
- 6:
- dict(link=('sss_kpt11', 'sss_kpt12'), id=6, color=[255, 128, 0]),
- 7:
- dict(link=('sss_kpt12', 'sss_kpt13'), id=7, color=[255, 128, 0]),
- 8:
- dict(link=('sss_kpt13', 'sss_kpt14'), id=8, color=[255, 128, 0]),
- 9:
- dict(link=('sss_kpt14', 'sss_kpt15'), id=9, color=[255, 128, 0]),
- 10:
- dict(link=('sss_kpt15', 'sss_kpt16'), id=10, color=[255, 128, 0]),
- 11:
- dict(link=('sss_kpt16', 'sss_kpt17'), id=11, color=[255, 128, 0]),
- 12:
- dict(link=('sss_kpt17', 'sss_kpt18'), id=12, color=[255, 128, 0]),
- 13:
- dict(link=('sss_kpt18', 'sss_kpt19'), id=13, color=[255, 128, 0]),
- 14:
- dict(link=('sss_kpt19', 'sss_kpt20'), id=14, color=[255, 128, 0]),
- 15:
- dict(link=('sss_kpt20', 'sss_kpt21'), id=15, color=[255, 128, 0]),
- 16:
- dict(link=('sss_kpt21', 'sss_kpt22'), id=16, color=[255, 128, 0]),
- 17:
- dict(link=('sss_kpt22', 'sss_kpt23'), id=17, color=[255, 128, 0]),
- 18:
- dict(link=('sss_kpt23', 'sss_kpt24'), id=18, color=[255, 128, 0]),
- 19:
- dict(link=('sss_kpt24', 'sss_kpt25'), id=19, color=[255, 128, 0]),
- 20:
- dict(link=('sss_kpt25', 'sss_kpt6'), id=20, color=[255, 128, 0]),
- 21:
- dict(link=('sss_kpt6', 'sss_kpt1'), id=21, color=[255, 128, 0]),
- 22:
- dict(link=('sss_kpt2', 'sss_kpt3'), id=22, color=[255, 128, 0]),
- 23:
- dict(link=('sss_kpt3', 'sss_kpt4'), id=23, color=[255, 128, 0]),
- 24:
- dict(link=('sss_kpt4', 'sss_kpt5'), id=24, color=[255, 128, 0]),
- 25:
- dict(link=('sss_kpt5', 'sss_kpt6'), id=25, color=[255, 128, 0]),
- 26:
- dict(link=('lss_kpt1', 'lss_kpt2'), id=26, color=[255, 0, 128]),
- 27:
- dict(link=('lss_kpt2', 'lss_kpt7'), id=27, color=[255, 0, 128]),
- 28:
- dict(link=('lss_kpt7', 'lss_kpt8'), id=28, color=[255, 0, 128]),
- 29:
- dict(link=('lss_kpt8', 'lss_kpt9'), id=29, color=[255, 0, 128]),
- 30:
- dict(link=('lss_kpt9', 'lss_kpt10'), id=30, color=[255, 0, 128]),
- 31:
- dict(link=('lss_kpt10', 'lss_kpt11'), id=31, color=[255, 0, 128]),
- 32:
- dict(link=('lss_kpt11', 'lss_kpt12'), id=32, color=[255, 0, 128]),
- 33:
- dict(link=('lss_kpt12', 'lss_kpt13'), id=33, color=[255, 0, 128]),
- 34:
- dict(link=('lss_kpt13', 'lss_kpt14'), id=34, color=[255, 0, 128]),
- 35:
- dict(link=('lss_kpt14', 'lss_kpt15'), id=35, color=[255, 0, 128]),
- 36:
- dict(link=('lss_kpt15', 'lss_kpt16'), id=36, color=[255, 0, 128]),
- 37:
- dict(link=('lss_kpt16', 'lss_kpt17'), id=37, color=[255, 0, 128]),
- 38:
- dict(link=('lss_kpt17', 'lss_kpt18'), id=38, color=[255, 0, 128]),
- 39:
- dict(link=('lss_kpt18', 'lss_kpt19'), id=39, color=[255, 0, 128]),
- 40:
- dict(link=('lss_kpt19', 'lss_kpt20'), id=40, color=[255, 0, 128]),
- 41:
- dict(link=('lss_kpt20', 'lss_kpt21'), id=41, color=[255, 0, 128]),
- 42:
- dict(link=('lss_kpt21', 'lss_kpt22'), id=42, color=[255, 0, 128]),
- 43:
- dict(link=('lss_kpt22', 'lss_kpt23'), id=43, color=[255, 0, 128]),
- 44:
- dict(link=('lss_kpt23', 'lss_kpt24'), id=44, color=[255, 0, 128]),
- 45:
- dict(link=('lss_kpt24', 'lss_kpt25'), id=45, color=[255, 0, 128]),
- 46:
- dict(link=('lss_kpt25', 'lss_kpt26'), id=46, color=[255, 0, 128]),
- 47:
- dict(link=('lss_kpt26', 'lss_kpt27'), id=47, color=[255, 0, 128]),
- 48:
- dict(link=('lss_kpt27', 'lss_kpt28'), id=48, color=[255, 0, 128]),
- 49:
- dict(link=('lss_kpt28', 'lss_kpt29'), id=49, color=[255, 0, 128]),
- 50:
- dict(link=('lss_kpt29', 'lss_kpt30'), id=50, color=[255, 0, 128]),
- 51:
- dict(link=('lss_kpt30', 'lss_kpt31'), id=51, color=[255, 0, 128]),
- 52:
- dict(link=('lss_kpt31', 'lss_kpt32'), id=52, color=[255, 0, 128]),
- 53:
- dict(link=('lss_kpt32', 'lss_kpt33'), id=53, color=[255, 0, 128]),
- 54:
- dict(link=('lss_kpt33', 'lss_kpt6'), id=54, color=[255, 0, 128]),
- 55:
- dict(link=('lss_kpt6', 'lss_kpt5'), id=55, color=[255, 0, 128]),
- 56:
- dict(link=('lss_kpt5', 'lss_kpt4'), id=56, color=[255, 0, 128]),
- 57:
- dict(link=('lss_kpt4', 'lss_kpt3'), id=57, color=[255, 0, 128]),
- 58:
- dict(link=('lss_kpt3', 'lss_kpt2'), id=58, color=[255, 0, 128]),
- 59:
- dict(link=('lss_kpt6', 'lss_kpt1'), id=59, color=[255, 0, 128]),
- 60:
- dict(link=('sso_kpt1', 'sso_kpt4'), id=60, color=[128, 0, 255]),
- 61:
- dict(link=('sso_kpt4', 'sso_kpt7'), id=61, color=[128, 0, 255]),
- 62:
- dict(link=('sso_kpt7', 'sso_kpt8'), id=62, color=[128, 0, 255]),
- 63:
- dict(link=('sso_kpt8', 'sso_kpt9'), id=63, color=[128, 0, 255]),
- 64:
- dict(link=('sso_kpt9', 'sso_kpt10'), id=64, color=[128, 0, 255]),
- 65:
- dict(link=('sso_kpt10', 'sso_kpt11'), id=65, color=[128, 0, 255]),
- 66:
- dict(link=('sso_kpt11', 'sso_kpt12'), id=66, color=[128, 0, 255]),
- 67:
- dict(link=('sso_kpt12', 'sso_kpt13'), id=67, color=[128, 0, 255]),
- 68:
- dict(link=('sso_kpt13', 'sso_kpt14'), id=68, color=[128, 0, 255]),
- 69:
- dict(link=('sso_kpt14', 'sso_kpt15'), id=69, color=[128, 0, 255]),
- 70:
- dict(link=('sso_kpt15', 'sso_kpt16'), id=70, color=[128, 0, 255]),
- 71:
- dict(link=('sso_kpt16', 'sso_kpt31'), id=71, color=[128, 0, 255]),
- 72:
- dict(link=('sso_kpt31', 'sso_kpt30'), id=72, color=[128, 0, 255]),
- 73:
- dict(link=('sso_kpt30', 'sso_kpt2'), id=73, color=[128, 0, 255]),
- 74:
- dict(link=('sso_kpt2', 'sso_kpt3'), id=74, color=[128, 0, 255]),
- 75:
- dict(link=('sso_kpt3', 'sso_kpt4'), id=75, color=[128, 0, 255]),
- 76:
- dict(link=('sso_kpt1', 'sso_kpt6'), id=76, color=[128, 0, 255]),
- 77:
- dict(link=('sso_kpt6', 'sso_kpt25'), id=77, color=[128, 0, 255]),
- 78:
- dict(link=('sso_kpt25', 'sso_kpt24'), id=78, color=[128, 0, 255]),
- 79:
- dict(link=('sso_kpt24', 'sso_kpt23'), id=79, color=[128, 0, 255]),
- 80:
- dict(link=('sso_kpt23', 'sso_kpt22'), id=80, color=[128, 0, 255]),
- 81:
- dict(link=('sso_kpt22', 'sso_kpt21'), id=81, color=[128, 0, 255]),
- 82:
- dict(link=('sso_kpt21', 'sso_kpt20'), id=82, color=[128, 0, 255]),
- 83:
- dict(link=('sso_kpt20', 'sso_kpt19'), id=83, color=[128, 0, 255]),
- 84:
- dict(link=('sso_kpt19', 'sso_kpt18'), id=84, color=[128, 0, 255]),
- 85:
- dict(link=('sso_kpt18', 'sso_kpt17'), id=85, color=[128, 0, 255]),
- 86:
- dict(link=('sso_kpt17', 'sso_kpt29'), id=86, color=[128, 0, 255]),
- 87:
- dict(link=('sso_kpt29', 'sso_kpt28'), id=87, color=[128, 0, 255]),
- 88:
- dict(link=('sso_kpt28', 'sso_kpt27'), id=88, color=[128, 0, 255]),
- 89:
- dict(link=('sso_kpt27', 'sso_kpt26'), id=89, color=[128, 0, 255]),
- 90:
- dict(link=('sso_kpt26', 'sso_kpt5'), id=90, color=[128, 0, 255]),
- 91:
- dict(link=('sso_kpt5', 'sso_kpt6'), id=91, color=[128, 0, 255]),
- 92:
- dict(link=('lso_kpt1', 'lso_kpt2'), id=92, color=[0, 128, 255]),
- 93:
- dict(link=('lso_kpt2', 'lso_kpt7'), id=93, color=[0, 128, 255]),
- 94:
- dict(link=('lso_kpt7', 'lso_kpt8'), id=94, color=[0, 128, 255]),
- 95:
- dict(link=('lso_kpt8', 'lso_kpt9'), id=95, color=[0, 128, 255]),
- 96:
- dict(link=('lso_kpt9', 'lso_kpt10'), id=96, color=[0, 128, 255]),
- 97:
- dict(link=('lso_kpt10', 'lso_kpt11'), id=97, color=[0, 128, 255]),
- 98:
- dict(link=('lso_kpt11', 'lso_kpt12'), id=98, color=[0, 128, 255]),
- 99:
- dict(link=('lso_kpt12', 'lso_kpt13'), id=99, color=[0, 128, 255]),
- 100:
- dict(link=('lso_kpt13', 'lso_kpt14'), id=100, color=[0, 128, 255]),
- 101:
- dict(link=('lso_kpt14', 'lso_kpt15'), id=101, color=[0, 128, 255]),
- 102:
- dict(link=('lso_kpt15', 'lso_kpt16'), id=102, color=[0, 128, 255]),
- 103:
- dict(link=('lso_kpt16', 'lso_kpt17'), id=103, color=[0, 128, 255]),
- 104:
- dict(link=('lso_kpt17', 'lso_kpt18'), id=104, color=[0, 128, 255]),
- 105:
- dict(link=('lso_kpt18', 'lso_kpt19'), id=105, color=[0, 128, 255]),
- 106:
- dict(link=('lso_kpt19', 'lso_kpt20'), id=106, color=[0, 128, 255]),
- 107:
- dict(link=('lso_kpt20', 'lso_kpt39'), id=107, color=[0, 128, 255]),
- 108:
- dict(link=('lso_kpt39', 'lso_kpt38'), id=108, color=[0, 128, 255]),
- 109:
- dict(link=('lso_kpt38', 'lso_kpt4'), id=109, color=[0, 128, 255]),
- 110:
- dict(link=('lso_kpt4', 'lso_kpt3'), id=110, color=[0, 128, 255]),
- 111:
- dict(link=('lso_kpt3', 'lso_kpt2'), id=111, color=[0, 128, 255]),
- 112:
- dict(link=('lso_kpt1', 'lso_kpt6'), id=112, color=[0, 128, 255]),
- 113:
- dict(link=('lso_kpt6', 'lso_kpt33'), id=113, color=[0, 128, 255]),
- 114:
- dict(link=('lso_kpt33', 'lso_kpt32'), id=114, color=[0, 128, 255]),
- 115:
- dict(link=('lso_kpt32', 'lso_kpt31'), id=115, color=[0, 128, 255]),
- 116:
- dict(link=('lso_kpt31', 'lso_kpt30'), id=116, color=[0, 128, 255]),
- 117:
- dict(link=('lso_kpt30', 'lso_kpt29'), id=117, color=[0, 128, 255]),
- 118:
- dict(link=('lso_kpt29', 'lso_kpt28'), id=118, color=[0, 128, 255]),
- 119:
- dict(link=('lso_kpt28', 'lso_kpt27'), id=119, color=[0, 128, 255]),
- 120:
- dict(link=('lso_kpt27', 'lso_kpt26'), id=120, color=[0, 128, 255]),
- 121:
- dict(link=('lso_kpt26', 'lso_kpt25'), id=121, color=[0, 128, 255]),
- 122:
- dict(link=('lso_kpt25', 'lso_kpt24'), id=122, color=[0, 128, 255]),
- 123:
- dict(link=('lso_kpt24', 'lso_kpt23'), id=123, color=[0, 128, 255]),
- 124:
- dict(link=('lso_kpt23', 'lso_kpt22'), id=124, color=[0, 128, 255]),
- 125:
- dict(link=('lso_kpt22', 'lso_kpt21'), id=125, color=[0, 128, 255]),
- 126:
- dict(link=('lso_kpt21', 'lso_kpt37'), id=126, color=[0, 128, 255]),
- 127:
- dict(link=('lso_kpt37', 'lso_kpt36'), id=127, color=[0, 128, 255]),
- 128:
- dict(link=('lso_kpt36', 'lso_kpt35'), id=128, color=[0, 128, 255]),
- 129:
- dict(link=('lso_kpt35', 'lso_kpt34'), id=129, color=[0, 128, 255]),
- 130:
- dict(link=('lso_kpt34', 'lso_kpt5'), id=130, color=[0, 128, 255]),
- 131:
- dict(link=('lso_kpt5', 'lso_kpt6'), id=131, color=[0, 128, 255]),
- 132:
- dict(link=('vest_kpt1', 'vest_kpt2'), id=132, color=[0, 128, 128]),
- 133:
- dict(link=('vest_kpt2', 'vest_kpt7'), id=133, color=[0, 128, 128]),
- 134:
- dict(link=('vest_kpt7', 'vest_kpt8'), id=134, color=[0, 128, 128]),
- 135:
- dict(link=('vest_kpt8', 'vest_kpt9'), id=135, color=[0, 128, 128]),
- 136:
- dict(link=('vest_kpt9', 'vest_kpt10'), id=136, color=[0, 128, 128]),
- 137:
- dict(link=('vest_kpt10', 'vest_kpt11'), id=137, color=[0, 128, 128]),
- 138:
- dict(link=('vest_kpt11', 'vest_kpt12'), id=138, color=[0, 128, 128]),
- 139:
- dict(link=('vest_kpt12', 'vest_kpt13'), id=139, color=[0, 128, 128]),
- 140:
- dict(link=('vest_kpt13', 'vest_kpt14'), id=140, color=[0, 128, 128]),
- 141:
- dict(link=('vest_kpt14', 'vest_kpt15'), id=141, color=[0, 128, 128]),
- 142:
- dict(link=('vest_kpt15', 'vest_kpt6'), id=142, color=[0, 128, 128]),
- 143:
- dict(link=('vest_kpt6', 'vest_kpt1'), id=143, color=[0, 128, 128]),
- 144:
- dict(link=('vest_kpt2', 'vest_kpt3'), id=144, color=[0, 128, 128]),
- 145:
- dict(link=('vest_kpt3', 'vest_kpt4'), id=145, color=[0, 128, 128]),
- 146:
- dict(link=('vest_kpt4', 'vest_kpt5'), id=146, color=[0, 128, 128]),
- 147:
- dict(link=('vest_kpt5', 'vest_kpt6'), id=147, color=[0, 128, 128]),
- 148:
- dict(link=('sling_kpt1', 'sling_kpt2'), id=148, color=[0, 0, 128]),
- 149:
- dict(link=('sling_kpt2', 'sling_kpt8'), id=149, color=[0, 0, 128]),
- 150:
- dict(link=('sling_kpt8', 'sling_kpt9'), id=150, color=[0, 0, 128]),
- 151:
- dict(link=('sling_kpt9', 'sling_kpt10'), id=151, color=[0, 0, 128]),
- 152:
- dict(link=('sling_kpt10', 'sling_kpt11'), id=152, color=[0, 0, 128]),
- 153:
- dict(link=('sling_kpt11', 'sling_kpt12'), id=153, color=[0, 0, 128]),
- 154:
- dict(link=('sling_kpt12', 'sling_kpt13'), id=154, color=[0, 0, 128]),
- 155:
- dict(link=('sling_kpt13', 'sling_kpt14'), id=155, color=[0, 0, 128]),
- 156:
- dict(link=('sling_kpt14', 'sling_kpt6'), id=156, color=[0, 0, 128]),
- 157:
- dict(link=('sling_kpt2', 'sling_kpt7'), id=157, color=[0, 0, 128]),
- 158:
- dict(link=('sling_kpt6', 'sling_kpt15'), id=158, color=[0, 0, 128]),
- 159:
- dict(link=('sling_kpt2', 'sling_kpt3'), id=159, color=[0, 0, 128]),
- 160:
- dict(link=('sling_kpt3', 'sling_kpt4'), id=160, color=[0, 0, 128]),
- 161:
- dict(link=('sling_kpt4', 'sling_kpt5'), id=161, color=[0, 0, 128]),
- 162:
- dict(link=('sling_kpt5', 'sling_kpt6'), id=162, color=[0, 0, 128]),
- 163:
- dict(link=('sling_kpt1', 'sling_kpt6'), id=163, color=[0, 0, 128]),
- 164:
- dict(
- link=('shorts_kpt1', 'shorts_kpt4'), id=164, color=[128, 128,
- 128]),
- 165:
- dict(
- link=('shorts_kpt4', 'shorts_kpt5'), id=165, color=[128, 128,
- 128]),
- 166:
- dict(
- link=('shorts_kpt5', 'shorts_kpt6'), id=166, color=[128, 128,
- 128]),
- 167:
- dict(
- link=('shorts_kpt6', 'shorts_kpt7'), id=167, color=[128, 128,
- 128]),
- 168:
- dict(
- link=('shorts_kpt7', 'shorts_kpt8'), id=168, color=[128, 128,
- 128]),
- 169:
- dict(
- link=('shorts_kpt8', 'shorts_kpt9'), id=169, color=[128, 128,
- 128]),
- 170:
- dict(
- link=('shorts_kpt9', 'shorts_kpt10'),
- id=170,
- color=[128, 128, 128]),
- 171:
- dict(
- link=('shorts_kpt10', 'shorts_kpt3'),
- id=171,
- color=[128, 128, 128]),
- 172:
- dict(
- link=('shorts_kpt3', 'shorts_kpt2'), id=172, color=[128, 128,
- 128]),
- 173:
- dict(
- link=('shorts_kpt2', 'shorts_kpt1'), id=173, color=[128, 128,
- 128]),
- 174:
- dict(
- link=('trousers_kpt1', 'trousers_kpt4'),
- id=174,
- color=[128, 0, 128]),
- 175:
- dict(
- link=('trousers_kpt4', 'trousers_kpt5'),
- id=175,
- color=[128, 0, 128]),
- 176:
- dict(
- link=('trousers_kpt5', 'trousers_kpt6'),
- id=176,
- color=[128, 0, 128]),
- 177:
- dict(
- link=('trousers_kpt6', 'trousers_kpt7'),
- id=177,
- color=[128, 0, 128]),
- 178:
- dict(
- link=('trousers_kpt7', 'trousers_kpt8'),
- id=178,
- color=[128, 0, 128]),
- 179:
- dict(
- link=('trousers_kpt8', 'trousers_kpt9'),
- id=179,
- color=[128, 0, 128]),
- 180:
- dict(
- link=('trousers_kpt9', 'trousers_kpt10'),
- id=180,
- color=[128, 0, 128]),
- 181:
- dict(
- link=('trousers_kpt10', 'trousers_kpt11'),
- id=181,
- color=[128, 0, 128]),
- 182:
- dict(
- link=('trousers_kpt11', 'trousers_kpt12'),
- id=182,
- color=[128, 0, 128]),
- 183:
- dict(
- link=('trousers_kpt12', 'trousers_kpt13'),
- id=183,
- color=[128, 0, 128]),
- 184:
- dict(
- link=('trousers_kpt13', 'trousers_kpt14'),
- id=184,
- color=[128, 0, 128]),
- 185:
- dict(
- link=('trousers_kpt14', 'trousers_kpt3'),
- id=185,
- color=[128, 0, 128]),
- 186:
- dict(
- link=('trousers_kpt3', 'trousers_kpt2'),
- id=186,
- color=[128, 0, 128]),
- 187:
- dict(
- link=('trousers_kpt2', 'trousers_kpt1'),
- id=187,
- color=[128, 0, 128]),
- 188:
- dict(link=('skirt_kpt1', 'skirt_kpt4'), id=188, color=[64, 128, 128]),
- 189:
- dict(link=('skirt_kpt4', 'skirt_kpt5'), id=189, color=[64, 128, 128]),
- 190:
- dict(link=('skirt_kpt5', 'skirt_kpt6'), id=190, color=[64, 128, 128]),
- 191:
- dict(link=('skirt_kpt6', 'skirt_kpt7'), id=191, color=[64, 128, 128]),
- 192:
- dict(link=('skirt_kpt7', 'skirt_kpt8'), id=192, color=[64, 128, 128]),
- 193:
- dict(link=('skirt_kpt8', 'skirt_kpt3'), id=193, color=[64, 128, 128]),
- 194:
- dict(link=('skirt_kpt3', 'skirt_kpt2'), id=194, color=[64, 128, 128]),
- 195:
- dict(link=('skirt_kpt2', 'skirt_kpt1'), id=195, color=[64, 128, 128]),
- 196:
- dict(link=('ssd_kpt1', 'ssd_kpt2'), id=196, color=[64, 64, 128]),
- 197:
- dict(link=('ssd_kpt2', 'ssd_kpt7'), id=197, color=[64, 64, 128]),
- 198:
- dict(link=('ssd_kpt7', 'ssd_kpt8'), id=198, color=[64, 64, 128]),
- 199:
- dict(link=('ssd_kpt8', 'ssd_kpt9'), id=199, color=[64, 64, 128]),
- 200:
- dict(link=('ssd_kpt9', 'ssd_kpt10'), id=200, color=[64, 64, 128]),
- 201:
- dict(link=('ssd_kpt10', 'ssd_kpt11'), id=201, color=[64, 64, 128]),
- 202:
- dict(link=('ssd_kpt11', 'ssd_kpt12'), id=202, color=[64, 64, 128]),
- 203:
- dict(link=('ssd_kpt12', 'ssd_kpt13'), id=203, color=[64, 64, 128]),
- 204:
- dict(link=('ssd_kpt13', 'ssd_kpt14'), id=204, color=[64, 64, 128]),
- 205:
- dict(link=('ssd_kpt14', 'ssd_kpt15'), id=205, color=[64, 64, 128]),
- 206:
- dict(link=('ssd_kpt15', 'ssd_kpt16'), id=206, color=[64, 64, 128]),
- 207:
- dict(link=('ssd_kpt16', 'ssd_kpt17'), id=207, color=[64, 64, 128]),
- 208:
- dict(link=('ssd_kpt17', 'ssd_kpt18'), id=208, color=[64, 64, 128]),
- 209:
- dict(link=('ssd_kpt18', 'ssd_kpt19'), id=209, color=[64, 64, 128]),
- 210:
- dict(link=('ssd_kpt19', 'ssd_kpt20'), id=210, color=[64, 64, 128]),
- 211:
- dict(link=('ssd_kpt20', 'ssd_kpt21'), id=211, color=[64, 64, 128]),
- 212:
- dict(link=('ssd_kpt21', 'ssd_kpt22'), id=212, color=[64, 64, 128]),
- 213:
- dict(link=('ssd_kpt22', 'ssd_kpt23'), id=213, color=[64, 64, 128]),
- 214:
- dict(link=('ssd_kpt23', 'ssd_kpt24'), id=214, color=[64, 64, 128]),
- 215:
- dict(link=('ssd_kpt24', 'ssd_kpt25'), id=215, color=[64, 64, 128]),
- 216:
- dict(link=('ssd_kpt25', 'ssd_kpt26'), id=216, color=[64, 64, 128]),
- 217:
- dict(link=('ssd_kpt26', 'ssd_kpt27'), id=217, color=[64, 64, 128]),
- 218:
- dict(link=('ssd_kpt27', 'ssd_kpt28'), id=218, color=[64, 64, 128]),
- 219:
- dict(link=('ssd_kpt28', 'ssd_kpt29'), id=219, color=[64, 64, 128]),
- 220:
- dict(link=('ssd_kpt29', 'ssd_kpt6'), id=220, color=[64, 64, 128]),
- 221:
- dict(link=('ssd_kpt6', 'ssd_kpt5'), id=221, color=[64, 64, 128]),
- 222:
- dict(link=('ssd_kpt5', 'ssd_kpt4'), id=222, color=[64, 64, 128]),
- 223:
- dict(link=('ssd_kpt4', 'ssd_kpt3'), id=223, color=[64, 64, 128]),
- 224:
- dict(link=('ssd_kpt3', 'ssd_kpt2'), id=224, color=[64, 64, 128]),
- 225:
- dict(link=('ssd_kpt6', 'ssd_kpt1'), id=225, color=[64, 64, 128]),
- 226:
- dict(link=('lsd_kpt1', 'lsd_kpt2'), id=226, color=[128, 64, 0]),
- 227:
-            dict(link=('lsd_kpt2', 'lsd_kpt7'), id=227, color=[128, 64, 0]),
- 228:
- dict(link=('lsd_kpt7', 'lsd_kpt8'), id=228, color=[128, 64, 0]),
- 229:
- dict(link=('lsd_kpt8', 'lsd_kpt9'), id=229, color=[128, 64, 0]),
- 230:
- dict(link=('lsd_kpt9', 'lsd_kpt10'), id=230, color=[128, 64, 0]),
- 231:
- dict(link=('lsd_kpt10', 'lsd_kpt11'), id=231, color=[128, 64, 0]),
- 232:
- dict(link=('lsd_kpt11', 'lsd_kpt12'), id=232, color=[128, 64, 0]),
- 233:
- dict(link=('lsd_kpt12', 'lsd_kpt13'), id=233, color=[128, 64, 0]),
- 234:
- dict(link=('lsd_kpt13', 'lsd_kpt14'), id=234, color=[128, 64, 0]),
- 235:
- dict(link=('lsd_kpt14', 'lsd_kpt15'), id=235, color=[128, 64, 0]),
- 236:
- dict(link=('lsd_kpt15', 'lsd_kpt16'), id=236, color=[128, 64, 0]),
- 237:
- dict(link=('lsd_kpt16', 'lsd_kpt17'), id=237, color=[128, 64, 0]),
- 238:
- dict(link=('lsd_kpt17', 'lsd_kpt18'), id=238, color=[128, 64, 0]),
- 239:
- dict(link=('lsd_kpt18', 'lsd_kpt19'), id=239, color=[128, 64, 0]),
- 240:
- dict(link=('lsd_kpt19', 'lsd_kpt20'), id=240, color=[128, 64, 0]),
- 241:
- dict(link=('lsd_kpt20', 'lsd_kpt21'), id=241, color=[128, 64, 0]),
- 242:
- dict(link=('lsd_kpt21', 'lsd_kpt22'), id=242, color=[128, 64, 0]),
- 243:
- dict(link=('lsd_kpt22', 'lsd_kpt23'), id=243, color=[128, 64, 0]),
- 244:
- dict(link=('lsd_kpt23', 'lsd_kpt24'), id=244, color=[128, 64, 0]),
- 245:
- dict(link=('lsd_kpt24', 'lsd_kpt25'), id=245, color=[128, 64, 0]),
- 246:
- dict(link=('lsd_kpt25', 'lsd_kpt26'), id=246, color=[128, 64, 0]),
- 247:
- dict(link=('lsd_kpt26', 'lsd_kpt27'), id=247, color=[128, 64, 0]),
- 248:
- dict(link=('lsd_kpt27', 'lsd_kpt28'), id=248, color=[128, 64, 0]),
- 249:
- dict(link=('lsd_kpt28', 'lsd_kpt29'), id=249, color=[128, 64, 0]),
- 250:
- dict(link=('lsd_kpt29', 'lsd_kpt30'), id=250, color=[128, 64, 0]),
- 251:
- dict(link=('lsd_kpt30', 'lsd_kpt31'), id=251, color=[128, 64, 0]),
- 252:
- dict(link=('lsd_kpt31', 'lsd_kpt32'), id=252, color=[128, 64, 0]),
- 253:
- dict(link=('lsd_kpt32', 'lsd_kpt33'), id=253, color=[128, 64, 0]),
- 254:
- dict(link=('lsd_kpt33', 'lsd_kpt34'), id=254, color=[128, 64, 0]),
- 255:
- dict(link=('lsd_kpt34', 'lsd_kpt35'), id=255, color=[128, 64, 0]),
- 256:
- dict(link=('lsd_kpt35', 'lsd_kpt36'), id=256, color=[128, 64, 0]),
- 257:
- dict(link=('lsd_kpt36', 'lsd_kpt37'), id=257, color=[128, 64, 0]),
- 258:
- dict(link=('lsd_kpt37', 'lsd_kpt6'), id=258, color=[128, 64, 0]),
- 259:
- dict(link=('lsd_kpt6', 'lsd_kpt5'), id=259, color=[128, 64, 0]),
- 260:
- dict(link=('lsd_kpt5', 'lsd_kpt4'), id=260, color=[128, 64, 0]),
- 261:
- dict(link=('lsd_kpt4', 'lsd_kpt3'), id=261, color=[128, 64, 0]),
- 262:
- dict(link=('lsd_kpt3', 'lsd_kpt2'), id=262, color=[128, 64, 0]),
- 263:
- dict(link=('lsd_kpt6', 'lsd_kpt1'), id=263, color=[128, 64, 0]),
- 264:
- dict(link=('vd_kpt1', 'vd_kpt2'), id=264, color=[128, 64, 255]),
- 265:
- dict(link=('vd_kpt2', 'vd_kpt7'), id=265, color=[128, 64, 255]),
- 266:
- dict(link=('vd_kpt7', 'vd_kpt8'), id=266, color=[128, 64, 255]),
- 267:
- dict(link=('vd_kpt8', 'vd_kpt9'), id=267, color=[128, 64, 255]),
- 268:
- dict(link=('vd_kpt9', 'vd_kpt10'), id=268, color=[128, 64, 255]),
- 269:
- dict(link=('vd_kpt10', 'vd_kpt11'), id=269, color=[128, 64, 255]),
- 270:
- dict(link=('vd_kpt11', 'vd_kpt12'), id=270, color=[128, 64, 255]),
- 271:
- dict(link=('vd_kpt12', 'vd_kpt13'), id=271, color=[128, 64, 255]),
- 272:
- dict(link=('vd_kpt13', 'vd_kpt14'), id=272, color=[128, 64, 255]),
- 273:
- dict(link=('vd_kpt14', 'vd_kpt15'), id=273, color=[128, 64, 255]),
- 274:
- dict(link=('vd_kpt15', 'vd_kpt16'), id=274, color=[128, 64, 255]),
- 275:
- dict(link=('vd_kpt16', 'vd_kpt17'), id=275, color=[128, 64, 255]),
- 276:
- dict(link=('vd_kpt17', 'vd_kpt18'), id=276, color=[128, 64, 255]),
- 277:
- dict(link=('vd_kpt18', 'vd_kpt19'), id=277, color=[128, 64, 255]),
- 278:
- dict(link=('vd_kpt19', 'vd_kpt6'), id=278, color=[128, 64, 255]),
- 279:
- dict(link=('vd_kpt6', 'vd_kpt5'), id=279, color=[128, 64, 255]),
- 280:
- dict(link=('vd_kpt5', 'vd_kpt4'), id=280, color=[128, 64, 255]),
- 281:
- dict(link=('vd_kpt4', 'vd_kpt3'), id=281, color=[128, 64, 255]),
- 282:
- dict(link=('vd_kpt3', 'vd_kpt2'), id=282, color=[128, 64, 255]),
- 283:
- dict(link=('vd_kpt6', 'vd_kpt1'), id=283, color=[128, 64, 255]),
- 284:
- dict(link=('sd_kpt1', 'sd_kpt2'), id=284, color=[128, 64, 0]),
- 285:
- dict(link=('sd_kpt2', 'sd_kpt8'), id=285, color=[128, 64, 0]),
- 286:
- dict(link=('sd_kpt8', 'sd_kpt9'), id=286, color=[128, 64, 0]),
- 287:
- dict(link=('sd_kpt9', 'sd_kpt10'), id=287, color=[128, 64, 0]),
- 288:
- dict(link=('sd_kpt10', 'sd_kpt11'), id=288, color=[128, 64, 0]),
- 289:
- dict(link=('sd_kpt11', 'sd_kpt12'), id=289, color=[128, 64, 0]),
- 290:
- dict(link=('sd_kpt12', 'sd_kpt13'), id=290, color=[128, 64, 0]),
- 291:
- dict(link=('sd_kpt13', 'sd_kpt14'), id=291, color=[128, 64, 0]),
- 292:
- dict(link=('sd_kpt14', 'sd_kpt15'), id=292, color=[128, 64, 0]),
- 293:
- dict(link=('sd_kpt15', 'sd_kpt16'), id=293, color=[128, 64, 0]),
- 294:
- dict(link=('sd_kpt16', 'sd_kpt17'), id=294, color=[128, 64, 0]),
- 295:
- dict(link=('sd_kpt17', 'sd_kpt18'), id=295, color=[128, 64, 0]),
- 296:
- dict(link=('sd_kpt18', 'sd_kpt6'), id=296, color=[128, 64, 0]),
- 297:
- dict(link=('sd_kpt6', 'sd_kpt5'), id=297, color=[128, 64, 0]),
- 298:
- dict(link=('sd_kpt5', 'sd_kpt4'), id=298, color=[128, 64, 0]),
- 299:
- dict(link=('sd_kpt4', 'sd_kpt3'), id=299, color=[128, 64, 0]),
- 300:
- dict(link=('sd_kpt3', 'sd_kpt2'), id=300, color=[128, 64, 0]),
- 301:
- dict(link=('sd_kpt2', 'sd_kpt7'), id=301, color=[128, 64, 0]),
- 302:
- dict(link=('sd_kpt6', 'sd_kpt19'), id=302, color=[128, 64, 0]),
- 303:
- dict(link=('sd_kpt6', 'sd_kpt1'), id=303, color=[128, 64, 0])
- }),
- joint_weights=[
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
- ],
- sigmas=[])
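
The `swap` fields above declare the left/right flip pairs used during horizontal-flip augmentation, so they have to be mutually consistent: if keypoint a swaps to b, b must swap back to a. A minimal self-contained sketch of that sanity check (not part of the config), using three `lsd_*` entries copied from the table above:

```python
# Minimal sketch: verify that flip pairs declared via `swap` are symmetric.
def check_swap_symmetry(keypoint_info):
    name_to_swap = {v['name']: v['swap'] for v in keypoint_info.values()}
    bad = []
    for name, swap in name_to_swap.items():
        if swap == '':
            continue  # keypoint lies on the symmetry axis, no flip partner
        if name_to_swap.get(swap) != name:
            bad.append((name, swap))
    return bad

# Small subset taken from the keypoint_info entries above:
kpt_info = {
    219: dict(name='lsd_kpt1', id=219, color=[128, 64, 0], type='', swap=''),
    220: dict(name='lsd_kpt2', id=220, color=[128, 64, 0], type='', swap='lsd_kpt6'),
    224: dict(name='lsd_kpt6', id=224, color=[128, 64, 0], type='', swap='lsd_kpt2'),
}
assert check_swap_symmetry(kpt_info) == []
```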
-param_scheduler = [
- dict(
- type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False),
- dict(
- type='MultiStepLR',
- begin=0,
- end=120,
- milestones=[80, 100],
- gamma=0.1,
- by_epoch=True)
-]
-optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005))
-auto_scale_lr = dict(base_batch_size=512)
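
Read together, `param_scheduler` and `optim_wrapper` give a 500-iteration linear warm-up starting at 0.001 x the base learning rate of 5e-4, followed by step decay by 0.1 at epochs 80 and 100 of the 120-epoch run. A rough self-contained sketch of the resulting learning rate (an approximation of MMEngine's `LinearLR`/`MultiStepLR`, not the library code):

```python
# Approximate effective learning rate under the schedule above.
def effective_lr(epoch, iteration, base_lr=0.0005,
                 warmup_iters=500, start_factor=0.001,
                 milestones=(80, 100), gamma=0.1):
    if iteration < warmup_iters:
        # warm-up factor ramps linearly from start_factor to 1.0
        factor = start_factor + (1.0 - start_factor) * iteration / warmup_iters
    else:
        factor = 1.0
    drops = sum(1 for m in milestones if epoch >= m)
    return base_lr * factor * (gamma ** drops)

print(effective_lr(epoch=0, iteration=0))        # 5e-07 at the very start
print(effective_lr(epoch=50, iteration=10000))   # 5e-04 after warm-up
print(effective_lr(epoch=110, iteration=10**6))  # ~5e-06 after both drops
```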
-dataset_type = 'DeepFashion2Dataset'
-data_mode = 'topdown'
-data_root = 'data/deepfashion2/'
-codec = dict(
- type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)
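
The codec turns each ground-truth keypoint into a Gaussian heatmap on a grid four times smaller than the input (192x256 down to 48x64). An illustrative, self-contained sketch of that MSRA-style target; the real encoder is MMPose's `MSRAHeatmap`, this just shows the idea:

```python
# Illustrative MSRA-style heatmap target: an unnormalised 2-D Gaussian
# centred on the keypoint in heatmap coordinates (stride 4 for this config).
import numpy as np

def gaussian_heatmap(kpt_xy, input_size=(192, 256), heatmap_size=(48, 64), sigma=2):
    stride = input_size[0] / heatmap_size[0]   # 192 / 48 = 4
    w, h = heatmap_size
    cx, cy = kpt_xy[0] / stride, kpt_xy[1] / stride
    xs = np.arange(w)[None, :]
    ys = np.arange(h)[:, None]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

hm = gaussian_heatmap((96, 128))   # keypoint at the image centre
print(hm.shape, hm.max())          # (64, 48), peak 1.0 at row 32, col 24
```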
-train_pipeline = [
- dict(type='LoadImage'),
- dict(type='GetBBoxCenterScale'),
- dict(type='RandomFlip', direction='horizontal'),
- dict(
- type='RandomBBoxTransform',
- shift_prob=0,
- rotate_factor=60,
- scale_factor=(0.75, 1.25)),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(
- type='GenerateTarget',
- encoder=dict(
- type='MSRAHeatmap',
- input_size=(192, 256),
- heatmap_size=(48, 64),
- sigma=2)),
- dict(type='PackPoseInputs')
-]
-val_pipeline = [
- dict(type='LoadImage', backend_args=dict(backend='local')),
- dict(type='GetBBoxCenterScale'),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(type='PackPoseInputs')
-]
-train_dataloader = dict(
- batch_size=64,
- num_workers=6,
- persistent_workers=True,
- sampler=dict(type='DefaultSampler', shuffle=True),
- dataset=dict(
- type='DeepFashion2Dataset',
- data_root='data/deepfashion2/',
- data_mode='topdown',
- ann_file='train/deepfashion2_skirt.json',
- data_prefix=dict(img='train/image/'),
- pipeline=[
- dict(type='LoadImage'),
- dict(type='GetBBoxCenterScale'),
- dict(type='RandomFlip', direction='horizontal'),
- dict(
- type='RandomBBoxTransform',
- shift_prob=0,
- rotate_factor=60,
- scale_factor=(0.75, 1.25)),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(
- type='GenerateTarget',
- encoder=dict(
- type='MSRAHeatmap',
- input_size=(192, 256),
- heatmap_size=(48, 64),
- sigma=2)),
- dict(type='PackPoseInputs')
- ]))
-val_dataloader = dict(
- batch_size=32,
- num_workers=6,
- persistent_workers=True,
- drop_last=False,
- sampler=dict(type='DefaultSampler', shuffle=False),
- dataset=dict(
- type='DeepFashion2Dataset',
- data_root='data/deepfashion2/',
- data_mode='topdown',
- ann_file='validation/deepfashion2_skirt.json',
- data_prefix=dict(img='validation/image/'),
- test_mode=True,
- pipeline=[
- dict(type='LoadImage', backend_args=dict(backend='local')),
- dict(type='GetBBoxCenterScale'),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(type='PackPoseInputs')
- ]))
-test_dataloader = dict(
- batch_size=32,
- num_workers=6,
- persistent_workers=True,
- drop_last=False,
- sampler=dict(type='DefaultSampler', shuffle=False),
- dataset=dict(
- type='DeepFashion2Dataset',
- data_root='data/deepfashion2/',
- data_mode='topdown',
- ann_file='validation/deepfashion2_skirt.json',
- data_prefix=dict(img='validation/image/'),
- test_mode=True,
- pipeline=[
- dict(type='LoadImage', backend_args=dict(backend='local')),
- dict(type='GetBBoxCenterScale'),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(type='PackPoseInputs')
- ]))
-channel_cfg = dict(
- num_output_channels=294,
- dataset_joints=294,
- dataset_channel=[[
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
- 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
- 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
- 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
- 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
- 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
- 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
- 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
- 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
- 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
- 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
- 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
- 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
- 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
- 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
- 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
- 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
- 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
- 290, 291, 292, 293
- ]],
- inference_channel=[
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
- 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
- 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
- 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
- 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
- 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
- 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
- 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
- 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
- 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
- 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
- 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
- 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
- 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
- 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
- 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
- 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
- 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
- 290, 291, 292, 293
- ])
-model = dict(
- type='TopdownPoseEstimator',
- data_preprocessor=dict(
- type='PoseDataPreprocessor',
- mean=[123.675, 116.28, 103.53],
- std=[58.395, 57.12, 57.375],
- bgr_to_rgb=True),
- backbone=dict(
- type='ResNet',
- depth=50,
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
- head=dict(
- type='HeatmapHead',
- in_channels=2048,
- out_channels=294,
- loss=dict(type='KeypointMSELoss', use_target_weight=True),
- decoder=dict(
- type='MSRAHeatmap',
- input_size=(192, 256),
- heatmap_size=(48, 64),
- sigma=2)),
- test_cfg=dict(flip_test=True, flip_mode='heatmap', shift_heatmap=True))
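
`test_cfg` enables heatmap flip testing: the heatmaps predicted for the horizontally flipped image are flipped back, their left/right channels are exchanged according to the flip pairs, optionally shifted one pixel (`shift_heatmap=True`), and averaged with the original prediction. A hedged sketch of that merge under the assumed convention; the flip-pair indices here are illustrative and the real logic lives in MMPose:

```python
# Sketch of heatmap flip-test merging (assumed convention, not MMPose code).
import numpy as np

def flip_test_merge(heatmaps, heatmaps_flipped, flip_pairs):
    flipped_back = heatmaps_flipped[..., ::-1].copy()   # undo the horizontal flip
    for a, b in flip_pairs:                              # exchange left/right channels
        flipped_back[[a, b]] = flipped_back[[b, a]]
    # shift_heatmap=True: shift one pixel to compensate for the flip axis
    flipped_back[..., 1:] = flipped_back[..., :-1].copy()
    return 0.5 * (heatmaps + flipped_back)

hm = np.random.rand(294, 64, 48)
hm_flip = np.random.rand(294, 64, 48)
merged = flip_test_merge(hm, hm_flip, flip_pairs=[(1, 5), (2, 4)])  # toy pairs
print(merged.shape)  # (294, 64, 48)
```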
-val_evaluator = [
- dict(type='PCKAccuracy', thr=0.2),
- dict(type='AUC'),
- dict(type='EPE')
-]
-test_evaluator = [
- dict(type='PCKAccuracy', thr=0.2),
- dict(type='AUC'),
- dict(type='EPE')
-]
-launcher = 'pytorch'
-work_dir = './work_dirs/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192'
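
The evaluators above report PCK@0.2, AUC and EPE. As a rough reference, a self-contained sketch of PCK under the assumed convention that a keypoint counts as correct when its error, normalised by the bounding-box size, falls below the threshold (the actual metric is implemented in `mmpose.evaluation`):

```python
# Rough PCK@0.2 sketch (assumed normalisation, toy numbers).
import numpy as np

def pck(pred, gt, bbox_size, thr=0.2, visible=None):
    dist = np.linalg.norm(pred - gt, axis=-1) / bbox_size
    if visible is None:
        visible = np.ones(dist.shape, dtype=bool)
    return float((dist[visible] < thr).mean())

pred = np.array([[10.0, 10.0], [50.0, 52.0]])
gt = np.array([[12.0, 11.0], [50.0, 50.0]])
print(pck(pred, gt, bbox_size=100.0))  # 1.0 -- both errors are well under 20 px
```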
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/.ipynb_checkpoints/hr_4xb16_1024e_4channel-checkpoint.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/.ipynb_checkpoints/hr_4xb16_1024e_4channel-checkpoint.py
deleted file mode 100644
index 372a6632ae325da40530356fa2dc51479986359d..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/.ipynb_checkpoints/hr_4xb16_1024e_4channel-checkpoint.py
+++ /dev/null
@@ -1,113 +0,0 @@
-_base_ = [ # this config file will inherit all settings from `_base_`
-    '../configs/_base_/schedules/custom_schedule.py', # training schedule config
-    '../configs/_base_/default_runtime.py' # default runtime settings
-]
-
-default_hooks = dict(
- # print log every 50 iterations.
- logger=dict(type='LoggerHook', interval=50),
-    # save a checkpoint every 16 epochs and track the best one automatically.
- checkpoint=dict(save_best='auto', interval=16)
-)
-
-visualizer = dict(
- vis_backends=[dict(type='LocalVisBackend'),
- dict(type='WandbVisBackend')])
-
-dataset_type = 'CustomDataset'
-
-# config of pipelines
-train_pipeline = [
-    dict(type='LoadImageFromFile', imdecode_backend='pillow', color_type='unchanged'), # load the image
-    dict(type='RandomResizedCrop', scale=224), # random resized crop
-    dict(type='RandomFlip', prob=0.5, direction='horizontal'), # random horizontal flip
-    dict(type='PackInputs'), # pack the image and label
-]
-
-test_pipeline = [
-    dict(type='LoadImageFromFile', imdecode_backend='pillow', color_type='unchanged'), # load the image
-    dict(type='ResizeEdge', scale=256, edge='short'), # resize the short edge to 256px
-    dict(type='CenterCrop', crop_size=224), # center crop
-    dict(type='PackInputs'), # pack the image and label
-]
-
-# config of dataloader
-train_dataloader = dict(
-    batch_size=16, # batch size per GPU
-    num_workers=5, # number of data-loading workers per GPU
-    dataset=dict( # training dataset
- type=dataset_type,
- data_root='../2_preprocess_data_3000',
- with_label=True,
- ann_file='',
- data_prefix='train',
- pipeline=train_pipeline),
-    sampler=dict(type='DefaultSampler', shuffle=True), # default sampler
-    persistent_workers=True, # keep worker processes alive to shorten the preparation time of each epoch
-)
-
-# build the validation dataloader
-val_dataloader = dict(
- batch_size=16,
- num_workers=5,
- dataset=dict(
- type=dataset_type,
- data_root='../2_preprocess_data_3000',
- with_label=True,
- ann_file='',
- data_prefix='val',
- pipeline=test_pipeline),
- sampler=dict(type='DefaultSampler', shuffle=False),
- persistent_workers=True,
-)
-
-# set evaluator of validation dataset. Here uses top1 and top3 accuracy
-val_evaluator = dict(type='Accuracy', topk=(1, 3))
-
-test_dataloader = val_dataloader
-test_evaluator = val_evaluator
-
-model = dict(
-    type='ImageClassifier', # main model type (use `ImageClassifier` for image classification tasks)
-    backbone=dict(
-        type='HRNet', # backbone type
-        arch='w32', # backbone architecture
- in_channels=4,
- extra=dict(
- stage1=dict(
- num_modules=1,
- num_branches=1,
- block='BOTTLENECK',
- num_blocks=(4, ),
- num_channels=(64, )),
- stage2=dict(
- num_modules=1,
- num_branches=2,
- block='BASIC',
- num_blocks=(4, 4),
- num_channels=(32, 64)),
- stage3=dict(
- num_modules=4,
- num_branches=3,
- block='BASIC',
- num_blocks=(4, 4, 4),
- num_channels=(32, 64, 128)),
- stage4=dict(
- num_modules=3,
- num_branches=4,
- block='BASIC',
- num_blocks=(4, 4, 4, 4),
- num_channels=(32, 64, 128, 256))),
- ),
-    neck=dict(type='GlobalAveragePooling'), # neck type
-    head=dict(
-        type='LinearClsHead', # classification head type
-        # every field except `type` comes from the __init__ method of the `LinearClsHead` class
-        # see https://mmpretrain.readthedocs.io/zh_CN/latest/api/generated/mmpretrain.models.heads.LinearClsHead.html
-        num_classes=7, # number of classes
-        in_channels=256,
-        loss=dict(type='CrossEntropyLoss', loss_weight=1.0), # loss function config
-        topk=(1, 3), # evaluation metric, top-k accuracy
- ))
-
-
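
The `Accuracy(topk=(1, 3))` evaluator in the config above counts a sample as correct at k when the ground-truth label appears among the k highest-scoring classes. A small self-contained sketch with the seven classes used by the head:

```python
# Top-k accuracy sketch matching topk=(1, 3) and num_classes=7 above.
import numpy as np

def topk_accuracy(scores, labels, k):
    topk = np.argsort(scores, axis=1)[:, -k:]          # indices of the k best classes
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

scores = np.array([[0.1, 0.7, 0.2, 0.0, 0.0, 0.0, 0.0],
                   [0.3, 0.1, 0.1, 0.4, 0.05, 0.05, 0.0]])
labels = np.array([1, 0])
print(topk_accuracy(scores, labels, k=1))  # 0.5
print(topk_accuracy(scores, labels, k=3))  # 1.0
```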
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/datasets/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/datasets/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-lbs_in1k.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-lbs_in1k.py
deleted file mode 100644
index 1c1aa5a2c4eee10c10159175224d9b77ea57e57b..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-lbs_in1k.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = [
- '../_base_/models/resnet50_label_smooth.py',
- '../_base_/datasets/imagenet_bs32.py',
- '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
-]
diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/train.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/train.py
deleted file mode 100644
index a1c233ce494365e0bb3a31c671aa015b3ecc8043..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/train.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import argparse
-import random
-
-import numpy as np
-import torch
-
-from trainer import CBHGTrainer, Seq2SeqTrainer, GPTTrainer
-
-SEED = 1234
-random.seed(SEED)
-np.random.seed(SEED)
-torch.manual_seed(SEED)
-torch.cuda.manual_seed(SEED)
-torch.backends.cudnn.deterministic = True
-torch.backends.cudnn.benchmark = False
-
-
-def train_parser():
- parser = argparse.ArgumentParser()
- parser.add_argument("--model_kind", dest="model_kind", type=str, required=True)
- parser.add_argument(
- "--model_desc", dest="model_desc", type=str, required=False, default=""
- )
- parser.add_argument("--config", dest="config", type=str, required=True)
- parser.add_argument(
- "--reset_dir",
- dest="clear_dir",
- action="store_true",
- help="deletes everything under this config's folder.",
- )
- return parser
-
-
-parser = train_parser()
-args = parser.parse_args()
-
-
-if args.model_kind in ["seq2seq"]:
- trainer = Seq2SeqTrainer(args.config, args.model_kind, args.model_desc)
-elif args.model_kind in ["tacotron_based"]:
- trainer = Seq2SeqTrainer(args.config, args.model_kind, args.model_desc)
-elif args.model_kind in ["baseline", "cbhg"]:
- trainer = CBHGTrainer(args.config, args.model_kind, args.model_desc)
-elif args.model_kind in ["gpt"]:
- trainer = GPTTrainer(args.config, args.model_kind, args.model_desc)
-else:
- raise ValueError("The model kind is not supported")
-
-trainer.run()
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversations/+page.server.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversations/+page.server.ts
deleted file mode 100644
index d94b030da72c4b269f5385580b99b8509efbdf8f..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversations/+page.server.ts
+++ /dev/null
@@ -1,10 +0,0 @@
-import { base } from "$app/paths";
-import { authCondition } from "$lib/server/auth";
-import { collections } from "$lib/server/database";
-import { redirect } from "@sveltejs/kit";
-
-export const actions = {
- delete: async function ({ locals }) {
- throw redirect(303, `${base}/`);
- },
-};
diff --git a/spaces/Adapter/CoAdapter/ldm/inference_base.py b/spaces/Adapter/CoAdapter/ldm/inference_base.py
deleted file mode 100644
index b57e4580ea90515e450afd6e8e267401e75b86e9..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/ldm/inference_base.py
+++ /dev/null
@@ -1,292 +0,0 @@
-import argparse
-import torch
-from omegaconf import OmegaConf
-
-from ldm.models.diffusion.ddim import DDIMSampler
-from ldm.models.diffusion.plms import PLMSSampler
-from ldm.modules.encoders.adapter import Adapter, StyleAdapter, Adapter_light
-from ldm.modules.extra_condition.api import ExtraCondition
-from ldm.util import fix_cond_shapes, load_model_from_config, read_state_dict
-
-DEFAULT_NEGATIVE_PROMPT = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
- 'fewer digits, cropped, worst quality, low quality'
-
-
-def get_base_argument_parser() -> argparse.ArgumentParser:
- """get the base argument parser for inference scripts"""
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--outdir',
- type=str,
- help='dir to write results to',
- default=None,
- )
-
- parser.add_argument(
- '--prompt',
- type=str,
- nargs='?',
- default=None,
- help='positive prompt',
- )
-
- parser.add_argument(
- '--neg_prompt',
- type=str,
- default=DEFAULT_NEGATIVE_PROMPT,
- help='negative prompt',
- )
-
- parser.add_argument(
- '--cond_path',
- type=str,
- default=None,
- help='condition image path',
- )
-
- parser.add_argument(
- '--cond_inp_type',
- type=str,
- default='image',
-        help='the type of the input condition image; taking depth T2I as an example, the input can be a raw '
-        'image from which the depth will be computed, or it can directly be a depth map image',
- )
-
- parser.add_argument(
- '--sampler',
- type=str,
- default='ddim',
- choices=['ddim', 'plms'],
- help='sampling algorithm, currently, only ddim and plms are supported, more are on the way',
- )
-
- parser.add_argument(
- '--steps',
- type=int,
- default=50,
- help='number of sampling steps',
- )
-
- parser.add_argument(
- '--sd_ckpt',
- type=str,
- default='models/sd-v1-4.ckpt',
- help='path to checkpoint of stable diffusion model, both .ckpt and .safetensor are supported',
- )
-
- parser.add_argument(
- '--vae_ckpt',
- type=str,
- default=None,
-        help='vae checkpoint; anime SD models usually have a separate vae ckpt that needs to be loaded',
- )
-
- parser.add_argument(
- '--adapter_ckpt',
- type=str,
- default=None,
- help='path to checkpoint of adapter',
- )
-
- parser.add_argument(
- '--config',
- type=str,
- default='configs/stable-diffusion/sd-v1-inference.yaml',
- help='path to config which constructs SD model',
- )
-
- parser.add_argument(
- '--max_resolution',
- type=float,
- default=512 * 512,
-        help='max image height * width, only for computers with limited VRAM',
- )
-
- parser.add_argument(
- '--resize_short_edge',
- type=int,
- default=None,
- help='resize short edge of the input image, if this arg is set, max_resolution will not be used',
- )
-
- parser.add_argument(
- '--C',
- type=int,
- default=4,
- help='latent channels',
- )
-
- parser.add_argument(
- '--f',
- type=int,
- default=8,
- help='downsampling factor',
- )
-
- parser.add_argument(
- '--scale',
- type=float,
- default=7.5,
- help='unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))',
- )
-
- parser.add_argument(
- '--cond_tau',
- type=float,
- default=1.0,
-        help='timestep parameter that determines until which step the adapter is applied, '
-        'similar to the tau in Prompt-to-Prompt',
- )
-
- parser.add_argument(
- '--style_cond_tau',
- type=float,
- default=1.0,
-        help='timestep parameter that determines until which step the style adapter is applied, '
-        'similar to the tau in Prompt-to-Prompt',
- )
-
- parser.add_argument(
- '--cond_weight',
- type=float,
- default=1.0,
- help='the adapter features are multiplied by the cond_weight. The larger the cond_weight, the more aligned '
- 'the generated image and condition will be, but the generated quality may be reduced',
- )
-
- parser.add_argument(
- '--seed',
- type=int,
- default=42,
- )
-
- parser.add_argument(
- '--n_samples',
- type=int,
- default=4,
- help='# of samples to generate',
- )
-
- return parser
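
The `--scale` help string above quotes the classifier-free guidance combination eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty)). A tiny self-contained illustration of that formula with toy tensors (not a real denoiser output):

```python
# Toy illustration of the guidance formula from the `--scale` help string.
import torch

def classifier_free_guidance(eps_uncond, eps_cond, scale=7.5):
    return eps_uncond + scale * (eps_cond - eps_uncond)

eps_uncond = torch.zeros(1, 4, 64, 64)   # eps(x, empty prompt)
eps_cond = torch.ones(1, 4, 64, 64)      # eps(x, cond)
guided = classifier_free_guidance(eps_uncond, eps_cond)
print(guided.mean().item())              # 7.5
```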
-
-
-def get_sd_models(opt):
- """
- build stable diffusion model, sampler
- """
- # SD
- config = OmegaConf.load(f"{opt.config}")
- model = load_model_from_config(config, opt.sd_ckpt, opt.vae_ckpt)
- sd_model = model.to(opt.device)
-
- # sampler
- if opt.sampler == 'plms':
- sampler = PLMSSampler(model)
- elif opt.sampler == 'ddim':
- sampler = DDIMSampler(model)
- else:
- raise NotImplementedError
-
- return sd_model, sampler
-
-
-def get_t2i_adapter_models(opt):
- config = OmegaConf.load(f"{opt.config}")
- model = load_model_from_config(config, opt.sd_ckpt, opt.vae_ckpt)
- adapter_ckpt_path = getattr(opt, f'{opt.which_cond}_adapter_ckpt', None)
- if adapter_ckpt_path is None:
- adapter_ckpt_path = getattr(opt, 'adapter_ckpt')
- adapter_ckpt = read_state_dict(adapter_ckpt_path)
- new_state_dict = {}
- for k, v in adapter_ckpt.items():
- if not k.startswith('adapter.'):
- new_state_dict[f'adapter.{k}'] = v
- else:
- new_state_dict[k] = v
- m, u = model.load_state_dict(new_state_dict, strict=False)
- if len(u) > 0:
- print(f"unexpected keys in loading adapter ckpt {adapter_ckpt_path}:")
- print(u)
-
- model = model.to(opt.device)
-
- # sampler
- if opt.sampler == 'plms':
- sampler = PLMSSampler(model)
- elif opt.sampler == 'ddim':
- sampler = DDIMSampler(model)
- else:
- raise NotImplementedError
-
- return model, sampler
-
-
-def get_cond_ch(cond_type: ExtraCondition):
- if cond_type == ExtraCondition.sketch or cond_type == ExtraCondition.canny:
- return 1
- return 3
-
-
-def get_adapters(opt, cond_type: ExtraCondition):
- adapter = {}
- cond_weight = getattr(opt, f'{cond_type.name}_weight', None)
- if cond_weight is None:
- cond_weight = getattr(opt, 'cond_weight')
- adapter['cond_weight'] = cond_weight
-
- if cond_type == ExtraCondition.style:
- adapter['model'] = StyleAdapter(width=1024, context_dim=768, num_head=8, n_layes=3, num_token=8).to(opt.device)
- elif cond_type == ExtraCondition.color:
- adapter['model'] = Adapter_light(
- cin=64 * get_cond_ch(cond_type),
- channels=[320, 640, 1280, 1280],
- nums_rb=4).to(opt.device)
- else:
- adapter['model'] = Adapter(
- cin=64 * get_cond_ch(cond_type),
- channels=[320, 640, 1280, 1280][:4],
- nums_rb=2,
- ksize=1,
- sk=True,
- use_conv=False).to(opt.device)
- ckpt_path = getattr(opt, f'{cond_type.name}_adapter_ckpt', None)
- if ckpt_path is None:
- ckpt_path = getattr(opt, 'adapter_ckpt')
- adapter['model'].load_state_dict(torch.load(ckpt_path))
-
- return adapter
-
-
-def diffusion_inference(opt, model, sampler, adapter_features, append_to_context=None):
- # get text embedding
- c = model.get_learned_conditioning([opt.prompt])
- if opt.scale != 1.0:
- uc = model.get_learned_conditioning([opt.neg_prompt])
- else:
- uc = None
- c, uc = fix_cond_shapes(model, c, uc)
-
- if not hasattr(opt, 'H'):
- opt.H = 512
- opt.W = 512
- shape = [opt.C, opt.H // opt.f, opt.W // opt.f]
-
- samples_latents, _ = sampler.sample(
- S=opt.steps,
- conditioning=c,
- batch_size=1,
- shape=shape,
- verbose=False,
- unconditional_guidance_scale=opt.scale,
- unconditional_conditioning=uc,
- x_T=None,
- features_adapter=adapter_features,
- append_to_context=append_to_context,
- cond_tau=opt.cond_tau,
- style_cond_tau=opt.style_cond_tau,
- )
-
- x_samples = model.decode_first_stage(samples_latents)
- x_samples = torch.clamp((x_samples + 1.0) / 2.0, min=0.0, max=1.0)
-
- return x_samples
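
`diffusion_inference` samples in latent space with shape `[C, H // f, W // f]`; with the parser defaults (C=4, f=8) and the 512x512 fallback resolution set inside the function, that is a 4x64x64 latent, which `decode_first_stage` then maps back to pixels. A quick check of the arithmetic:

```python
# Latent shape used by diffusion_inference with the parser defaults above.
C, f, H, W = 4, 8, 512, 512          # --C and --f defaults; fallback H, W
shape = [C, H // f, W // f]
print(shape)                          # [4, 64, 64]
```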
diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/utils_image.py b/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/utils_image.py
deleted file mode 100644
index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000
--- a/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/utils_image.py
+++ /dev/null
@@ -1,916 +0,0 @@
-import os
-import math
-import random
-import numpy as np
-import torch
-import cv2
-from torchvision.utils import make_grid
-from datetime import datetime
-#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py
-
-
-os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
-
-
-'''
-# --------------------------------------------
-# Kai Zhang (github: https://github.com/cszn)
-# 03/Mar/2019
-# --------------------------------------------
-# https://github.com/twhui/SRGAN-pyTorch
-# https://github.com/xinntao/BasicSR
-# --------------------------------------------
-'''
-
-
-IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif']
-
-
-def is_image_file(filename):
- return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)
-
-
-def get_timestamp():
- return datetime.now().strftime('%y%m%d-%H%M%S')
-
-
-def imshow(x, title=None, cbar=False, figsize=None):
- plt.figure(figsize=figsize)
- plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray')
- if title:
- plt.title(title)
- if cbar:
- plt.colorbar()
- plt.show()
-
-
-def surf(Z, cmap='rainbow', figsize=None):
- plt.figure(figsize=figsize)
- ax3 = plt.axes(projection='3d')
-
- w, h = Z.shape[:2]
- xx = np.arange(0,w,1)
- yy = np.arange(0,h,1)
- X, Y = np.meshgrid(xx, yy)
- ax3.plot_surface(X,Y,Z,cmap=cmap)
- #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap)
- plt.show()
-
-
-'''
-# --------------------------------------------
-# get image paths
-# --------------------------------------------
-'''
-
-
-def get_image_paths(dataroot):
- paths = None # return None if dataroot is None
- if dataroot is not None:
- paths = sorted(_get_paths_from_images(dataroot))
- return paths
-
-
-def _get_paths_from_images(path):
- assert os.path.isdir(path), '{:s} is not a valid directory'.format(path)
- images = []
- for dirpath, _, fnames in sorted(os.walk(path)):
- for fname in sorted(fnames):
- if is_image_file(fname):
- img_path = os.path.join(dirpath, fname)
- images.append(img_path)
- assert images, '{:s} has no valid image file'.format(path)
- return images
-
-
-'''
-# --------------------------------------------
-# split large images into small images
-# --------------------------------------------
-'''
-
-
-def patches_from_image(img, p_size=512, p_overlap=64, p_max=800):
- w, h = img.shape[:2]
- patches = []
- if w > p_max and h > p_max:
-        w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=int))
-        h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=int))
- w1.append(w-p_size)
- h1.append(h-p_size)
-# print(w1)
-# print(h1)
- for i in w1:
- for j in h1:
- patches.append(img[i:i+p_size, j:j+p_size,:])
- else:
- patches.append(img)
-
- return patches
-
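
`patches_from_image` tiles a large image with overlapping windows: start offsets advance by `p_size - p_overlap`, and a final window clamped to the image border is appended so the last patch still has the full size. A small self-contained sketch of the start coordinates it walks for a hypothetical 1200x900 input with the defaults:

```python
# Start offsets for a 1200x900 image with p_size=512, p_overlap=64
# (mirrors the np.arange walk in patches_from_image above).
w, h, p_size, p_overlap = 1200, 900, 512, 64
w1 = list(range(0, w - p_size, p_size - p_overlap)) + [w - p_size]
h1 = list(range(0, h - p_size, p_size - p_overlap)) + [h - p_size]
print(w1)  # [0, 448, 688]
print(h1)  # [0, 388]
```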
-
-def imssave(imgs, img_path):
- """
- imgs: list, N images of size WxHxC
- """
- img_name, ext = os.path.splitext(os.path.basename(img_path))
-
- for i, img in enumerate(imgs):
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png')
- cv2.imwrite(new_path, img)
-
-
-def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000):
- """
-    split the large images from original_dataroot into small overlapping images of size (p_size)x(p_size),
-    and save them into taget_dataroot; only images larger than (p_max)x(p_max)
-    will be split.
-    Args:
-        original_dataroot:
-        taget_dataroot:
-        p_size: size of the small images
-        p_overlap: overlap between adjacent patches; the patch size used in training is a good choice
-        p_max: images smaller than (p_max)x(p_max) are kept unchanged.
- """
- paths = get_image_paths(original_dataroot)
- for img_path in paths:
- # img_name, ext = os.path.splitext(os.path.basename(img_path))
- img = imread_uint(img_path, n_channels=n_channels)
- patches = patches_from_image(img, p_size, p_overlap, p_max)
- imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path)))
- #if original_dataroot == taget_dataroot:
- #del img_path
-
-'''
-# --------------------------------------------
-# makedir
-# --------------------------------------------
-'''
-
-
-def mkdir(path):
- if not os.path.exists(path):
- os.makedirs(path)
-
-
-def mkdirs(paths):
- if isinstance(paths, str):
- mkdir(paths)
- else:
- for path in paths:
- mkdir(path)
-
-
-def mkdir_and_rename(path):
- if os.path.exists(path):
- new_name = path + '_archived_' + get_timestamp()
- print('Path already exists. Rename it to [{:s}]'.format(new_name))
- os.rename(path, new_name)
- os.makedirs(path)
-
-
-'''
-# --------------------------------------------
-# read image from path
-# opencv is fast, but read BGR numpy image
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# get uint8 image of size HxWxn_channels (RGB)
-# --------------------------------------------
-def imread_uint(path, n_channels=3):
- # input: path
- # output: HxWx3(RGB or GGG), or HxWx1 (G)
- if n_channels == 1:
- img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE
- img = np.expand_dims(img, axis=2) # HxWx1
- elif n_channels == 3:
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G
- if img.ndim == 2:
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG
- else:
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB
- return img
-
-
-# --------------------------------------------
-# matlab's imwrite
-# --------------------------------------------
-def imsave(img, img_path):
- img = np.squeeze(img)
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- cv2.imwrite(img_path, img)
-
-def imwrite(img, img_path):
- img = np.squeeze(img)
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- cv2.imwrite(img_path, img)
-
-
-
-# --------------------------------------------
-# get single image of size HxWxn_channels (BGR)
-# --------------------------------------------
-def read_img(path):
- # read image by cv2
- # return: Numpy float32, HWC, BGR, [0,1]
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE
- img = img.astype(np.float32) / 255.
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- # some images have 4 channels
- if img.shape[2] > 3:
- img = img[:, :, :3]
- return img
-
-
-'''
-# --------------------------------------------
-# image format conversion
-# --------------------------------------------
-# numpy(single) <---> numpy(uint)
-# numpy(single) <---> tensor
-# numpy(uint) <---> tensor
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# numpy(single) [0, 1] <---> numpy(uint)
-# --------------------------------------------
-
-
-def uint2single(img):
-
- return np.float32(img/255.)
-
-
-def single2uint(img):
-
- return np.uint8((img.clip(0, 1)*255.).round())
-
-
-def uint162single(img):
-
- return np.float32(img/65535.)
-
-
-def single2uint16(img):
-
- return np.uint16((img.clip(0, 1)*65535.).round())
-
-
-# --------------------------------------------
-# numpy(uint) (HxWxC or HxW) <---> tensor
-# --------------------------------------------
-
-
-# convert uint to 4-dimensional torch tensor
-def uint2tensor4(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0)
-
-
-# convert uint to 3-dimensional torch tensor
-def uint2tensor3(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.)
-
-
-# convert 2/3/4-dimensional torch tensor to uint
-def tensor2uint(img):
- img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- return np.uint8((img*255.0).round())
-
-
-# --------------------------------------------
-# numpy(single) (HxWxC) <---> tensor
-# --------------------------------------------
-
-
-# convert single (HxWxC) to 3-dimensional torch tensor
-def single2tensor3(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float()
-
-
-# convert single (HxWxC) to 4-dimensional torch tensor
-def single2tensor4(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0)
-
-
-# convert torch tensor to single
-def tensor2single(img):
- img = img.data.squeeze().float().cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
-
- return img
-
-# convert torch tensor to single
-def tensor2single3(img):
- img = img.data.squeeze().float().cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- elif img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return img
-
-
-def single2tensor5(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0)
-
-
-def single32tensor5(img):
- return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0)
-
-
-def single42tensor4(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float()
-
-
-# from skimage.io import imread, imsave
-def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)):
- '''
- Converts a torch Tensor into an image Numpy array of BGR channel order
- Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order
- Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default)
- '''
- tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp
- tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1]
- n_dim = tensor.dim()
- if n_dim == 4:
- n_img = len(tensor)
- img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 3:
- img_np = tensor.numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 2:
- img_np = tensor.numpy()
- else:
- raise TypeError(
- 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim))
- if out_type == np.uint8:
- img_np = (img_np * 255.0).round()
-        # Important. Unlike MATLAB, numpy.uint8() WILL NOT round by default.
- return img_np.astype(out_type)
-
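To make the conversion conventions above concrete, here is a small round-trip sketch (assuming these helpers are in scope); a uint8 image converted to a tensor and back should be recovered exactly.

import numpy as np

img = (np.random.rand(64, 48, 3) * 255).astype(np.uint8)   # HxWxC uint8 image
t = uint2tensor4(img)                                       # 1xCxHxW float tensor in [0, 1]
back = tensor2uint(t)                                       # HxWxC uint8 again
assert np.array_equal(back, img)

single = uint2single(img)                                   # float32 in [0, 1]
assert np.array_equal(single2uint(single), img)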
-
-'''
-# --------------------------------------------
-# Augmentation, flip and/or rotate
-# --------------------------------------------
-# The following two are enough.
-# (1) augment_img: numpy image of WxHxC or WxH
-# (2) augment_img_tensor4: tensor image 1xCxWxH
-# --------------------------------------------
-'''
-
-
-def augment_img(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- if mode == 0:
- return img
- elif mode == 1:
- return np.flipud(np.rot90(img))
- elif mode == 2:
- return np.flipud(img)
- elif mode == 3:
- return np.rot90(img, k=3)
- elif mode == 4:
- return np.flipud(np.rot90(img, k=2))
- elif mode == 5:
- return np.rot90(img)
- elif mode == 6:
- return np.rot90(img, k=2)
- elif mode == 7:
- return np.flipud(np.rot90(img, k=3))
-
-
-def augment_img_tensor4(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- if mode == 0:
- return img
- elif mode == 1:
- return img.rot90(1, [2, 3]).flip([2])
- elif mode == 2:
- return img.flip([2])
- elif mode == 3:
- return img.rot90(3, [2, 3])
- elif mode == 4:
- return img.rot90(2, [2, 3]).flip([2])
- elif mode == 5:
- return img.rot90(1, [2, 3])
- elif mode == 6:
- return img.rot90(2, [2, 3])
- elif mode == 7:
- return img.rot90(3, [2, 3]).flip([2])
-
-
-def augment_img_tensor(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- img_size = img.size()
- img_np = img.data.cpu().numpy()
- if len(img_size) == 3:
- img_np = np.transpose(img_np, (1, 2, 0))
- elif len(img_size) == 4:
- img_np = np.transpose(img_np, (2, 3, 1, 0))
- img_np = augment_img(img_np, mode=mode)
- img_tensor = torch.from_numpy(np.ascontiguousarray(img_np))
- if len(img_size) == 3:
- img_tensor = img_tensor.permute(2, 0, 1)
- elif len(img_size) == 4:
- img_tensor = img_tensor.permute(3, 2, 0, 1)
-
- return img_tensor.type_as(img)
-
-
-def augment_img_np3(img, mode=0):
- if mode == 0:
- return img
- elif mode == 1:
- return img.transpose(1, 0, 2)
- elif mode == 2:
- return img[::-1, :, :]
- elif mode == 3:
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 4:
- return img[:, ::-1, :]
- elif mode == 5:
- img = img[:, ::-1, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 6:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- return img
- elif mode == 7:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
-
-
-def augment_imgs(img_list, hflip=True, rot=True):
- # horizontal flip OR rotate
- hflip = hflip and random.random() < 0.5
- vflip = rot and random.random() < 0.5
- rot90 = rot and random.random() < 0.5
-
- def _augment(img):
- if hflip:
- img = img[:, ::-1, :]
- if vflip:
- img = img[::-1, :, :]
- if rot90:
- img = img.transpose(1, 0, 2)
- return img
-
- return [_augment(img) for img in img_list]
-
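The sketch below (assuming augment_img is in scope) checks that the 8 modes cover the usual flip/rotate combinations and that mode 0 is the identity; a square patch is used so every mode preserves the shape.

import numpy as np

patch = np.random.rand(8, 8, 3).astype(np.float32)
augmented = [augment_img(patch, mode=m) for m in range(8)]
assert np.array_equal(augmented[0], patch)                  # mode 0 leaves the patch unchanged
assert all(a.shape == patch.shape for a in augmented)       # square inputs keep their shape
# augment_img_tensor4 applies the same transforms directly to a 1xCxHxW tensor.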
-
-'''
-# --------------------------------------------
-# modcrop and shave
-# --------------------------------------------
-'''
-
-
-def modcrop(img_in, scale):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- if img.ndim == 2:
- H, W = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r]
- elif img.ndim == 3:
- H, W, C = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r, :]
- else:
- raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim))
- return img
-
-
-def shave(img_in, border=0):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- h, w = img.shape[:2]
- img = img[border:h-border, border:w-border]
- return img
-
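A short sketch of the two helpers above, assuming they are in scope: modcrop trims an image so both sides are divisible by the scale factor, and shave removes a border before metric computation.

import numpy as np

hr = np.random.rand(101, 99, 3).astype(np.float32)
print(modcrop(hr, scale=4).shape)            # (100, 96, 3): both sides divisible by 4
print(shave(hr, border=4).shape)             # (93, 91, 3): 4 pixels removed from each side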
-
-'''
-# --------------------------------------------
-# image processing process on numpy image
-# channel_convert(in_c, tar_type, img_list):
-# rgb2ycbcr(img, only_y=True):
-# bgr2ycbcr(img, only_y=True):
-# ycbcr2rgb(img):
-# --------------------------------------------
-'''
-
-
-def rgb2ycbcr(img, only_y=True):
- '''same as matlab rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
-    img = img.astype(np.float32)  # work on a float copy so the caller's array is not modified in place
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],
- [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def ycbcr2rgb(img):
- '''same as matlab ycbcr2rgb
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
-    img = img.astype(np.float32)  # work on a float copy so the caller's array is not modified in place
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071],
- [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def bgr2ycbcr(img, only_y=True):
- '''bgr version of rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
-    img = img.astype(np.float32)  # work on a float copy so the caller's array is not modified in place
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],
- [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def channel_convert(in_c, tar_type, img_list):
- # conversion among BGR, gray and y
- if in_c == 3 and tar_type == 'gray': # BGR to gray
- gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in gray_list]
- elif in_c == 3 and tar_type == 'y': # BGR to y
- y_list = [bgr2ycbcr(img, only_y=True) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in y_list]
- elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR
- return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list]
- else:
- return img_list
-
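A brief sketch of the colour-space helpers above (assumed in scope). With a float input in [0, 1], the Y channel returned by rgb2ycbcr stays within roughly [16/255, 235/255], following MATLAB's convention, and ycbcr2rgb approximately inverts the full conversion.

import numpy as np

rgb = np.random.rand(32, 32, 3).astype(np.float32)      # float RGB image in [0, 1]
y = rgb2ycbcr(rgb, only_y=True)
print(y.shape, float(y.min()), float(y.max()))          # (32, 32), values inside ~[0.063, 0.922]

rgb_back = ycbcr2rgb(rgb2ycbcr(rgb, only_y=False))
print(float(np.abs(rgb_back - rgb).max()))              # small round-trip error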
-
-'''
-# --------------------------------------------
-# metric, PSNR and SSIM
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# PSNR
-# --------------------------------------------
-def calculate_psnr(img1, img2, border=0):
- # img1 and img2 have range [0, 255]
- #img1 = img1.squeeze()
- #img2 = img2.squeeze()
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- mse = np.mean((img1 - img2)**2)
- if mse == 0:
- return float('inf')
- return 20 * math.log10(255.0 / math.sqrt(mse))
-
-
-# --------------------------------------------
-# SSIM
-# --------------------------------------------
-def calculate_ssim(img1, img2, border=0):
- '''calculate SSIM
- the same outputs as MATLAB's
- img1, img2: [0, 255]
- '''
- #img1 = img1.squeeze()
- #img2 = img2.squeeze()
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- if img1.ndim == 2:
- return ssim(img1, img2)
- elif img1.ndim == 3:
- if img1.shape[2] == 3:
- ssims = []
- for i in range(3):
- ssims.append(ssim(img1[:,:,i], img2[:,:,i]))
- return np.array(ssims).mean()
- elif img1.shape[2] == 1:
- return ssim(np.squeeze(img1), np.squeeze(img2))
- else:
- raise ValueError('Wrong input image dimensions.')
-
-
-def ssim(img1, img2):
- C1 = (0.01 * 255)**2
- C2 = (0.03 * 255)**2
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- kernel = cv2.getGaussianKernel(11, 1.5)
- window = np.outer(kernel, kernel.transpose())
-
- mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid
- mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
- mu1_sq = mu1**2
- mu2_sq = mu2**2
- mu1_mu2 = mu1 * mu2
- sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq
- sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq
- sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *
- (sigma1_sq + sigma2_sq + C2))
- return ssim_map.mean()
-
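A worked sketch of the metrics above (assumed in scope). For Gaussian noise of standard deviation sigma, PSNR should come out near 20*log10(255/sigma), which gives a quick sanity check.

import numpy as np

np.random.seed(0)
clean = (np.random.rand(128, 128, 3) * 255).round()
noisy = np.clip(clean + np.random.randn(*clean.shape) * 5.0, 0, 255)

psnr = calculate_psnr(clean, noisy, border=4)                        # roughly 34 dB for sigma = 5
ssim_val = calculate_ssim(clean.astype(np.uint8), noisy.astype(np.uint8), border=4)
print(f'PSNR: {psnr:.2f} dB, SSIM: {ssim_val:.4f}')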
-
-'''
-# --------------------------------------------
-# matlab's bicubic imresize (numpy and torch) [0, 1]
-# --------------------------------------------
-'''
-
-
-# MATLAB 'imresize' function; currently only 'bicubic' is supported
-def cubic(x):
- absx = torch.abs(x)
- absx2 = absx**2
- absx3 = absx**3
- return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \
- (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx))
-
-
-def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing):
- if (scale < 1) and (antialiasing):
-        # Use a modified kernel to simultaneously interpolate and antialias - larger kernel width
- kernel_width = kernel_width / scale
-
- # Output-space coordinates
- x = torch.linspace(1, out_length, out_length)
-
- # Input-space coordinates. Calculate the inverse mapping such that 0.5
- # in output space maps to 0.5 in input space, and 0.5+scale in output
- # space maps to 1.5 in input space.
- u = x / scale + 0.5 * (1 - 1 / scale)
-
- # What is the left-most pixel that can be involved in the computation?
- left = torch.floor(u - kernel_width / 2)
-
- # What is the maximum number of pixels that can be involved in the
- # computation? Note: it's OK to use an extra pixel here; if the
- # corresponding weights are all zero, it will be eliminated at the end
- # of this function.
- P = math.ceil(kernel_width) + 2
-
- # The indices of the input pixels involved in computing the k-th output
- # pixel are in row k of the indices matrix.
- indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view(
- 1, P).expand(out_length, P)
-
- # The weights used to compute the k-th output pixel are in row k of the
- # weights matrix.
- distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices
- # apply cubic kernel
- if (scale < 1) and (antialiasing):
- weights = scale * cubic(distance_to_center * scale)
- else:
- weights = cubic(distance_to_center)
- # Normalize the weights matrix so that each row sums to 1.
- weights_sum = torch.sum(weights, 1).view(out_length, 1)
- weights = weights / weights_sum.expand(out_length, P)
-
- # If a column in weights is all zero, get rid of it. only consider the first and last column.
- weights_zero_tmp = torch.sum((weights == 0), 0)
- if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 1, P - 2)
- weights = weights.narrow(1, 1, P - 2)
- if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 0, P - 2)
- weights = weights.narrow(1, 0, P - 2)
- weights = weights.contiguous()
- indices = indices.contiguous()
- sym_len_s = -indices.min() + 1
- sym_len_e = indices.max() - in_length
- indices = indices + sym_len_s - 1
- return weights, indices, int(sym_len_s), int(sym_len_e)
-
-
-# --------------------------------------------
-# imresize for tensor image [0, 1]
-# --------------------------------------------
-def imresize(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: pytorch tensor, CHW or HW [0,1]
- # output: CHW or HW [0,1] w/o round
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(0)
- in_C, in_H, in_W = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
- # Now we do not support this.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W)
- img_aug.narrow(1, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:, :sym_len_Hs, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[:, -sym_len_He:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(in_C, out_H, in_W)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We)
- out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :, :sym_len_Ws]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, :, -sym_len_We:]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(in_C, out_H, out_W)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
- return out_2
-
-
-# --------------------------------------------
-# imresize for numpy image [0, 1]
-# --------------------------------------------
-def imresize_np(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: Numpy, HWC or HW [0,1]
- # output: HWC or HW [0,1] w/o round
- img = torch.from_numpy(img)
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(2)
-
- in_H, in_W, in_C = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
- # Now we do not support this.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C)
- img_aug.narrow(0, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:sym_len_Hs, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[-sym_len_He:, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(out_H, in_W, in_C)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C)
- out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :sym_len_Ws, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, -sym_len_We:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(out_H, out_W, in_C)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
-
- return out_2.numpy()
-
-
-if __name__ == '__main__':
- print('---')
-# img = imread_uint('test.bmp', 3)
-# img = uint2single(img)
-# img_bicubic = imresize_np(img, 1/4)
\ No newline at end of file
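For reference, a usage sketch of the MATLAB-style bicubic resize above, mirroring the commented-out __main__ block; the image is synthetic and the helpers are assumed importable from this module.

import numpy as np

img = np.random.rand(256, 256, 3).astype(np.float32)   # HWC float image in [0, 1]
img_lr = imresize_np(img, 1 / 4)                        # antialiased downscale, shape (64, 64, 3)
print(img_lr.shape)
# The tensor variant, imresize(), does the same for CHW tensors in [0, 1].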
diff --git a/spaces/Aditya9790/yolo7-object-tracking/utils/add_nms.py b/spaces/Aditya9790/yolo7-object-tracking/utils/add_nms.py
deleted file mode 100644
index 0a1f7976a2051d07bb028f9fd68eb52f45234f43..0000000000000000000000000000000000000000
--- a/spaces/Aditya9790/yolo7-object-tracking/utils/add_nms.py
+++ /dev/null
@@ -1,155 +0,0 @@
-import numpy as np
-import onnx
-from onnx import shape_inference
-try:
- import onnx_graphsurgeon as gs
-except Exception as e:
- print('Import onnx_graphsurgeon failure: %s' % e)
-
-import logging
-
-LOGGER = logging.getLogger(__name__)
-
-class RegisterNMS(object):
- def __init__(
- self,
- onnx_model_path: str,
- precision: str = "fp32",
- ):
-
- self.graph = gs.import_onnx(onnx.load(onnx_model_path))
- assert self.graph
- LOGGER.info("ONNX graph created successfully")
- # Fold constants via ONNX-GS that PyTorch2ONNX may have missed
- self.graph.fold_constants()
- self.precision = precision
-        self.batch_size = 1
-
-    def infer(self):
- """
-        Sanitize the graph by cleaning up any unconnected nodes, doing a topological re-sort,
-        and folding constant input values. When possible, run shape inference on the
-        ONNX graph to determine tensor shapes.
- """
- for _ in range(3):
- count_before = len(self.graph.nodes)
-
- self.graph.cleanup().toposort()
- try:
- for node in self.graph.nodes:
- for o in node.outputs:
- o.shape = None
- model = gs.export_onnx(self.graph)
- model = shape_inference.infer_shapes(model)
- self.graph = gs.import_onnx(model)
- except Exception as e:
- LOGGER.info(f"Shape inference could not be performed at this time:\n{e}")
- try:
- self.graph.fold_constants(fold_shapes=True)
- except TypeError as e:
- LOGGER.error(
- "This version of ONNX GraphSurgeon does not support folding shapes, "
- f"please upgrade your onnx_graphsurgeon module. Error:\n{e}"
- )
- raise
-
- count_after = len(self.graph.nodes)
- if count_before == count_after:
- # No new folding occurred in this iteration, so we can stop for now.
- break
-
- def save(self, output_path):
- """
- Save the ONNX model to the given location.
- Args:
- output_path: Path pointing to the location where to write
- out the updated ONNX model.
- """
- self.graph.cleanup().toposort()
- model = gs.export_onnx(self.graph)
- onnx.save(model, output_path)
- LOGGER.info(f"Saved ONNX model to {output_path}")
-
- def register_nms(
- self,
- *,
- score_thresh: float = 0.25,
- nms_thresh: float = 0.45,
- detections_per_img: int = 100,
- ):
- """
- Register the ``EfficientNMS_TRT`` plugin node.
- NMS expects these shapes for its input tensors:
- - box_net: [batch_size, number_boxes, 4]
- - class_net: [batch_size, number_boxes, number_labels]
- Args:
- score_thresh (float): The scalar threshold for score (low scoring boxes are removed).
- nms_thresh (float): The scalar threshold for IOU (new boxes that have high IOU
- overlap with previously selected boxes are removed).
- detections_per_img (int): Number of best detections to keep after NMS.
- """
-
- self.infer()
- # Find the concat node at the end of the network
- op_inputs = self.graph.outputs
- op = "EfficientNMS_TRT"
- attrs = {
- "plugin_version": "1",
- "background_class": -1, # no background class
- "max_output_boxes": detections_per_img,
- "score_threshold": score_thresh,
- "iou_threshold": nms_thresh,
- "score_activation": False,
- "box_coding": 0,
- }
-
- if self.precision == "fp32":
- dtype_output = np.float32
- elif self.precision == "fp16":
- dtype_output = np.float16
- else:
-            raise NotImplementedError(f"Precision {self.precision} is currently not supported")
-
- # NMS Outputs
- output_num_detections = gs.Variable(
- name="num_dets",
- dtype=np.int32,
- shape=[self.batch_size, 1],
- ) # A scalar indicating the number of valid detections per batch image.
- output_boxes = gs.Variable(
- name="det_boxes",
- dtype=dtype_output,
- shape=[self.batch_size, detections_per_img, 4],
- )
- output_scores = gs.Variable(
- name="det_scores",
- dtype=dtype_output,
- shape=[self.batch_size, detections_per_img],
- )
- output_labels = gs.Variable(
- name="det_classes",
- dtype=np.int32,
- shape=[self.batch_size, detections_per_img],
- )
-
- op_outputs = [output_num_detections, output_boxes, output_scores, output_labels]
-
- # Create the NMS Plugin node with the selected inputs. The outputs of the node will also
- # become the final outputs of the graph.
- self.graph.layer(op=op, name="batched_nms", inputs=op_inputs, outputs=op_outputs, attrs=attrs)
- LOGGER.info(f"Created NMS plugin '{op}' with attributes: {attrs}")
-
- self.graph.outputs = op_outputs
-
- self.infer()
-
- def save(self, output_path):
- """
- Save the ONNX model to the given location.
- Args:
- output_path: Path pointing to the location where to write
- out the updated ONNX model.
- """
- self.graph.cleanup().toposort()
- model = gs.export_onnx(self.graph)
- onnx.save(model, output_path)
- LOGGER.info(f"Saved ONNX model to {output_path}")
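A hedged usage sketch for the class above; the input and output paths are placeholders, and the exported graph only runs where TensorRT's EfficientNMS_TRT plugin is available.

if __name__ == '__main__':
    nms = RegisterNMS('yolov7.onnx', precision='fp16')   # hypothetical exported YOLOv7 model
    nms.register_nms(score_thresh=0.25, nms_thresh=0.45, detections_per_img=100)
    nms.save('yolov7-nms.onnx')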
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Ball.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Ball.d.ts
deleted file mode 100644
index 2e1d6076c75d1adc8a639b5c7410d8905f84a046..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Ball.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import Base from '../base/Base';
-export default class Ball extends Base { }
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/Methods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/Methods.js
deleted file mode 100644
index 3c21efa258bc47863843c86131ec6b2c2dca8ee3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/Methods.js
+++ /dev/null
@@ -1,25 +0,0 @@
-import GetChildrenWidth from './GetChildrenWidth.js';
-import GetChildrenHeight from './GetChildrenHeight.js';
-import GetExpandedChildWidth from './GetExpandedChildWidth.js';
-import GetExpandedChildHeight from './GetExpandedChildHeight.js';
-import GetChildrenSizers from './GetChildrenSizers.js';
-import LayoutChildren from './LayoutChildren.js';
-import AddChildMethods from './AddChildMethods.js';
-import RemoveChildMethods from './RemoveChildMethods.js';
-
-var methods = {
- getChildrenWidth: GetChildrenWidth,
- getChildrenHeight: GetChildrenHeight,
- getExpandedChildWidth: GetExpandedChildWidth,
- getExpandedChildHeight: GetExpandedChildHeight,
- getChildrenSizers: GetChildrenSizers,
- layoutChildren: LayoutChildren,
-};
-
-Object.assign(
- methods,
- AddChildMethods,
- RemoveChildMethods
-);
-
-export default methods;
\ No newline at end of file
diff --git a/spaces/AlexWang/lama/saicinpainting/training/modules/depthwise_sep_conv.py b/spaces/AlexWang/lama/saicinpainting/training/modules/depthwise_sep_conv.py
deleted file mode 100644
index 83dd15c3df1d9f40baf0091a373fa224532c9ddd..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/saicinpainting/training/modules/depthwise_sep_conv.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import torch
-import torch.nn as nn
-
-class DepthWiseSeperableConv(nn.Module):
- def __init__(self, in_dim, out_dim, *args, **kwargs):
- super().__init__()
- if 'groups' in kwargs:
- # ignoring groups for Depthwise Sep Conv
- del kwargs['groups']
-
- self.depthwise = nn.Conv2d(in_dim, in_dim, *args, groups=in_dim, **kwargs)
- self.pointwise = nn.Conv2d(in_dim, out_dim, kernel_size=1)
-
- def forward(self, x):
- out = self.depthwise(x)
- out = self.pointwise(out)
- return out
\ No newline at end of file
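A minimal sketch of using the block above as a drop-in replacement for a standard convolution; it assumes the class is importable and only checks the output shape.

import torch

conv = DepthWiseSeperableConv(in_dim=16, out_dim=32, kernel_size=3, padding=1)
x = torch.randn(2, 16, 64, 64)
print(conv(x).shape)   # torch.Size([2, 32, 64, 64]), same as nn.Conv2d(16, 32, 3, padding=1)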
diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py
deleted file mode 100644
index 258b618cd338322365dfa25bec468a0a3f70ccd1..0000000000000000000000000000000000000000
--- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-import IPython.display as ipd
-import torch
-import commons
-import utils
-import ONNXVITS_infer
-from text import text_to_sequence
-
-def get_text(text, hps):
- text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
-hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json")
-
-net_g = ONNXVITS_infer.SynthesizerTrn(
- len(hps.symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model)
-_ = net_g.eval()
-
-_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g)
-
-text1 = get_text("おはようございます。", hps)
-stn_tst = text1
-with torch.no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
- sid = torch.LongTensor([0])
- audio = net_g.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy()
-print(audio)
\ No newline at end of file
diff --git a/spaces/AlignmentResearch/tuned-lens/Dockerfile b/spaces/AlignmentResearch/tuned-lens/Dockerfile
deleted file mode 100644
index b37462737be5068087d02d7f3e81b162491b5acd..0000000000000000000000000000000000000000
--- a/spaces/AlignmentResearch/tuned-lens/Dockerfile
+++ /dev/null
@@ -1,25 +0,0 @@
-FROM python:3.9
-
-WORKDIR /code
-
-COPY ./requirements.txt /code/requirements.txt
-
-RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -m -u 1000 user
-
-# Switch to the "user" user
-USER user
-
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app
-
-CMD ["python", "app.py"]
diff --git a/spaces/Ameaou/academic-chatgpt3.1/check_proxy.py b/spaces/Ameaou/academic-chatgpt3.1/check_proxy.py
deleted file mode 100644
index 28711a8c140bfcdb0683efd924032e6ccc0f0df8..0000000000000000000000000000000000000000
--- a/spaces/Ameaou/academic-chatgpt3.1/check_proxy.py
+++ /dev/null
@@ -1,149 +0,0 @@
-
-def check_proxy(proxies):
- import requests
- proxies_https = proxies['https'] if proxies is not None else '无'
- try:
- response = requests.get("https://ipapi.co/json/",
- proxies=proxies, timeout=4)
- data = response.json()
- print(f'查询代理的地理位置,返回的结果是{data}')
- if 'country_name' in data:
- country = data['country_name']
- result = f"代理配置 {proxies_https}, 代理所在地:{country}"
- elif 'error' in data:
- result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限"
- print(result)
- return result
- except:
- result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效"
- print(result)
- return result
-
-
-def backup_and_download(current_version, remote_version):
- """
-    One-click update protocol: back up the current version and download the new one
- """
- from toolbox import get_conf
- import shutil
- import os
- import requests
- import zipfile
- os.makedirs(f'./history', exist_ok=True)
- backup_dir = f'./history/backup-{current_version}/'
- new_version_dir = f'./history/new-version-{remote_version}/'
- if os.path.exists(new_version_dir):
- return new_version_dir
- os.makedirs(new_version_dir)
- shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history'])
- proxies, = get_conf('proxies')
- r = requests.get(
- 'https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
- zip_file_path = backup_dir+'/master.zip'
- with open(zip_file_path, 'wb+') as f:
- f.write(r.content)
- dst_path = new_version_dir
- with zipfile.ZipFile(zip_file_path, "r") as zip_ref:
- for zip_info in zip_ref.infolist():
- dst_file_path = os.path.join(dst_path, zip_info.filename)
- if os.path.exists(dst_file_path):
- os.remove(dst_file_path)
- zip_ref.extract(zip_info, dst_path)
- return new_version_dir
-
-
-def patch_and_restart(path):
- """
-    One-click update protocol: overwrite the code and restart
- """
-    import distutils.dir_util  # import the submodule explicitly so distutils.dir_util.copy_tree below is guaranteed to resolve
- import shutil
- import os
- import sys
- import time
- from colorful import print亮黄, print亮绿, print亮红
- # if not using config_private, move origin config.py as config_private.py
- if not os.path.exists('config_private.py'):
- print亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,',
- '另外您可以随时在history子文件夹下找回旧版的程序。')
- shutil.copyfile('config.py', 'config_private.py')
- distutils.dir_util.copy_tree(path+'/chatgpt_academic-master', './')
- import subprocess
- print亮绿('代码已经更新,即将更新pip包依赖……')
- for i in reversed(range(5)): time.sleep(1); print(i)
- try:
- subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt'])
- except:
- print亮红('pip包依赖安装出现问题,需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
- print亮绿('更新完成,您可以随时在history子文件夹下找回旧版的程序,5s之后重启')
- print亮红('假如重启失败,您可能需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
- print(' ------------------------------ -----------------------------------')
- for i in reversed(range(8)): time.sleep(1); print(i)
- os.execl(sys.executable, sys.executable, *sys.argv)
-
-
-def get_current_version():
- import json
- try:
- with open('./version', 'r', encoding='utf8') as f:
- current_version = json.loads(f.read())['version']
- except:
- current_version = ""
- return current_version
-
-
-def auto_update():
- """
-    One-click update protocol: check the remote version and ask the user whether to update
- """
- try:
- from toolbox import get_conf
- import requests
- import time
- import json
- proxies, = get_conf('proxies')
- response = requests.get(
- "https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
- remote_json_data = json.loads(response.text)
- remote_version = remote_json_data['version']
- if remote_json_data["show_feature"]:
- new_feature = "新功能:" + remote_json_data["new_feature"]
- else:
- new_feature = ""
- with open('./version', 'r', encoding='utf8') as f:
- current_version = f.read()
- current_version = json.loads(current_version)['version']
- if (remote_version - current_version) >= 0.01:
- from colorful import print亮黄
- print亮黄(
- f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')
- print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
- user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?')
- if user_instruction in ['Y', 'y']:
- path = backup_and_download(current_version, remote_version)
- try:
- patch_and_restart(path)
- except:
- print('更新失败。')
- else:
- print('自动更新程序:已禁用')
- return
- else:
- return
- except:
- print('自动更新程序:已禁用')
-
-def warm_up_modules():
- print('正在执行一些模块的预热...')
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- enc.encode("模块预热", disallowed_special=())
- enc = model_info["gpt-4"]['tokenizer']
- enc.encode("模块预热", disallowed_special=())
-
-if __name__ == '__main__':
- import os
-    os.environ['no_proxy'] = '*'  # avoid unexpected interference from the proxy network
- from toolbox import get_conf
- proxies, = get_conf('proxies')
- check_proxy(proxies)
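A hedged example of calling check_proxy with an explicit proxies dict instead of reading it from the project config; the address below is a placeholder for whatever local proxy is actually in use.

proxies = {
    'http': 'http://127.0.0.1:7890',    # hypothetical local proxy
    'https': 'http://127.0.0.1:7890',
}
print(check_proxy(proxies))             # reports the proxy's apparent location, or a timeout message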
diff --git a/spaces/Andy1621/uniformer_image_detection/README.md b/spaces/Andy1621/uniformer_image_detection/README.md
deleted file mode 100644
index b64b353e4e012c612228f0027483770d4a8ecf72..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Uniformer_image_detection
-emoji: 🌍
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.0.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_fpn_1x_coco.py
deleted file mode 100644
index 26f95a3402f9fd2d54c5919484e2f4958beb8a34..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,18 +0,0 @@
-_base_ = [
- '../_base_/models/rpn_r50_fpn.py', '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_label=False),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes']),
-]
-data = dict(train=dict(pipeline=train_pipeline))
-evaluation = dict(interval=1, metric='proposal_fast')
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index a441013a4c1adc39fc064dbac23caaac9efdc4a6..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/psanet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes.py
deleted file mode 100644
index 284be6d09af1806b99bee5b85286b55ce02e8cbd..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = './pspnet_r50-d8_769x769_80k_cityscapes.py'
-model = dict(
- pretrained='torchvision://resnet18',
- backbone=dict(type='ResNet', depth=18),
- decode_head=dict(
- in_channels=512,
- channels=128,
- ),
- auxiliary_head=dict(in_channels=256, channels=64))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-stream.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-stream.py
deleted file mode 100644
index 966ca6f62d97e25557037602bb2202b7197f7066..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-stream.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import asyncio
-import json
-import sys
-
-try:
- import websockets
-except ImportError:
- print("Websockets package not found. Make sure it's installed.")
-
-# For local streaming, the websockets are hosted without ssl - ws://
-HOST = 'localhost:5005'
-URI = f'ws://{HOST}/api/v1/stream'
-
-# For reverse-proxied streaming, the remote will likely host with ssl - wss://
-# URI = 'wss://your-uri-here.trycloudflare.com/api/v1/stream'
-
-
-async def run(context):
- # Note: the selected defaults change from time to time.
- request = {
- 'prompt': context,
- 'max_new_tokens': 250,
- 'auto_max_new_tokens': False,
- 'max_tokens_second': 0,
-
- # Generation params. If 'preset' is set to different than 'None', the values
- # in presets/preset-name.yaml are used instead of the individual numbers.
- 'preset': 'None',
- 'do_sample': True,
- 'temperature': 0.7,
- 'top_p': 0.1,
- 'typical_p': 1,
- 'epsilon_cutoff': 0, # In units of 1e-4
- 'eta_cutoff': 0, # In units of 1e-4
- 'tfs': 1,
- 'top_a': 0,
- 'repetition_penalty': 1.18,
- 'repetition_penalty_range': 0,
- 'top_k': 40,
- 'min_length': 0,
- 'no_repeat_ngram_size': 0,
- 'num_beams': 1,
- 'penalty_alpha': 0,
- 'length_penalty': 1,
- 'early_stopping': False,
- 'mirostat_mode': 0,
- 'mirostat_tau': 5,
- 'mirostat_eta': 0.1,
- 'grammar_string': '',
- 'guidance_scale': 1,
- 'negative_prompt': '',
-
- 'seed': -1,
- 'add_bos_token': True,
- 'truncation_length': 2048,
- 'ban_eos_token': False,
- 'custom_token_bans': '',
- 'skip_special_tokens': True,
- 'stopping_strings': []
- }
-
- async with websockets.connect(URI, ping_interval=None) as websocket:
- await websocket.send(json.dumps(request))
-
- yield context # Remove this if you just want to see the reply
-
- while True:
- incoming_data = await websocket.recv()
- incoming_data = json.loads(incoming_data)
-
- match incoming_data['event']:
- case 'text_stream':
- yield incoming_data['text']
- case 'stream_end':
- return
-
-
-async def print_response_stream(prompt):
- async for response in run(prompt):
- print(response, end='')
- sys.stdout.flush() # If we don't flush, we won't see tokens in realtime.
-
-
-if __name__ == '__main__':
- prompt = "In order to make homemade bread, follow these steps:\n1)"
- asyncio.run(print_response_stream(prompt))
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/session.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/session.py
deleted file mode 100644
index 6c40ade1595df0ed4d2963b819211491d55b0aa5..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/session.py
+++ /dev/null
@@ -1,517 +0,0 @@
-"""PipSession and supporting code, containing all pip-specific
-network request configuration and behavior.
-"""
-
-import email.utils
-import io
-import ipaddress
-import json
-import logging
-import mimetypes
-import os
-import platform
-import shutil
-import subprocess
-import sys
-import urllib.parse
-import warnings
-from typing import (
- TYPE_CHECKING,
- Any,
- Dict,
- Generator,
- List,
- Mapping,
- Optional,
- Sequence,
- Tuple,
- Union,
-)
-
-from pip._vendor import requests, urllib3
-from pip._vendor.cachecontrol import CacheControlAdapter as _BaseCacheControlAdapter
-from pip._vendor.requests.adapters import DEFAULT_POOLBLOCK, BaseAdapter
-from pip._vendor.requests.adapters import HTTPAdapter as _BaseHTTPAdapter
-from pip._vendor.requests.models import PreparedRequest, Response
-from pip._vendor.requests.structures import CaseInsensitiveDict
-from pip._vendor.urllib3.connectionpool import ConnectionPool
-from pip._vendor.urllib3.exceptions import InsecureRequestWarning
-
-from pip import __version__
-from pip._internal.metadata import get_default_environment
-from pip._internal.models.link import Link
-from pip._internal.network.auth import MultiDomainBasicAuth
-from pip._internal.network.cache import SafeFileCache
-
-# Import ssl from compat so the initial import occurs in only one place.
-from pip._internal.utils.compat import has_tls
-from pip._internal.utils.glibc import libc_ver
-from pip._internal.utils.misc import build_url_from_netloc, parse_netloc
-from pip._internal.utils.urls import url_to_path
-
-if TYPE_CHECKING:
- from ssl import SSLContext
-
- from pip._vendor.urllib3.poolmanager import PoolManager
-
-
-logger = logging.getLogger(__name__)
-
-SecureOrigin = Tuple[str, str, Optional[Union[int, str]]]
-
-
-# Ignore warning raised when using --trusted-host.
-warnings.filterwarnings("ignore", category=InsecureRequestWarning)
-
-
-SECURE_ORIGINS: List[SecureOrigin] = [
- # protocol, hostname, port
- # Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC)
- ("https", "*", "*"),
- ("*", "localhost", "*"),
- ("*", "127.0.0.0/8", "*"),
- ("*", "::1/128", "*"),
- ("file", "*", None),
- # ssh is always secure.
- ("ssh", "*", "*"),
-]
-
-
-# These are environment variables present when running under various
-# CI systems. For each variable, some CI systems that use the variable
-# are indicated. The collection was chosen so that for each of a number
-# of popular systems, at least one of the environment variables is used.
-# This list is used to provide some indication of and lower bound for
-# CI traffic to PyPI. Thus, it is okay if the list is not comprehensive.
-# For more background, see: https://github.com/pypa/pip/issues/5499
-CI_ENVIRONMENT_VARIABLES = (
- # Azure Pipelines
- "BUILD_BUILDID",
- # Jenkins
- "BUILD_ID",
- # AppVeyor, CircleCI, Codeship, Gitlab CI, Shippable, Travis CI
- "CI",
- # Explicit environment variable.
- "PIP_IS_CI",
-)
-
-
-def looks_like_ci() -> bool:
- """
- Return whether it looks like pip is running under CI.
- """
- # We don't use the method of checking for a tty (e.g. using isatty())
- # because some CI systems mimic a tty (e.g. Travis CI). Thus that
- # method doesn't provide definitive information in either direction.
- return any(name in os.environ for name in CI_ENVIRONMENT_VARIABLES)
-
-
-def user_agent() -> str:
- """
- Return a string representing the user agent.
- """
- data: Dict[str, Any] = {
- "installer": {"name": "pip", "version": __version__},
- "python": platform.python_version(),
- "implementation": {
- "name": platform.python_implementation(),
- },
- }
-
- if data["implementation"]["name"] == "CPython":
- data["implementation"]["version"] = platform.python_version()
- elif data["implementation"]["name"] == "PyPy":
- pypy_version_info = sys.pypy_version_info # type: ignore
- if pypy_version_info.releaselevel == "final":
- pypy_version_info = pypy_version_info[:3]
- data["implementation"]["version"] = ".".join(
- [str(x) for x in pypy_version_info]
- )
- elif data["implementation"]["name"] == "Jython":
- # Complete Guess
- data["implementation"]["version"] = platform.python_version()
- elif data["implementation"]["name"] == "IronPython":
- # Complete Guess
- data["implementation"]["version"] = platform.python_version()
-
- if sys.platform.startswith("linux"):
- from pip._vendor import distro
-
- linux_distribution = distro.name(), distro.version(), distro.codename()
- distro_infos: Dict[str, Any] = dict(
- filter(
- lambda x: x[1],
- zip(["name", "version", "id"], linux_distribution),
- )
- )
- libc = dict(
- filter(
- lambda x: x[1],
- zip(["lib", "version"], libc_ver()),
- )
- )
- if libc:
- distro_infos["libc"] = libc
- if distro_infos:
- data["distro"] = distro_infos
-
- if sys.platform.startswith("darwin") and platform.mac_ver()[0]:
- data["distro"] = {"name": "macOS", "version": platform.mac_ver()[0]}
-
- if platform.system():
- data.setdefault("system", {})["name"] = platform.system()
-
- if platform.release():
- data.setdefault("system", {})["release"] = platform.release()
-
- if platform.machine():
- data["cpu"] = platform.machine()
-
- if has_tls():
- import _ssl as ssl
-
- data["openssl_version"] = ssl.OPENSSL_VERSION
-
- setuptools_dist = get_default_environment().get_distribution("setuptools")
- if setuptools_dist is not None:
- data["setuptools_version"] = str(setuptools_dist.version)
-
- if shutil.which("rustc") is not None:
- # If for any reason `rustc --version` fails, silently ignore it
- try:
- rustc_output = subprocess.check_output(
- ["rustc", "--version"], stderr=subprocess.STDOUT, timeout=0.5
- )
- except Exception:
- pass
- else:
- if rustc_output.startswith(b"rustc "):
- # The format of `rustc --version` is:
- # `b'rustc 1.52.1 (9bc8c42bb 2021-05-09)\n'`
- # We extract just the middle (1.52.1) part
- data["rustc_version"] = rustc_output.split(b" ")[1].decode()
-
- # Use None rather than False so as not to give the impression that
- # pip knows it is not being run under CI. Rather, it is a null or
- # inconclusive result. Also, we include some value rather than no
- # value to make it easier to know that the check has been run.
- data["ci"] = True if looks_like_ci() else None
-
- user_data = os.environ.get("PIP_USER_AGENT_USER_DATA")
- if user_data is not None:
- data["user_data"] = user_data
-
- return "{data[installer][name]}/{data[installer][version]} {json}".format(
- data=data,
- json=json.dumps(data, separators=(",", ":"), sort_keys=True),
- )
-
-
-class LocalFSAdapter(BaseAdapter):
- def send(
- self,
- request: PreparedRequest,
- stream: bool = False,
- timeout: Optional[Union[float, Tuple[float, float]]] = None,
- verify: Union[bool, str] = True,
- cert: Optional[Union[str, Tuple[str, str]]] = None,
- proxies: Optional[Mapping[str, str]] = None,
- ) -> Response:
- pathname = url_to_path(request.url)
-
- resp = Response()
- resp.status_code = 200
- resp.url = request.url
-
- try:
- stats = os.stat(pathname)
- except OSError as exc:
- # format the exception raised as a io.BytesIO object,
- # to return a better error message:
- resp.status_code = 404
- resp.reason = type(exc).__name__
- resp.raw = io.BytesIO(f"{resp.reason}: {exc}".encode("utf8"))
- else:
- modified = email.utils.formatdate(stats.st_mtime, usegmt=True)
- content_type = mimetypes.guess_type(pathname)[0] or "text/plain"
- resp.headers = CaseInsensitiveDict(
- {
- "Content-Type": content_type,
- "Content-Length": stats.st_size,
- "Last-Modified": modified,
- }
- )
-
- resp.raw = open(pathname, "rb")
- resp.close = resp.raw.close
-
- return resp
-
- def close(self) -> None:
- pass
-
-
-class _SSLContextAdapterMixin:
- """Mixin to add the ``ssl_context`` constructor argument to HTTP adapters.
-
- The additional argument is forwarded directly to the pool manager. This allows us
- to dynamically decide what SSL store to use at runtime, which is used to implement
- the optional ``truststore`` backend.
- """
-
- def __init__(
- self,
- *,
- ssl_context: Optional["SSLContext"] = None,
- **kwargs: Any,
- ) -> None:
- self._ssl_context = ssl_context
- super().__init__(**kwargs)
-
- def init_poolmanager(
- self,
- connections: int,
- maxsize: int,
- block: bool = DEFAULT_POOLBLOCK,
- **pool_kwargs: Any,
- ) -> "PoolManager":
- if self._ssl_context is not None:
- pool_kwargs.setdefault("ssl_context", self._ssl_context)
- return super().init_poolmanager( # type: ignore[misc]
- connections=connections,
- maxsize=maxsize,
- block=block,
- **pool_kwargs,
- )
-
-
-class HTTPAdapter(_SSLContextAdapterMixin, _BaseHTTPAdapter):
- pass
-
-
-class CacheControlAdapter(_SSLContextAdapterMixin, _BaseCacheControlAdapter):
- pass
-
-
-class InsecureHTTPAdapter(HTTPAdapter):
- def cert_verify(
- self,
- conn: ConnectionPool,
- url: str,
- verify: Union[bool, str],
- cert: Optional[Union[str, Tuple[str, str]]],
- ) -> None:
- super().cert_verify(conn=conn, url=url, verify=False, cert=cert)
-
-
-class InsecureCacheControlAdapter(CacheControlAdapter):
- def cert_verify(
- self,
- conn: ConnectionPool,
- url: str,
- verify: Union[bool, str],
- cert: Optional[Union[str, Tuple[str, str]]],
- ) -> None:
- super().cert_verify(conn=conn, url=url, verify=False, cert=cert)
-
-
-class PipSession(requests.Session):
- timeout: Optional[int] = None
-
- def __init__(
- self,
- *args: Any,
- retries: int = 0,
- cache: Optional[str] = None,
- trusted_hosts: Sequence[str] = (),
- index_urls: Optional[List[str]] = None,
- ssl_context: Optional["SSLContext"] = None,
- **kwargs: Any,
- ) -> None:
- """
- :param trusted_hosts: Domains not to emit warnings for when not using
- HTTPS.
- """
- super().__init__(*args, **kwargs)
-
- # Namespace the attribute with "pip_" just in case to prevent
- # possible conflicts with the base class.
- self.pip_trusted_origins: List[Tuple[str, Optional[int]]] = []
-
- # Attach our User Agent to the request
- self.headers["User-Agent"] = user_agent()
-
- # Attach our Authentication handler to the session
- self.auth = MultiDomainBasicAuth(index_urls=index_urls)
-
- # Create our urllib3.Retry instance which will allow us to customize
- # how we handle retries.
- retries = urllib3.Retry(
- # Set the total number of retries that a particular request can
- # have.
- total=retries,
- # A 503 error from PyPI typically means that the Fastly -> Origin
- # connection got interrupted in some way. A 503 error in general
- # is typically considered a transient error so we'll go ahead and
- # retry it.
- # A 500 may indicate transient error in Amazon S3
- # A 520 or 527 - may indicate transient error in CloudFlare
- status_forcelist=[500, 503, 520, 527],
- # Add a small amount of back off between failed requests in
- # order to prevent hammering the service.
- backoff_factor=0.25,
- ) # type: ignore
-
- # Our Insecure HTTPAdapter disables HTTPS validation. It does not
- # support caching so we'll use it for all http:// URLs.
- # If caching is disabled, we will also use it for
- # https:// hosts that we've marked as ignoring
- # TLS errors for (trusted-hosts).
- insecure_adapter = InsecureHTTPAdapter(max_retries=retries)
-
- # We want to _only_ cache responses on securely fetched origins or when
- # the host is specified as trusted. We do this because
- # we can't validate the response of an insecurely/untrusted fetched
- # origin, and we don't want someone to be able to poison the cache and
- # require manual eviction from the cache to fix it.
- if cache:
- secure_adapter = CacheControlAdapter(
- cache=SafeFileCache(cache),
- max_retries=retries,
- ssl_context=ssl_context,
- )
- self._trusted_host_adapter = InsecureCacheControlAdapter(
- cache=SafeFileCache(cache),
- max_retries=retries,
- )
- else:
- secure_adapter = HTTPAdapter(max_retries=retries, ssl_context=ssl_context)
- self._trusted_host_adapter = insecure_adapter
-
- self.mount("https://", secure_adapter)
- self.mount("http://", insecure_adapter)
-
- # Enable file:// urls
- self.mount("file://", LocalFSAdapter())
-
- for host in trusted_hosts:
- self.add_trusted_host(host, suppress_logging=True)
-
- def update_index_urls(self, new_index_urls: List[str]) -> None:
- """
- :param new_index_urls: New index urls to update the authentication
- handler with.
- """
- self.auth.index_urls = new_index_urls
-
- def add_trusted_host(
- self, host: str, source: Optional[str] = None, suppress_logging: bool = False
- ) -> None:
- """
- :param host: It is okay to provide a host that has previously been
- added.
- :param source: An optional source string, for logging where the host
- string came from.
- """
- if not suppress_logging:
- msg = f"adding trusted host: {host!r}"
- if source is not None:
- msg += f" (from {source})"
- logger.info(msg)
-
- host_port = parse_netloc(host)
- if host_port not in self.pip_trusted_origins:
- self.pip_trusted_origins.append(host_port)
-
- self.mount(
- build_url_from_netloc(host, scheme="http") + "/", self._trusted_host_adapter
- )
- self.mount(build_url_from_netloc(host) + "/", self._trusted_host_adapter)
- if not host_port[1]:
- self.mount(
- build_url_from_netloc(host, scheme="http") + ":",
- self._trusted_host_adapter,
- )
- # Mount wildcard ports for the same host.
- self.mount(build_url_from_netloc(host) + ":", self._trusted_host_adapter)
-
- def iter_secure_origins(self) -> Generator[SecureOrigin, None, None]:
- yield from SECURE_ORIGINS
- for host, port in self.pip_trusted_origins:
- yield ("*", host, "*" if port is None else port)
-
- def is_secure_origin(self, location: Link) -> bool:
- # Determine if this url used a secure transport mechanism
- parsed = urllib.parse.urlparse(str(location))
- origin_protocol, origin_host, origin_port = (
- parsed.scheme,
- parsed.hostname,
- parsed.port,
- )
-
- # The protocol to use to see if the protocol matches.
- # Don't count the repository type as part of the protocol: in
- # cases such as "git+ssh", only use "ssh". (I.e., Only verify against
- # the last scheme.)
- origin_protocol = origin_protocol.rsplit("+", 1)[-1]
-
- # Determine if our origin is a secure origin by looking through our
- # hardcoded list of secure origins, as well as any additional ones
- # configured on this PackageFinder instance.
- for secure_origin in self.iter_secure_origins():
- secure_protocol, secure_host, secure_port = secure_origin
- if origin_protocol != secure_protocol and secure_protocol != "*":
- continue
-
- try:
- addr = ipaddress.ip_address(origin_host or "")
- network = ipaddress.ip_network(secure_host)
- except ValueError:
-                # We don't have both a valid address and a valid network, so
- # we'll check this origin against hostnames.
- if (
- origin_host
- and origin_host.lower() != secure_host.lower()
- and secure_host != "*"
- ):
- continue
- else:
- # We have a valid address and network, so see if the address
- # is contained within the network.
- if addr not in network:
- continue
-
- # Check to see if the port matches.
- if (
- origin_port != secure_port
- and secure_port != "*"
- and secure_port is not None
- ):
- continue
-
- # If we've gotten here, then this origin matches the current
- # secure origin and we should return True
- return True
-
- # If we've gotten to this point, then the origin isn't secure and we
- # will not accept it as a valid location to search. We will however
- # log a warning that we are ignoring it.
- logger.warning(
- "The repository located at %s is not a trusted or secure host and "
- "is being ignored. If this repository is available via HTTPS we "
- "recommend you use HTTPS instead, otherwise you may silence "
- "this warning and allow it anyway with '--trusted-host %s'.",
- origin_host,
- origin_host,
- )
-
- return False
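-
-    # For example, after ``add_trusted_host("pypi.internal")`` (a hypothetical host),
-    # iter_secure_origins() additionally yields ("*", "pypi.internal", "*"), so
-    # is_secure_origin() accepts http://pypi.internal/simple/ as well as
-    # http://pypi.internal:8080/simple/ instead of logging the warning above.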
-
- def request(self, method: str, url: str, *args: Any, **kwargs: Any) -> Response:
- # Allow setting a default timeout on a session
- kwargs.setdefault("timeout", self.timeout)
- # Allow setting a default proxies on a session
- kwargs.setdefault("proxies", self.proxies)
-
- # Dispatch the actual request
- return super().request(method, url, *args, **kwargs)
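-
-
-if __name__ == "__main__":
-    # Minimal, self-contained sketch of the per-prefix adapter mounting relied on
-    # above, written against the standalone requests/urllib3 packages (an
-    # assumption; pip uses its vendored copies). The host name is hypothetical.
-    import requests
-    from requests.adapters import HTTPAdapter
-    from urllib3.util.retry import Retry
-
-    retries = Retry(total=3, status_forcelist=[500, 503, 520, 527], backoff_factor=0.25)
-    session = requests.Session()
-    # requests picks the adapter with the longest matching URL prefix, so the
-    # host-specific mount takes precedence over the generic "https://" mount.
-    session.mount("https://", HTTPAdapter(max_retries=retries))
-    session.mount("https://pypi.internal.example/", HTTPAdapter(max_retries=retries))
-    # GETs made through `session` now retry transient 5xx responses with a small backoff.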
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py
deleted file mode 100644
index 2cba4b0708032d62b4c1278f99e5db87ed8d90fe..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# SPDX-FileCopyrightText: 2015 Eric Larson
-#
-# SPDX-License-Identifier: Apache-2.0
-
-from __future__ import division
-
-from datetime import datetime
-from pip._vendor.cachecontrol.cache import BaseCache
-
-
-class RedisCache(BaseCache):
-
- def __init__(self, conn):
- self.conn = conn
-
- def get(self, key):
- return self.conn.get(key)
-
- def set(self, key, value, expires=None):
- if not expires:
- self.conn.set(key, value)
- elif isinstance(expires, datetime):
- expires = expires - datetime.utcnow()
- self.conn.setex(key, int(expires.total_seconds()), value)
- else:
- self.conn.setex(key, expires, value)
-
- def delete(self, key):
- self.conn.delete(key)
-
- def clear(self):
- """Helper for clearing all the keys in a database. Use with
- caution!"""
- for key in self.conn.keys():
- self.conn.delete(key)
-
- def close(self):
- """Redis uses connection pooling, no need to close the connection."""
- pass
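-
-
-if __name__ == "__main__":
-    # Minimal usage sketch: plug RedisCache into CacheControl. Assumes the
-    # standalone `cachecontrol`, `redis` and `requests` packages and a Redis
-    # server reachable on localhost:6379 (none of which this vendored copy implies).
-    import redis
-    import requests
-    from cachecontrol import CacheControl
-
-    pool = redis.Redis(host="localhost", port=6379)
-    sess = CacheControl(requests.Session(), cache=RedisCache(pool))
-    # Responses fetched through `sess` can now be stored in Redis; a datetime
-    # passed as `expires` is converted to a TTL by `set()` above.
-    sess.get("https://example.com/")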
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/results.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/results.py
deleted file mode 100644
index 00c9421d3b0362526b8f90dc01e8db73841e0b61..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/results.py
+++ /dev/null
@@ -1,760 +0,0 @@
-# results.py
-from collections.abc import MutableMapping, Mapping, MutableSequence, Iterator
-import pprint
-from weakref import ref as wkref
-from typing import Tuple, Any
-
-str_type: Tuple[type, ...] = (str, bytes)
-_generator_type = type((_ for _ in ()))
-
-
-class _ParseResultsWithOffset:
- __slots__ = ["tup"]
-
- def __init__(self, p1, p2):
- self.tup = (p1, p2)
-
- def __getitem__(self, i):
- return self.tup[i]
-
- def __getstate__(self):
- return self.tup
-
- def __setstate__(self, *args):
- self.tup = args[0]
-
-
-class ParseResults:
- """Structured parse results, to provide multiple means of access to
- the parsed data:
-
- - as a list (``len(results)``)
- - by list index (``results[0], results[1]``, etc.)
-    - by attribute (``results.<results_name>`` - see :class:`ParserElement.set_results_name`)
-
- Example::
-
- integer = Word(nums)
- date_str = (integer.set_results_name("year") + '/'
- + integer.set_results_name("month") + '/'
- + integer.set_results_name("day"))
- # equivalent form:
- # date_str = (integer("year") + '/'
- # + integer("month") + '/'
- # + integer("day"))
-
- # parse_string returns a ParseResults object
- result = date_str.parse_string("1999/12/31")
-
- def test(s, fn=repr):
- print("{} -> {}".format(s, fn(eval(s))))
- test("list(result)")
- test("result[0]")
- test("result['month']")
- test("result.day")
- test("'month' in result")
- test("'minutes' in result")
- test("result.dump()", str)
-
- prints::
-
- list(result) -> ['1999', '/', '12', '/', '31']
- result[0] -> '1999'
- result['month'] -> '12'
- result.day -> '31'
- 'month' in result -> True
- 'minutes' in result -> False
- result.dump() -> ['1999', '/', '12', '/', '31']
- - day: '31'
- - month: '12'
- - year: '1999'
- """
-
- _null_values: Tuple[Any, ...] = (None, [], "", ())
-
- __slots__ = [
- "_name",
- "_parent",
- "_all_names",
- "_modal",
- "_toklist",
- "_tokdict",
- "__weakref__",
- ]
-
- class List(list):
- """
- Simple wrapper class to distinguish parsed list results that should be preserved
- as actual Python lists, instead of being converted to :class:`ParseResults`:
-
- LBRACK, RBRACK = map(pp.Suppress, "[]")
- element = pp.Forward()
- item = ppc.integer
- element_list = LBRACK + pp.delimited_list(element) + RBRACK
-
- # add parse actions to convert from ParseResults to actual Python collection types
- def as_python_list(t):
- return pp.ParseResults.List(t.as_list())
- element_list.add_parse_action(as_python_list)
-
- element <<= item | element_list
-
- element.run_tests('''
- 100
- [2,3,4]
- [[2, 1],3,4]
- [(2, 1),3,4]
- (2,3,4)
- ''', post_parse=lambda s, r: (r[0], type(r[0])))
-
- prints:
-
- 100
-            (100, <class 'int'>)
-
- [2,3,4]
-            ([2, 3, 4], <class 'list'>)
-
- [[2, 1],3,4]
-            ([[2, 1], 3, 4], <class 'list'>)
-
- (Used internally by :class:`Group` when `aslist=True`.)
- """
-
- def __new__(cls, contained=None):
- if contained is None:
- contained = []
-
- if not isinstance(contained, list):
- raise TypeError(
- "{} may only be constructed with a list,"
- " not {}".format(cls.__name__, type(contained).__name__)
- )
-
- return list.__new__(cls)
-
- def __new__(cls, toklist=None, name=None, **kwargs):
- if isinstance(toklist, ParseResults):
- return toklist
- self = object.__new__(cls)
- self._name = None
- self._parent = None
- self._all_names = set()
-
- if toklist is None:
- self._toklist = []
- elif isinstance(toklist, (list, _generator_type)):
- self._toklist = (
- [toklist[:]]
- if isinstance(toklist, ParseResults.List)
- else list(toklist)
- )
- else:
- self._toklist = [toklist]
- self._tokdict = dict()
- return self
-
- # Performance tuning: we construct a *lot* of these, so keep this
- # constructor as small and fast as possible
- def __init__(
- self, toklist=None, name=None, asList=True, modal=True, isinstance=isinstance
- ):
- self._modal = modal
- if name is not None and name != "":
- if isinstance(name, int):
- name = str(name)
- if not modal:
- self._all_names = {name}
- self._name = name
- if toklist not in self._null_values:
- if isinstance(toklist, (str_type, type)):
- toklist = [toklist]
- if asList:
- if isinstance(toklist, ParseResults):
- self[name] = _ParseResultsWithOffset(
- ParseResults(toklist._toklist), 0
- )
- else:
- self[name] = _ParseResultsWithOffset(
- ParseResults(toklist[0]), 0
- )
- self[name]._name = name
- else:
- try:
- self[name] = toklist[0]
- except (KeyError, TypeError, IndexError):
- if toklist is not self:
- self[name] = toklist
- else:
- self._name = name
-
- def __getitem__(self, i):
- if isinstance(i, (int, slice)):
- return self._toklist[i]
- else:
- if i not in self._all_names:
- return self._tokdict[i][-1][0]
- else:
- return ParseResults([v[0] for v in self._tokdict[i]])
-
- def __setitem__(self, k, v, isinstance=isinstance):
- if isinstance(v, _ParseResultsWithOffset):
- self._tokdict[k] = self._tokdict.get(k, list()) + [v]
- sub = v[0]
- elif isinstance(k, (int, slice)):
- self._toklist[k] = v
- sub = v
- else:
- self._tokdict[k] = self._tokdict.get(k, list()) + [
- _ParseResultsWithOffset(v, 0)
- ]
- sub = v
- if isinstance(sub, ParseResults):
- sub._parent = wkref(self)
-
- def __delitem__(self, i):
- if isinstance(i, (int, slice)):
- mylen = len(self._toklist)
- del self._toklist[i]
-
- # convert int to slice
- if isinstance(i, int):
- if i < 0:
- i += mylen
- i = slice(i, i + 1)
- # get removed indices
- removed = list(range(*i.indices(mylen)))
- removed.reverse()
- # fixup indices in token dictionary
- for name, occurrences in self._tokdict.items():
- for j in removed:
- for k, (value, position) in enumerate(occurrences):
- occurrences[k] = _ParseResultsWithOffset(
- value, position - (position > j)
- )
- else:
- del self._tokdict[i]
-
- def __contains__(self, k) -> bool:
- return k in self._tokdict
-
- def __len__(self) -> int:
- return len(self._toklist)
-
- def __bool__(self) -> bool:
- return not not (self._toklist or self._tokdict)
-
- def __iter__(self) -> Iterator:
- return iter(self._toklist)
-
- def __reversed__(self) -> Iterator:
- return iter(self._toklist[::-1])
-
- def keys(self):
- return iter(self._tokdict)
-
- def values(self):
- return (self[k] for k in self.keys())
-
- def items(self):
- return ((k, self[k]) for k in self.keys())
-
- def haskeys(self) -> bool:
- """
- Since ``keys()`` returns an iterator, this method is helpful in bypassing
- code that looks for the existence of any defined results names."""
- return bool(self._tokdict)
-
- def pop(self, *args, **kwargs):
- """
- Removes and returns item at specified index (default= ``last``).
- Supports both ``list`` and ``dict`` semantics for ``pop()``. If
- passed no argument or an integer argument, it will use ``list``
- semantics and pop tokens from the list of parsed tokens. If passed
- a non-integer argument (most likely a string), it will use ``dict``
- semantics and pop the corresponding value from any defined results
- names. A second default return value argument is supported, just as in
- ``dict.pop()``.
-
- Example::
-
- numlist = Word(nums)[...]
- print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321']
-
- def remove_first(tokens):
- tokens.pop(0)
- numlist.add_parse_action(remove_first)
- print(numlist.parse_string("0 123 321")) # -> ['123', '321']
-
- label = Word(alphas)
- patt = label("LABEL") + Word(nums)[1, ...]
- print(patt.parse_string("AAB 123 321").dump())
-
- # Use pop() in a parse action to remove named result (note that corresponding value is not
- # removed from list form of results)
- def remove_LABEL(tokens):
- tokens.pop("LABEL")
- return tokens
- patt.add_parse_action(remove_LABEL)
- print(patt.parse_string("AAB 123 321").dump())
-
- prints::
-
- ['AAB', '123', '321']
- - LABEL: 'AAB'
-
- ['AAB', '123', '321']
- """
- if not args:
- args = [-1]
- for k, v in kwargs.items():
- if k == "default":
- args = (args[0], v)
- else:
- raise TypeError(
- "pop() got an unexpected keyword argument {!r}".format(k)
- )
- if isinstance(args[0], int) or len(args) == 1 or args[0] in self:
- index = args[0]
- ret = self[index]
- del self[index]
- return ret
- else:
- defaultvalue = args[1]
- return defaultvalue
-
- def get(self, key, default_value=None):
- """
- Returns named result matching the given key, or if there is no
- such name, then returns the given ``default_value`` or ``None`` if no
- ``default_value`` is specified.
-
- Similar to ``dict.get()``.
-
- Example::
-
- integer = Word(nums)
- date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
-
- result = date_str.parse_string("1999/12/31")
- print(result.get("year")) # -> '1999'
- print(result.get("hour", "not specified")) # -> 'not specified'
- print(result.get("hour")) # -> None
- """
- if key in self:
- return self[key]
- else:
- return default_value
-
- def insert(self, index, ins_string):
- """
- Inserts new element at location index in the list of parsed tokens.
-
- Similar to ``list.insert()``.
-
- Example::
-
- numlist = Word(nums)[...]
- print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321']
-
- # use a parse action to insert the parse location in the front of the parsed results
- def insert_locn(locn, tokens):
- tokens.insert(0, locn)
- numlist.add_parse_action(insert_locn)
- print(numlist.parse_string("0 123 321")) # -> [0, '0', '123', '321']
- """
- self._toklist.insert(index, ins_string)
- # fixup indices in token dictionary
- for name, occurrences in self._tokdict.items():
- for k, (value, position) in enumerate(occurrences):
- occurrences[k] = _ParseResultsWithOffset(
- value, position + (position > index)
- )
-
- def append(self, item):
- """
- Add single element to end of ``ParseResults`` list of elements.
-
- Example::
-
- numlist = Word(nums)[...]
- print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321']
-
- # use a parse action to compute the sum of the parsed integers, and add it to the end
- def append_sum(tokens):
- tokens.append(sum(map(int, tokens)))
- numlist.add_parse_action(append_sum)
- print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321', 444]
- """
- self._toklist.append(item)
-
- def extend(self, itemseq):
- """
- Add sequence of elements to end of ``ParseResults`` list of elements.
-
- Example::
-
- patt = Word(alphas)[1, ...]
-
- # use a parse action to append the reverse of the matched strings, to make a palindrome
- def make_palindrome(tokens):
- tokens.extend(reversed([t[::-1] for t in tokens]))
- return ''.join(tokens)
- patt.add_parse_action(make_palindrome)
- print(patt.parse_string("lskdj sdlkjf lksd")) # -> 'lskdjsdlkjflksddsklfjkldsjdksl'
- """
- if isinstance(itemseq, ParseResults):
- self.__iadd__(itemseq)
- else:
- self._toklist.extend(itemseq)
-
- def clear(self):
- """
- Clear all elements and results names.
- """
- del self._toklist[:]
- self._tokdict.clear()
-
- def __getattr__(self, name):
- try:
- return self[name]
- except KeyError:
- if name.startswith("__"):
- raise AttributeError(name)
- return ""
-
- def __add__(self, other) -> "ParseResults":
- ret = self.copy()
- ret += other
- return ret
-
- def __iadd__(self, other) -> "ParseResults":
- if other._tokdict:
- offset = len(self._toklist)
- addoffset = lambda a: offset if a < 0 else a + offset
- otheritems = other._tokdict.items()
- otherdictitems = [
- (k, _ParseResultsWithOffset(v[0], addoffset(v[1])))
- for k, vlist in otheritems
- for v in vlist
- ]
- for k, v in otherdictitems:
- self[k] = v
- if isinstance(v[0], ParseResults):
- v[0]._parent = wkref(self)
-
- self._toklist += other._toklist
- self._all_names |= other._all_names
- return self
-
- def __radd__(self, other) -> "ParseResults":
- if isinstance(other, int) and other == 0:
- # useful for merging many ParseResults using sum() builtin
- return self.copy()
- else:
- # this may raise a TypeError - so be it
- return other + self
-
- def __repr__(self) -> str:
- return "{}({!r}, {})".format(type(self).__name__, self._toklist, self.as_dict())
-
- def __str__(self) -> str:
- return (
- "["
- + ", ".join(
- [
- str(i) if isinstance(i, ParseResults) else repr(i)
- for i in self._toklist
- ]
- )
- + "]"
- )
-
- def _asStringList(self, sep=""):
- out = []
- for item in self._toklist:
- if out and sep:
- out.append(sep)
- if isinstance(item, ParseResults):
- out += item._asStringList()
- else:
- out.append(str(item))
- return out
-
- def as_list(self) -> list:
- """
- Returns the parse results as a nested list of matching tokens, all converted to strings.
-
- Example::
-
- patt = Word(alphas)[1, ...]
- result = patt.parse_string("sldkj lsdkj sldkj")
- # even though the result prints in string-like form, it is actually a pyparsing ParseResults
-            print(type(result), result) # -> <class 'pyparsing.ParseResults'> ['sldkj', 'lsdkj', 'sldkj']
-
- # Use as_list() to create an actual list
- result_list = result.as_list()
-            print(type(result_list), result_list) # -> <class 'list'> ['sldkj', 'lsdkj', 'sldkj']
- """
- return [
- res.as_list() if isinstance(res, ParseResults) else res
- for res in self._toklist
- ]
-
- def as_dict(self) -> dict:
- """
- Returns the named parse results as a nested dictionary.
-
- Example::
-
- integer = Word(nums)
- date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
-
- result = date_str.parse_string('12/31/1999')
-            print(type(result), repr(result)) # -> <class 'pyparsing.ParseResults'> (['12', '/', '31', '/', '1999'], {'day': [('1999', 4)], 'year': [('12', 0)], 'month': [('31', 2)]})
-
- result_dict = result.as_dict()
-            print(type(result_dict), repr(result_dict)) # -> <class 'dict'> {'day': '1999', 'year': '12', 'month': '31'}
-
- # even though a ParseResults supports dict-like access, sometime you just need to have a dict
- import json
- print(json.dumps(result)) # -> Exception: TypeError: ... is not JSON serializable
- print(json.dumps(result.as_dict())) # -> {"month": "31", "day": "1999", "year": "12"}
- """
-
- def to_item(obj):
- if isinstance(obj, ParseResults):
- return obj.as_dict() if obj.haskeys() else [to_item(v) for v in obj]
- else:
- return obj
-
- return dict((k, to_item(v)) for k, v in self.items())
-
- def copy(self) -> "ParseResults":
- """
- Returns a new copy of a :class:`ParseResults` object.
- """
- ret = ParseResults(self._toklist)
- ret._tokdict = self._tokdict.copy()
- ret._parent = self._parent
- ret._all_names |= self._all_names
- ret._name = self._name
- return ret
-
- def get_name(self):
- r"""
- Returns the results name for this token expression. Useful when several
- different expressions might match at a particular location.
-
- Example::
-
- integer = Word(nums)
- ssn_expr = Regex(r"\d\d\d-\d\d-\d\d\d\d")
- house_number_expr = Suppress('#') + Word(nums, alphanums)
- user_data = (Group(house_number_expr)("house_number")
- | Group(ssn_expr)("ssn")
- | Group(integer)("age"))
- user_info = user_data[1, ...]
-
- result = user_info.parse_string("22 111-22-3333 #221B")
- for item in result:
- print(item.get_name(), ':', item[0])
-
- prints::
-
- age : 22
- ssn : 111-22-3333
- house_number : 221B
- """
- if self._name:
- return self._name
- elif self._parent:
- par = self._parent()
-
- def find_in_parent(sub):
- return next(
- (
- k
- for k, vlist in par._tokdict.items()
- for v, loc in vlist
- if sub is v
- ),
- None,
- )
-
- return find_in_parent(self) if par else None
- elif (
- len(self) == 1
- and len(self._tokdict) == 1
- and next(iter(self._tokdict.values()))[0][1] in (0, -1)
- ):
- return next(iter(self._tokdict.keys()))
- else:
- return None
-
- def dump(self, indent="", full=True, include_list=True, _depth=0) -> str:
- """
- Diagnostic method for listing out the contents of
- a :class:`ParseResults`. Accepts an optional ``indent`` argument so
- that this string can be embedded in a nested display of other data.
-
- Example::
-
- integer = Word(nums)
- date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
-
- result = date_str.parse_string('1999/12/31')
- print(result.dump())
-
- prints::
-
- ['1999', '/', '12', '/', '31']
- - day: '31'
- - month: '12'
- - year: '1999'
- """
- out = []
- NL = "\n"
- out.append(indent + str(self.as_list()) if include_list else "")
-
- if full:
- if self.haskeys():
- items = sorted((str(k), v) for k, v in self.items())
- for k, v in items:
- if out:
- out.append(NL)
- out.append("{}{}- {}: ".format(indent, (" " * _depth), k))
- if isinstance(v, ParseResults):
- if v:
- out.append(
- v.dump(
- indent=indent,
- full=full,
- include_list=include_list,
- _depth=_depth + 1,
- )
- )
- else:
- out.append(str(v))
- else:
- out.append(repr(v))
- if any(isinstance(vv, ParseResults) for vv in self):
- v = self
- for i, vv in enumerate(v):
- if isinstance(vv, ParseResults):
- out.append(
- "\n{}{}[{}]:\n{}{}{}".format(
- indent,
- (" " * (_depth)),
- i,
- indent,
- (" " * (_depth + 1)),
- vv.dump(
- indent=indent,
- full=full,
- include_list=include_list,
- _depth=_depth + 1,
- ),
- )
- )
- else:
- out.append(
- "\n%s%s[%d]:\n%s%s%s"
- % (
- indent,
- (" " * (_depth)),
- i,
- indent,
- (" " * (_depth + 1)),
- str(vv),
- )
- )
-
- return "".join(out)
-
- def pprint(self, *args, **kwargs):
- """
- Pretty-printer for parsed results as a list, using the
-        `pprint <https://docs.python.org/3/library/pprint.html>`_ module.
-        Accepts additional positional or keyword args as defined for
-        `pprint.pprint <https://docs.python.org/3/library/pprint.html#pprint.pprint>`_ .
-
- Example::
-
- ident = Word(alphas, alphanums)
- num = Word(nums)
- func = Forward()
- term = ident | num | Group('(' + func + ')')
- func <<= ident + Group(Optional(delimited_list(term)))
- result = func.parse_string("fna a,b,(fnb c,d,200),100")
- result.pprint(width=40)
-
- prints::
-
- ['fna',
- ['a',
- 'b',
- ['(', 'fnb', ['c', 'd', '200'], ')'],
- '100']]
- """
- pprint.pprint(self.as_list(), *args, **kwargs)
-
- # add support for pickle protocol
- def __getstate__(self):
- return (
- self._toklist,
- (
- self._tokdict.copy(),
- self._parent is not None and self._parent() or None,
- self._all_names,
- self._name,
- ),
- )
-
- def __setstate__(self, state):
- self._toklist, (self._tokdict, par, inAccumNames, self._name) = state
- self._all_names = set(inAccumNames)
- if par is not None:
- self._parent = wkref(par)
- else:
- self._parent = None
-
- def __getnewargs__(self):
- return self._toklist, self._name
-
- def __dir__(self):
- return dir(type(self)) + list(self.keys())
-
- @classmethod
- def from_dict(cls, other, name=None) -> "ParseResults":
- """
- Helper classmethod to construct a ``ParseResults`` from a ``dict``, preserving the
- name-value relations as results names. If an optional ``name`` argument is
- given, a nested ``ParseResults`` will be returned.
- """
-
- def is_iterable(obj):
- try:
- iter(obj)
- except Exception:
- return False
- else:
- return not isinstance(obj, str_type)
-
- ret = cls([])
- for k, v in other.items():
- if isinstance(v, Mapping):
- ret += cls.from_dict(v, name=k)
- else:
- ret += cls([v], name=k, asList=is_iterable(v))
- if name is not None:
- ret = cls([ret], name=name)
- return ret
-
- asList = as_list
- asDict = as_dict
- getName = get_name
-
-
-MutableMapping.register(ParseResults)
-MutableSequence.register(ParseResults)
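-
-
-if __name__ == "__main__":
-    # Small usage sketch for ``from_dict`` (the values are arbitrary examples):
-    # a plain dict becomes a ParseResults whose keys act as results names.
-    pr = ParseResults.from_dict({"year": "1999", "month": "12", "day": "31"})
-    print(pr["month"])   # -> '12'
-    print(pr.year)       # -> '1999'
-    print(pr.as_dict())  # -> {'year': '1999', 'month': '12', 'day': '31'}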
diff --git a/spaces/Awesimo/jojogan/e4e/utils/common.py b/spaces/Awesimo/jojogan/e4e/utils/common.py
deleted file mode 100644
index b19e18ddcb78b06678fa18e4a76da44fc511b789..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/e4e/utils/common.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from PIL import Image
-import matplotlib.pyplot as plt
-
-
-# Log images
-def log_input_image(x, opts):
- return tensor2im(x)
-
-
-def tensor2im(var):
- # var shape: (3, H, W)
- var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy()
- var = ((var + 1) / 2)
- var[var < 0] = 0
- var[var > 1] = 1
- var = var * 255
- return Image.fromarray(var.astype('uint8'))
-
-
-def vis_faces(log_hooks):
- display_count = len(log_hooks)
- fig = plt.figure(figsize=(8, 4 * display_count))
- gs = fig.add_gridspec(display_count, 3)
- for i in range(display_count):
- hooks_dict = log_hooks[i]
- fig.add_subplot(gs[i, 0])
- if 'diff_input' in hooks_dict:
- vis_faces_with_id(hooks_dict, fig, gs, i)
- else:
- vis_faces_no_id(hooks_dict, fig, gs, i)
- plt.tight_layout()
- return fig
-
-
-def vis_faces_with_id(hooks_dict, fig, gs, i):
- plt.imshow(hooks_dict['input_face'])
- plt.title('Input\nOut Sim={:.2f}'.format(float(hooks_dict['diff_input'])))
- fig.add_subplot(gs[i, 1])
- plt.imshow(hooks_dict['target_face'])
- plt.title('Target\nIn={:.2f}, Out={:.2f}'.format(float(hooks_dict['diff_views']),
- float(hooks_dict['diff_target'])))
- fig.add_subplot(gs[i, 2])
- plt.imshow(hooks_dict['output_face'])
- plt.title('Output\n Target Sim={:.2f}'.format(float(hooks_dict['diff_target'])))
-
-
-def vis_faces_no_id(hooks_dict, fig, gs, i):
- plt.imshow(hooks_dict['input_face'], cmap="gray")
- plt.title('Input')
- fig.add_subplot(gs[i, 1])
- plt.imshow(hooks_dict['target_face'])
- plt.title('Target')
- fig.add_subplot(gs[i, 2])
- plt.imshow(hooks_dict['output_face'])
- plt.title('Output')
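-
-
-if __name__ == "__main__":
-    # Small usage sketch: tensor2im expects a (3, H, W) tensor scaled to [-1, 1]
-    # and returns a PIL image. torch is assumed to be installed (the helpers above
-    # already operate on torch tensors).
-    import torch
-
-    fake = torch.rand(3, 64, 64) * 2 - 1  # synthetic image tensor in [-1, 1]
-    img = tensor2im(fake)
-    print(img.size, img.mode)  # -> (64, 64) RGB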
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/layers/test_roi_align_rotated.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/layers/test_roi_align_rotated.py
deleted file mode 100644
index 7323d7d5a86816f337571221313c428238c439f4..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/layers/test_roi_align_rotated.py
+++ /dev/null
@@ -1,176 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import unittest
-import cv2
-import torch
-from torch.autograd import Variable, gradcheck
-
-from detectron2.layers.roi_align import ROIAlign
-from detectron2.layers.roi_align_rotated import ROIAlignRotated
-
-logger = logging.getLogger(__name__)
-
-
-class ROIAlignRotatedTest(unittest.TestCase):
- def _box_to_rotated_box(self, box, angle):
- return [
- (box[0] + box[2]) / 2.0,
- (box[1] + box[3]) / 2.0,
- box[2] - box[0],
- box[3] - box[1],
- angle,
- ]
-
- def _rot90(self, img, num):
- num = num % 4 # note: -1 % 4 == 3
- for _ in range(num):
- img = img.transpose(0, 1).flip(0)
- return img
-
- def test_forward_output_0_90_180_270(self):
- for i in range(4):
- # i = 0, 1, 2, 3 corresponding to 0, 90, 180, 270 degrees
- img = torch.arange(25, dtype=torch.float32).reshape(5, 5)
- """
- 0 1 2 3 4
- 5 6 7 8 9
- 10 11 12 13 14
- 15 16 17 18 19
- 20 21 22 23 24
- """
- box = [1, 1, 3, 3]
- rotated_box = self._box_to_rotated_box(box=box, angle=90 * i)
-
- result = self._simple_roi_align_rotated(img=img, box=rotated_box, resolution=(4, 4))
-
- # Here's an explanation for 0 degree case:
- # point 0 in the original input lies at [0.5, 0.5]
- # (the center of bin [0, 1] x [0, 1])
- # point 1 in the original input lies at [1.5, 0.5], etc.
- # since the resolution is (4, 4) that divides [1, 3] x [1, 3]
- # into 4 x 4 equal bins,
- # the top-left bin is [1, 1.5] x [1, 1.5], and its center
- # (1.25, 1.25) lies at the 3/4 position
- # between point 0 and point 1, point 5 and point 6,
- # point 0 and point 5, point 1 and point 6, so it can be calculated as
- # 0.25*(0*0.25+1*0.75)+(5*0.25+6*0.75)*0.75 = 4.5
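-            # In general each sampled value is the bilinear blend
-            #   v = (1 - dy) * ((1 - dx) * v00 + dx * v01) + dy * ((1 - dx) * v10 + dx * v11)
-            # where (dx, dy) are the fractional offsets of the sample point within the
-            # cell spanned by the four neighbouring pixel centers (here dx = dy = 0.75).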
- result_expected = torch.tensor(
- [
- [4.5, 5.0, 5.5, 6.0],
- [7.0, 7.5, 8.0, 8.5],
- [9.5, 10.0, 10.5, 11.0],
- [12.0, 12.5, 13.0, 13.5],
- ]
- )
- # This is also an upsampled version of [[6, 7], [11, 12]]
-
- # When the box is rotated by 90 degrees CCW,
- # the result would be rotated by 90 degrees CW, thus it's -i here
- result_expected = self._rot90(result_expected, -i)
-
- assert torch.allclose(result, result_expected)
-
- def test_resize(self):
- H, W = 30, 30
- input = torch.rand(H, W) * 100
- box = [10, 10, 20, 20]
- rotated_box = self._box_to_rotated_box(box, angle=0)
- output = self._simple_roi_align_rotated(img=input, box=rotated_box, resolution=(5, 5))
-
- input2x = cv2.resize(input.numpy(), (W // 2, H // 2), interpolation=cv2.INTER_LINEAR)
- input2x = torch.from_numpy(input2x)
- box2x = [x / 2 for x in box]
- rotated_box2x = self._box_to_rotated_box(box2x, angle=0)
- output2x = self._simple_roi_align_rotated(img=input2x, box=rotated_box2x, resolution=(5, 5))
- assert torch.allclose(output2x, output)
-
- def _simple_roi_align_rotated(self, img, box, resolution):
- """
-        ROIAlignRotated with spatial_scale=1.0 and sampling_ratio=0.
- """
- op = ROIAlignRotated(output_size=resolution, spatial_scale=1.0, sampling_ratio=0)
- input = img[None, None, :, :]
-
- rois = [0] + list(box)
- rois = torch.tensor(rois, dtype=torch.float32)[None, :]
- result_cpu = op.forward(input, rois)
- if torch.cuda.is_available():
- result_cuda = op.forward(input.cuda(), rois.cuda())
- assert torch.allclose(result_cpu, result_cuda.cpu())
- return result_cpu[0, 0]
-
- def test_empty_box(self):
- img = torch.rand(5, 5)
- out = self._simple_roi_align_rotated(img, [2, 3, 0, 0, 0], (7, 7))
- self.assertTrue((out == 0).all())
-
- def test_roi_align_rotated_gradcheck_cpu(self):
- dtype = torch.float64
- device = torch.device("cpu")
- roi_align_rotated_op = ROIAlignRotated(
- output_size=(5, 5), spatial_scale=0.5, sampling_ratio=1
- ).to(dtype=dtype, device=device)
- x = torch.rand(1, 1, 10, 10, dtype=dtype, device=device, requires_grad=True)
- # roi format is (batch index, x_center, y_center, width, height, angle)
- rois = torch.tensor(
- [[0, 4.5, 4.5, 9, 9, 0], [0, 2, 7, 4, 4, 0], [0, 7, 7, 4, 4, 0]],
- dtype=dtype,
- device=device,
- )
-
- def func(input):
- return roi_align_rotated_op(input, rois)
-
- assert gradcheck(func, (x,)), "gradcheck failed for RoIAlignRotated CPU"
- assert gradcheck(func, (x.transpose(2, 3),)), "gradcheck failed for RoIAlignRotated CPU"
-
- @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
- def test_roi_align_rotated_gradient_cuda(self):
- """
- Compute gradients for ROIAlignRotated with multiple bounding boxes on the GPU,
- and compare the result with ROIAlign
- """
- # torch.manual_seed(123)
- dtype = torch.float64
- device = torch.device("cuda")
- pool_h, pool_w = (5, 5)
-
- roi_align = ROIAlign(output_size=(pool_h, pool_w), spatial_scale=1, sampling_ratio=2).to(
- device=device
- )
-
- roi_align_rotated = ROIAlignRotated(
- output_size=(pool_h, pool_w), spatial_scale=1, sampling_ratio=2
- ).to(device=device)
-
- x = torch.rand(1, 1, 10, 10, dtype=dtype, device=device, requires_grad=True)
-        # x_rotated = x.clone() won't work (will lead to grad_fn=CloneBackward)!
- x_rotated = Variable(x.data.clone(), requires_grad=True)
-
- # roi_rotated format is (batch index, x_center, y_center, width, height, angle)
- rois_rotated = torch.tensor(
- [[0, 4.5, 4.5, 9, 9, 0], [0, 2, 7, 4, 4, 0], [0, 7, 7, 4, 4, 0]],
- dtype=dtype,
- device=device,
- )
-
- y_rotated = roi_align_rotated(x_rotated, rois_rotated)
- s_rotated = y_rotated.sum()
- s_rotated.backward()
-
- # roi format is (batch index, x1, y1, x2, y2)
- rois = torch.tensor(
- [[0, 0, 0, 9, 9], [0, 0, 5, 4, 9], [0, 5, 5, 9, 9]], dtype=dtype, device=device
- )
-
- y = roi_align(x, rois)
- s = y.sum()
- s.backward()
-
- assert torch.allclose(
- x.grad, x_rotated.grad
- ), "gradients for ROIAlign and ROIAlignRotated mismatch on CUDA"
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escprober.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escprober.py
deleted file mode 100644
index fd713830d36cabc6a0fb4ab4e8cf426a84decdc6..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escprober.py
+++ /dev/null
@@ -1,102 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is mozilla.org code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from typing import Optional, Union
-
-from .charsetprober import CharSetProber
-from .codingstatemachine import CodingStateMachine
-from .enums import LanguageFilter, MachineState, ProbingState
-from .escsm import (
- HZ_SM_MODEL,
- ISO2022CN_SM_MODEL,
- ISO2022JP_SM_MODEL,
- ISO2022KR_SM_MODEL,
-)
-
-
-class EscCharSetProber(CharSetProber):
- """
- This CharSetProber uses a "code scheme" approach for detecting encodings,
- whereby easily recognizable escape or shift sequences are relied on to
- identify these encodings.
- """
-
- def __init__(self, lang_filter: LanguageFilter = LanguageFilter.NONE) -> None:
- super().__init__(lang_filter=lang_filter)
- self.coding_sm = []
- if self.lang_filter & LanguageFilter.CHINESE_SIMPLIFIED:
- self.coding_sm.append(CodingStateMachine(HZ_SM_MODEL))
- self.coding_sm.append(CodingStateMachine(ISO2022CN_SM_MODEL))
- if self.lang_filter & LanguageFilter.JAPANESE:
- self.coding_sm.append(CodingStateMachine(ISO2022JP_SM_MODEL))
- if self.lang_filter & LanguageFilter.KOREAN:
- self.coding_sm.append(CodingStateMachine(ISO2022KR_SM_MODEL))
- self.active_sm_count = 0
- self._detected_charset: Optional[str] = None
- self._detected_language: Optional[str] = None
- self._state = ProbingState.DETECTING
- self.reset()
-
- def reset(self) -> None:
- super().reset()
- for coding_sm in self.coding_sm:
- coding_sm.active = True
- coding_sm.reset()
- self.active_sm_count = len(self.coding_sm)
- self._detected_charset = None
- self._detected_language = None
-
- @property
- def charset_name(self) -> Optional[str]:
- return self._detected_charset
-
- @property
- def language(self) -> Optional[str]:
- return self._detected_language
-
- def get_confidence(self) -> float:
- return 0.99 if self._detected_charset else 0.00
-
- def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState:
- for c in byte_str:
- for coding_sm in self.coding_sm:
- if not coding_sm.active:
- continue
- coding_state = coding_sm.next_state(c)
- if coding_state == MachineState.ERROR:
- coding_sm.active = False
- self.active_sm_count -= 1
- if self.active_sm_count <= 0:
- self._state = ProbingState.NOT_ME
- return self.state
- elif coding_state == MachineState.ITS_ME:
- self._state = ProbingState.FOUND_IT
- self._detected_charset = coding_sm.get_coding_state_machine()
- self._detected_language = coding_sm.language
- return self.state
-
- return self.state
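-
-
-if __name__ == "__main__":
-    # Small usage sketch; standalone use is unusual, since chardet normally drives
-    # this prober through UniversalDetector. The bytes below are "こんにちは"
-    # encoded as ISO-2022-JP, whose escape sequences this prober keys on.
-    prober = EscCharSetProber(LanguageFilter.JAPANESE)
-    prober.feed(b"\x1b$B$3$s$K$A$O\x1b(B")
-    print(prober.charset_name, prober.get_confidence())  # expected: ISO-2022-JP 0.99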
diff --git a/spaces/CVPR/LIVE/pydiffvg/render_pytorch.py b/spaces/CVPR/LIVE/pydiffvg/render_pytorch.py
deleted file mode 100644
index b776ce67a0cdb587c8bd558fe5060a6d96e51e3c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pydiffvg/render_pytorch.py
+++ /dev/null
@@ -1,870 +0,0 @@
-import torch
-import diffvg
-import pydiffvg
-import time
-from enum import IntEnum
-import warnings
-
-print_timing = False
-
-def set_print_timing(val):
- global print_timing
- print_timing=val
-
-class OutputType(IntEnum):
- color = 1
- sdf = 2
-
-class RenderFunction(torch.autograd.Function):
- """
- The PyTorch interface of diffvg.
- """
- @staticmethod
- def serialize_scene(canvas_width,
- canvas_height,
- shapes,
- shape_groups,
- filter = pydiffvg.PixelFilter(type = diffvg.FilterType.box,
- radius = torch.tensor(0.5)),
- output_type = OutputType.color,
- use_prefiltering = False,
- eval_positions = torch.tensor([])):
- """
-            Given a list of shapes and shape groups, flatten them into a linear
-            list of arguments, so that they can be passed through PyTorch.
- """
- num_shapes = len(shapes)
- num_shape_groups = len(shape_groups)
- args = []
- args.append(canvas_width)
- args.append(canvas_height)
- args.append(num_shapes)
- args.append(num_shape_groups)
- args.append(output_type)
- args.append(use_prefiltering)
- args.append(eval_positions.to(pydiffvg.get_device()))
- for shape in shapes:
- use_thickness = False
- if isinstance(shape, pydiffvg.Circle):
- assert(shape.center.is_contiguous())
- args.append(diffvg.ShapeType.circle)
- args.append(shape.radius.cpu())
- args.append(shape.center.cpu())
- elif isinstance(shape, pydiffvg.Ellipse):
- assert(shape.radius.is_contiguous())
- assert(shape.center.is_contiguous())
- args.append(diffvg.ShapeType.ellipse)
- args.append(shape.radius.cpu())
- args.append(shape.center.cpu())
- elif isinstance(shape, pydiffvg.Path):
- assert(shape.num_control_points.is_contiguous())
- assert(shape.points.is_contiguous())
- assert(shape.points.shape[1] == 2)
- assert(torch.isfinite(shape.points).all())
- args.append(diffvg.ShapeType.path)
- args.append(shape.num_control_points.to(torch.int32).cpu())
- args.append(shape.points.cpu())
- if len(shape.stroke_width.shape) > 0 and shape.stroke_width.shape[0] > 1:
- assert(torch.isfinite(shape.stroke_width).all())
- use_thickness = True
- args.append(shape.stroke_width.cpu())
- else:
- args.append(None)
- args.append(shape.is_closed)
- args.append(shape.use_distance_approx)
- elif isinstance(shape, pydiffvg.Polygon):
- assert(shape.points.is_contiguous())
- assert(shape.points.shape[1] == 2)
- args.append(diffvg.ShapeType.path)
- if shape.is_closed:
- args.append(torch.zeros(shape.points.shape[0], dtype = torch.int32))
- else:
- args.append(torch.zeros(shape.points.shape[0] - 1, dtype = torch.int32))
- args.append(shape.points.cpu())
- args.append(None)
- args.append(shape.is_closed)
- args.append(False) # use_distance_approx
- elif isinstance(shape, pydiffvg.Rect):
- assert(shape.p_min.is_contiguous())
- assert(shape.p_max.is_contiguous())
- args.append(diffvg.ShapeType.rect)
- args.append(shape.p_min.cpu())
- args.append(shape.p_max.cpu())
- else:
- assert(False)
- if use_thickness:
- args.append(torch.tensor(0.0))
- else:
- args.append(shape.stroke_width.cpu())
-
- for shape_group in shape_groups:
- assert(shape_group.shape_ids.is_contiguous())
- args.append(shape_group.shape_ids.to(torch.int32).cpu())
- # Fill color
- if shape_group.fill_color is None:
- args.append(None)
- elif isinstance(shape_group.fill_color, torch.Tensor):
- assert(shape_group.fill_color.is_contiguous())
- args.append(diffvg.ColorType.constant)
- args.append(shape_group.fill_color.cpu())
- elif isinstance(shape_group.fill_color, pydiffvg.LinearGradient):
- assert(shape_group.fill_color.begin.is_contiguous())
- assert(shape_group.fill_color.end.is_contiguous())
- assert(shape_group.fill_color.offsets.is_contiguous())
- assert(shape_group.fill_color.stop_colors.is_contiguous())
- args.append(diffvg.ColorType.linear_gradient)
- args.append(shape_group.fill_color.begin.cpu())
- args.append(shape_group.fill_color.end.cpu())
- args.append(shape_group.fill_color.offsets.cpu())
- args.append(shape_group.fill_color.stop_colors.cpu())
- elif isinstance(shape_group.fill_color, pydiffvg.RadialGradient):
- assert(shape_group.fill_color.center.is_contiguous())
- assert(shape_group.fill_color.radius.is_contiguous())
- assert(shape_group.fill_color.offsets.is_contiguous())
- assert(shape_group.fill_color.stop_colors.is_contiguous())
- args.append(diffvg.ColorType.radial_gradient)
- args.append(shape_group.fill_color.center.cpu())
- args.append(shape_group.fill_color.radius.cpu())
- args.append(shape_group.fill_color.offsets.cpu())
- args.append(shape_group.fill_color.stop_colors.cpu())
-
- if shape_group.fill_color is not None:
- # go through the underlying shapes and check if they are all closed
- for shape_id in shape_group.shape_ids:
- if isinstance(shapes[shape_id], pydiffvg.Path):
- if not shapes[shape_id].is_closed:
-                        warnings.warn("Detected non-closed paths with fill color. This might cause unexpected results.", Warning)
-
- # Stroke color
- if shape_group.stroke_color is None:
- args.append(None)
- elif isinstance(shape_group.stroke_color, torch.Tensor):
- assert(shape_group.stroke_color.is_contiguous())
- args.append(diffvg.ColorType.constant)
- args.append(shape_group.stroke_color.cpu())
- elif isinstance(shape_group.stroke_color, pydiffvg.LinearGradient):
- assert(shape_group.stroke_color.begin.is_contiguous())
- assert(shape_group.stroke_color.end.is_contiguous())
- assert(shape_group.stroke_color.offsets.is_contiguous())
- assert(shape_group.stroke_color.stop_colors.is_contiguous())
- assert(torch.isfinite(shape_group.stroke_color.stop_colors).all())
- args.append(diffvg.ColorType.linear_gradient)
- args.append(shape_group.stroke_color.begin.cpu())
- args.append(shape_group.stroke_color.end.cpu())
- args.append(shape_group.stroke_color.offsets.cpu())
- args.append(shape_group.stroke_color.stop_colors.cpu())
- elif isinstance(shape_group.stroke_color, pydiffvg.RadialGradient):
- assert(shape_group.stroke_color.center.is_contiguous())
- assert(shape_group.stroke_color.radius.is_contiguous())
- assert(shape_group.stroke_color.offsets.is_contiguous())
- assert(shape_group.stroke_color.stop_colors.is_contiguous())
- assert(torch.isfinite(shape_group.stroke_color.stop_colors).all())
- args.append(diffvg.ColorType.radial_gradient)
- args.append(shape_group.stroke_color.center.cpu())
- args.append(shape_group.stroke_color.radius.cpu())
- args.append(shape_group.stroke_color.offsets.cpu())
- args.append(shape_group.stroke_color.stop_colors.cpu())
- args.append(shape_group.use_even_odd_rule)
- # Transformation
- args.append(shape_group.shape_to_canvas.contiguous().cpu())
- args.append(filter.type)
- args.append(filter.radius.cpu())
- return args
-
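-    # Typical usage pattern (a sketch; building `shapes` / `shape_groups` is omitted):
-    #   scene_args = RenderFunction.serialize_scene(canvas_w, canvas_h, shapes, shape_groups)
-    #   img = RenderFunction.apply(canvas_w, canvas_h, 2, 2, 0, None, *scene_args)
-    # i.e. the flattened argument list is splatted into ``apply`` after the render
-    # width and height, the samples per pixel in x and y, a seed, and an optional
-    # background image.
-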
- @staticmethod
- def forward(ctx,
- width,
- height,
- num_samples_x,
- num_samples_y,
- seed,
- background_image,
- *args):
- """
- Forward rendering pass.
- """
- # Unpack arguments
- current_index = 0
- canvas_width = args[current_index]
- current_index += 1
- canvas_height = args[current_index]
- current_index += 1
- num_shapes = args[current_index]
- current_index += 1
- num_shape_groups = args[current_index]
- current_index += 1
- output_type = args[current_index]
- current_index += 1
- use_prefiltering = args[current_index]
- current_index += 1
- eval_positions = args[current_index]
- current_index += 1
- shapes = []
- shape_groups = []
- shape_contents = [] # Important to avoid GC deleting the shapes
- color_contents = [] # Same as above
- for shape_id in range(num_shapes):
- shape_type = args[current_index]
- current_index += 1
- if shape_type == diffvg.ShapeType.circle:
- radius = args[current_index]
- current_index += 1
- center = args[current_index]
- current_index += 1
- shape = diffvg.Circle(radius, diffvg.Vector2f(center[0], center[1]))
- elif shape_type == diffvg.ShapeType.ellipse:
- radius = args[current_index]
- current_index += 1
- center = args[current_index]
- current_index += 1
- shape = diffvg.Ellipse(diffvg.Vector2f(radius[0], radius[1]),
- diffvg.Vector2f(center[0], center[1]))
- elif shape_type == diffvg.ShapeType.path:
- num_control_points = args[current_index]
- current_index += 1
- points = args[current_index]
- current_index += 1
- thickness = args[current_index]
- current_index += 1
- is_closed = args[current_index]
- current_index += 1
- use_distance_approx = args[current_index]
- current_index += 1
- shape = diffvg.Path(diffvg.int_ptr(num_control_points.data_ptr()),
- diffvg.float_ptr(points.data_ptr()),
- diffvg.float_ptr(thickness.data_ptr() if thickness is not None else 0),
- num_control_points.shape[0],
- points.shape[0],
- is_closed,
- use_distance_approx)
- elif shape_type == diffvg.ShapeType.rect:
- p_min = args[current_index]
- current_index += 1
- p_max = args[current_index]
- current_index += 1
- shape = diffvg.Rect(diffvg.Vector2f(p_min[0], p_min[1]),
- diffvg.Vector2f(p_max[0], p_max[1]))
- else:
- assert(False)
- stroke_width = args[current_index]
- current_index += 1
- shapes.append(diffvg.Shape(\
- shape_type, shape.get_ptr(), stroke_width.item()))
- shape_contents.append(shape)
-
- for shape_group_id in range(num_shape_groups):
- shape_ids = args[current_index]
- current_index += 1
- fill_color_type = args[current_index]
- current_index += 1
- if fill_color_type == diffvg.ColorType.constant:
- color = args[current_index]
- current_index += 1
- fill_color = diffvg.Constant(\
- diffvg.Vector4f(color[0], color[1], color[2], color[3]))
- elif fill_color_type == diffvg.ColorType.linear_gradient:
- beg = args[current_index]
- current_index += 1
- end = args[current_index]
- current_index += 1
- offsets = args[current_index]
- current_index += 1
- stop_colors = args[current_index]
- current_index += 1
- assert(offsets.shape[0] == stop_colors.shape[0])
- fill_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]),
- diffvg.Vector2f(end[0], end[1]),
- offsets.shape[0],
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- elif fill_color_type == diffvg.ColorType.radial_gradient:
- center = args[current_index]
- current_index += 1
- radius = args[current_index]
- current_index += 1
- offsets = args[current_index]
- current_index += 1
- stop_colors = args[current_index]
- current_index += 1
- assert(offsets.shape[0] == stop_colors.shape[0])
- fill_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]),
- diffvg.Vector2f(radius[0], radius[1]),
- offsets.shape[0],
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- elif fill_color_type is None:
- fill_color = None
- else:
- assert(False)
- stroke_color_type = args[current_index]
- current_index += 1
- if stroke_color_type == diffvg.ColorType.constant:
- color = args[current_index]
- current_index += 1
- stroke_color = diffvg.Constant(\
- diffvg.Vector4f(color[0], color[1], color[2], color[3]))
- elif stroke_color_type == diffvg.ColorType.linear_gradient:
- beg = args[current_index]
- current_index += 1
- end = args[current_index]
- current_index += 1
- offsets = args[current_index]
- current_index += 1
- stop_colors = args[current_index]
- current_index += 1
- assert(offsets.shape[0] == stop_colors.shape[0])
- stroke_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]),
- diffvg.Vector2f(end[0], end[1]),
- offsets.shape[0],
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- elif stroke_color_type == diffvg.ColorType.radial_gradient:
- center = args[current_index]
- current_index += 1
- radius = args[current_index]
- current_index += 1
- offsets = args[current_index]
- current_index += 1
- stop_colors = args[current_index]
- current_index += 1
- assert(offsets.shape[0] == stop_colors.shape[0])
- stroke_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]),
- diffvg.Vector2f(radius[0], radius[1]),
- offsets.shape[0],
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- elif stroke_color_type is None:
- stroke_color = None
- else:
- assert(False)
- use_even_odd_rule = args[current_index]
- current_index += 1
- shape_to_canvas = args[current_index]
- current_index += 1
-
- if fill_color is not None:
- color_contents.append(fill_color)
- if stroke_color is not None:
- color_contents.append(stroke_color)
- shape_groups.append(diffvg.ShapeGroup(\
- diffvg.int_ptr(shape_ids.data_ptr()),
- shape_ids.shape[0],
- diffvg.ColorType.constant if fill_color_type is None else fill_color_type,
- diffvg.void_ptr(0) if fill_color is None else fill_color.get_ptr(),
- diffvg.ColorType.constant if stroke_color_type is None else stroke_color_type,
- diffvg.void_ptr(0) if stroke_color is None else stroke_color.get_ptr(),
- use_even_odd_rule,
- diffvg.float_ptr(shape_to_canvas.data_ptr())))
-
- filter_type = args[current_index]
- current_index += 1
- filter_radius = args[current_index]
- current_index += 1
- filt = diffvg.Filter(filter_type, filter_radius)
-
- start = time.time()
- scene = diffvg.Scene(canvas_width, canvas_height,
- shapes, shape_groups, filt, pydiffvg.get_use_gpu(),
- pydiffvg.get_device().index if pydiffvg.get_device().index is not None else -1)
- time_elapsed = time.time() - start
- global print_timing
- if print_timing:
- print('Scene construction, time: %.5f s' % time_elapsed)
-
- if output_type == OutputType.color:
- assert(eval_positions.shape[0] == 0)
- rendered_image = torch.zeros(height, width, 4, device = pydiffvg.get_device())
- else:
- assert(output_type == OutputType.sdf)
- if eval_positions.shape[0] == 0:
- rendered_image = torch.zeros(height, width, 1, device = pydiffvg.get_device())
- else:
- rendered_image = torch.zeros(eval_positions.shape[0], 1, device = pydiffvg.get_device())
-
- if background_image is not None:
- background_image = background_image.to(pydiffvg.get_device())
- if background_image.shape[2] == 3:
- background_image = torch.cat((\
- background_image, torch.ones(background_image.shape[0], background_image.shape[1], 1,
- device = background_image.device)), dim = 2)
- background_image = background_image.contiguous()
- assert(background_image.shape[0] == rendered_image.shape[0])
- assert(background_image.shape[1] == rendered_image.shape[1])
- assert(background_image.shape[2] == 4)
-
- start = time.time()
- diffvg.render(scene,
- diffvg.float_ptr(background_image.data_ptr() if background_image is not None else 0),
- diffvg.float_ptr(rendered_image.data_ptr() if output_type == OutputType.color else 0),
- diffvg.float_ptr(rendered_image.data_ptr() if output_type == OutputType.sdf else 0),
- width,
- height,
- num_samples_x,
- num_samples_y,
- seed,
- diffvg.float_ptr(0), # d_background_image
- diffvg.float_ptr(0), # d_render_image
- diffvg.float_ptr(0), # d_render_sdf
- diffvg.float_ptr(0), # d_translation
- use_prefiltering,
- diffvg.float_ptr(eval_positions.data_ptr()),
- eval_positions.shape[0])
- assert(torch.isfinite(rendered_image).all())
- time_elapsed = time.time() - start
- if print_timing:
- print('Forward pass, time: %.5f s' % time_elapsed)
-
- ctx.scene = scene
- ctx.background_image = background_image
- ctx.shape_contents = shape_contents
- ctx.color_contents = color_contents
- ctx.filter = filt
- ctx.width = width
- ctx.height = height
- ctx.num_samples_x = num_samples_x
- ctx.num_samples_y = num_samples_y
- ctx.seed = seed
- ctx.output_type = output_type
- ctx.use_prefiltering = use_prefiltering
- ctx.eval_positions = eval_positions
- return rendered_image
-
- @staticmethod
- def render_grad(grad_img,
- width,
- height,
- num_samples_x,
- num_samples_y,
- seed,
- background_image,
- *args):
- if not grad_img.is_contiguous():
- grad_img = grad_img.contiguous()
- assert(torch.isfinite(grad_img).all())
-
- # Unpack arguments
- current_index = 0
- canvas_width = args[current_index]
- current_index += 1
- canvas_height = args[current_index]
- current_index += 1
- num_shapes = args[current_index]
- current_index += 1
- num_shape_groups = args[current_index]
- current_index += 1
- output_type = args[current_index]
- current_index += 1
- use_prefiltering = args[current_index]
- current_index += 1
- eval_positions = args[current_index]
- current_index += 1
- shapes = []
- shape_groups = []
- shape_contents = [] # Important to avoid GC deleting the shapes
- color_contents = [] # Same as above
- for shape_id in range(num_shapes):
- shape_type = args[current_index]
- current_index += 1
- if shape_type == diffvg.ShapeType.circle:
- radius = args[current_index]
- current_index += 1
- center = args[current_index]
- current_index += 1
- shape = diffvg.Circle(radius, diffvg.Vector2f(center[0], center[1]))
- elif shape_type == diffvg.ShapeType.ellipse:
- radius = args[current_index]
- current_index += 1
- center = args[current_index]
- current_index += 1
- shape = diffvg.Ellipse(diffvg.Vector2f(radius[0], radius[1]),
- diffvg.Vector2f(center[0], center[1]))
- elif shape_type == diffvg.ShapeType.path:
- num_control_points = args[current_index]
- current_index += 1
- points = args[current_index]
- current_index += 1
- thickness = args[current_index]
- current_index += 1
- is_closed = args[current_index]
- current_index += 1
- use_distance_approx = args[current_index]
- current_index += 1
- shape = diffvg.Path(diffvg.int_ptr(num_control_points.data_ptr()),
- diffvg.float_ptr(points.data_ptr()),
- diffvg.float_ptr(thickness.data_ptr() if thickness is not None else 0),
- num_control_points.shape[0],
- points.shape[0],
- is_closed,
- use_distance_approx)
- elif shape_type == diffvg.ShapeType.rect:
- p_min = args[current_index]
- current_index += 1
- p_max = args[current_index]
- current_index += 1
- shape = diffvg.Rect(diffvg.Vector2f(p_min[0], p_min[1]),
- diffvg.Vector2f(p_max[0], p_max[1]))
- else:
- assert(False)
- stroke_width = args[current_index]
- current_index += 1
- shapes.append(diffvg.Shape(\
- shape_type, shape.get_ptr(), stroke_width.item()))
- shape_contents.append(shape)
-
- for shape_group_id in range(num_shape_groups):
- shape_ids = args[current_index]
- current_index += 1
- fill_color_type = args[current_index]
- current_index += 1
- if fill_color_type == diffvg.ColorType.constant:
- color = args[current_index]
- current_index += 1
- fill_color = diffvg.Constant(\
- diffvg.Vector4f(color[0], color[1], color[2], color[3]))
- elif fill_color_type == diffvg.ColorType.linear_gradient:
- beg = args[current_index]
- current_index += 1
- end = args[current_index]
- current_index += 1
- offsets = args[current_index]
- current_index += 1
- stop_colors = args[current_index]
- current_index += 1
- assert(offsets.shape[0] == stop_colors.shape[0])
- fill_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]),
- diffvg.Vector2f(end[0], end[1]),
- offsets.shape[0],
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- elif fill_color_type == diffvg.ColorType.radial_gradient:
- center = args[current_index]
- current_index += 1
- radius = args[current_index]
- current_index += 1
- offsets = args[current_index]
- current_index += 1
- stop_colors = args[current_index]
- current_index += 1
- assert(offsets.shape[0] == stop_colors.shape[0])
- fill_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]),
- diffvg.Vector2f(radius[0], radius[1]),
- offsets.shape[0],
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- elif fill_color_type is None:
- fill_color = None
- else:
- assert(False)
- stroke_color_type = args[current_index]
- current_index += 1
- if stroke_color_type == diffvg.ColorType.constant:
- color = args[current_index]
- current_index += 1
- stroke_color = diffvg.Constant(\
- diffvg.Vector4f(color[0], color[1], color[2], color[3]))
- elif stroke_color_type == diffvg.ColorType.linear_gradient:
- beg = args[current_index]
- current_index += 1
- end = args[current_index]
- current_index += 1
- offsets = args[current_index]
- current_index += 1
- stop_colors = args[current_index]
- current_index += 1
- assert(offsets.shape[0] == stop_colors.shape[0])
- stroke_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]),
- diffvg.Vector2f(end[0], end[1]),
- offsets.shape[0],
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- elif stroke_color_type == diffvg.ColorType.radial_gradient:
- center = args[current_index]
- current_index += 1
- radius = args[current_index]
- current_index += 1
- offsets = args[current_index]
- current_index += 1
- stop_colors = args[current_index]
- current_index += 1
- assert(offsets.shape[0] == stop_colors.shape[0])
- stroke_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]),
- diffvg.Vector2f(radius[0], radius[1]),
- offsets.shape[0],
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- elif stroke_color_type is None:
- stroke_color = None
- else:
- assert(False)
- use_even_odd_rule = args[current_index]
- current_index += 1
- shape_to_canvas = args[current_index]
- current_index += 1
-
- if fill_color is not None:
- color_contents.append(fill_color)
- if stroke_color is not None:
- color_contents.append(stroke_color)
- shape_groups.append(diffvg.ShapeGroup(\
- diffvg.int_ptr(shape_ids.data_ptr()),
- shape_ids.shape[0],
- diffvg.ColorType.constant if fill_color_type is None else fill_color_type,
- diffvg.void_ptr(0) if fill_color is None else fill_color.get_ptr(),
- diffvg.ColorType.constant if stroke_color_type is None else stroke_color_type,
- diffvg.void_ptr(0) if stroke_color is None else stroke_color.get_ptr(),
- use_even_odd_rule,
- diffvg.float_ptr(shape_to_canvas.data_ptr())))
-
- filter_type = args[current_index]
- current_index += 1
- filter_radius = args[current_index]
- current_index += 1
- filt = diffvg.Filter(filter_type, filter_radius)
-
- scene = diffvg.Scene(canvas_width, canvas_height,
- shapes, shape_groups, filt, pydiffvg.get_use_gpu(),
- pydiffvg.get_device().index if pydiffvg.get_device().index is not None else -1)
-
- if output_type == OutputType.color:
- assert(grad_img.shape[2] == 4)
- else:
- assert(grad_img.shape[2] == 1)
-
- if background_image is not None:
- background_image = background_image.to(pydiffvg.get_device())
- if background_image.shape[2] == 3:
- background_image = torch.cat((\
- background_image, torch.ones(background_image.shape[0], background_image.shape[1], 1,
- device = background_image.device)), dim = 2)
- background_image = background_image.contiguous()
- assert(background_image.shape[0] == rendered_image.shape[0])
- assert(background_image.shape[1] == rendered_image.shape[1])
- assert(background_image.shape[2] == 4)
-
- translation_grad_image = \
- torch.zeros(height, width, 2, device = pydiffvg.get_device())
- start = time.time()
- diffvg.render(scene,
- diffvg.float_ptr(background_image.data_ptr() if background_image is not None else 0),
- diffvg.float_ptr(0), # render_image
- diffvg.float_ptr(0), # render_sdf
- width,
- height,
- num_samples_x,
- num_samples_y,
- seed,
- diffvg.float_ptr(0), # d_background_image
- diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.color else 0),
- diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.sdf else 0),
- diffvg.float_ptr(translation_grad_image.data_ptr()),
- use_prefiltering,
- diffvg.float_ptr(eval_positions.data_ptr()),
- eval_positions.shape[0])
- time_elapsed = time.time() - start
- if print_timing:
- print('Gradient pass, time: %.5f s' % time_elapsed)
- assert(torch.isfinite(translation_grad_image).all())
-
- return translation_grad_image
-
- @staticmethod
- def backward(ctx,
- grad_img):
- if not grad_img.is_contiguous():
- grad_img = grad_img.contiguous()
- assert(torch.isfinite(grad_img).all())
-
- scene = ctx.scene
- width = ctx.width
- height = ctx.height
- num_samples_x = ctx.num_samples_x
- num_samples_y = ctx.num_samples_y
- seed = ctx.seed
- output_type = ctx.output_type
- use_prefiltering = ctx.use_prefiltering
- eval_positions = ctx.eval_positions
- background_image = ctx.background_image
-
- if background_image is not None:
- d_background_image = torch.zeros_like(background_image)
- else:
- d_background_image = None
-
- start = time.time()
- diffvg.render(scene,
- diffvg.float_ptr(background_image.data_ptr() if background_image is not None else 0),
- diffvg.float_ptr(0), # render_image
- diffvg.float_ptr(0), # render_sdf
- width,
- height,
- num_samples_x,
- num_samples_y,
- seed,
- diffvg.float_ptr(d_background_image.data_ptr() if background_image is not None else 0),
- diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.color else 0),
- diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.sdf else 0),
- diffvg.float_ptr(0), # d_translation
- use_prefiltering,
- diffvg.float_ptr(eval_positions.data_ptr()),
- eval_positions.shape[0])
- time_elapsed = time.time() - start
- global print_timing
- if print_timing:
- print('Backward pass, time: %.5f s' % time_elapsed)
-
- d_args = []
- d_args.append(None) # width
- d_args.append(None) # height
- d_args.append(None) # num_samples_x
- d_args.append(None) # num_samples_y
- d_args.append(None) # seed
- d_args.append(d_background_image)
- d_args.append(None) # canvas_width
- d_args.append(None) # canvas_height
- d_args.append(None) # num_shapes
- d_args.append(None) # num_shape_groups
- d_args.append(None) # output_type
- d_args.append(None) # use_prefiltering
- d_args.append(None) # eval_positions
- for shape_id in range(scene.num_shapes):
- d_args.append(None) # type
- d_shape = scene.get_d_shape(shape_id)
- use_thickness = False
- if d_shape.type == diffvg.ShapeType.circle:
- d_circle = d_shape.as_circle()
- radius = torch.tensor(d_circle.radius)
- assert(torch.isfinite(radius).all())
- d_args.append(radius)
- c = d_circle.center
- c = torch.tensor((c.x, c.y))
- assert(torch.isfinite(c).all())
- d_args.append(c)
- elif d_shape.type == diffvg.ShapeType.ellipse:
- d_ellipse = d_shape.as_ellipse()
- r = d_ellipse.radius
- r = torch.tensor((d_ellipse.radius.x, d_ellipse.radius.y))
- assert(torch.isfinite(r).all())
- d_args.append(r)
- c = d_ellipse.center
- c = torch.tensor((c.x, c.y))
- assert(torch.isfinite(c).all())
- d_args.append(c)
- elif d_shape.type == diffvg.ShapeType.path:
- d_path = d_shape.as_path()
- points = torch.zeros((d_path.num_points, 2))
- thickness = None
- if d_path.has_thickness():
- use_thickness = True
- thickness = torch.zeros(d_path.num_points)
- d_path.copy_to(diffvg.float_ptr(points.data_ptr()), diffvg.float_ptr(thickness.data_ptr()))
- else:
- d_path.copy_to(diffvg.float_ptr(points.data_ptr()), diffvg.float_ptr(0))
- assert(torch.isfinite(points).all())
- if thickness is not None:
- assert(torch.isfinite(thickness).all())
- d_args.append(None) # num_control_points
- d_args.append(points)
- d_args.append(thickness)
- d_args.append(None) # is_closed
- d_args.append(None) # use_distance_approx
- elif d_shape.type == diffvg.ShapeType.rect:
- d_rect = d_shape.as_rect()
- p_min = torch.tensor((d_rect.p_min.x, d_rect.p_min.y))
- p_max = torch.tensor((d_rect.p_max.x, d_rect.p_max.y))
- assert(torch.isfinite(p_min).all())
- assert(torch.isfinite(p_max).all())
- d_args.append(p_min)
- d_args.append(p_max)
- else:
- assert(False)
- if use_thickness:
- d_args.append(None)
- else:
- w = torch.tensor((d_shape.stroke_width))
- assert(torch.isfinite(w).all())
- d_args.append(w)
-
- for group_id in range(scene.num_shape_groups):
- d_shape_group = scene.get_d_shape_group(group_id)
- d_args.append(None) # shape_ids
- d_args.append(None) # fill_color_type
- if d_shape_group.has_fill_color():
- if d_shape_group.fill_color_type == diffvg.ColorType.constant:
- d_constant = d_shape_group.fill_color_as_constant()
- c = d_constant.color
- d_args.append(torch.tensor((c.x, c.y, c.z, c.w)))
- elif d_shape_group.fill_color_type == diffvg.ColorType.linear_gradient:
- d_linear_gradient = d_shape_group.fill_color_as_linear_gradient()
- beg = d_linear_gradient.begin
- d_args.append(torch.tensor((beg.x, beg.y)))
- end = d_linear_gradient.end
- d_args.append(torch.tensor((end.x, end.y)))
- offsets = torch.zeros((d_linear_gradient.num_stops))
- stop_colors = torch.zeros((d_linear_gradient.num_stops, 4))
- d_linear_gradient.copy_to(\
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- assert(torch.isfinite(stop_colors).all())
- d_args.append(offsets)
- d_args.append(stop_colors)
- elif d_shape_group.fill_color_type == diffvg.ColorType.radial_gradient:
- d_radial_gradient = d_shape_group.fill_color_as_radial_gradient()
- center = d_radial_gradient.center
- d_args.append(torch.tensor((center.x, center.y)))
- radius = d_radial_gradient.radius
- d_args.append(torch.tensor((radius.x, radius.y)))
- offsets = torch.zeros((d_radial_gradient.num_stops))
- stop_colors = torch.zeros((d_radial_gradient.num_stops, 4))
- d_radial_gradient.copy_to(\
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- assert(torch.isfinite(stop_colors).all())
- d_args.append(offsets)
- d_args.append(stop_colors)
- else:
- assert(False)
- d_args.append(None) # stroke_color_type
- if d_shape_group.has_stroke_color():
- if d_shape_group.stroke_color_type == diffvg.ColorType.constant:
- d_constant = d_shape_group.stroke_color_as_constant()
- c = d_constant.color
- d_args.append(torch.tensor((c.x, c.y, c.z, c.w)))
- elif d_shape_group.stroke_color_type == diffvg.ColorType.linear_gradient:
- d_linear_gradient = d_shape_group.stroke_color_as_linear_gradient()
- beg = d_linear_gradient.begin
- d_args.append(torch.tensor((beg.x, beg.y)))
- end = d_linear_gradient.end
- d_args.append(torch.tensor((end.x, end.y)))
- offsets = torch.zeros((d_linear_gradient.num_stops))
- stop_colors = torch.zeros((d_linear_gradient.num_stops, 4))
- d_linear_gradient.copy_to(\
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- assert(torch.isfinite(stop_colors).all())
- d_args.append(offsets)
- d_args.append(stop_colors)
-            elif d_shape_group.stroke_color_type == diffvg.ColorType.radial_gradient:
- d_radial_gradient = d_shape_group.stroke_color_as_radial_gradient()
- center = d_radial_gradient.center
- d_args.append(torch.tensor((center.x, center.y)))
- radius = d_radial_gradient.radius
- d_args.append(torch.tensor((radius.x, radius.y)))
- offsets = torch.zeros((d_radial_gradient.num_stops))
- stop_colors = torch.zeros((d_radial_gradient.num_stops, 4))
- d_radial_gradient.copy_to(\
- diffvg.float_ptr(offsets.data_ptr()),
- diffvg.float_ptr(stop_colors.data_ptr()))
- assert(torch.isfinite(stop_colors).all())
- d_args.append(offsets)
- d_args.append(stop_colors)
- else:
- assert(False)
- d_args.append(None) # use_even_odd_rule
- d_shape_to_canvas = torch.zeros((3, 3))
- d_shape_group.copy_to(diffvg.float_ptr(d_shape_to_canvas.data_ptr()))
- assert(torch.isfinite(d_shape_to_canvas).all())
- d_args.append(d_shape_to_canvas)
- d_args.append(None) # filter_type
- d_args.append(torch.tensor(scene.get_d_filter_radius()))
-
- return tuple(d_args)
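The backward pass above hand-assembles one gradient entry per flattened forward argument, which is the contract of a PyTorch custom autograd function. As a point of reference, here is a minimal, self-contained sketch of that contract (the names below are illustrative and not part of diffvg's API): backward must return exactly one tensor or None for every argument forward received, in the same order.

```python
# Minimal sketch of the pattern used above: a torch.autograd.Function whose
# backward returns exactly one gradient (or None) per forward() argument,
# in the same order. Names here are illustrative, not part of diffvg's API.
import torch

class ScaleAndShift(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, scale, shift, label):
        # 'label' stands in for non-differentiable arguments (counts, enums, seeds).
        ctx.save_for_backward(x, scale)
        return x * scale + shift

    @staticmethod
    def backward(ctx, grad_out):
        x, scale = ctx.saved_tensors
        d_x = grad_out * scale
        d_scale = (grad_out * x).sum()
        d_shift = grad_out.sum()
        # One entry per forward argument; None for non-differentiable inputs.
        return d_x, d_scale, d_shift, None

x = torch.randn(4, requires_grad=True)
scale = torch.tensor(2.0, requires_grad=True)
shift = torch.tensor(0.5, requires_grad=True)
y = ScaleAndShift.apply(x, scale, shift, "demo")
y.sum().backward()
print(x.grad, scale.grad, shift.grad)
```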
diff --git a/spaces/CVPR/LIVE/thrust/thrust/mr/disjoint_tls_pool.h b/spaces/CVPR/LIVE/thrust/thrust/mr/disjoint_tls_pool.h
deleted file mode 100644
index e50eba76255421812bb1b0c4a355e879eef37492..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/mr/disjoint_tls_pool.h
+++ /dev/null
@@ -1,69 +0,0 @@
-/*
- * Copyright 2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file disjoint_tls_pool.h
- * \brief A function wrapping a thread local instance of a \p disjoint_unsynchronized_pool_resource.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-#if THRUST_CPP_DIALECT >= 2011
-
-#include <thrust/mr/disjoint_pool.h>
-
-namespace thrust
-{
-namespace mr
-{
-
-/*! \addtogroup memory_management Memory Management
- * \addtogroup memory_resources Memory Resources
- * \ingroup memory_resources
- * \{
- */
-
-/*! Potentially constructs, if not yet created, and then returns the address of a thread-local
- * \p disjoint_unsynchronized_pool_resource,
- *
- * \tparam Upstream the first template argument to the pool template
- * \tparam Bookkeeper the second template argument to the pool template
- * \param upstream the first argument to the constructor, if invoked
- * \param bookkeeper the second argument to the constructor, if invoked
- */
-template<typename Upstream, typename Bookkeeper>
-__host__
-thrust::mr::disjoint_unsynchronized_pool_resource<Upstream, Bookkeeper> & tls_disjoint_pool(
- Upstream * upstream = NULL,
- Bookkeeper * bookkeeper = NULL)
-{
- static thread_local auto adaptor = [&]{
- assert(upstream && bookkeeper);
-        return thrust::mr::disjoint_unsynchronized_pool_resource<Upstream, Bookkeeper>(upstream, bookkeeper);
- }();
-
- return adaptor;
-}
-
-/*! \}
- */
-
-} // end mr
-} // end thrust
-
-#endif // THRUST_CPP_DIALECT >= 2011
-
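The header above memoizes one pool per thread through a `thread_local` static that is constructed lazily on the first call. A rough Python analog of that per-thread lazy-construction pattern, using `threading.local` with a stand-in `Pool` class (not a Thrust binding), might look like this:

```python
# Rough analog (illustrative only) of the thread-local lazy construction in
# tls_disjoint_pool: each thread builds its own pool on first use and then
# reuses it. 'Pool' is a placeholder class, not a binding to Thrust.
import threading

class Pool:
    def __init__(self, upstream, bookkeeper):
        self.upstream = upstream
        self.bookkeeper = bookkeeper

_tls = threading.local()

def tls_pool(upstream=None, bookkeeper=None):
    if not hasattr(_tls, "pool"):
        # Arguments are only required the first time a thread calls this.
        assert upstream is not None and bookkeeper is not None
        _tls.pool = Pool(upstream, bookkeeper)
    return _tls.pool

def worker():
    p = tls_pool(upstream="malloc", bookkeeper="new")
    assert p is tls_pool()   # later calls return the same per-thread object

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```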
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/for_each.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/for_each.h
deleted file mode 100644
index 9ef45dfe62e47d9779c4b60839628efd82c2a5e1..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/for_each.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits for_each
-#include <thrust/system/detail/sequential/for_each.h>
-
diff --git a/spaces/CVPR/lama-example/saicinpainting/training/modules/spatial_transform.py b/spaces/CVPR/lama-example/saicinpainting/training/modules/spatial_transform.py
deleted file mode 100644
index 2de024ba08c549605a08b64d096f1f0db7b7722a..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/saicinpainting/training/modules/spatial_transform.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from kornia.geometry.transform import rotate
-
-
-class LearnableSpatialTransformWrapper(nn.Module):
- def __init__(self, impl, pad_coef=0.5, angle_init_range=80, train_angle=True):
- super().__init__()
- self.impl = impl
- self.angle = torch.rand(1) * angle_init_range
- if train_angle:
- self.angle = nn.Parameter(self.angle, requires_grad=True)
- self.pad_coef = pad_coef
-
- def forward(self, x):
- if torch.is_tensor(x):
- return self.inverse_transform(self.impl(self.transform(x)), x)
- elif isinstance(x, tuple):
- x_trans = tuple(self.transform(elem) for elem in x)
- y_trans = self.impl(x_trans)
- return tuple(self.inverse_transform(elem, orig_x) for elem, orig_x in zip(y_trans, x))
- else:
- raise ValueError(f'Unexpected input type {type(x)}')
-
- def transform(self, x):
- height, width = x.shape[2:]
- pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef)
- x_padded = F.pad(x, [pad_w, pad_w, pad_h, pad_h], mode='reflect')
- x_padded_rotated = rotate(x_padded, angle=self.angle.to(x_padded))
- return x_padded_rotated
-
- def inverse_transform(self, y_padded_rotated, orig_x):
- height, width = orig_x.shape[2:]
- pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef)
-
- y_padded = rotate(y_padded_rotated, angle=-self.angle.to(y_padded_rotated))
- y_height, y_width = y_padded.shape[2:]
- y = y_padded[:, :, pad_h : y_height - pad_h, pad_w : y_width - pad_w]
- return y
-
-
-if __name__ == '__main__':
- layer = LearnableSpatialTransformWrapper(nn.Identity())
- x = torch.arange(2* 3 * 15 * 15).view(2, 3, 15, 15).float()
- y = layer(x)
- assert x.shape == y.shape
- assert torch.allclose(x[:, :, 1:, 1:][:, :, :-1, :-1], y[:, :, 1:, 1:][:, :, :-1, :-1])
- print('all ok')
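Beyond the identity self-test above, the wrapper is meant to enclose a shape-preserving module so the learned rotation is undone after processing. A hypothetical usage sketch (it assumes kornia is installed, as the module imports it, and uses an arbitrary 3-channel convolution as the inner block):

```python
# Hypothetical usage sketch: rotate the input by a learnable angle, run a
# shape-preserving block, then rotate back and crop to the original size.
import torch
import torch.nn as nn

inner = nn.Conv2d(3, 3, kernel_size=3, padding=1)        # shape-preserving block
layer = LearnableSpatialTransformWrapper(inner, pad_coef=0.5, train_angle=True)

x = torch.randn(2, 3, 64, 64)
y = layer(x)
assert y.shape == x.shape                                  # spatial size is restored
print("trainable angle (degrees):", float(layer.angle))
```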
diff --git a/spaces/Chintan-Donda/KKMS-KSSW-HF/src/translator.py b/spaces/Chintan-Donda/KKMS-KSSW-HF/src/translator.py
deleted file mode 100644
index 20a9473388d6404ef0ed316a04ec01f12e153686..0000000000000000000000000000000000000000
--- a/spaces/Chintan-Donda/KKMS-KSSW-HF/src/translator.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import src.constants as constants_utils
-import requests
-from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
-from mosestokenizer import *
-from indicnlp.tokenize import sentence_tokenize
-from googletrans import Translator, constants
-
-
-class TRANSLATOR:
- def __init__(self):
- print()
-
-
- def split_sentences(self, paragraph, language):
- if language == "en":
- with MosesSentenceSplitter(language) as splitter:
- return splitter([paragraph])
- elif language in constants_utils.INDIC_LANGUAGE:
- return sentence_tokenize.sentence_split(paragraph, lang=language)
-
-
- def get_in_hindi(self, payload):
- tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
- model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
- article = self.split_sentences(payload['inputs'], 'en')
- # inputs = tokenizer(payload['input'], return_tensors="pt")
- out_text = ""
- for a in article:
- inputs = tokenizer(a, return_tensors="pt")
- translated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["hin_Deva"], max_length=100)
- translated_sent = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
-            out_text += translated_sent + " "  # accumulate translated sentences
- return out_text
-
-
- def get_in_indic(self, text, language='Hindi'):
- tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
- model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
- inputs = tokenizer(text, return_tensors="pt")
-
- code = "eng_Latn"
- if language == 'Hindi':
- code= "hin_Deva"
- elif language == 'Marathi':
- code = "mar_Deva"
-
- translated_tokens = model.generate(
- **inputs,
- forced_bos_token_id=tokenizer.lang_code_to_id[code],
- max_length=1000
- )
-
- out_text = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
- return out_text
-
-
- def get_indic_google_translate(self, text, language='Hindi'):
- # Init the Google API translator
- translator = Translator()
- translations = translator.translate(text, dest=constants_utils.INDIC_LANGUAGE.get(language, 'en'))
- return str(translations.text)
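A hypothetical driver for the TRANSLATOR class above; it assumes the repo's src/constants.py provides the INDIC_LANGUAGE mapping and that the facebook/nllb-200-distilled-600M checkpoint can be fetched from the Hugging Face hub:

```python
# Hedged usage sketch of the TRANSLATOR class above. It assumes src/constants.py
# defines INDIC_LANGUAGE (language name -> code) and that the NLLB-200 model can
# be downloaded; each call loads the checkpoint, so expect a delay on first use.
translator = TRANSLATOR()

english = "Soil testing helps farmers choose the right fertilizer."
hindi = translator.get_in_indic(english, language='Hindi')
marathi = translator.get_in_indic(english, language='Marathi')
print(hindi)
print(marathi)

# Alternative path through googletrans instead of NLLB:
# print(translator.get_indic_google_translate(english, language='Hindi'))
```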
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/base_dataset.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/base_dataset.py
deleted file mode 100644
index ae2a8d0e21370129c0182cddc427eb293bbe5982..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/base_dataset.py
+++ /dev/null
@@ -1,68 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import json
-from typing import Iterable
-
-from torch.utils.data import Dataset, ConcatDataset
-from torch.utils.data.dataloader import default_collate
-
-
-class BaseDataset(Dataset):
- def __init__(
- self, vis_processor=None, text_processor=None, vis_root=None, ann_paths=[]
- ):
- """
- vis_root (string): Root directory of images (e.g. coco/images/)
- ann_root (string): directory to store the annotation file
- """
- self.vis_root = vis_root
-
- self.annotation = []
- for ann_path in ann_paths:
- self.annotation.extend(json.load(open(ann_path, "r"))['annotations'])
-
- self.vis_processor = vis_processor
- self.text_processor = text_processor
-
- self._add_instance_ids()
-
- def __len__(self):
- return len(self.annotation)
-
- def collater(self, samples):
- return default_collate(samples)
-
- def set_processors(self, vis_processor, text_processor):
- self.vis_processor = vis_processor
- self.text_processor = text_processor
-
- def _add_instance_ids(self, key="instance_id"):
- for idx, ann in enumerate(self.annotation):
- ann[key] = str(idx)
-
-
-class ConcatDataset(ConcatDataset):
- def __init__(self, datasets: Iterable[Dataset]) -> None:
- super().__init__(datasets)
-
- def collater(self, samples):
- # TODO For now only supports datasets with same underlying collater implementations
-
- all_keys = set()
- for s in samples:
- all_keys.update(s)
-
- shared_keys = all_keys
- for s in samples:
- shared_keys = shared_keys & set(s.keys())
-
- samples_shared_keys = []
- for s in samples:
- samples_shared_keys.append({k: s[k] for k in s.keys() if k in shared_keys})
-
- return self.datasets[0].collater(samples_shared_keys)
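The collater above keeps only the fields shared by every sample before delegating to the first dataset's collater. A minimal sketch of that key-intersection step in isolation, with made-up sample dictionaries:

```python
# Minimal sketch of the key-intersection step used by ConcatDataset.collater:
# only fields present in every sample survive before collation.
from torch.utils.data.dataloader import default_collate

samples = [
    {"image_id": 0, "label": 1, "caption": "a cat"},
    {"image_id": 1, "label": 0},                      # no 'caption' field
]

shared_keys = set(samples[0])
for s in samples[1:]:
    shared_keys &= set(s)

batch = default_collate([{k: s[k] for k in shared_keys} for s in samples])
print(batch)   # only 'image_id' and 'label' are collated
```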
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageTk.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageTk.py
deleted file mode 100644
index bf98eb2c8c25c7446dd91890f49291486222f3b8..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageTk.py
+++ /dev/null
@@ -1,283 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# a Tk display interface
-#
-# History:
-# 96-04-08 fl Created
-# 96-09-06 fl Added getimage method
-# 96-11-01 fl Rewritten, removed image attribute and crop method
-# 97-05-09 fl Use PyImagingPaste method instead of image type
-# 97-05-12 fl Minor tweaks to match the IFUNC95 interface
-# 97-05-17 fl Support the "pilbitmap" booster patch
-# 97-06-05 fl Added file= and data= argument to image constructors
-# 98-03-09 fl Added width and height methods to Image classes
-# 98-07-02 fl Use default mode for "P" images without palette attribute
-# 98-07-02 fl Explicitly destroy Tkinter image objects
-# 99-07-24 fl Support multiple Tk interpreters (from Greg Couch)
-# 99-07-26 fl Automatically hook into Tkinter (if possible)
-# 99-08-15 fl Hook uses _imagingtk instead of _imaging
-#
-# Copyright (c) 1997-1999 by Secret Labs AB
-# Copyright (c) 1996-1997 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import tkinter
-from io import BytesIO
-
-from . import Image
-
-# --------------------------------------------------------------------
-# Check for Tkinter interface hooks
-
-_pilbitmap_ok = None
-
-
-def _pilbitmap_check():
- global _pilbitmap_ok
- if _pilbitmap_ok is None:
- try:
- im = Image.new("1", (1, 1))
- tkinter.BitmapImage(data=f"PIL:{im.im.id}")
- _pilbitmap_ok = 1
- except tkinter.TclError:
- _pilbitmap_ok = 0
- return _pilbitmap_ok
-
-
-def _get_image_from_kw(kw):
- source = None
- if "file" in kw:
- source = kw.pop("file")
- elif "data" in kw:
- source = BytesIO(kw.pop("data"))
- if source:
- return Image.open(source)
-
-
-def _pyimagingtkcall(command, photo, id):
- tk = photo.tk
- try:
- tk.call(command, photo, id)
- except tkinter.TclError:
- # activate Tkinter hook
- # may raise an error if it cannot attach to Tkinter
- from . import _imagingtk
-
- _imagingtk.tkinit(tk.interpaddr())
- tk.call(command, photo, id)
-
-
-# --------------------------------------------------------------------
-# PhotoImage
-
-
-class PhotoImage:
- """
- A Tkinter-compatible photo image. This can be used
- everywhere Tkinter expects an image object. If the image is an RGBA
- image, pixels having alpha 0 are treated as transparent.
-
- The constructor takes either a PIL image, or a mode and a size.
- Alternatively, you can use the ``file`` or ``data`` options to initialize
- the photo image object.
-
- :param image: Either a PIL image, or a mode string. If a mode string is
- used, a size must also be given.
- :param size: If the first argument is a mode string, this defines the size
- of the image.
- :keyword file: A filename to load the image from (using
- ``Image.open(file)``).
- :keyword data: An 8-bit string containing image data (as loaded from an
- image file).
- """
-
- def __init__(self, image=None, size=None, **kw):
- # Tk compatibility: file or data
- if image is None:
- image = _get_image_from_kw(kw)
-
- if hasattr(image, "mode") and hasattr(image, "size"):
- # got an image instead of a mode
- mode = image.mode
- if mode == "P":
- # palette mapped data
- image.apply_transparency()
- image.load()
- try:
- mode = image.palette.mode
- except AttributeError:
- mode = "RGB" # default
- size = image.size
- kw["width"], kw["height"] = size
- else:
- mode = image
- image = None
-
- if mode not in ["1", "L", "RGB", "RGBA"]:
- mode = Image.getmodebase(mode)
-
- self.__mode = mode
- self.__size = size
- self.__photo = tkinter.PhotoImage(**kw)
- self.tk = self.__photo.tk
- if image:
- self.paste(image)
-
- def __del__(self):
- name = self.__photo.name
- self.__photo.name = None
- try:
- self.__photo.tk.call("image", "delete", name)
- except Exception:
- pass # ignore internal errors
-
- def __str__(self):
- """
- Get the Tkinter photo image identifier. This method is automatically
- called by Tkinter whenever a PhotoImage object is passed to a Tkinter
- method.
-
- :return: A Tkinter photo image identifier (a string).
- """
- return str(self.__photo)
-
- def width(self):
- """
- Get the width of the image.
-
- :return: The width, in pixels.
- """
- return self.__size[0]
-
- def height(self):
- """
- Get the height of the image.
-
- :return: The height, in pixels.
- """
- return self.__size[1]
-
- def paste(self, im):
- """
- Paste a PIL image into the photo image. Note that this can
- be very slow if the photo image is displayed.
-
- :param im: A PIL image. The size must match the target region. If the
- mode does not match, the image is converted to the mode of
- the bitmap image.
- """
- # convert to blittable
- im.load()
- image = im.im
- if image.isblock() and im.mode == self.__mode:
- block = image
- else:
- block = image.new_block(self.__mode, im.size)
- image.convert2(block, image) # convert directly between buffers
-
- _pyimagingtkcall("PyImagingPhoto", self.__photo, block.id)
-
-
-# --------------------------------------------------------------------
-# BitmapImage
-
-
-class BitmapImage:
- """
- A Tkinter-compatible bitmap image. This can be used everywhere Tkinter
- expects an image object.
-
- The given image must have mode "1". Pixels having value 0 are treated as
- transparent. Options, if any, are passed on to Tkinter. The most commonly
- used option is ``foreground``, which is used to specify the color for the
- non-transparent parts. See the Tkinter documentation for information on
- how to specify colours.
-
- :param image: A PIL image.
- """
-
- def __init__(self, image=None, **kw):
- # Tk compatibility: file or data
- if image is None:
- image = _get_image_from_kw(kw)
-
- self.__mode = image.mode
- self.__size = image.size
-
- if _pilbitmap_check():
- # fast way (requires the pilbitmap booster patch)
- image.load()
- kw["data"] = f"PIL:{image.im.id}"
- self.__im = image # must keep a reference
- else:
- # slow but safe way
- kw["data"] = image.tobitmap()
- self.__photo = tkinter.BitmapImage(**kw)
-
- def __del__(self):
- name = self.__photo.name
- self.__photo.name = None
- try:
- self.__photo.tk.call("image", "delete", name)
- except Exception:
- pass # ignore internal errors
-
- def width(self):
- """
- Get the width of the image.
-
- :return: The width, in pixels.
- """
- return self.__size[0]
-
- def height(self):
- """
- Get the height of the image.
-
- :return: The height, in pixels.
- """
- return self.__size[1]
-
- def __str__(self):
- """
- Get the Tkinter bitmap image identifier. This method is automatically
- called by Tkinter whenever a BitmapImage object is passed to a Tkinter
- method.
-
- :return: A Tkinter bitmap image identifier (a string).
- """
- return str(self.__photo)
-
-
-def getimage(photo):
- """Copies the contents of a PhotoImage to a PIL image memory."""
- im = Image.new("RGBA", (photo.width(), photo.height()))
- block = im.im
-
- _pyimagingtkcall("PyImagingPhotoGet", photo, block.id)
-
- return im
-
-
-def _show(image, title):
- """Helper for the Image.show method."""
-
- class UI(tkinter.Label):
- def __init__(self, master, im):
- if im.mode == "1":
- self.image = BitmapImage(im, foreground="white", master=master)
- else:
- self.image = PhotoImage(im, master=master)
- super().__init__(master, image=self.image, bg="black", bd=0)
-
- if not tkinter._default_root:
- msg = "tkinter not initialized"
- raise OSError(msg)
- top = tkinter.Toplevel()
- if title:
- top.title(title)
- UI(top, image).pack()
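For reference, the usual way this module is consumed from application code is shown below; the sketch needs a running display, and the extra reference to the PhotoImage guards against Tk showing a blank image once Python garbage-collects it.

```python
# Typical use of ImageTk.PhotoImage in a Tkinter app (requires a display).
import tkinter as tk
from PIL import Image, ImageTk

root = tk.Tk()
im = Image.new("RGB", (160, 120), "steelblue")   # any PIL image works here
photo = ImageTk.PhotoImage(im)

label = tk.Label(root, image=photo)
label.image = photo          # keep a reference so Tk does not lose the image
label.pack()
root.mainloop()
```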
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiofiles/tempfile/temptypes.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiofiles/tempfile/temptypes.py
deleted file mode 100644
index b17e0257b7999e2512f125f8b74d266158f23820..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiofiles/tempfile/temptypes.py
+++ /dev/null
@@ -1,73 +0,0 @@
-"""Async wrappers for spooled temp files and temp directory objects"""
-
-# Imports
-import asyncio
-from types import coroutine
-
-from ..base import AsyncBase
-from ..threadpool.utils import (
- delegate_to_executor,
- proxy_property_directly,
- cond_delegate_to_executor,
-)
-from functools import partial
-
-
-@delegate_to_executor("fileno", "rollover")
-@cond_delegate_to_executor(
- "close",
- "flush",
- "isatty",
- "read",
- "readline",
- "readlines",
- "seek",
- "tell",
- "truncate",
-)
-@proxy_property_directly("closed", "encoding", "mode", "name", "newlines")
-class AsyncSpooledTemporaryFile(AsyncBase):
- """Async wrapper for SpooledTemporaryFile class"""
-
- async def _check(self):
- if self._file._rolled:
- return
- max_size = self._file._max_size
- if max_size and self._file.tell() > max_size:
- await self.rollover()
-
- async def write(self, s):
- """Implementation to anticipate rollover"""
- if self._file._rolled:
- cb = partial(self._file.write, s)
- return await self._loop.run_in_executor(self._executor, cb)
- else:
- file = self._file._file # reference underlying base IO object
- rv = file.write(s)
- await self._check()
- return rv
-
- async def writelines(self, iterable):
- """Implementation to anticipate rollover"""
- if self._file._rolled:
- cb = partial(self._file.writelines, iterable)
- return await self._loop.run_in_executor(self._executor, cb)
- else:
- file = self._file._file # reference underlying base IO object
- rv = file.writelines(iterable)
- await self._check()
- return rv
-
-
-@delegate_to_executor("cleanup")
-@proxy_property_directly("name")
-class AsyncTemporaryDirectory:
- """Async wrapper for TemporaryDirectory class"""
-
- def __init__(self, file, loop, executor):
- self._file = file
- self._loop = loop
- self._executor = executor
-
- async def close(self):
- await self.cleanup()
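The wrapper above checks the spool size after each in-memory write so rollover happens as soon as max_size is exceeded. A hedged usage sketch of the async spooled temp file it backs (assuming an aiofiles release that exposes aiofiles.tempfile.SpooledTemporaryFile as an async context manager):

```python
# Sketch, assuming aiofiles.tempfile.SpooledTemporaryFile is available:
# writes stay in memory until max_size is exceeded, then roll over to disk.
import asyncio
import aiofiles.tempfile

async def main():
    async with aiofiles.tempfile.SpooledTemporaryFile(max_size=16) as f:
        await f.write(b"small")                                   # still in memory
        await f.write(b" but this pushes it past max_size")       # triggers rollover
        await f.seek(0)
        print(await f.read())

asyncio.run(main())
```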
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/statisticsPen.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/statisticsPen.py
deleted file mode 100644
index 3468fc72e54a19cd5bdf2e6d459e848566208df8..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/statisticsPen.py
+++ /dev/null
@@ -1,122 +0,0 @@
-"""Pen calculating area, center of mass, variance and standard-deviation,
-covariance and correlation, and slant, of glyph shapes."""
-import math
-from fontTools.pens.momentsPen import MomentsPen
-
-__all__ = ["StatisticsPen"]
-
-
-class StatisticsPen(MomentsPen):
-
- """Pen calculating area, center of mass, variance and
- standard-deviation, covariance and correlation, and slant,
- of glyph shapes.
-
- Note that all the calculated values are 'signed'. Ie. if the
- glyph shape is self-intersecting, the values are not correct
- (but well-defined). As such, area will be negative if contour
- directions are clockwise. Moreover, variance might be negative
- if the shapes are self-intersecting in certain ways."""
-
- def __init__(self, glyphset=None):
- MomentsPen.__init__(self, glyphset=glyphset)
- self.__zero()
-
- def _closePath(self):
- MomentsPen._closePath(self)
- self.__update()
-
- def __zero(self):
- self.meanX = 0
- self.meanY = 0
- self.varianceX = 0
- self.varianceY = 0
- self.stddevX = 0
- self.stddevY = 0
- self.covariance = 0
- self.correlation = 0
- self.slant = 0
-
- def __update(self):
-
- area = self.area
- if not area:
- self.__zero()
- return
-
- # Center of mass
- # https://en.wikipedia.org/wiki/Center_of_mass#A_continuous_volume
- self.meanX = meanX = self.momentX / area
- self.meanY = meanY = self.momentY / area
-
- # Var(X) = E[X^2] - E[X]^2
- self.varianceX = varianceX = self.momentXX / area - meanX**2
- self.varianceY = varianceY = self.momentYY / area - meanY**2
-
- self.stddevX = stddevX = math.copysign(abs(varianceX) ** 0.5, varianceX)
- self.stddevY = stddevY = math.copysign(abs(varianceY) ** 0.5, varianceY)
-
- # Covariance(X,Y) = ( E[X.Y] - E[X]E[Y] )
- self.covariance = covariance = self.momentXY / area - meanX * meanY
-
- # Correlation(X,Y) = Covariance(X,Y) / ( stddev(X) * stddev(Y) )
- # https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient
- if stddevX * stddevY == 0:
- correlation = float("NaN")
- else:
- correlation = covariance / (stddevX * stddevY)
- self.correlation = correlation if abs(correlation) > 1e-3 else 0
-
- slant = covariance / varianceY if varianceY != 0 else float("NaN")
- self.slant = slant if abs(slant) > 1e-3 else 0
-
-
-def _test(glyphset, upem, glyphs):
- from fontTools.pens.transformPen import TransformPen
- from fontTools.misc.transform import Scale
-
- print("upem", upem)
-
- for glyph_name in glyphs:
- print()
- print("glyph:", glyph_name)
- glyph = glyphset[glyph_name]
- pen = StatisticsPen(glyphset=glyphset)
- transformer = TransformPen(pen, Scale(1.0 / upem))
- glyph.draw(transformer)
- for item in [
- "area",
- "momentX",
- "momentY",
- "momentXX",
- "momentYY",
- "momentXY",
- "meanX",
- "meanY",
- "varianceX",
- "varianceY",
- "stddevX",
- "stddevY",
- "covariance",
- "correlation",
- "slant",
- ]:
- print("%s: %g" % (item, getattr(pen, item)))
-
-
-def main(args):
- if not args:
- return
- filename, glyphs = args[0], args[1:]
- from fontTools.ttLib import TTFont
-
- font = TTFont(filename)
- if not glyphs:
- glyphs = font.getGlyphOrder()
- _test(font.getGlyphSet(), font["head"].unitsPerEm, glyphs)
-
-
-if __name__ == "__main__":
- import sys
-
- main(sys.argv[1:])
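Besides the command-line entry point above, the pen can be driven directly from a glyph set; a short sketch (the font path and glyph name are placeholders) that scales the outline to em units before reading the accumulated statistics:

```python
# Sketch of driving StatisticsPen programmatically; "MyFont.ttf" and "A" are
# placeholders. The outline is scaled to em units before drawing into the pen.
from fontTools.ttLib import TTFont
from fontTools.pens.statisticsPen import StatisticsPen
from fontTools.pens.transformPen import TransformPen
from fontTools.misc.transform import Scale

font = TTFont("MyFont.ttf")                    # placeholder path
glyphset = font.getGlyphSet()
upem = font["head"].unitsPerEm

pen = StatisticsPen(glyphset=glyphset)
glyphset["A"].draw(TransformPen(pen, Scale(1.0 / upem)))
print("area:", pen.area, "slant:", pen.slant, "meanX:", pen.meanX)
```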
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I__5.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I__5.py
deleted file mode 100644
index 5edc86a9cbc9a0b710cfc014a3910f671f791e54..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I__5.py
+++ /dev/null
@@ -1,46 +0,0 @@
-""" TSI{0,1,2,3,5} are private tables used by Microsoft Visual TrueType (VTT)
-tool to store its hinting source data.
-
-TSI5 contains the VTT character groups.
-"""
-from fontTools.misc.textTools import safeEval
-from . import DefaultTable
-import sys
-import array
-
-
-class table_T_S_I__5(DefaultTable.DefaultTable):
- def decompile(self, data, ttFont):
- numGlyphs = ttFont["maxp"].numGlyphs
- assert len(data) == 2 * numGlyphs
- a = array.array("H")
- a.frombytes(data)
- if sys.byteorder != "big":
- a.byteswap()
- self.glyphGrouping = {}
- for i in range(numGlyphs):
- self.glyphGrouping[ttFont.getGlyphName(i)] = a[i]
-
- def compile(self, ttFont):
- glyphNames = ttFont.getGlyphOrder()
- a = array.array("H")
- for i in range(len(glyphNames)):
- a.append(self.glyphGrouping.get(glyphNames[i], 0))
- if sys.byteorder != "big":
- a.byteswap()
- return a.tobytes()
-
- def toXML(self, writer, ttFont):
- names = sorted(self.glyphGrouping.keys())
- for glyphName in names:
- writer.simpletag(
- "glyphgroup", name=glyphName, value=self.glyphGrouping[glyphName]
- )
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if not hasattr(self, "glyphGrouping"):
- self.glyphGrouping = {}
- if name != "glyphgroup":
- return
- self.glyphGrouping[attrs["name"]] = safeEval(attrs["value"])
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otConverters.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otConverters.py
deleted file mode 100644
index 6b2a8c39678af0f4828ee477e57038d81d02006b..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otConverters.py
+++ /dev/null
@@ -1,1929 +0,0 @@
-from fontTools.misc.fixedTools import (
- fixedToFloat as fi2fl,
- floatToFixed as fl2fi,
- floatToFixedToStr as fl2str,
- strToFixedToFloat as str2fl,
- ensureVersionIsLong as fi2ve,
- versionToFixed as ve2fi,
-)
-from fontTools.misc.roundTools import nearestMultipleShortestRepr, otRound
-from fontTools.misc.textTools import bytesjoin, tobytes, tostr, pad, safeEval
-from fontTools.ttLib import getSearchRange
-from .otBase import (
- CountReference,
- FormatSwitchingBaseTable,
- OTTableReader,
- OTTableWriter,
- ValueRecordFactory,
-)
-from .otTables import (
- lookupTypes,
- AATStateTable,
- AATState,
- AATAction,
- ContextualMorphAction,
- LigatureMorphAction,
- InsertionMorphAction,
- MorxSubtable,
- ExtendMode as _ExtendMode,
- CompositeMode as _CompositeMode,
- NO_VARIATION_INDEX,
-)
-from itertools import zip_longest
-from functools import partial
-import re
-import struct
-from typing import Optional
-import logging
-
-
-log = logging.getLogger(__name__)
-istuple = lambda t: isinstance(t, tuple)
-
-
-def buildConverters(tableSpec, tableNamespace):
- """Given a table spec from otData.py, build a converter object for each
- field of the table. This is called for each table in otData.py, and
- the results are assigned to the corresponding class in otTables.py."""
- converters = []
- convertersByName = {}
- for tp, name, repeat, aux, descr in tableSpec:
- tableName = name
- if name.startswith("ValueFormat"):
- assert tp == "uint16"
- converterClass = ValueFormat
- elif name.endswith("Count") or name in ("StructLength", "MorphType"):
- converterClass = {
- "uint8": ComputedUInt8,
- "uint16": ComputedUShort,
- "uint32": ComputedULong,
- }[tp]
- elif name == "SubTable":
- converterClass = SubTable
- elif name == "ExtSubTable":
- converterClass = ExtSubTable
- elif name == "SubStruct":
- converterClass = SubStruct
- elif name == "FeatureParams":
- converterClass = FeatureParams
- elif name in ("CIDGlyphMapping", "GlyphCIDMapping"):
- converterClass = StructWithLength
- else:
- if not tp in converterMapping and "(" not in tp:
- tableName = tp
- converterClass = Struct
- else:
- converterClass = eval(tp, tableNamespace, converterMapping)
-
- conv = converterClass(name, repeat, aux, description=descr)
-
- if conv.tableClass:
-            # A "template" such as OffsetTo(AType) knows the table class already
- tableClass = conv.tableClass
- elif tp in ("MortChain", "MortSubtable", "MorxChain"):
- tableClass = tableNamespace.get(tp)
- else:
- tableClass = tableNamespace.get(tableName)
-
- if not conv.tableClass:
- conv.tableClass = tableClass
-
- if name in ["SubTable", "ExtSubTable", "SubStruct"]:
- conv.lookupTypes = tableNamespace["lookupTypes"]
- # also create reverse mapping
- for t in conv.lookupTypes.values():
- for cls in t.values():
- convertersByName[cls.__name__] = Table(name, repeat, aux, cls)
- if name == "FeatureParams":
- conv.featureParamTypes = tableNamespace["featureParamTypes"]
- conv.defaultFeatureParams = tableNamespace["FeatureParams"]
- for cls in conv.featureParamTypes.values():
- convertersByName[cls.__name__] = Table(name, repeat, aux, cls)
- converters.append(conv)
- assert name not in convertersByName, name
- convertersByName[name] = conv
- return converters, convertersByName
-
-
-class _MissingItem(tuple):
- __slots__ = ()
-
-
-try:
- from collections import UserList
-except ImportError:
- from UserList import UserList
-
-
-class _LazyList(UserList):
- def __getslice__(self, i, j):
- return self.__getitem__(slice(i, j))
-
- def __getitem__(self, k):
- if isinstance(k, slice):
- indices = range(*k.indices(len(self)))
- return [self[i] for i in indices]
- item = self.data[k]
- if isinstance(item, _MissingItem):
- self.reader.seek(self.pos + item[0] * self.recordSize)
- item = self.conv.read(self.reader, self.font, {})
- self.data[k] = item
- return item
-
- def __add__(self, other):
- if isinstance(other, _LazyList):
- other = list(other)
- elif isinstance(other, list):
- pass
- else:
- return NotImplemented
- return list(self) + other
-
- def __radd__(self, other):
- if not isinstance(other, list):
- return NotImplemented
- return other + list(self)
-
-
-class BaseConverter(object):
-
- """Base class for converter objects. Apart from the constructor, this
- is an abstract class."""
-
- def __init__(self, name, repeat, aux, tableClass=None, *, description=""):
- self.name = name
- self.repeat = repeat
- self.aux = aux
- self.tableClass = tableClass
- self.isCount = name.endswith("Count") or name in [
- "DesignAxisRecordSize",
- "ValueRecordSize",
- ]
- self.isLookupType = name.endswith("LookupType") or name == "MorphType"
- self.isPropagated = name in [
- "ClassCount",
- "Class2Count",
- "FeatureTag",
- "SettingsCount",
- "VarRegionCount",
- "MappingCount",
- "RegionAxisCount",
- "DesignAxisCount",
- "DesignAxisRecordSize",
- "AxisValueCount",
- "ValueRecordSize",
- "AxisCount",
- "BaseGlyphRecordCount",
- "LayerRecordCount",
- ]
- self.description = description
-
- def readArray(self, reader, font, tableDict, count):
- """Read an array of values from the reader."""
- lazy = font.lazy and count > 8
- if lazy:
- recordSize = self.getRecordSize(reader)
- if recordSize is NotImplemented:
- lazy = False
- if not lazy:
- l = []
- for i in range(count):
- l.append(self.read(reader, font, tableDict))
- return l
- else:
- l = _LazyList()
- l.reader = reader.copy()
- l.pos = l.reader.pos
- l.font = font
- l.conv = self
- l.recordSize = recordSize
- l.extend(_MissingItem([i]) for i in range(count))
- reader.advance(count * recordSize)
- return l
-
- def getRecordSize(self, reader):
- if hasattr(self, "staticSize"):
- return self.staticSize
- return NotImplemented
-
- def read(self, reader, font, tableDict):
- """Read a value from the reader."""
- raise NotImplementedError(self)
-
- def writeArray(self, writer, font, tableDict, values):
- try:
- for i, value in enumerate(values):
- self.write(writer, font, tableDict, value, i)
- except Exception as e:
- e.args = e.args + (i,)
- raise
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- """Write a value to the writer."""
- raise NotImplementedError(self)
-
- def xmlRead(self, attrs, content, font):
- """Read a value from XML."""
- raise NotImplementedError(self)
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- """Write a value to XML."""
- raise NotImplementedError(self)
-
- varIndexBasePlusOffsetRE = re.compile(r"VarIndexBase\s*\+\s*(\d+)")
-
- def getVarIndexOffset(self) -> Optional[int]:
- """If description has `VarIndexBase + {offset}`, return the offset else None."""
- m = self.varIndexBasePlusOffsetRE.search(self.description)
- if not m:
- return None
- return int(m.group(1))
-
-
-class SimpleValue(BaseConverter):
- @staticmethod
- def toString(value):
- return value
-
- @staticmethod
- def fromString(value):
- return value
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- xmlWriter.simpletag(name, attrs + [("value", self.toString(value))])
- xmlWriter.newline()
-
- def xmlRead(self, attrs, content, font):
- return self.fromString(attrs["value"])
-
-
-class OptionalValue(SimpleValue):
- DEFAULT = None
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- if value != self.DEFAULT:
- attrs.append(("value", self.toString(value)))
- xmlWriter.simpletag(name, attrs)
- xmlWriter.newline()
-
- def xmlRead(self, attrs, content, font):
- if "value" in attrs:
- return self.fromString(attrs["value"])
- return self.DEFAULT
-
-
-class IntValue(SimpleValue):
- @staticmethod
- def fromString(value):
- return int(value, 0)
-
-
-class Long(IntValue):
- staticSize = 4
-
- def read(self, reader, font, tableDict):
- return reader.readLong()
-
- def readArray(self, reader, font, tableDict, count):
- return reader.readLongArray(count)
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer.writeLong(value)
-
- def writeArray(self, writer, font, tableDict, values):
- writer.writeLongArray(values)
-
-
-class ULong(IntValue):
- staticSize = 4
-
- def read(self, reader, font, tableDict):
- return reader.readULong()
-
- def readArray(self, reader, font, tableDict, count):
- return reader.readULongArray(count)
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer.writeULong(value)
-
- def writeArray(self, writer, font, tableDict, values):
- writer.writeULongArray(values)
-
-
-class Flags32(ULong):
- @staticmethod
- def toString(value):
- return "0x%08X" % value
-
-
-class VarIndex(OptionalValue, ULong):
- DEFAULT = NO_VARIATION_INDEX
-
-
-class Short(IntValue):
- staticSize = 2
-
- def read(self, reader, font, tableDict):
- return reader.readShort()
-
- def readArray(self, reader, font, tableDict, count):
- return reader.readShortArray(count)
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer.writeShort(value)
-
- def writeArray(self, writer, font, tableDict, values):
- writer.writeShortArray(values)
-
-
-class UShort(IntValue):
- staticSize = 2
-
- def read(self, reader, font, tableDict):
- return reader.readUShort()
-
- def readArray(self, reader, font, tableDict, count):
- return reader.readUShortArray(count)
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer.writeUShort(value)
-
- def writeArray(self, writer, font, tableDict, values):
- writer.writeUShortArray(values)
-
-
-class Int8(IntValue):
- staticSize = 1
-
- def read(self, reader, font, tableDict):
- return reader.readInt8()
-
- def readArray(self, reader, font, tableDict, count):
- return reader.readInt8Array(count)
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer.writeInt8(value)
-
- def writeArray(self, writer, font, tableDict, values):
- writer.writeInt8Array(values)
-
-
-class UInt8(IntValue):
- staticSize = 1
-
- def read(self, reader, font, tableDict):
- return reader.readUInt8()
-
- def readArray(self, reader, font, tableDict, count):
- return reader.readUInt8Array(count)
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer.writeUInt8(value)
-
- def writeArray(self, writer, font, tableDict, values):
- writer.writeUInt8Array(values)
-
-
-class UInt24(IntValue):
- staticSize = 3
-
- def read(self, reader, font, tableDict):
- return reader.readUInt24()
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer.writeUInt24(value)
-
-
-class ComputedInt(IntValue):
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- if value is not None:
- xmlWriter.comment("%s=%s" % (name, value))
- xmlWriter.newline()
-
-
-class ComputedUInt8(ComputedInt, UInt8):
- pass
-
-
-class ComputedUShort(ComputedInt, UShort):
- pass
-
-
-class ComputedULong(ComputedInt, ULong):
- pass
-
-
-class Tag(SimpleValue):
- staticSize = 4
-
- def read(self, reader, font, tableDict):
- return reader.readTag()
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer.writeTag(value)
-
-
-class GlyphID(SimpleValue):
- staticSize = 2
- typecode = "H"
-
- def readArray(self, reader, font, tableDict, count):
- return font.getGlyphNameMany(
- reader.readArray(self.typecode, self.staticSize, count)
- )
-
- def read(self, reader, font, tableDict):
- return font.getGlyphName(reader.readValue(self.typecode, self.staticSize))
-
- def writeArray(self, writer, font, tableDict, values):
- writer.writeArray(self.typecode, font.getGlyphIDMany(values))
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer.writeValue(self.typecode, font.getGlyphID(value))
-
-
-class GlyphID32(GlyphID):
- staticSize = 4
- typecode = "L"
-
-
-class NameID(UShort):
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- xmlWriter.simpletag(name, attrs + [("value", value)])
- if font and value:
- nameTable = font.get("name")
- if nameTable:
- name = nameTable.getDebugName(value)
- xmlWriter.write(" ")
- if name:
- xmlWriter.comment(name)
- else:
- xmlWriter.comment("missing from name table")
- log.warning("name id %d missing from name table" % value)
- xmlWriter.newline()
-
-
-class STATFlags(UShort):
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- xmlWriter.simpletag(name, attrs + [("value", value)])
- flags = []
- if value & 0x01:
- flags.append("OlderSiblingFontAttribute")
- if value & 0x02:
- flags.append("ElidableAxisValueName")
- if flags:
- xmlWriter.write(" ")
- xmlWriter.comment(" ".join(flags))
- xmlWriter.newline()
-
-
-class FloatValue(SimpleValue):
- @staticmethod
- def fromString(value):
- return float(value)
-
-
-class DeciPoints(FloatValue):
- staticSize = 2
-
- def read(self, reader, font, tableDict):
- return reader.readUShort() / 10
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer.writeUShort(round(value * 10))
-
-
-class BaseFixedValue(FloatValue):
- staticSize = NotImplemented
- precisionBits = NotImplemented
- readerMethod = NotImplemented
- writerMethod = NotImplemented
-
- def read(self, reader, font, tableDict):
- return self.fromInt(getattr(reader, self.readerMethod)())
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- getattr(writer, self.writerMethod)(self.toInt(value))
-
- @classmethod
- def fromInt(cls, value):
- return fi2fl(value, cls.precisionBits)
-
- @classmethod
- def toInt(cls, value):
- return fl2fi(value, cls.precisionBits)
-
- @classmethod
- def fromString(cls, value):
- return str2fl(value, cls.precisionBits)
-
- @classmethod
- def toString(cls, value):
- return fl2str(value, cls.precisionBits)
-
-
-class Fixed(BaseFixedValue):
- staticSize = 4
- precisionBits = 16
- readerMethod = "readLong"
- writerMethod = "writeLong"
-
-
-class F2Dot14(BaseFixedValue):
- staticSize = 2
- precisionBits = 14
- readerMethod = "readShort"
- writerMethod = "writeShort"
-
-
-class Angle(F2Dot14):
- # angles are specified in degrees, and encoded as F2Dot14 fractions of half
- # circle: e.g. 1.0 => 180, -0.5 => -90, -2.0 => -360, etc.
- bias = 0.0
- factor = 1.0 / (1 << 14) * 180 # 0.010986328125
-
- @classmethod
- def fromInt(cls, value):
- return (super().fromInt(value) + cls.bias) * 180
-
- @classmethod
- def toInt(cls, value):
- return super().toInt((value / 180) - cls.bias)
-
- @classmethod
- def fromString(cls, value):
- # quantize to nearest multiples of minimum fixed-precision angle
- return otRound(float(value) / cls.factor) * cls.factor
-
- @classmethod
- def toString(cls, value):
- return nearestMultipleShortestRepr(value, cls.factor)
-
-
-class BiasedAngle(Angle):
- # A bias of 1.0 is used in the representation of start and end angles
- # of COLRv1 PaintSweepGradients to allow for encoding +360deg
- bias = 1.0
-
-
-class Version(SimpleValue):
- staticSize = 4
-
- def read(self, reader, font, tableDict):
- value = reader.readLong()
- return value
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- value = fi2ve(value)
- writer.writeLong(value)
-
- @staticmethod
- def fromString(value):
- return ve2fi(value)
-
- @staticmethod
- def toString(value):
- return "0x%08x" % value
-
- @staticmethod
- def fromFloat(v):
- return fl2fi(v, 16)
-
-
-class Char64(SimpleValue):
- """An ASCII string with up to 64 characters.
-
- Unused character positions are filled with 0x00 bytes.
- Used in Apple AAT fonts in the `gcid` table.
- """
-
- staticSize = 64
-
- def read(self, reader, font, tableDict):
- data = reader.readData(self.staticSize)
- zeroPos = data.find(b"\0")
- if zeroPos >= 0:
- data = data[:zeroPos]
- s = tostr(data, encoding="ascii", errors="replace")
- if s != tostr(data, encoding="ascii", errors="ignore"):
- log.warning('replaced non-ASCII characters in "%s"' % s)
- return s
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- data = tobytes(value, encoding="ascii", errors="replace")
- if data != tobytes(value, encoding="ascii", errors="ignore"):
- log.warning('replacing non-ASCII characters in "%s"' % value)
- if len(data) > self.staticSize:
- log.warning(
- 'truncating overlong "%s" to %d bytes' % (value, self.staticSize)
- )
- data = (data + b"\0" * self.staticSize)[: self.staticSize]
- writer.writeData(data)
-
-
-class Struct(BaseConverter):
- def getRecordSize(self, reader):
- return self.tableClass and self.tableClass.getRecordSize(reader)
-
- def read(self, reader, font, tableDict):
- table = self.tableClass()
- table.decompile(reader, font)
- return table
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- value.compile(writer, font)
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- if value is None:
- if attrs:
- # If there are attributes (probably index), then
- # don't drop this even if it's NULL. It will mess
- # up the array indices of the containing element.
- xmlWriter.simpletag(name, attrs + [("empty", 1)])
- xmlWriter.newline()
- else:
- pass # NULL table, ignore
- else:
- value.toXML(xmlWriter, font, attrs, name=name)
-
- def xmlRead(self, attrs, content, font):
- if "empty" in attrs and safeEval(attrs["empty"]):
- return None
- table = self.tableClass()
- Format = attrs.get("Format")
- if Format is not None:
- table.Format = int(Format)
-
- noPostRead = not hasattr(table, "postRead")
- if noPostRead:
- # TODO Cache table.hasPropagated.
- cleanPropagation = False
- for conv in table.getConverters():
- if conv.isPropagated:
- cleanPropagation = True
- if not hasattr(font, "_propagator"):
- font._propagator = {}
- propagator = font._propagator
- assert conv.name not in propagator, (conv.name, propagator)
- setattr(table, conv.name, None)
- propagator[conv.name] = CountReference(table.__dict__, conv.name)
-
- for element in content:
- if isinstance(element, tuple):
- name, attrs, content = element
- table.fromXML(name, attrs, content, font)
- else:
- pass
-
- table.populateDefaults(propagator=getattr(font, "_propagator", None))
-
- if noPostRead:
- if cleanPropagation:
- for conv in table.getConverters():
- if conv.isPropagated:
- propagator = font._propagator
- del propagator[conv.name]
- if not propagator:
- del font._propagator
-
- return table
-
- def __repr__(self):
- return "Struct of " + repr(self.tableClass)
-
-
-class StructWithLength(Struct):
- def read(self, reader, font, tableDict):
- pos = reader.pos
- table = self.tableClass()
- table.decompile(reader, font)
- reader.seek(pos + table.StructLength)
- return table
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- for convIndex, conv in enumerate(value.getConverters()):
- if conv.name == "StructLength":
- break
- lengthIndex = len(writer.items) + convIndex
- if isinstance(value, FormatSwitchingBaseTable):
- lengthIndex += 1 # implicit Format field
- deadbeef = {1: 0xDE, 2: 0xDEAD, 4: 0xDEADBEEF}[conv.staticSize]
-
- before = writer.getDataLength()
- value.StructLength = deadbeef
- value.compile(writer, font)
- length = writer.getDataLength() - before
- lengthWriter = writer.getSubWriter()
- conv.write(lengthWriter, font, tableDict, length)
- assert writer.items[lengthIndex] == b"\xde\xad\xbe\xef"[: conv.staticSize]
- writer.items[lengthIndex] = lengthWriter.getAllData()
-
-
-class Table(Struct):
-
- staticSize = 2
-
- def readOffset(self, reader):
- return reader.readUShort()
-
- def writeNullOffset(self, writer):
- writer.writeUShort(0)
-
- def read(self, reader, font, tableDict):
- offset = self.readOffset(reader)
- if offset == 0:
- return None
- table = self.tableClass()
- reader = reader.getSubReader(offset)
- if font.lazy:
- table.reader = reader
- table.font = font
- else:
- table.decompile(reader, font)
- return table
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- if value is None:
- self.writeNullOffset(writer)
- else:
- subWriter = writer.getSubWriter(offsetSize=self.staticSize)
- subWriter.name = self.name
- if repeatIndex is not None:
- subWriter.repeatIndex = repeatIndex
- writer.writeSubTable(subWriter)
- value.compile(subWriter, font)
-
-
-class LTable(Table):
-
- staticSize = 4
-
- def readOffset(self, reader):
- return reader.readULong()
-
- def writeNullOffset(self, writer):
- writer.writeULong(0)
-
-
-# Table pointed to by a 24-bit, 3-byte long offset
-class Table24(Table):
-
- staticSize = 3
-
- def readOffset(self, reader):
- return reader.readUInt24()
-
- def writeNullOffset(self, writer):
- writer.writeUInt24(0)
-
-
-# TODO Clean / merge the SubTable and SubStruct
-
-
-class SubStruct(Struct):
- def getConverter(self, tableType, lookupType):
- tableClass = self.lookupTypes[tableType][lookupType]
- return self.__class__(self.name, self.repeat, self.aux, tableClass)
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- super(SubStruct, self).xmlWrite(xmlWriter, font, value, None, attrs)
-
-
-class SubTable(Table):
- def getConverter(self, tableType, lookupType):
- tableClass = self.lookupTypes[tableType][lookupType]
- return self.__class__(self.name, self.repeat, self.aux, tableClass)
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- super(SubTable, self).xmlWrite(xmlWriter, font, value, None, attrs)
-
-
-class ExtSubTable(LTable, SubTable):
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer.Extension = True # actually, mere presence of the field flags it as an Ext Subtable writer.
- Table.write(self, writer, font, tableDict, value, repeatIndex)
-
-
-class FeatureParams(Table):
- def getConverter(self, featureTag):
- tableClass = self.featureParamTypes.get(featureTag, self.defaultFeatureParams)
- return self.__class__(self.name, self.repeat, self.aux, tableClass)
-
-
-class ValueFormat(IntValue):
- staticSize = 2
-
- def __init__(self, name, repeat, aux, tableClass=None, *, description=""):
- BaseConverter.__init__(
- self, name, repeat, aux, tableClass, description=description
- )
- self.which = "ValueFormat" + ("2" if name[-1] == "2" else "1")
-
- def read(self, reader, font, tableDict):
- format = reader.readUShort()
- reader[self.which] = ValueRecordFactory(format)
- return format
-
- def write(self, writer, font, tableDict, format, repeatIndex=None):
- writer.writeUShort(format)
- writer[self.which] = ValueRecordFactory(format)
-
-
-class ValueRecord(ValueFormat):
- def getRecordSize(self, reader):
- return 2 * len(reader[self.which])
-
- def read(self, reader, font, tableDict):
- return reader[self.which].readValueRecord(reader, font)
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- writer[self.which].writeValueRecord(writer, font, value)
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- if value is None:
- pass # NULL table, ignore
- else:
- value.toXML(xmlWriter, font, self.name, attrs)
-
- def xmlRead(self, attrs, content, font):
- from .otBase import ValueRecord
-
- value = ValueRecord()
- value.fromXML(None, attrs, content, font)
- return value
-
-
-class AATLookup(BaseConverter):
- BIN_SEARCH_HEADER_SIZE = 10
-
- def __init__(self, name, repeat, aux, tableClass, *, description=""):
- BaseConverter.__init__(
- self, name, repeat, aux, tableClass, description=description
- )
- if issubclass(self.tableClass, SimpleValue):
- self.converter = self.tableClass(name="Value", repeat=None, aux=None)
- else:
- self.converter = Table(
- name="Value", repeat=None, aux=None, tableClass=self.tableClass
- )
-
- def read(self, reader, font, tableDict):
- format = reader.readUShort()
- if format == 0:
- return self.readFormat0(reader, font)
- elif format == 2:
- return self.readFormat2(reader, font)
- elif format == 4:
- return self.readFormat4(reader, font)
- elif format == 6:
- return self.readFormat6(reader, font)
- elif format == 8:
- return self.readFormat8(reader, font)
- else:
- assert False, "unsupported lookup format: %d" % format
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- values = list(
- sorted([(font.getGlyphID(glyph), val) for glyph, val in value.items()])
- )
- # TODO: Also implement format 4.
- formats = list(
- sorted(
- filter(
- None,
- [
- self.buildFormat0(writer, font, values),
- self.buildFormat2(writer, font, values),
- self.buildFormat6(writer, font, values),
- self.buildFormat8(writer, font, values),
- ],
- )
- )
- )
-        # We use the format ID as a secondary sort key to make the output
-        # deterministic when multiple formats have the same encoded size.
- dataSize, lookupFormat, writeMethod = formats[0]
- pos = writer.getDataLength()
- writeMethod()
- actualSize = writer.getDataLength() - pos
- assert (
- actualSize == dataSize
- ), "AATLookup format %d claimed to write %d bytes, but wrote %d" % (
- lookupFormat,
- dataSize,
- actualSize,
- )
-
- @staticmethod
- def writeBinSearchHeader(writer, numUnits, unitSize):
- writer.writeUShort(unitSize)
- writer.writeUShort(numUnits)
- searchRange, entrySelector, rangeShift = getSearchRange(
- n=numUnits, itemSize=unitSize
- )
- writer.writeUShort(searchRange)
- writer.writeUShort(entrySelector)
- writer.writeUShort(rangeShift)
-
- def buildFormat0(self, writer, font, values):
- numGlyphs = len(font.getGlyphOrder())
- if len(values) != numGlyphs:
- return None
- valueSize = self.converter.staticSize
- return (
- 2 + numGlyphs * valueSize,
- 0,
- lambda: self.writeFormat0(writer, font, values),
- )
-
- def writeFormat0(self, writer, font, values):
- writer.writeUShort(0)
- for glyphID_, value in values:
- self.converter.write(
- writer, font, tableDict=None, value=value, repeatIndex=None
- )
-
- def buildFormat2(self, writer, font, values):
- segStart, segValue = values[0]
- segEnd = segStart
- segments = []
- for glyphID, curValue in values[1:]:
- if glyphID != segEnd + 1 or curValue != segValue:
- segments.append((segStart, segEnd, segValue))
- segStart = segEnd = glyphID
- segValue = curValue
- else:
- segEnd = glyphID
- segments.append((segStart, segEnd, segValue))
- valueSize = self.converter.staticSize
- numUnits, unitSize = len(segments) + 1, valueSize + 4
- return (
- 2 + self.BIN_SEARCH_HEADER_SIZE + numUnits * unitSize,
- 2,
- lambda: self.writeFormat2(writer, font, segments),
- )
-
- def writeFormat2(self, writer, font, segments):
- writer.writeUShort(2)
- valueSize = self.converter.staticSize
- numUnits, unitSize = len(segments), valueSize + 4
- self.writeBinSearchHeader(writer, numUnits, unitSize)
- for firstGlyph, lastGlyph, value in segments:
- writer.writeUShort(lastGlyph)
- writer.writeUShort(firstGlyph)
- self.converter.write(
- writer, font, tableDict=None, value=value, repeatIndex=None
- )
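-        # Terminating sentinel segment (lastGlyph = firstGlyph = 0xFFFF, zero value);
-        # buildFormat2 counts it as the extra unit in numUnits.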
- writer.writeUShort(0xFFFF)
- writer.writeUShort(0xFFFF)
- writer.writeData(b"\x00" * valueSize)
-
- def buildFormat6(self, writer, font, values):
- valueSize = self.converter.staticSize
- numUnits, unitSize = len(values), valueSize + 2
- return (
- 2 + self.BIN_SEARCH_HEADER_SIZE + (numUnits + 1) * unitSize,
- 6,
- lambda: self.writeFormat6(writer, font, values),
- )
-
- def writeFormat6(self, writer, font, values):
- writer.writeUShort(6)
- valueSize = self.converter.staticSize
- numUnits, unitSize = len(values), valueSize + 2
- self.writeBinSearchHeader(writer, numUnits, unitSize)
- for glyphID, value in values:
- writer.writeUShort(glyphID)
- self.converter.write(
- writer, font, tableDict=None, value=value, repeatIndex=None
- )
- writer.writeUShort(0xFFFF)
- writer.writeData(b"\x00" * valueSize)
-
- def buildFormat8(self, writer, font, values):
- minGlyphID, maxGlyphID = values[0][0], values[-1][0]
- if len(values) != maxGlyphID - minGlyphID + 1:
- return None
- valueSize = self.converter.staticSize
- return (
- 6 + len(values) * valueSize,
- 8,
- lambda: self.writeFormat8(writer, font, values),
- )
-
- def writeFormat8(self, writer, font, values):
- firstGlyphID = values[0][0]
- writer.writeUShort(8)
- writer.writeUShort(firstGlyphID)
- writer.writeUShort(len(values))
- for _, value in values:
- self.converter.write(
- writer, font, tableDict=None, value=value, repeatIndex=None
- )
-
- def readFormat0(self, reader, font):
- numGlyphs = len(font.getGlyphOrder())
- data = self.converter.readArray(reader, font, tableDict=None, count=numGlyphs)
- return {font.getGlyphName(k): value for k, value in enumerate(data)}
-
- def readFormat2(self, reader, font):
- mapping = {}
- pos = reader.pos - 2 # start of table is at UShort for format
- unitSize, numUnits = reader.readUShort(), reader.readUShort()
- assert unitSize >= 4 + self.converter.staticSize, unitSize
- for i in range(numUnits):
- reader.seek(pos + i * unitSize + 12)
- last = reader.readUShort()
- first = reader.readUShort()
- value = self.converter.read(reader, font, tableDict=None)
- if last != 0xFFFF:
- for k in range(first, last + 1):
- mapping[font.getGlyphName(k)] = value
- return mapping
-
- def readFormat4(self, reader, font):
- mapping = {}
- pos = reader.pos - 2 # start of table is at UShort for format
- unitSize = reader.readUShort()
- assert unitSize >= 6, unitSize
- for i in range(reader.readUShort()):
- reader.seek(pos + i * unitSize + 12)
- last = reader.readUShort()
- first = reader.readUShort()
- offset = reader.readUShort()
- if last != 0xFFFF:
- dataReader = reader.getSubReader(0) # relative to current position
- dataReader.seek(pos + offset) # relative to start of table
- data = self.converter.readArray(
- dataReader, font, tableDict=None, count=last - first + 1
- )
- for k, v in enumerate(data):
- mapping[font.getGlyphName(first + k)] = v
- return mapping
-
- def readFormat6(self, reader, font):
- mapping = {}
- pos = reader.pos - 2 # start of table is at UShort for format
- unitSize = reader.readUShort()
- assert unitSize >= 2 + self.converter.staticSize, unitSize
- for i in range(reader.readUShort()):
- reader.seek(pos + i * unitSize + 12)
- glyphID = reader.readUShort()
- value = self.converter.read(reader, font, tableDict=None)
- if glyphID != 0xFFFF:
- mapping[font.getGlyphName(glyphID)] = value
- return mapping
-
- def readFormat8(self, reader, font):
- first = reader.readUShort()
- count = reader.readUShort()
- data = self.converter.readArray(reader, font, tableDict=None, count=count)
- return {font.getGlyphName(first + k): value for (k, value) in enumerate(data)}
-
- def xmlRead(self, attrs, content, font):
- value = {}
- for element in content:
- if isinstance(element, tuple):
- name, a, eltContent = element
- if name == "Lookup":
- value[a["glyph"]] = self.converter.xmlRead(a, eltContent, font)
- return value
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- xmlWriter.begintag(name, attrs)
- xmlWriter.newline()
- for glyph, value in sorted(value.items()):
- self.converter.xmlWrite(
- xmlWriter, font, value=value, name="Lookup", attrs=[("glyph", glyph)]
- )
- xmlWriter.endtag(name)
- xmlWriter.newline()
-
-
-# The AAT 'ankr' table has an unusual structure: an offset to an AATLookup
-# followed by an offset to a glyph data table. Unlike usual, the offsets
-# in the AATLookup are not relative to the beginning of the 'ankr' table,
-# but relative to the glyph data table. So, to find the anchor data for a
-# glyph, one needs to add the offset to the data table to the offset found
-# in the AATLookup, and use the sum of these two offsets to locate the
-# actual data.
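-# In terms of read() below: the anchors for a glyph are decompiled from a
-# sub-reader positioned at lookupValue[glyph] + dataOffset.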
-class AATLookupWithDataOffset(BaseConverter):
- def read(self, reader, font, tableDict):
- lookupOffset = reader.readULong()
- dataOffset = reader.readULong()
- lookupReader = reader.getSubReader(lookupOffset)
- lookup = AATLookup("DataOffsets", None, None, UShort)
- offsets = lookup.read(lookupReader, font, tableDict)
- result = {}
- for glyph, offset in offsets.items():
- dataReader = reader.getSubReader(offset + dataOffset)
- item = self.tableClass()
- item.decompile(dataReader, font)
- result[glyph] = item
- return result
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- # We do not work with OTTableWriter sub-writers because
- # the offsets in our AATLookup are relative to our data
- # table, for which we need to provide an offset value itself.
- # It might have been possible to somehow make a kludge for
- # performing this indirect offset computation directly inside
- # OTTableWriter. But this would have made the internal logic
- # of OTTableWriter even more complex than it already is,
- # so we decided to roll our own offset computation for the
- # contents of the AATLookup and associated data table.
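-        # Plan: compile and deduplicate each glyph's anchor data, remember each
-        # glyph's offset into that blob, then emit the two ULong offsets (via
-        # sub-writers), the AATLookup of per-glyph offsets, and the blob itself.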
- offsetByGlyph, offsetByData, dataLen = {}, {}, 0
- compiledData = []
- for glyph in sorted(value, key=font.getGlyphID):
- subWriter = OTTableWriter()
- value[glyph].compile(subWriter, font)
- data = subWriter.getAllData()
- offset = offsetByData.get(data, None)
-            if offset is None:
- offset = dataLen
- dataLen = dataLen + len(data)
- offsetByData[data] = offset
- compiledData.append(data)
- offsetByGlyph[glyph] = offset
- # For calculating the offsets to our AATLookup and data table,
- # we can use the regular OTTableWriter infrastructure.
- lookupWriter = writer.getSubWriter(offsetSize=4)
- lookup = AATLookup("DataOffsets", None, None, UShort)
- lookup.write(lookupWriter, font, tableDict, offsetByGlyph, None)
-
- dataWriter = writer.getSubWriter(offsetSize=4)
- writer.writeSubTable(lookupWriter)
- writer.writeSubTable(dataWriter)
- for d in compiledData:
- dataWriter.writeData(d)
-
- def xmlRead(self, attrs, content, font):
- lookup = AATLookup("DataOffsets", None, None, self.tableClass)
- return lookup.xmlRead(attrs, content, font)
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- lookup = AATLookup("DataOffsets", None, None, self.tableClass)
- lookup.xmlWrite(xmlWriter, font, value, name, attrs)
-
-
-class MorxSubtableConverter(BaseConverter):
- _PROCESSING_ORDERS = {
- # bits 30 and 28 of morx.CoverageFlags; see morx spec
- (False, False): "LayoutOrder",
- (True, False): "ReversedLayoutOrder",
- (False, True): "LogicalOrder",
- (True, True): "ReversedLogicalOrder",
- }
-
- _PROCESSING_ORDERS_REVERSED = {val: key for key, val in _PROCESSING_ORDERS.items()}
-
- def __init__(self, name, repeat, aux, tableClass=None, *, description=""):
- BaseConverter.__init__(
- self, name, repeat, aux, tableClass, description=description
- )
-
- def _setTextDirectionFromCoverageFlags(self, flags, subtable):
- if (flags & 0x20) != 0:
- subtable.TextDirection = "Any"
- elif (flags & 0x80) != 0:
- subtable.TextDirection = "Vertical"
- else:
- subtable.TextDirection = "Horizontal"
-
- def read(self, reader, font, tableDict):
- pos = reader.pos
- m = MorxSubtable()
- m.StructLength = reader.readULong()
- flags = reader.readUInt8()
- orderKey = ((flags & 0x40) != 0, (flags & 0x10) != 0)
- m.ProcessingOrder = self._PROCESSING_ORDERS[orderKey]
- self._setTextDirectionFromCoverageFlags(flags, m)
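-        # Reserved holds the raw reserved UShort plus, in bits 16..19, the low
-        # nibble of the coverage flags, so they round-trip through XML and write().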
- m.Reserved = reader.readUShort()
- m.Reserved |= (flags & 0xF) << 16
- m.MorphType = reader.readUInt8()
- m.SubFeatureFlags = reader.readULong()
- tableClass = lookupTypes["morx"].get(m.MorphType)
- if tableClass is None:
- assert False, "unsupported 'morx' lookup type %s" % m.MorphType
- # To decode AAT ligatures, we need to know the subtable size.
- # The easiest way to pass this along is to create a new reader
- # that works on just the subtable as its data.
- headerLength = reader.pos - pos
- data = reader.data[reader.pos : reader.pos + m.StructLength - headerLength]
- assert len(data) == m.StructLength - headerLength
- subReader = OTTableReader(data=data, tableTag=reader.tableTag)
- m.SubStruct = tableClass()
- m.SubStruct.decompile(subReader, font)
- reader.seek(pos + m.StructLength)
- return m
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- xmlWriter.begintag(name, attrs)
- xmlWriter.newline()
- xmlWriter.comment("StructLength=%d" % value.StructLength)
- xmlWriter.newline()
- xmlWriter.simpletag("TextDirection", value=value.TextDirection)
- xmlWriter.newline()
- xmlWriter.simpletag("ProcessingOrder", value=value.ProcessingOrder)
- xmlWriter.newline()
- if value.Reserved != 0:
- xmlWriter.simpletag("Reserved", value="0x%04x" % value.Reserved)
- xmlWriter.newline()
- xmlWriter.comment("MorphType=%d" % value.MorphType)
- xmlWriter.newline()
- xmlWriter.simpletag("SubFeatureFlags", value="0x%08x" % value.SubFeatureFlags)
- xmlWriter.newline()
- value.SubStruct.toXML(xmlWriter, font)
- xmlWriter.endtag(name)
- xmlWriter.newline()
-
- def xmlRead(self, attrs, content, font):
- m = MorxSubtable()
- covFlags = 0
- m.Reserved = 0
- for eltName, eltAttrs, eltContent in filter(istuple, content):
- if eltName == "CoverageFlags":
- # Only in XML from old versions of fonttools.
- covFlags = safeEval(eltAttrs["value"])
- orderKey = ((covFlags & 0x40) != 0, (covFlags & 0x10) != 0)
- m.ProcessingOrder = self._PROCESSING_ORDERS[orderKey]
- self._setTextDirectionFromCoverageFlags(covFlags, m)
- elif eltName == "ProcessingOrder":
- m.ProcessingOrder = eltAttrs["value"]
- assert m.ProcessingOrder in self._PROCESSING_ORDERS_REVERSED, (
- "unknown ProcessingOrder: %s" % m.ProcessingOrder
- )
- elif eltName == "TextDirection":
- m.TextDirection = eltAttrs["value"]
- assert m.TextDirection in {"Horizontal", "Vertical", "Any"}, (
- "unknown TextDirection %s" % m.TextDirection
- )
- elif eltName == "Reserved":
- m.Reserved = safeEval(eltAttrs["value"])
- elif eltName == "SubFeatureFlags":
- m.SubFeatureFlags = safeEval(eltAttrs["value"])
- elif eltName.endswith("Morph"):
- m.fromXML(eltName, eltAttrs, eltContent, font)
- else:
- assert False, eltName
- m.Reserved = (covFlags & 0xF) << 16 | m.Reserved
- return m
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- covFlags = (value.Reserved & 0x000F0000) >> 16
- reverseOrder, logicalOrder = self._PROCESSING_ORDERS_REVERSED[
- value.ProcessingOrder
- ]
- covFlags |= 0x80 if value.TextDirection == "Vertical" else 0
- covFlags |= 0x40 if reverseOrder else 0
- covFlags |= 0x20 if value.TextDirection == "Any" else 0
- covFlags |= 0x10 if logicalOrder else 0
- value.CoverageFlags = covFlags
- lengthIndex = len(writer.items)
- before = writer.getDataLength()
- value.StructLength = 0xDEADBEEF
-        # The high nibble of value.Reserved is actually encoded
- # into coverageFlags, so we need to clear it here.
- origReserved = value.Reserved # including high nibble
- value.Reserved = value.Reserved & 0xFFFF # without high nibble
- value.compile(writer, font)
- value.Reserved = origReserved # restore original value
- assert writer.items[lengthIndex] == b"\xde\xad\xbe\xef"
- length = writer.getDataLength() - before
- writer.items[lengthIndex] = struct.pack(">L", length)
-
-
-# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6Tables.html#ExtendedStateHeader
-# TODO: Untangle the implementation of the various lookup-specific formats.
-class STXHeader(BaseConverter):
- def __init__(self, name, repeat, aux, tableClass, *, description=""):
- BaseConverter.__init__(
- self, name, repeat, aux, tableClass, description=description
- )
- assert issubclass(self.tableClass, AATAction)
- self.classLookup = AATLookup("GlyphClasses", None, None, UShort)
- if issubclass(self.tableClass, ContextualMorphAction):
- self.perGlyphLookup = AATLookup("PerGlyphLookup", None, None, GlyphID)
- else:
- self.perGlyphLookup = None
-
- def read(self, reader, font, tableDict):
- table = AATStateTable()
- pos = reader.pos
- classTableReader = reader.getSubReader(0)
- stateArrayReader = reader.getSubReader(0)
- entryTableReader = reader.getSubReader(0)
- actionReader = None
- ligaturesReader = None
- table.GlyphClassCount = reader.readULong()
- classTableReader.seek(pos + reader.readULong())
- stateArrayReader.seek(pos + reader.readULong())
- entryTableReader.seek(pos + reader.readULong())
- if self.perGlyphLookup is not None:
- perGlyphTableReader = reader.getSubReader(0)
- perGlyphTableReader.seek(pos + reader.readULong())
- if issubclass(self.tableClass, LigatureMorphAction):
- actionReader = reader.getSubReader(0)
- actionReader.seek(pos + reader.readULong())
- ligComponentReader = reader.getSubReader(0)
- ligComponentReader.seek(pos + reader.readULong())
- ligaturesReader = reader.getSubReader(0)
- ligaturesReader.seek(pos + reader.readULong())
- numLigComponents = (ligaturesReader.pos - ligComponentReader.pos) // 2
- assert numLigComponents >= 0
- table.LigComponents = ligComponentReader.readUShortArray(numLigComponents)
- table.Ligatures = self._readLigatures(ligaturesReader, font)
- elif issubclass(self.tableClass, InsertionMorphAction):
- actionReader = reader.getSubReader(0)
- actionReader.seek(pos + reader.readULong())
- table.GlyphClasses = self.classLookup.read(classTableReader, font, tableDict)
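-        # The state array ends where the entry table begins; each state row holds
-        # GlyphClassCount 16-bit entry indices.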
- numStates = int(
- (entryTableReader.pos - stateArrayReader.pos) / (table.GlyphClassCount * 2)
- )
- for stateIndex in range(numStates):
- state = AATState()
- table.States.append(state)
- for glyphClass in range(table.GlyphClassCount):
- entryIndex = stateArrayReader.readUShort()
- state.Transitions[glyphClass] = self._readTransition(
- entryTableReader, entryIndex, font, actionReader
- )
- if self.perGlyphLookup is not None:
- table.PerGlyphLookups = self._readPerGlyphLookups(
- table, perGlyphTableReader, font
- )
- return table
-
- def _readTransition(self, reader, entryIndex, font, actionReader):
- transition = self.tableClass()
- entryReader = reader.getSubReader(
- reader.pos + entryIndex * transition.staticSize
- )
- transition.decompile(entryReader, font, actionReader)
- return transition
-
- def _readLigatures(self, reader, font):
- limit = len(reader.data)
- numLigatureGlyphs = (limit - reader.pos) // 2
- return font.getGlyphNameMany(reader.readUShortArray(numLigatureGlyphs))
-
- def _countPerGlyphLookups(self, table):
- # Somewhat annoyingly, the morx table does not encode
- # the size of the per-glyph table. So we need to find
- # the maximum value that MorphActions use as index
- # into this table.
- numLookups = 0
- for state in table.States:
- for t in state.Transitions.values():
- if isinstance(t, ContextualMorphAction):
- if t.MarkIndex != 0xFFFF:
- numLookups = max(numLookups, t.MarkIndex + 1)
- if t.CurrentIndex != 0xFFFF:
- numLookups = max(numLookups, t.CurrentIndex + 1)
- return numLookups
-
- def _readPerGlyphLookups(self, table, reader, font):
- pos = reader.pos
- lookups = []
- for _ in range(self._countPerGlyphLookups(table)):
- lookupReader = reader.getSubReader(0)
- lookupReader.seek(pos + reader.readULong())
- lookups.append(self.perGlyphLookup.read(lookupReader, font, {}))
- return lookups
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- glyphClassWriter = OTTableWriter()
- self.classLookup.write(
- glyphClassWriter, font, tableDict, value.GlyphClasses, repeatIndex=None
- )
- glyphClassData = pad(glyphClassWriter.getAllData(), 2)
- glyphClassCount = max(value.GlyphClasses.values()) + 1
- glyphClassTableOffset = 16 # size of STXHeader
- if self.perGlyphLookup is not None:
- glyphClassTableOffset += 4
-
- glyphClassTableOffset += self.tableClass.actionHeaderSize
- actionData, actionIndex = self.tableClass.compileActions(font, value.States)
- stateArrayData, entryTableData = self._compileStates(
- font, value.States, glyphClassCount, actionIndex
- )
- stateArrayOffset = glyphClassTableOffset + len(glyphClassData)
- entryTableOffset = stateArrayOffset + len(stateArrayData)
- perGlyphOffset = entryTableOffset + len(entryTableData)
- perGlyphData = pad(self._compilePerGlyphLookups(value, font), 4)
- if actionData is not None:
- actionOffset = entryTableOffset + len(entryTableData)
- else:
- actionOffset = None
-
- ligaturesOffset, ligComponentsOffset = None, None
- ligComponentsData = self._compileLigComponents(value, font)
- ligaturesData = self._compileLigatures(value, font)
- if ligComponentsData is not None:
- assert len(perGlyphData) == 0
- ligComponentsOffset = actionOffset + len(actionData)
- ligaturesOffset = ligComponentsOffset + len(ligComponentsData)
-
- writer.writeULong(glyphClassCount)
- writer.writeULong(glyphClassTableOffset)
- writer.writeULong(stateArrayOffset)
- writer.writeULong(entryTableOffset)
- if self.perGlyphLookup is not None:
- writer.writeULong(perGlyphOffset)
- if actionOffset is not None:
- writer.writeULong(actionOffset)
- if ligComponentsOffset is not None:
- writer.writeULong(ligComponentsOffset)
- writer.writeULong(ligaturesOffset)
- writer.writeData(glyphClassData)
- writer.writeData(stateArrayData)
- writer.writeData(entryTableData)
- writer.writeData(perGlyphData)
- if actionData is not None:
- writer.writeData(actionData)
- if ligComponentsData is not None:
- writer.writeData(ligComponentsData)
- if ligaturesData is not None:
- writer.writeData(ligaturesData)
-
- def _compileStates(self, font, states, glyphClassCount, actionIndex):
- stateArrayWriter = OTTableWriter()
- entries, entryIDs = [], {}
- for state in states:
- for glyphClass in range(glyphClassCount):
- transition = state.Transitions[glyphClass]
- entryWriter = OTTableWriter()
- transition.compile(entryWriter, font, actionIndex)
- entryData = entryWriter.getAllData()
- assert (
- len(entryData) == transition.staticSize
- ), "%s has staticSize %d, " "but actually wrote %d bytes" % (
- repr(transition),
- transition.staticSize,
- len(entryData),
- )
- entryIndex = entryIDs.get(entryData)
- if entryIndex is None:
- entryIndex = len(entries)
- entryIDs[entryData] = entryIndex
- entries.append(entryData)
- stateArrayWriter.writeUShort(entryIndex)
- stateArrayData = pad(stateArrayWriter.getAllData(), 4)
- entryTableData = pad(bytesjoin(entries), 4)
- return stateArrayData, entryTableData
-
- def _compilePerGlyphLookups(self, table, font):
- if self.perGlyphLookup is None:
- return b""
- numLookups = self._countPerGlyphLookups(table)
- assert len(table.PerGlyphLookups) == numLookups, (
- "len(AATStateTable.PerGlyphLookups) is %d, "
- "but the actions inside the table refer to %d"
- % (len(table.PerGlyphLookups), numLookups)
- )
- writer = OTTableWriter()
- for lookup in table.PerGlyphLookups:
- lookupWriter = writer.getSubWriter(offsetSize=4)
- self.perGlyphLookup.write(lookupWriter, font, {}, lookup, None)
- writer.writeSubTable(lookupWriter)
- return writer.getAllData()
-
- def _compileLigComponents(self, table, font):
- if not hasattr(table, "LigComponents"):
- return None
- writer = OTTableWriter()
- for component in table.LigComponents:
- writer.writeUShort(component)
- return writer.getAllData()
-
- def _compileLigatures(self, table, font):
- if not hasattr(table, "Ligatures"):
- return None
- writer = OTTableWriter()
- for glyphName in table.Ligatures:
- writer.writeUShort(font.getGlyphID(glyphName))
- return writer.getAllData()
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- xmlWriter.begintag(name, attrs)
- xmlWriter.newline()
- xmlWriter.comment("GlyphClassCount=%s" % value.GlyphClassCount)
- xmlWriter.newline()
- for g, klass in sorted(value.GlyphClasses.items()):
- xmlWriter.simpletag("GlyphClass", glyph=g, value=klass)
- xmlWriter.newline()
- for stateIndex, state in enumerate(value.States):
- xmlWriter.begintag("State", index=stateIndex)
- xmlWriter.newline()
- for glyphClass, trans in sorted(state.Transitions.items()):
- trans.toXML(
- xmlWriter,
- font=font,
- attrs={"onGlyphClass": glyphClass},
- name="Transition",
- )
- xmlWriter.endtag("State")
- xmlWriter.newline()
- for i, lookup in enumerate(value.PerGlyphLookups):
- xmlWriter.begintag("PerGlyphLookup", index=i)
- xmlWriter.newline()
- for glyph, val in sorted(lookup.items()):
- xmlWriter.simpletag("Lookup", glyph=glyph, value=val)
- xmlWriter.newline()
- xmlWriter.endtag("PerGlyphLookup")
- xmlWriter.newline()
- if hasattr(value, "LigComponents"):
- xmlWriter.begintag("LigComponents")
- xmlWriter.newline()
- for i, val in enumerate(getattr(value, "LigComponents")):
- xmlWriter.simpletag("LigComponent", index=i, value=val)
- xmlWriter.newline()
- xmlWriter.endtag("LigComponents")
- xmlWriter.newline()
- self._xmlWriteLigatures(xmlWriter, font, value, name, attrs)
- xmlWriter.endtag(name)
- xmlWriter.newline()
-
- def _xmlWriteLigatures(self, xmlWriter, font, value, name, attrs):
- if not hasattr(value, "Ligatures"):
- return
- xmlWriter.begintag("Ligatures")
- xmlWriter.newline()
- for i, g in enumerate(getattr(value, "Ligatures")):
- xmlWriter.simpletag("Ligature", index=i, glyph=g)
- xmlWriter.newline()
- xmlWriter.endtag("Ligatures")
- xmlWriter.newline()
-
- def xmlRead(self, attrs, content, font):
- table = AATStateTable()
- for eltName, eltAttrs, eltContent in filter(istuple, content):
- if eltName == "GlyphClass":
- glyph = eltAttrs["glyph"]
- value = eltAttrs["value"]
- table.GlyphClasses[glyph] = safeEval(value)
- elif eltName == "State":
- state = self._xmlReadState(eltAttrs, eltContent, font)
- table.States.append(state)
- elif eltName == "PerGlyphLookup":
- lookup = self.perGlyphLookup.xmlRead(eltAttrs, eltContent, font)
- table.PerGlyphLookups.append(lookup)
- elif eltName == "LigComponents":
- table.LigComponents = self._xmlReadLigComponents(
- eltAttrs, eltContent, font
- )
- elif eltName == "Ligatures":
- table.Ligatures = self._xmlReadLigatures(eltAttrs, eltContent, font)
- table.GlyphClassCount = max(table.GlyphClasses.values()) + 1
- return table
-
- def _xmlReadState(self, attrs, content, font):
- state = AATState()
- for eltName, eltAttrs, eltContent in filter(istuple, content):
- if eltName == "Transition":
- glyphClass = safeEval(eltAttrs["onGlyphClass"])
- transition = self.tableClass()
- transition.fromXML(eltName, eltAttrs, eltContent, font)
- state.Transitions[glyphClass] = transition
- return state
-
- def _xmlReadLigComponents(self, attrs, content, font):
- ligComponents = []
- for eltName, eltAttrs, _eltContent in filter(istuple, content):
- if eltName == "LigComponent":
- ligComponents.append(safeEval(eltAttrs["value"]))
- return ligComponents
-
- def _xmlReadLigatures(self, attrs, content, font):
- ligs = []
- for eltName, eltAttrs, _eltContent in filter(istuple, content):
- if eltName == "Ligature":
- ligs.append(eltAttrs["glyph"])
- return ligs
-
-
-class CIDGlyphMap(BaseConverter):
- def read(self, reader, font, tableDict):
- numCIDs = reader.readUShort()
- result = {}
- for cid, glyphID in enumerate(reader.readUShortArray(numCIDs)):
- if glyphID != 0xFFFF:
- result[cid] = font.getGlyphName(glyphID)
- return result
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- items = {cid: font.getGlyphID(glyph) for cid, glyph in value.items()}
- count = max(items) + 1 if items else 0
- writer.writeUShort(count)
- for cid in range(count):
- writer.writeUShort(items.get(cid, 0xFFFF))
-
- def xmlRead(self, attrs, content, font):
- result = {}
- for eName, eAttrs, _eContent in filter(istuple, content):
- if eName == "CID":
- result[safeEval(eAttrs["cid"])] = eAttrs["glyph"].strip()
- return result
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- xmlWriter.begintag(name, attrs)
- xmlWriter.newline()
- for cid, glyph in sorted(value.items()):
- if glyph is not None and glyph != 0xFFFF:
- xmlWriter.simpletag("CID", cid=cid, glyph=glyph)
- xmlWriter.newline()
- xmlWriter.endtag(name)
- xmlWriter.newline()
-
-
-class GlyphCIDMap(BaseConverter):
- def read(self, reader, font, tableDict):
- glyphOrder = font.getGlyphOrder()
- count = reader.readUShort()
- cids = reader.readUShortArray(count)
- if count > len(glyphOrder):
- log.warning(
- "GlyphCIDMap has %d elements, "
- "but the font has only %d glyphs; "
- "ignoring the rest" % (count, len(glyphOrder))
- )
- result = {}
- for glyphID in range(min(len(cids), len(glyphOrder))):
- cid = cids[glyphID]
- if cid != 0xFFFF:
- result[glyphOrder[glyphID]] = cid
- return result
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- items = {
- font.getGlyphID(g): cid
- for g, cid in value.items()
- if cid is not None and cid != 0xFFFF
- }
- count = max(items) + 1 if items else 0
- writer.writeUShort(count)
- for glyphID in range(count):
- writer.writeUShort(items.get(glyphID, 0xFFFF))
-
- def xmlRead(self, attrs, content, font):
- result = {}
- for eName, eAttrs, _eContent in filter(istuple, content):
- if eName == "CID":
- result[eAttrs["glyph"]] = safeEval(eAttrs["value"])
- return result
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- xmlWriter.begintag(name, attrs)
- xmlWriter.newline()
- for glyph, cid in sorted(value.items()):
- if cid is not None and cid != 0xFFFF:
- xmlWriter.simpletag("CID", glyph=glyph, value=cid)
- xmlWriter.newline()
- xmlWriter.endtag(name)
- xmlWriter.newline()
-
-
-class DeltaValue(BaseConverter):
- def read(self, reader, font, tableDict):
- StartSize = tableDict["StartSize"]
- EndSize = tableDict["EndSize"]
- DeltaFormat = tableDict["DeltaFormat"]
- assert DeltaFormat in (1, 2, 3), "illegal DeltaFormat"
- nItems = EndSize - StartSize + 1
- nBits = 1 << DeltaFormat
- minusOffset = 1 << nBits
- mask = (1 << nBits) - 1
- signMask = 1 << (nBits - 1)
-
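-        # DeltaFormat 1, 2, 3 packs 2-, 4-, or 8-bit signed values (8, 4, or 2 per
-        # 16-bit word), most significant bits first.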
- DeltaValue = []
- tmp, shift = 0, 0
- for i in range(nItems):
- if shift == 0:
- tmp, shift = reader.readUShort(), 16
- shift = shift - nBits
- value = (tmp >> shift) & mask
- if value & signMask:
- value = value - minusOffset
- DeltaValue.append(value)
- return DeltaValue
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- StartSize = tableDict["StartSize"]
- EndSize = tableDict["EndSize"]
- DeltaFormat = tableDict["DeltaFormat"]
- DeltaValue = value
- assert DeltaFormat in (1, 2, 3), "illegal DeltaFormat"
- nItems = EndSize - StartSize + 1
- nBits = 1 << DeltaFormat
- assert len(DeltaValue) == nItems
- mask = (1 << nBits) - 1
-
- tmp, shift = 0, 16
- for value in DeltaValue:
- shift = shift - nBits
- tmp = tmp | ((value & mask) << shift)
- if shift == 0:
- writer.writeUShort(tmp)
- tmp, shift = 0, 16
- if shift != 16:
- writer.writeUShort(tmp)
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- xmlWriter.simpletag(name, attrs + [("value", value)])
- xmlWriter.newline()
-
- def xmlRead(self, attrs, content, font):
- return safeEval(attrs["value"])
-
-
-class VarIdxMapValue(BaseConverter):
- def read(self, reader, font, tableDict):
- fmt = tableDict["EntryFormat"]
- nItems = tableDict["MappingCount"]
-
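-        # EntryFormat: low nibble = inner-index bit count minus one; bits 4-5 =
-        # entry byte size minus one. Each packed entry (outer << innerBits | inner)
-        # is expanded below to a full 32-bit VarIdx (outer << 16 | inner).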
- innerBits = 1 + (fmt & 0x000F)
- innerMask = (1 << innerBits) - 1
- outerMask = 0xFFFFFFFF - innerMask
- outerShift = 16 - innerBits
-
- entrySize = 1 + ((fmt & 0x0030) >> 4)
- readArray = {
- 1: reader.readUInt8Array,
- 2: reader.readUShortArray,
- 3: reader.readUInt24Array,
- 4: reader.readULongArray,
- }[entrySize]
-
- return [
- (((raw & outerMask) << outerShift) | (raw & innerMask))
- for raw in readArray(nItems)
- ]
-
- def write(self, writer, font, tableDict, value, repeatIndex=None):
- fmt = tableDict["EntryFormat"]
- mapping = value
- writer["MappingCount"].setValue(len(mapping))
-
- innerBits = 1 + (fmt & 0x000F)
- innerMask = (1 << innerBits) - 1
- outerShift = 16 - innerBits
-
- entrySize = 1 + ((fmt & 0x0030) >> 4)
- writeArray = {
- 1: writer.writeUInt8Array,
- 2: writer.writeUShortArray,
- 3: writer.writeUInt24Array,
- 4: writer.writeULongArray,
- }[entrySize]
-
- writeArray(
- [
- (((idx & 0xFFFF0000) >> outerShift) | (idx & innerMask))
- for idx in mapping
- ]
- )
-
-
-class VarDataValue(BaseConverter):
- def read(self, reader, font, tableDict):
- values = []
-
- regionCount = tableDict["VarRegionCount"]
- wordCount = tableDict["NumShorts"]
-
- # https://github.com/fonttools/fonttools/issues/2279
- longWords = bool(wordCount & 0x8000)
- wordCount = wordCount & 0x7FFF
-
- if longWords:
- readBigArray, readSmallArray = reader.readLongArray, reader.readShortArray
- else:
- readBigArray, readSmallArray = reader.readShortArray, reader.readInt8Array
-
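-        # The first min(regionCount, wordCount) deltas use the wide type, the rest
-        # the narrow type; any words beyond regionCount are padding and dropped.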
- n1, n2 = min(regionCount, wordCount), max(regionCount, wordCount)
- values.extend(readBigArray(n1))
- values.extend(readSmallArray(n2 - n1))
- if n2 > regionCount: # Padding
- del values[regionCount:]
-
- return values
-
- def write(self, writer, font, tableDict, values, repeatIndex=None):
- regionCount = tableDict["VarRegionCount"]
- wordCount = tableDict["NumShorts"]
-
- # https://github.com/fonttools/fonttools/issues/2279
- longWords = bool(wordCount & 0x8000)
- wordCount = wordCount & 0x7FFF
-
- (writeBigArray, writeSmallArray) = {
- False: (writer.writeShortArray, writer.writeInt8Array),
- True: (writer.writeLongArray, writer.writeShortArray),
- }[longWords]
-
- n1, n2 = min(regionCount, wordCount), max(regionCount, wordCount)
- writeBigArray(values[:n1])
- writeSmallArray(values[n1:regionCount])
- if n2 > regionCount: # Padding
-            writeSmallArray([0] * (n2 - regionCount))
-
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- xmlWriter.simpletag(name, attrs + [("value", value)])
- xmlWriter.newline()
-
- def xmlRead(self, attrs, content, font):
- return safeEval(attrs["value"])
-
-
-class LookupFlag(UShort):
- def xmlWrite(self, xmlWriter, font, value, name, attrs):
- xmlWriter.simpletag(name, attrs + [("value", value)])
- flags = []
- if value & 0x01:
- flags.append("rightToLeft")
- if value & 0x02:
- flags.append("ignoreBaseGlyphs")
- if value & 0x04:
- flags.append("ignoreLigatures")
- if value & 0x08:
- flags.append("ignoreMarks")
- if value & 0x10:
- flags.append("useMarkFilteringSet")
- if value & 0xFF00:
- flags.append("markAttachmentType[%i]" % (value >> 8))
- if flags:
- xmlWriter.comment(" ".join(flags))
- xmlWriter.newline()
-
-
-class _UInt8Enum(UInt8):
- enumClass = NotImplemented
-
- def read(self, reader, font, tableDict):
- return self.enumClass(super().read(reader, font, tableDict))
-
- @classmethod
- def fromString(cls, value):
- return getattr(cls.enumClass, value.upper())
-
- @classmethod
- def toString(cls, value):
- return cls.enumClass(value).name.lower()
-
-
-class ExtendMode(_UInt8Enum):
- enumClass = _ExtendMode
-
-
-class CompositeMode(_UInt8Enum):
- enumClass = _CompositeMode
-
-
-converterMapping = {
- # type class
- "int8": Int8,
- "int16": Short,
- "uint8": UInt8,
- "uint16": UShort,
- "uint24": UInt24,
- "uint32": ULong,
- "char64": Char64,
- "Flags32": Flags32,
- "VarIndex": VarIndex,
- "Version": Version,
- "Tag": Tag,
- "GlyphID": GlyphID,
- "GlyphID32": GlyphID32,
- "NameID": NameID,
- "DeciPoints": DeciPoints,
- "Fixed": Fixed,
- "F2Dot14": F2Dot14,
- "Angle": Angle,
- "BiasedAngle": BiasedAngle,
- "struct": Struct,
- "Offset": Table,
- "LOffset": LTable,
- "Offset24": Table24,
- "ValueRecord": ValueRecord,
- "DeltaValue": DeltaValue,
- "VarIdxMapValue": VarIdxMapValue,
- "VarDataValue": VarDataValue,
- "LookupFlag": LookupFlag,
- "ExtendMode": ExtendMode,
- "CompositeMode": CompositeMode,
- "STATFlags": STATFlags,
- # AAT
- "CIDGlyphMap": CIDGlyphMap,
- "GlyphCIDMap": GlyphCIDMap,
- "MortChain": StructWithLength,
- "MortSubtable": StructWithLength,
- "MorxChain": StructWithLength,
- "MorxSubtable": MorxSubtableConverter,
- # "Template" types
- "AATLookup": lambda C: partial(AATLookup, tableClass=C),
- "AATLookupWithDataOffset": lambda C: partial(AATLookupWithDataOffset, tableClass=C),
- "STXHeader": lambda C: partial(STXHeader, tableClass=C),
- "OffsetTo": lambda C: partial(Table, tableClass=C),
- "LOffsetTo": lambda C: partial(LTable, tableClass=C),
- "LOffset24To": lambda C: partial(Table24, tableClass=C),
-}
diff --git a/spaces/DaleChen/AutoGPT/autogpt/__init__.py b/spaces/DaleChen/AutoGPT/autogpt/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Dipl0/Dipl0-pepe-diffuser/README.md b/spaces/Dipl0/Dipl0-pepe-diffuser/README.md
deleted file mode 100644
index 3218bbdc2d8a7953d72781161960d1494253fc4f..0000000000000000000000000000000000000000
--- a/spaces/Dipl0/Dipl0-pepe-diffuser/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Dipl0 Pepe Diffuser
-emoji: 🦀
-colorFrom: purple
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/manipulate.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/manipulate.py
deleted file mode 100644
index e1a2480caad8016fea0c06f0bfe521b25f084436..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/manipulate.py
+++ /dev/null
@@ -1,278 +0,0 @@
-
-
-import os
-import os.path
-import pickle
-import numpy as np
-import tensorflow as tf
-from dnnlib import tflib
-from global_directions.utils.visualizer import HtmlPageVisualizer
-
-
-def Vis(bname,suffix,out,rownames=None,colnames=None):
- num_images=out.shape[0]
- step=out.shape[1]
-
- if colnames is None:
- colnames=[f'Step {i:02d}' for i in range(1, step + 1)]
- if rownames is None:
- rownames=[str(i) for i in range(num_images)]
-
-
- visualizer = HtmlPageVisualizer(
- num_rows=num_images, num_cols=step + 1, viz_size=256)
- visualizer.set_headers(
- ['Name'] +colnames)
-
- for i in range(num_images):
- visualizer.set_cell(i, 0, text=rownames[i])
-
- for i in range(num_images):
- for k in range(step):
- image=out[i,k,:,:,:]
- visualizer.set_cell(i, 1+k, image=image)
-
- # Save results.
-    visualizer.save(f'./html/{bname}_{suffix}.html')
-
-
-
-
-def LoadData(img_path):
- tmp=img_path+'S'
-    with open(tmp, "rb") as fp:  # Unpickling
- s_names,all_s=pickle.load( fp)
- dlatents=all_s
-
- pindexs=[]
- mindexs=[]
- for i in range(len(s_names)):
- name=s_names[i]
- if not('ToRGB' in name):
- mindexs.append(i)
- else:
- pindexs.append(i)
-
- tmp=img_path+'S_mean_std'
-    with open(tmp, "rb") as fp:  # Unpickling
- m,std=pickle.load( fp)
-
- return dlatents,s_names,mindexs,pindexs,m,std
-
-
-def LoadModel(model_path,model_name):
- # Initialize TensorFlow.
- tflib.init_tf()
- tmp=os.path.join(model_path,model_name)
- with open(tmp, 'rb') as f:
- _, _, Gs = pickle.load(f)
- Gs.print_layers()
- return Gs
-
-def convert_images_to_uint8(images, drange=[-1,1], nchw_to_nhwc=False):
- """Convert a minibatch of images from float32 to uint8 with configurable dynamic range.
- Can be used as an output transformation for Network.run().
- """
- if nchw_to_nhwc:
- images = np.transpose(images, [0, 2, 3, 1])
-
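-    # Affine map: drange[0] -> 0 and drange[1] -> 255; the +0.5 makes the uint8 cast round to nearest.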
- scale = 255 / (drange[1] - drange[0])
- images = images * scale + (0.5 - drange[0] * scale)
-
- np.clip(images, 0, 255, out=images)
- images=images.astype('uint8')
- return images
-
-
-def convert_images_from_uint8(images, drange=[-1,1], nhwc_to_nchw=False):
- """Convert a minibatch of images from uint8 to float32 with configurable dynamic range.
- Can be used as an input transformation for Network.run().
- """
- if nhwc_to_nchw:
- images=np.rollaxis(images, 3, 1)
- return images/ 255 *(drange[1] - drange[0])+ drange[0]
-
-
-class Manipulator():
- def __init__(self,dataset_name='ffhq'):
- self.file_path='./'
- self.img_path=self.file_path+'npy/'+dataset_name+'/'
- self.model_path=self.file_path+'model/'
- self.dataset_name=dataset_name
- self.model_name=dataset_name+'.pkl'
-
- self.alpha=[0] #manipulation strength
- self.num_images=10
- self.img_index=0 #which image to start
- self.viz_size=256
- self.manipulate_layers=None #which layer to manipulate, list
-
- self.dlatents,self.s_names,self.mindexs,self.pindexs,self.code_mean,self.code_std=LoadData(self.img_path)
-
- self.sess=tf.InteractiveSession()
- init = tf.global_variables_initializer()
- self.sess.run(init)
- self.Gs=LoadModel(self.model_path,self.model_name)
- self.num_layers=len(self.dlatents)
-
- self.Vis=Vis
- self.noise_constant={}
-
- for i in range(len(self.s_names)):
- tmp1=self.s_names[i].split('/')
- if not 'ToRGB' in tmp1:
- tmp1[-1]='random_normal:0'
- size=int(tmp1[1].split('x')[0])
- tmp1='/'.join(tmp1)
- tmp=(1,1,size,size)
- self.noise_constant[tmp1]=np.random.random(tmp)
-
- tmp=self.Gs.components.synthesis.input_shape[1]
- d={}
- d['G_synthesis_1/dlatents_in:0']=np.zeros([1,tmp,512])
- names=list(self.noise_constant.keys())
- tmp=tflib.run(names,d)
- for i in range(len(names)):
- self.noise_constant[names[i]]=tmp[i]
-
- self.fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
- self.img_size=self.Gs.output_shape[-1]
-
- def GenerateImg(self,codes):
-
-
- num_images,step=codes[0].shape[:2]
-
-
- out=np.zeros((num_images,step,self.img_size,self.img_size,3),dtype='uint8')
- for i in range(num_images):
- for k in range(step):
- d={}
- for m in range(len(self.s_names)):
- d[self.s_names[m]]=codes[m][i,k][None,:] #need to change
- d['G_synthesis_1/4x4/Const/Shape:0']=np.array([1,18, 512], dtype=np.int32)
- d.update(self.noise_constant)
- img=tflib.run('G_synthesis_1/images_out:0', d)
- image=convert_images_to_uint8(img, nchw_to_nhwc=True)
- out[i,k,:,:,:]=image[0]
- return out
-
-
-
- def MSCode(self,dlatent_tmp,boundary_tmp):
-
- step=len(self.alpha)
- dlatent_tmp1=[tmp.reshape((self.num_images,-1)) for tmp in dlatent_tmp]
- dlatent_tmp2=[np.tile(tmp[:,None],(1,step,1)) for tmp in dlatent_tmp1] # (10, 7, 512)
-
- l=np.array(self.alpha)
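-        # reshape alpha to (1, step, 1) so it broadcasts over images and channels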
- l=l.reshape(
- [step if axis == 1 else 1 for axis in range(dlatent_tmp2[0].ndim)])
-
- if type(self.manipulate_layers)==int:
- tmp=[self.manipulate_layers]
- elif type(self.manipulate_layers)==list:
- tmp=self.manipulate_layers
- elif self.manipulate_layers is None:
- tmp=np.arange(len(boundary_tmp))
- else:
- raise ValueError('manipulate_layers is wrong')
-
- for i in tmp:
- dlatent_tmp2[i]+=l*boundary_tmp[i]
-
- codes=[]
- for i in range(len(dlatent_tmp2)):
- tmp=list(dlatent_tmp[i].shape)
- tmp.insert(1,step)
- codes.append(dlatent_tmp2[i].reshape(tmp))
- return codes
-
-
- def EditOne(self,bname,dlatent_tmp=None):
-        if dlatent_tmp is None:
- dlatent_tmp=[tmp[self.img_index:(self.img_index+self.num_images)] for tmp in self.dlatents]
-
- boundary_tmp=[]
- for i in range(len(self.boundary)):
- tmp=self.boundary[i]
- if len(tmp)<=bname:
- boundary_tmp.append([])
- else:
- boundary_tmp.append(tmp[bname])
-
- codes=self.MSCode(dlatent_tmp,boundary_tmp)
-
- out=self.GenerateImg(codes)
- return codes,out
-
- def EditOneC(self,cindex,dlatent_tmp=None):
-        if dlatent_tmp is None:
- dlatent_tmp=[tmp[self.img_index:(self.img_index+self.num_images)] for tmp in self.dlatents]
-
- boundary_tmp=[[] for i in range(len(self.dlatents))]
-
-        # only manipulate one layer and one channel
- assert len(self.manipulate_layers)==1
-
- ml=self.manipulate_layers[0]
- tmp=dlatent_tmp[ml].shape[1] #ada
- tmp1=np.zeros(tmp)
- tmp1[cindex]=self.code_std[ml][cindex] #1
- boundary_tmp[ml]=tmp1
-
- codes=self.MSCode(dlatent_tmp,boundary_tmp)
- out=self.GenerateImg(codes)
- return codes,out
-
-
- def W2S(self,dlatent_tmp):
-
- all_s = self.sess.run(
- self.s_names,
- feed_dict={'G_synthesis_1/dlatents_in:0': dlatent_tmp})
- return all_s
-
-
-
-
-
-
-
-
-#%%
-if __name__ == "__main__":
-
-
- M=Manipulator(dataset_name='ffhq')
-
-
- #%%
- M.alpha=[-5,0,5]
- M.num_images=20
- lindex,cindex=6,501
-
- M.manipulate_layers=[lindex]
- codes,out=M.EditOneC(cindex) #dlatent_tmp
- tmp=str(M.manipulate_layers)+'_'+str(cindex)
- M.Vis(tmp,'c',out)
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/bias_act.h b/spaces/DragGan/DragGan-Inversion/torch_utils/ops/bias_act.h
deleted file mode 100644
index 60b81c6058d54638a6d74a13046fa388442d767d..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/bias_act.h
+++ /dev/null
@@ -1,38 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-//------------------------------------------------------------------------
-// CUDA kernel parameters.
-
-struct bias_act_kernel_params
-{
- const void* x; // [sizeX]
- const void* b; // [sizeB] or NULL
- const void* xref; // [sizeX] or NULL
- const void* yref; // [sizeX] or NULL
- const void* dy; // [sizeX] or NULL
- void* y; // [sizeX]
-
- int grad;
- int act;
- float alpha;
- float gain;
- float clamp;
-
- int sizeX;
- int sizeB;
- int stepB;
- int loopX;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel selection.
-
-template void* choose_bias_act_kernel(const bias_act_kernel_params& p);
-
-//------------------------------------------------------------------------
diff --git a/spaces/DragGan/DragGan/stylegan_human/PP_HumanSeg/pretrained_model/download_pretrained_model.py b/spaces/DragGan/DragGan/stylegan_human/PP_HumanSeg/pretrained_model/download_pretrained_model.py
deleted file mode 100644
index 69e3f0910e5c553cc59a3067ac02881a720a474f..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/PP_HumanSeg/pretrained_model/download_pretrained_model.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# coding: utf8
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserve.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import sys
-import os
-
-LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
-TEST_PATH = os.path.join(LOCAL_PATH, "../../../", "test")
-sys.path.append(TEST_PATH)
-
-from paddleseg.utils.download import download_file_and_uncompress
-
-model_urls = {
- "pphumanseg_lite_portrait_398x224":
- "https://paddleseg.bj.bcebos.com/dygraph/ppseg/ppseg_lite_portrait_398x224.tar.gz",
- "deeplabv3p_resnet50_os8_humanseg_512x512_100k":
- "https://paddleseg.bj.bcebos.com/dygraph/humanseg/train/deeplabv3p_resnet50_os8_humanseg_512x512_100k.zip",
- "fcn_hrnetw18_small_v1_humanseg_192x192":
- "https://paddleseg.bj.bcebos.com/dygraph/humanseg/train/fcn_hrnetw18_small_v1_humanseg_192x192.zip",
- "pphumanseg_lite_generic_human_192x192":
- "https://paddleseg.bj.bcebos.com/dygraph/humanseg/train/pphumanseg_lite_generic_192x192.zip",
-}
-
-if __name__ == "__main__":
- for model_name, url in model_urls.items():
- download_file_and_uncompress(
- url=url,
- savepath=LOCAL_PATH,
- extrapath=LOCAL_PATH,
- extraname=model_name)
-
- print("Pretrained model download success!")
diff --git a/spaces/DynoKevin/img-cap-for-vision-mate/README.md b/spaces/DynoKevin/img-cap-for-vision-mate/README.md
deleted file mode 100644
index a45b40cedae3acbcf0ea690ea4fe5426f3010c0e..0000000000000000000000000000000000000000
--- a/spaces/DynoKevin/img-cap-for-vision-mate/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Img Cap For Vision Mate
-emoji: 👀
-colorFrom: green
-colorTo: green
-sdk: streamlit
-sdk_version: 1.27.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Eddycrack864/Applio-Inference/tools/torchgate/utils.py b/spaces/Eddycrack864/Applio-Inference/tools/torchgate/utils.py
deleted file mode 100644
index dc97d45a399c112c76e80cdd8c73cfebaf3ef6ad..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/tools/torchgate/utils.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import torch
-from torch.types import Number
-
-
-@torch.no_grad()
-def amp_to_db(x: torch.Tensor, eps=torch.finfo(torch.float64).eps, top_db=40) -> torch.Tensor:
- """
- Convert the input tensor from amplitude to decibel scale.
-
- Arguments:
- x {[torch.Tensor]} -- [Input tensor.]
-
- Keyword Arguments:
- eps {[float]} -- [Small value to avoid numerical instability.]
- (default: {torch.finfo(torch.float64).eps})
- top_db {[float]} -- [threshold the output at ``top_db`` below the peak]
-                            (default: {40})
-
- Returns:
- [torch.Tensor] -- [Output tensor in decibel scale.]
- """
- x_db = 20 * torch.log10(x.abs() + eps)
- return torch.max(x_db, (x_db.max(-1).values - top_db).unsqueeze(-1))
-
-
-@torch.no_grad()
-def temperature_sigmoid(x: torch.Tensor, x0: float, temp_coeff: float) -> torch.Tensor:
- """
- Apply a sigmoid function with temperature scaling.
-
- Arguments:
- x {[torch.Tensor]} -- [Input tensor.]
- x0 {[float]} -- [Parameter that controls the threshold of the sigmoid.]
- temp_coeff {[float]} -- [Parameter that controls the slope of the sigmoid.]
-
- Returns:
- [torch.Tensor] -- [Output tensor after applying the sigmoid with temperature scaling.]
- """
- return torch.sigmoid((x - x0) / temp_coeff)
-
-
-@torch.no_grad()
-def linspace(start: Number, stop: Number, num: int = 50, endpoint: bool = True, **kwargs) -> torch.Tensor:
- """
- Generate a linearly spaced 1-D tensor.
-
- Arguments:
- start {[Number]} -- [The starting value of the sequence.]
- stop {[Number]} -- [The end value of the sequence, unless `endpoint` is set to False.
- In that case, the sequence consists of all but the last of ``num + 1``
- evenly spaced samples, so that `stop` is excluded. Note that the step
- size changes when `endpoint` is False.]
-
- Keyword Arguments:
- num {[int]} -- [Number of samples to generate. Default is 50. Must be non-negative.]
- endpoint {[bool]} -- [If True, `stop` is the last sample. Otherwise, it is not included.
- Default is True.]
- **kwargs -- [Additional arguments to be passed to the underlying PyTorch `linspace` function.]
-
- Returns:
- [torch.Tensor] -- [1-D tensor of `num` equally spaced samples from `start` to `stop`.]
- """
- if endpoint:
- return torch.linspace(start, stop, num, **kwargs)
- else:
- return torch.linspace(start, stop, num + 1, **kwargs)[:-1]
diff --git a/spaces/EinsteinCoder/sf-voicebot/app.py b/spaces/EinsteinCoder/sf-voicebot/app.py
deleted file mode 100644
index f76e4c62d982f273f8cf028e765b885465a41e8c..0000000000000000000000000000000000000000
--- a/spaces/EinsteinCoder/sf-voicebot/app.py
+++ /dev/null
@@ -1,207 +0,0 @@
-
-from flask import Flask, request, render_template
-from twilio.twiml.voice_response import VoiceResponse, Gather
-import openai
-import csv
-import os
-import requests
-from simple_salesforce import Salesforce
-from langchain import OpenAI
-from langchain.chat_models import ChatOpenAI
-from langchain.chains import LLMChain, ConversationChain
-from langchain import PromptTemplate
-from langchain import HuggingFaceHub
-from langchain.chains.conversation.memory import (ConversationBufferMemory,
- ConversationSummaryMemory,
- ConversationBufferWindowMemory,
- ConversationKGMemory,ConversationSummaryBufferMemory)
-
-app = Flask(__name__)
-
-os.environ['OPENAI_API_KEY'] = os.environ.get("OPENAI_KEY")
-openai.api_key = os.environ.get("OPENAI_KEY")
-
-# Set up the LangChain
-
-template = """Answer the question based on the context below.
-Context: You are Lisa, a loyal helpful service agent, appointed for SuperFoods Petcare Company.
-Your goal is to ask one question at a time and provide friendly, conversational responses to the customer.
-- For Complaints: Ask for the product or brand name they purchased and when they bought it.
-- Next, ask the customer whether he wants a refund or to return the product.
-- For Returns: Tell him about the 10-day return policy, after which the product is non-returnable.
-- For Refunds: Ask about the mode of refund he wants and clarify that refunds happen within 2-3 business days. Do not ask the customer for bank details.
-For all complaints, a case will be created, and the caller will be notified on his registered Email or WhatsApp.
-Do not answer anything outside your role or context, and apologize for any unknown questions.
-
-Past Conversations: {chat_history}
-Human: {input}
-AI:
-
-"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "input"],
- template=template
-)
-
-llm35 = ChatOpenAI(
- temperature=0,
- model_name='gpt-3.5-turbo',
- max_tokens=256
-)
-
-llm30 = OpenAI(
- temperature=0,
- max_tokens=256,
- frequency_penalty=0
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-conversations = ConversationChain(
- prompt=prompt,
- llm=llm30,
- memory=memory,
- verbose=False
-)
-
-# Set up the Salesforce API
-
-#sf_user = os.environ.get("SF_USER")
-#sf_pwd = os.environ.get("SF_PWD")
-#sf_token = os.environ.get("SF_TOKEN")
-#sf_instance = os.environ.get("SF_INSTANCE")
-#sf = Salesforce(username=sf_user, password=sf_pwd, security_token=sf_token,instance_url=sf_instance)
-#print(sf.headers)
-#print("Successfully Connected to Salesforce")
-
-conversation_id = ''
-
-# Define a function to handle incoming calls
-def handle_incoming_call():
- response = VoiceResponse()
- gather = Gather(input='speech', speechTimeout='auto', action='/process_input')
-    gather.say("Welcome to SuperFoods Customer Service!")
-    gather.pause(length=1)
-    gather.say("Hi, I am Lisa from the customer desk.")
-    gather.pause(length=0)
-    gather.say("May I know who I am talking to?")
- response.append(gather)
- return str(response)
-
-# Define a route to handle incoming calls
-@app.route("/incoming_call", methods=["POST"])
-def incoming_call():
- return handle_incoming_call()
-
-# Define a route to handle user input
-@app.route('/process_input', methods=['POST'])
-def process_input():
- user_input = request.form['SpeechResult']
- print("Rob : " +user_input)
- conversation_id = request.form['CallSid']
- #print("Conversation Id: " + conversation_id)
-
- if user_input.lower() in ['thank you', 'thanks.', 'bye.', 'goodbye.','no thanks.','no, thank you.','i m good.','no, i m good.','same to you.','no, thanks.','thank you.']:
- response = VoiceResponse()
- response.say("Thank you for using our service. Goodbye!")
-
- response.hangup()
- print("Hanged-up")
-
- create_case(conversations.memory.buffer,conversation_id)
-
- memory.clear()
-
- print("Case created successfully !!")
-
- else:
- response = VoiceResponse()
- ai_response=conversations.predict(input=user_input)
- response.say(ai_response)
- print("Bot: " + ai_response)
- gather = Gather(input='speech', speechTimeout='auto', action='/process_input')
- response.append(gather)
-
- return str(response)
-
-# For Case Summary and Subject
-
-
-def get_case_summary(conv_detail):
- #chatresponse_desc = openai.ChatCompletion.create(
- #model="gpt-3.5-turbo",
- #temperature=0,
- #max_tokens=128,
- #messages=[
- # {"role": "system", "content": "You are an Text Summarizer."},
- # {"role": "user", "content": "You need to summarise the conversation between an agent and customer mentioned below. Remember to keep the Product Name, Customer Tone and other key elements from the convsersation"},
- # {"role": "user", "content": conv_detail}
- #]
- #)
- #case_desc = chatresponse_desc.choices[0].message.content
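- # The active call below uses the Completions API with text-davinci-003; the commented block above is a ChatCompletion-based alternative.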
- chatresponse_desc = openai.Completion.create(
- model = 'text-davinci-003',
- prompt = 'You need to summarise the problem as told by the customer. Remember to keep the Product Name and other key points discussed in the conversation. Here is the conversation between the service agent and the customer: ' + conv_detail,
- temperature = 0,
- top_p =1,
- best_of=1,
- max_tokens=256
- )
- case_desc = chatresponse_desc.choices[0].text.strip()
- return case_desc
-
-def get_case_subject(conv_detail):
- #chatresponse_subj = openai.ChatCompletion.create(
- #model="gpt-3.5-turbo",
- #temperature=0,
- #max_tokens=32,
- #messages=[
- # {"role": "system", "content": "You are an Text Summarizer."},
- # {"role": "user", "content": "You need to summarise the conversation between an agent and customer in 15 words mentioned below for case subject."},
- # {"role": "user", "content": conv_detail}
- #]
- #)
- #case_subj = chatresponse_subj.choices[0].message.content
- chatresponse_subj = openai.Completion.create(
- model = 'text-davinci-003',
- prompt = 'Summarise the conversation below between an agent and a customer in 10 words for the Case Subject. Here is the conversation: ' + conv_detail,
- temperature = 0,
- top_p =1,
- best_of=1,
- max_tokens=256
- )
- case_subj = chatresponse_subj.choices[0].text.strip()
- return case_subj
-
-# Define a function to create a case record in Salesforce
-def create_case(conv_hist,conv_id):
- sf_user = os.environ.get("SF_USER")
- sf_pwd = os.environ.get("SF_PWD")
- sf_token = os.environ.get("SF_TOKEN")
- sf_instance = os.environ.get("SF_INSTANCE")
- session = requests.Session()
- sf = Salesforce(username=sf_user, password=sf_pwd, security_token=sf_token,instance_url=sf_instance,session=session)
-
- desc = get_case_summary(conv_hist)
- subj = get_case_subject(conv_hist)
-
- case_data = {
- 'Subject': 'Voice Bot Case: ' + subj ,
- 'Description': desc,
- 'Status': 'New',
- 'Origin': 'Voice Bot',
- 'Voice_Call_Conversation__c': conv_hist ,
- 'Voice_Call_Id__c': conv_id,
- 'ContactId': '003B000000NLHQ1IAP'
- }
- sf.Case.create(case_data)
-
-
-@app.route('/')
-def index():
- return """Flask Server running with Twilio Voice & ChatGPT integrated with Salesforce for Case Creation. Call the registered Twilio # to talk to the AI Voice Bot."""
-
-if __name__ == '__main__':
- app.run(debug=False, host='0.0.0.0', port=5050)
\ No newline at end of file
diff --git a/spaces/Ekimetrics/Biomap/biomap/inference.py b/spaces/Ekimetrics/Biomap/biomap/inference.py
deleted file mode 100644
index 796e273d5986b1e20801e4d808fb1c5283b69719..0000000000000000000000000000000000000000
--- a/spaces/Ekimetrics/Biomap/biomap/inference.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import torch.multiprocessing
-import torchvision.transforms as T
-from utils import transform_to_pil
-import logging
-
-preprocess = T.Compose(
- [
- T.ToPILImage(),
- T.Resize((320, 320)),
- # T.CenterCrop(224),
- T.ToTensor(),
- T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
- ]
- )
-
-import numpy as np
-def inference(images, model):
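- # Preprocess the input images, run the segmentation network, and return one dict per image containing the preprocessed tensor and its linear-probe class predictions.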
- logging.info("Inference on Images")
- x = torch.stack([preprocess(image) for image in images]).cpu()
-
- with torch.no_grad():
- _, code = model.net(x)
- linear_pred = model.linear_probe(x, code)
- linear_pred = linear_pred.argmax(1)
- outputs = [{
- "img": x[i].detach().cpu(),
- "linear_preds": linear_pred[i].detach().cpu(),
- } for i in range(x.shape[0])]
-
- # water to natural green
- for output in outputs:
- output["linear_preds"] = torch.where(output["linear_preds"] == 5, 3, output["linear_preds"])
- return outputs
-
-
-if __name__ == "__main__":
- import hydra
- from model import LitUnsupervisedSegmenter
- from utils_gee import extract_img, transform_ee_img
- import os
- latitude = 2.98
- longitude = 48.81
- start_date = '2020-03-20'
- end_date = '2020-04-20'
-
- location = [float(latitude), float(longitude)]
- # Extract img numpy from earth engine and transform it to PIL img
- img = extract_img(location, start_date, end_date)
- image = transform_ee_img(
- img, max=0.3
- ) # max is the numpy value that will be mapped to 255 in the PIL image
- print("image loaded")
- # Initialize hydra with configs
- hydra.initialize(config_path="configs", job_name="corine")
- cfg = hydra.compose(config_name="my_train_config.yml")
-
- # Load the model
-
- model_path = os.path.join(os.path.dirname(__file__), "checkpoint/model/model.pt")
- saved_state_dict = torch.load(model_path, map_location=torch.device("cpu"))
-
- nbclasses = cfg.dir_dataset_n_classes
-
- model = LitUnsupervisedSegmenter(nbclasses, cfg)
- print("model initialized")
- model.load_state_dict(saved_state_dict)
- print("model loaded")
- # img.save("output/image.png")
- inference([image], model)
-
- inference([image,image], model)
-
\ No newline at end of file
diff --git a/spaces/EleutherAI/VQGAN_CLIP/CLIP/data/yfcc100m.md b/spaces/EleutherAI/VQGAN_CLIP/CLIP/data/yfcc100m.md
deleted file mode 100644
index 575c54bc4bab3972878291c8d227a313c9fc766e..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/CLIP/data/yfcc100m.md
+++ /dev/null
@@ -1,14 +0,0 @@
-# The YFCC100M Subset
-
-In the paper, we performed a dataset ablation using a subset of the YFCC100M dataset and showed that the performance remained largely similar.
-
-The subset contains 14,829,396 images, about 15% of the full dataset, which have been filtered to only keep those with natural language titles and/or descriptions in English.
-
-We provide the list of (line number, photo identifier, photo hash) of each image contained in this subset. These correspond to the first three columns in the dataset's metadata TSV file.
-
-```
-wget https://openaipublic.azureedge.net/clip/data/yfcc100m_subset_data.tsv.bz2
-bunzip2 yfcc100m_subset_data.tsv.bz2
-```
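-
-To work with the subset programmatically, the sketch below reads the extracted file into memory. It assumes the download above yields a headerless, tab-separated file whose first three columns are the line number, photo identifier, and photo hash; adjust the parsing if your copy differs.
-
-```
-import csv
-
-subset = []
-with open("yfcc100m_subset_data.tsv", newline="") as f:
-    for row in csv.reader(f, delimiter="\t"):
-        # first three columns: line number in the full metadata TSV, photo identifier, photo hash
-        line_number, photo_id, photo_hash = row[0], row[1], row[2]
-        subset.append((int(line_number), photo_id, photo_hash))
-
-print(f"{len(subset):,} entries")  # should match the 14,829,396 images noted above
-```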
-
-Use of the underlying media files is subject to the Creative Commons licenses chosen by their creators/uploaders. For more information about the YFCC100M dataset, visit [the official website](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/).
\ No newline at end of file
diff --git a/spaces/EronSamez/RVC_HFmeu/go-applio-manager-recode.bat b/spaces/EronSamez/RVC_HFmeu/go-applio-manager-recode.bat
deleted file mode 100644
index 91b8acfc0c69a356fd5b1d77650b2cd728b1072b..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/go-applio-manager-recode.bat
+++ /dev/null
@@ -1,322 +0,0 @@
-@echo off
-title Applio Installer
-
-::: _ _ _____ _
-::: /\ | (_) | __ \ | |
-::: / \ _ __ _ __ | |_ ___ | |__) |___ ___ ___ __| | ___
-::: / /\ \ | '_ \| '_ \| | |/ _ \ | _ // _ \/ __/ _ \ / _` |/ _ \
-::: / ____ \| |_) | |_) | | | (_) | | | \ \ __/ (_| (_) | (_| | __/
-::: /_/ \_\ .__/| .__/|_|_|\___/ |_| \_\___|\___\___/ \__,_|\___|
-::: | | | |
-::: |_| |_|
-:::
-:::
-
-setlocal
-set "branch=applio-recode"
-set "runtime=runtime-recode"
-set "repoUrl=https://github.com/IAHispano/Applio-RVC-Fork/archive/refs/heads/%branch%.zip"
-set "fixesFolder=fixes"
-set "localFixesPy=local_fixes.py"
-set "principal=%cd%"
-set "URL_BASE=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main"
-set "URL_EXTRA=https://huggingface.co/IAHispano/applio/resolve/main"
-
-:menu
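-:: Print the ASCII-art banner: echo every line of this script that starts with ":::", with the leading colons stripped.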
-for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A
-
-echo [1] Reinstall Applio
-echo [2] Update Applio
-echo [3] Update Applio + Runtime
-echo.
-
-set /p choice=Select an option:
-set choice=%choice: =%
-
-if "%choice%"=="1" (
- cls
- echo Starting Applio Reinstaller...
- echo.
- goto reinstaller
- pause
- cls
- goto menu
-
-)
-
-if "%choice%"=="2" (
- cls
- echo Starting Applio Updater...
- echo.
- goto updater
- pause
- cls
- goto menu
-)
-
-if "%choice%"=="3" (
- cls
- echo Updating Applio + Runtime...
- echo.
- goto updaterRuntime
- pause
- cls
- goto menu
-
-)
-
-cls
-echo Invalid option. Please enter a number from 1 to 3.
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-:reinstaller
-
-echo WARNING: Remember to install Microsoft C++ Build Tools, Redistributable, Python, and Git before continuing.
-echo.
-echo Step-by-step guide: https://rentry.org/appliolocal
-echo Build Tools: https://aka.ms/vs/17/release/vs_BuildTools.exe
-echo Redistributable: https://aka.ms/vs/17/release/vc_redist.x64.exe
-echo Git: https://github.com/git-for-windows/git/releases/download/v2.42.0.windows.2/Git-2.42.0.2-64-bit.exe
-echo Python: Add this path to the user PATH environment variable in Windows: %principal%\runtime\Scripts
-echo.
-pause
-cls
-
-echo Downloading ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Proceeding to download the models...
-echo.
-
-echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models.
-pause
-cls
-
-echo Downloading models in the assets folder...
-cd "assets"
-echo.
-echo Downloading the "pretrained" folder...
-cd "pretrained"
-curl -LJO "%URL_BASE%/pretrained/D32k.pth"
-curl -LJO "%URL_BASE%/pretrained/D40k.pth"
-curl -LJO "%URL_BASE%/pretrained/D48k.pth"
-curl -LJO "%URL_BASE%/pretrained/G32k.pth"
-curl -LJO "%URL_BASE%/pretrained/G40k.pth"
-curl -LJO "%URL_BASE%/pretrained/G48k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D32k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D40k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0D48k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G32k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G40k.pth"
-curl -LJO "%URL_BASE%/pretrained/f0G48k.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the "pretrained_v2" folder...
-cd "pretrained_v2"
-curl -LJO "%URL_BASE%/pretrained_v2/D32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/D40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/D48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/G48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0D48k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G32k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G40k.pth"
-curl -LJO "%URL_BASE%/pretrained_v2/f0G48k.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the hubert_base.pt file...
-cd "hubert"
-curl -LJO "%URL_BASE%/hubert_base.pt"
-cd ".."
-echo.
-cls
-
-
-echo Downloading the rmvpe.pt file...
-cd "rmvpe"
-curl -LJO "%URL_BASE%/rmvpe.pt"
-echo.
-cls
-
-echo Downloading the rmvpe.onnx file...
-curl -LJO "%URL_BASE%/rmvpe.onnx"
-cd ".."
-cd ".."
-echo.
-cls
-
-echo Downloading the rest of the large files
-
-echo Downloading the "uvr5_weights" folder...
-cd "uvr5_weights"
-curl -LJO "%URL_BASE%/uvr5_weights/HP2_all_vocals.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/HP3_all_vocals.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/HP5_only_main_vocal.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoAggressive.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoDeReverb.pth"
-curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoNormal.pth"
-cd ".."
-echo.
-cls
-
-echo Downloading the ffmpeg.exe file...
-curl -LJO "%URL_BASE%/ffmpeg.exe"
-echo.
-cls
-
-echo Downloading the ffprobe.exe file...
-curl -LJO "%URL_BASE%/ffprobe.exe"
-echo.
-cls
-
-echo Downloading the runtime.zip file...
-curl -LJO "%URL_EXTRA%/%runtime%.zip"
-echo.
-cls
-
-echo Extracting the runtime.zip file, this might take a while...
-powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'"
-del %runtime%.zip
-echo.
-cls
-
-echo Downloads completed!
-echo.
-
-echo Checking if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The "%localFixesPy%" file was not found in the "Fixes" folder.
-)
-echo.
-
-echo Fixes Applied!
-echo.
-
-echo Applio has been reinstalled!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-
-:updater
-
-echo Downloading the ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of the subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Verifying if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The file "%localFixesPy%" was not found in the "Fixes" folder.
-)
-echo.
-
-echo Applio has been updated!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
-
-
-:updaterRuntime
-
-echo Downloading the ZIP file...
-powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }"
-echo.
-
-echo Extracting ZIP file...
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }"
-echo.
-
-echo Copying folder and file structure from subdirectory to main directory...
-robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E
-echo.
-
-echo Deleting contents of the subdirectory (files and folders)...
-rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q
-echo.
-
-echo Cleaning up...
-del "%principal%\repo.zip"
-echo.
-cls
-
-echo Downloading the runtime.zip file...
-curl -LJO "%URL_EXTRA%/%runtime%.zip"
-echo.
-cls
-echo Extracting the runtime.zip file, this might take a while...
-powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'"
-del %runtime%.zip
-echo.
-cls
-
-echo Verifying if the local_fixes.py file exists in the Fixes folder...
-if exist "%fixesFolder%\%localFixesPy%" (
- echo Running the file...
- runtime\python.exe "%fixesFolder%\%localFixesPy%"
-) else (
- echo The file "%localFixesPy%" was not found in the "Fixes" folder.
-)
-echo.
-
-echo Applio has been updated!
-echo.
-echo Press 'Enter' to access the main menu...
-pause>nul
-cls
-goto menu
diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py
deleted file mode 100644
index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_123821KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_123821KB as layers
-
-
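-# Encoder-decoder with an ASPP (atrous spatial pyramid pooling) bottleneck; each decoder stage receives a skip connection from the matching encoder stage.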
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 32)
- self.stg1_high_band_net = BaseASPPNet(2, 32)
-
- self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(16, 32)
-
- self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(32, 64)
-
- self.out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
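- # At inference time the sigmoid mask can optionally be sharpened: exponents > 1 push low-confidence bins toward zero, more strongly above split_bin.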
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/ErtugrulDemir/TextSummarizing/app.py b/spaces/ErtugrulDemir/TextSummarizing/app.py
deleted file mode 100644
index d07099e6f508132a4cb2fe8557bb4e60e5654c60..0000000000000000000000000000000000000000
--- a/spaces/ErtugrulDemir/TextSummarizing/app.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import gradio as gr
-import pickle
-from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM, pipeline
-
-# File Paths
-model_path = 'fine_tuned_sum'
-tokenizer_path = "tokenizer"
-examples_path = "examples.pkl"
-
-# Load the tokenizer
-tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)
-
-# Load the fine-tuned BERT model
-seq2seq_model = TFAutoModelForSeq2SeqLM.from_pretrained(model_path)
-
-# loading the examples
-with open(examples_path, 'rb') as f: examples = pickle.load(f)
-
-# Creating the pipeline
-sum_params = {
- "model":seq2seq_model,
- "tokenizer":tokenizer,
- "framework":"tf",
-}
-
-summarizer = pipeline("summarization", **sum_params)
-# Define a function that summarizes input text with the pipeline above
-def summarize(text):
-
- # defining the params
- prms = {
- "min_length":5,
- "max_length":128
- }
- return summarizer(text,**prms)[0]["summary_text"]
-
-# GUI Component
-# defining the params
-if_p = {
- "fn":summarize,
- "inputs":gr.inputs.Textbox(label="Text"),
- "outputs":gr.outputs.Textbox(label="Output"),
- "title":"Fine-tuned 't5-small' model for text summarization",
- "description":"Write something to summarization text",
- "examples":examples
-}
-
-# Create a Gradio interface instance
-demo = gr.Interface(**if_p)
-
-# Launching the demo
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/CONTRIBUTING.md b/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/CONTRIBUTING.md
deleted file mode 100644
index 63d0b751e8a00b606ddff92e2524faa3c90a63b0..0000000000000000000000000000000000000000
--- a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/CONTRIBUTING.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# Contributing to ImageBind
-We want to make contributing to this project as easy and transparent as
-possible.
-
-## Pull Requests
-We actively welcome your pull requests.
-
-1. Fork the repo and create your branch from `main`.
-2. If you've added code that should be tested, add tests.
-3. If you've changed APIs, update the documentation.
-4. Ensure the test suite passes.
-5. Make sure your code lints.
-6. If you haven't already, complete the Contributor License Agreement ("CLA").
-
-## Contributor License Agreement ("CLA")
-In order to accept your pull request, we need you to submit a CLA. You only need
-to do this once to work on any of Meta's open source projects.
-
-Complete your CLA here:
-
-## Issues
-We use GitHub issues to track public bugs. Please ensure your description is
-clear and has sufficient instructions to be able to reproduce the issue.
-
-Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe
-disclosure of security bugs. In those cases, please go through the process
-outlined on that page and do not file a public issue.
-
-## License
-By contributing to ImageBind, you agree that your contributions will be licensed
-under the [LICENSE](LICENSE) file in the root directory of this source tree.
diff --git a/spaces/Fengbinbin/gpt-academic/docs/self_analysis.md b/spaces/Fengbinbin/gpt-academic/docs/self_analysis.md
deleted file mode 100644
index c88e1e41217eb13a30269f933586f6c241fab38d..0000000000000000000000000000000000000000
--- a/spaces/Fengbinbin/gpt-academic/docs/self_analysis.md
+++ /dev/null
@@ -1,256 +0,0 @@
-# chatgpt-academic项目自译解报告
-(Author补充:以下分析均由本项目调用ChatGPT一键生成,如果有不准确的地方,全怪GPT😄)
-
-## 对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能。
-
-整体概括:
-
-该程序是一个基于自然语言处理和机器学习的科学论文辅助工具,主要功能包括聊天机器人、批量总结PDF文档、批量翻译PDF文档、生成函数注释、解析项目源代码等。程序基于 Gradio 构建 Web 服务,并集成了代理和自动更新功能,提高了用户的使用体验。
-
-文件功能表格:
-
-| 文件名 | 文件功能 |
-| --- | --- |
-| check_proxy.py | 用于检查代理的正确性和可用性 |
-| colorful.py | 包含不同预设置颜色的常量,并用于多种UI元素 |
-| config.py | 用于全局配置的类 |
-| config_private.py | 与config.py文件一起使用的另一个配置文件,用于更改私密信息 |
-| core_functional.py | 包含一些TextFunctional类和基础功能函数 |
-| crazy_functional.py | 包含大量高级功能函数和实验性的功能函数 |
-| main.py | 程序的主入口,包含GUI主窗口和主要的UI管理功能 |
-| theme.py | 包含一些预设置主题的颜色 |
-| toolbox.py | 提供了一些有用的工具函数 |
-| crazy_functions\crazy_utils.py | 包含一些用于实现高级功能的辅助函数 |
-| crazy_functions\Latex全文润色.py | 实现了对LaTeX文件中全文的润色和格式化功能 |
-| crazy_functions\Latex全文翻译.py | 实现了对LaTeX文件中的内容进行翻译的功能 |
-| crazy_functions\_\_init\_\_.py | 用于导入crazy_functional.py中的功能函数 |
-| crazy_functions\下载arxiv论文翻译摘要.py | 从Arxiv上下载论文并提取重要信息 |
-| crazy_functions\代码重写为全英文_多线程.py | 针对中文Python文件,将其翻译为全英文 |
-| crazy_functions\总结word文档.py | 提取Word文件的重要内容来生成摘要 |
-| crazy_functions\批量Markdown翻译.py | 批量翻译Markdown文件 |
-| crazy_functions\批量总结PDF文档.py | 批量从PDF文件中提取摘要 |
-| crazy_functions\批量总结PDF文档pdfminer.py | 批量从PDF文件中提取摘要 |
-| crazy_functions\批量翻译PDF文档_多线程.py | 批量翻译PDF文件 |
-| crazy_functions\理解PDF文档内容.py | 批量分析PDF文件并提取摘要 |
-| crazy_functions\生成函数注释.py | 自动生成Python文件中函数的注释 |
-| crazy_functions\解析项目源代码.py | 解析并分析给定项目的源代码 |
-| crazy_functions\询问多个大语言模型.py | 向多个大语言模型询问输入文本并进行处理 |
-| crazy_functions\读文献写摘要.py | 根据用户输入读取文献内容并生成摘要 |
-| crazy_functions\谷歌检索小助手.py | 利用谷歌学术检索用户提供的论文信息并提取相关信息 |
-| crazy_functions\高级功能函数模板.py | 实现高级功能的模板函数 |
-| request_llm\bridge_all.py | 处理与LLM的交互 |
-| request_llm\bridge_chatglm.py | 使用ChatGLM模型进行聊天 |
-| request_llm\bridge_chatgpt.py | 实现对话生成的各项功能 |
-| request_llm\bridge_tgui.py | 在Websockets中与用户进行交互并生成文本输出 |
-
-
-
-## [0/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\check_proxy.py
-
-该文件主要包括四个函数:check_proxy、backup_and_download、patch_and_restart 和 auto_update。其中,check_proxy 函数用于检查代理是否可用;backup_and_download 用于进行一键更新备份和下载;patch_and_restart 是一键更新协议的重要函数,用于覆盖和重启;auto_update 函数用于查询版本和用户意见,并自动进行一键更新。该文件主要使用了 requests、json、shutil、zipfile、distutils、subprocess 等 Python 标准库和 toolbox 和 colorful 两个第三方库。
-
-## [1/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\colorful.py
-
-该程序文件实现了一些打印文本的函数,使其具有不同的颜色输出。当系统为Linux时直接跳过,否则使用colorama库来实现颜色输出。程序提供了深色和亮色两种颜色输出方式,同时也提供了对打印函数的别名。对于不是终端输出的情况,对所有的打印函数进行重复定义,以便在重定向时能够避免打印错误日志。
-
-## [2/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\config.py
-
-该程序文件是一个配置文件,其主要功能是提供使用API密钥等信息,以及对程序的体验进行优化,例如定义对话框高度、布局等。还包含一些其他的设置,例如设置并行使用的线程数、重试次数限制等等。
-
-## [3/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\config_private.py
-
-这是一个名为config_private.py的Python文件,它用于配置API_KEY和代理信息。API_KEY是一个私密密钥,用于访问某些受保护的API。USE_PROXY变量设置为True以应用代理,proxies变量配置了代理网络的地址和协议。在使用该文件时,需要填写正确的API_KEY和代理信息。
-
-## [4/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\core_functional.py
-
-该文件是一个Python模块,名为"core_functional.py"。模块中定义了一个字典,包含了各种核心功能的配置信息,如英语学术润色、中文学术润色、查找语法错误等。每个功能都包含一些前言和后语,在前言中描述了该功能的任务和要求,在后语中提供一些附加信息。此外,有些功能还定义了一些特定的处理函数和按钮颜色。
-
-## [5/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functional.py
-
-这是一个Python程序文件,文件名是crazy_functional.py。它导入了一个名为HotReload的工具箱,并定义了一个名为get_crazy_functions()的函数。这个函数包括三个部分的插件组,分别是已经编写完成的第一组插件、已经测试但距离完美状态还差一点点的第二组插件和尚未充分测试的第三组插件。每个插件都有一个名称、一个按钮颜色、一个函数和一个是否加入下拉菜单中的标志位。这些插件提供了多种功能,包括生成函数注释、解析项目源代码、批量翻译PDF文档、谷歌检索、PDF文档内容理解和Latex文档的全文润色、翻译等功能。其中第三组插件可能还存在一定的bug。
-
-## [6/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\main.py
-
-该Python脚本代码实现了一个用于交互式对话的Chatbot机器人。它使用了Gradio框架来构建一个Web界面,并在此基础之上嵌入了一个文本输入框和与Chatbot进行交互的其他控件,包括提交、重置、停止和清除按钮、选择框和滑块等。此外,它还包括了一些类和函数和一些用于编程分析的工具和方法。整个程序文件的结构清晰,注释丰富,并提供了很多技术细节,使得开发者可以很容易地在其基础上进行二次开发、修改、扩展和集成。
-
-## [7/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\theme.py
-
-该程序文件名为theme.py,主要功能为调节Gradio的全局样式。在该文件中,调节了Gradio的主题颜色、字体、阴影、边框、渐变等等样式。同时,该文件还添加了一些高级CSS样式,比如调整表格单元格的背景和边框,设定聊天气泡的圆角、最大宽度和阴影等等。如果CODE_HIGHLIGHT为True,则还进行了代码高亮显示。
-
-## [8/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\toolbox.py
-
-这是一个名为`toolbox.py`的源代码文件。该文件包含了一系列工具函数和装饰器,用于聊天Bot的开发和调试。其中有一些功能包括将输入参数进行重组、捕捉函数中的异常并记录到历史记录中、生成Markdown格式的聊天记录报告等。该文件中还包含了一些与转换Markdown文本相关的函数。
-
-## [9/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\crazy_utils.py
-
-这是一个Python程序文件 `crazy_utils.py`,它包含了两个函数:
-
-- `input_clipping(inputs, history, max_token_limit)`:这个函数接收三个参数,inputs 是一个字符串,history 是一个列表,max_token_limit 是一个整数。它使用 `tiktoken` 、`numpy` 和 `toolbox` 模块,处理输入文本和历史记录,将其裁剪到指定的最大标记数,避免输入过长导致的性能问题。如果 inputs 长度不超过 max_token_limit 的一半,则只裁剪历史;否则,同时裁剪输入和历史。
-- `request_gpt_model_in_new_thread_with_ui_alive(inputs, inputs_show_user, llm_kwargs, chatbot, history, sys_prompt, refresh_interval=0.2, handle_token_exceed=True, retry_times_at_unknown_error=2)`:这个函数接收八个参数,其中后三个是列表类型,其他为标量或句柄等。它提供对话窗口和刷新控制,执行 `predict_no_ui_long_connection` 方法,将输入数据发送至 GPT 模型并获取结果,如果子任务出错,返回相应的错误信息,否则返回结果。
-
-## [10/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\Latex全文润色.py
-
-这是一个名为"crazy_functions\Latex全文润色.py"的程序文件,其中包含了两个函数"Latex英文润色"和"Latex中文润色",以及其他辅助函数。这些函数能够对 Latex 项目进行润色处理,其中 "多文件润色" 函数是一个主要函数,它调用了其他辅助函数用于读取和处理 Latex 项目中的文件。函数使用了多线程和机器学习模型进行自然语言处理,对文件进行简化和排版来满足学术标准。注释已删除并可以在函数内部查找。
-
-## [11/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\Latex全文翻译.py
-
-这个程序文件包括一个用于对整个Latex项目进行翻译的函数 `Latex英译中` 和一个用于将中文翻译为英文的函数 `Latex中译英`。这两个函数都会尝试导入依赖库 tiktoken, 若无法导入则会提示用户安装。`Latex英译中` 函数会对 Latex 项目中的文件进行分离并去除注释,然后运行多线程翻译。`Latex中译英` 也做同样的事情,只不过是将中文翻译为英文。这个程序文件还包括其他一些帮助函数。
-
-## [12/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\__init__.py
-
-这是一个 Python 包,包名为 `crazy_functions`,在 `__init__.py` 文件中定义了一些函数,包含以下函数:
-
-- `crazy_addition(a, b)`:对两个数进行加法运算,并将结果返回。
-- `crazy_multiplication(a, b)`:对两个数进行乘法运算,并将结果返回。
-- `crazy_subtraction(a, b)`:对两个数进行减法运算,并将结果返回。
-- `crazy_division(a, b)`:对两个数进行除法运算,并将结果返回。
-- `crazy_factorial(n)`:计算 `n` 的阶乘并返回结果。
-
-这些函数可能会有一些奇怪或者不符合常规的实现方式(由函数名可以看出来),所以这个包的名称为 `crazy_functions`,可能是暗示这些函数会有一些“疯狂”的实现方式。
-
-## [13/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\下载arxiv论文翻译摘要.py
-
-该程序实现了一个名为“下载arxiv论文并翻译摘要”的函数插件,作者是“binary-husky”。该函数的功能是,在输入一篇arxiv论文的链接后,提取摘要、下载PDF文档、翻译摘要为中文,并将翻译结果保存到文件中。程序使用了一些Python库,如requests、pdfminer和beautifulsoup4等。程序入口是名为“下载arxiv论文并翻译摘要”的函数,其中使用了自定义的辅助函数download_arxiv_和get_name。程序中还使用了其他非函数的辅助函数和变量,如update_ui、CatchException、report_exception和get_conf等。
-
-## [14/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\代码重写为全英文_多线程.py
-
-该文件是一个多线程Python脚本,包含多个函数和利用第三方库进行的API请求。主要功能是将给定文件夹内的Python代码文件中所有中文转化为英文,然后输出转化后的英文代码。重要的功能和步骤包括:
-
-1. 清空历史,以免输入溢出
-2. 尝试导入依赖,如果缺少依赖,则给出安装建议
-3. 集合文件
-4. 显示随意内容以防卡顿的感觉
-5. Token限制下的截断与处理
-6. 多线程操作请求转换中文变为英文的代码
-7. 所有线程同时开始执行任务函数
-8. 循环轮询各个线程是否执行完毕
-9. 把结果写入文件
-10. 备份一个文件
-
-## [15/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\总结word文档.py
-
-这是一个名为"总结word文档.py"的程序文件,使用python编写。该文件导入了"toolbox"和"crazy_utils"模块,实现了解析docx格式和doc格式的文件的功能。该文件包含了一个名为"解析docx"的函数,通过对文件内容应用自然语言处理技术,生成文章片段的中英文概述。具体实现过程中,该函数使用了"docx"模块和"win32com.client"模块来实现对docx和doc格式文件的解析,同时使用了"request_gpt_model_in_new_thread_with_ui_alive"函数来向GPT模型发起请求。最后,该文件还实现了一个名为"总结word文档"的函数来批量总结Word文档。
-
-## [16/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量Markdown翻译.py
-
-这个程序文件实现了一个批量Markdown翻译功能,可以将一个源代码项目中的Markdown文本翻译成指定语言(目前支持中<-英和英<-中)。程序主要分为三个函数,`PaperFileGroup`类用于处理长文本的拆分,`多文件翻译`是主要函数调用了`request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency`函数进行多线程翻译并输出结果,`Markdown英译中`和`Markdown中译外`分别是英译中和中译英的入口函数,用于解析项目路径和调用翻译函数。程序依赖于tiktoken等库实现。
-
-## [17/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量总结PDF文档.py
-
-这是一个名为“批量总结PDF文档”的Python脚本,包含了多个函数。其中有一个函数名为“clean_text”,可以对PDF提取出的原始文本进行清洗和格式化处理,将连字转换为其基本形式,并根据heuristic规则判断换行符是否是段落分隔,并相应地进行替换。另一个函数名为“解析PDF”,可以接收一个PDF文件清单,并对清单中的每一个PDF进行解析,提取出文本并调用“clean_text”函数进行清洗和格式化处理,然后向用户发送一个包含文章简介信息的问题并等待用户回答。最后,该脚本也包含一个名为“批量总结PDF文档”的主函数,其中调用了“解析PDF”函数来完成对PDF文件的批量处理。
-
-## [18/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量总结PDF文档pdfminer.py
-
-这个文件是一个Python模块,文件名为pdfminer.py,它定义了一个函数批量总结PDF文档。该函数接受一些参数,然后尝试导入pdfminer和beautifulsoup4库。该函数将读取pdf文件或tex文件中的内容,对其进行分析,并使用GPT模型进行自然语言摘要。文件中还有一个辅助函数readPdf,用于读取pdf文件中的内容。
-
-## [19/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量翻译PDF文档_多线程.py
-
-这是一个Python脚本,文件名是crazy_functions\批量翻译PDF文档_多线程.py。该脚本提供了一个名为“批量翻译PDF文档”的函数,可以批量翻译PDF文件并生成报告文件。该函数使用了多个模块和函数(如toolbox、crazy_utils、update_ui等),使用了Python的异常处理和多线程功能,还使用了一些文本处理函数和第三方库(如fitz和tiktoken)。在函数执行过程中,它会进行一些参数检查、读取和清理PDF文本、递归地切割PDF文件、获取文章meta信息、多线程翻译、整理报告格式等操作,并更新UI界面和生成报告文件。
-
-## [20/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\理解PDF文档内容.py
-
-这是一个解析PDF文件内容的Python程序,程序文件名为"理解PDF文档内容.py",程序主要由5个步骤组成:第0步是切割PDF文件;第1步是从摘要中提取高价值信息,放到history中;第2步是迭代地历遍整个文章,提取精炼信息;第3步是整理history;第4步是设置一个token上限,防止回答时Token溢出。程序主要用到了Python中的各种模块和函数库,如:toolbox, tiktoken, pymupdf等。
-
-## [21/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\生成函数注释.py
-
-这是一个名为"生成函数注释"的函数,带有一个装饰器"@CatchException",可以捕获异常。该函数接受文件路径、参数和聊天机器人等参数,用于对多个Python或C++文件进行函数注释,使用了"toolbox"和"crazy_utils"模块中的函数。该函数会逐个读取指定文件中的内容,并使用聊天机器人进行交互,向用户请求注释信息,然后将生成的注释与原文件内容一起输出到一个markdown表格中。最后,该函数返回一个字符串,指示任务是否已完成。另外还包含一个名为"批量生成函数注释"的函数,它与"生成函数注释"函数一起用于批量处理多个文件。
-
-## [22/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\解析项目源代码.py
-
-这个程序文件实现了对一个源代码项目进行分析的功能。其中,函数`解析项目本身`、`解析一个Python项目`、`解析一个C项目的头文件`、`解析一个C项目`、`解析一个Java项目`和`解析前端项目`分别用于解析不同类型的项目。函数`解析源代码新`实现了对每一个源代码文件的分析,并将分析结果汇总,同时还实现了分组和迭代处理,提高了效率。最后,函数`write_results_to_file`将所有分析结果写入文件。中间,还用到了`request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency`和`request_gpt_model_in_new_thread_with_ui_alive`来完成请求和响应,并用`update_ui`实时更新界面。
-
-## [23/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\询问多个大语言模型.py
-
-这是一个Python程序,文件名为"crazy_functions\询问多个大语言模型.py"。该程序实现了一个同时向多个大语言模型询问的功能,接收用户输入文本以及模型参数,向ChatGPT和ChatGLM模型发出请求,并将对话记录显示在聊天框中,同时刷新界面。
-
-## [24/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\读文章写摘要.py
-
-该程序文件是一个Python模块,文件名为"读文章写摘要.py",主要包含两个函数:"解析Paper"和"读文章写摘要"。其中,"解析Paper"函数接受文件路径、参数等参数,逐个打印文件内容并使用GPT模型生成对该文件的摘要;"读文章写摘要"函数则接受一段文本内容和参数,将该文本内容及其所有.tex文件逐个传递给"解析Paper"函数进行处理,并使用GPT模型生成文章的中英文摘要。文件还导入了一些工具函数,如异常处理、信息上报和文件写入等。
-
-## [25/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\谷歌检索小助手.py
-
-该文件代码包含了一个名为`get_meta_information`的函数和一个名为`谷歌检索小助手`的装饰器函数,用于从谷歌学术中抓取文章元信息,并从用户提供的搜索页面中分析所有文章的相关信息。该文件使用了许多第三方库,如requests、arxiv、BeautifulSoup等。其中`get_meta_information`函数中还定义了一个名为`string_similar`的辅助函数,用于比较字符串相似度。
-
-## [26/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\高级功能函数模板.py
-
-该程序文件是一个 Python 模块,包含一个名为“高阶功能模板函数”的函数。该函数接受多个参数,其中包括输入文本、GPT 模型参数、插件模型参数、聊天显示框、聊天历史等。 该函数的主要功能是根据输入文本,使用 GPT 模型生成一些问题,并等待用户回答这些问题(使用 Markdown 格式),然后将用户回答加入到聊天历史中,并更新聊天显示框。该函数还包含了一些异常处理和多线程的相关操作。该程序文件还引用了另一个 Python 模块中的两个函数,分别为“CatchException”和“update_ui”,并且还引用了一个名为“request_gpt_model_in_new_thread_with_ui_alive”的自定义函数。
-
-## [27/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_all.py
-
-这个文件是用来处理与LLM的交互的。包含两个函数,一个是 predict_no_ui_long_connection 用来处理长文本的输出,可以多线程调用;另一个是 predict 用来处理基础的对话功能。这个文件会导入其他文件中定义的方法进行调用,具体调用哪个方法取决于传入的参数。函数中还有一些装饰器和管理多线程的逻辑。
-
-## [28/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_chatglm.py
-
-这个程序文件实现了一个使用ChatGLM模型进行聊天的功能。具体实现过程是:首先进行初始化,然后使用GetGLMHandle类进行ChatGLM模型的加载和运行。predict_no_ui_long_connection函数用于多线程聊天,而predict函数用于单线程聊天,它们的不同之处在于前者不会更新UI界面,后者会。这个文件还导入了其他模块和库,例如transformers、time、importlib等,并使用了多进程Pipe。
-
-## [29/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_chatgpt.py
-
-这个程序文件是用于对话生成的,主要包含三个函数:predict、predict_no_ui、predict_no_ui_long_connection。其中,predict是用于普通对话的函数,具备完备的交互功能,但不具备多线程能力;predict_no_ui是高级实验性功能模块调用的函数,参数简单,可以多线程并行,方便实现复杂的功能逻辑;predict_no_ui_long_connection解决了predict_no_ui在处理长文档时容易断开连接的问题,同样支持多线程。程序中还包含一些常量和工具函数,用于整合信息,选择LLM模型,生成http请求,发送请求,接收响应等。它需要配置一个config文件,包含代理网址、API等敏感信息。
-
-## [30/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_tgui.py
-
-该程序文件实现了一个基于Websockets的文本生成服务和对话功能。其中,有三个函数:`run()`、`predict()`和`predict_no_ui_long_connection()`。`run()`函数用于连接到Websocket服务并生成文本结果;`predict()`函数用于将用户输入作为文本生成的输入,同时在UI上显示对话历史记录,并在不断更新UI的过程中不断更新生成的文本输出;`predict_no_ui_long_connection()`函数与`predict()`函数类似,但没有UI,并在一段时间内返回单个生成的文本。整个程序还引入了多个Python模块来完成相关功能,例如`asyncio`、`websockets`、`json`等等。
-
-## 根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, theme.py, toolbox.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py, crazy_functions\代码重写为全英文_多线程.py, crazy_functions\总结word文档.py)。
-
-程序功能概括:该程序是一个聊天机器人,可以通过 Web 界面与用户进行交互。它包含了丰富的功能,如文本润色、翻译、代码重写、在线查找等,并且支持多线程处理。用户可以通过 Gradio 框架提供的 Web 界面进行交互,程序还提供了一些调试工具,如toolbox 模块,方便程序开发和调试。
-
-下表概述了每个文件的功能:
-
-| 文件名 | 功能 |
-| ----------------------------------------------------------- | ------------------------------------------------------------ |
-| check_proxy.py | 检查代理是否可用 |
-| colorful.py | 用于打印文本的字体颜色输出模块 |
-| config.py | 用于程序中的各种设置,如并行线程数量和重试次数的限制等 |
-| config_private.py | 配置API_KEY和代理信息的文件 |
-| core_functional.py | 包含具体的文本处理功能的模块 |
-| crazy_functional.py | 包括各种插件函数的模块,提供了多种文本处理功能 |
-| main.py | 包含 Chatbot 机器人主程序的模块 |
-| theme.py | 用于调节全局样式的模块 |
-| toolbox.py | 包含工具函数和装饰器,用于聊天Bot的开发和调试 |
-| crazy_functions\crazy_utils.py | 包含一些辅助函数,如文本裁剪和消息捕捉等 |
-| crazy_functions\Latex全文润色.py | 对 Latex 项目进行润色处理的功能模块 |
-| crazy_functions\Latex全文翻译.py | 对 Latex 项目进行翻译的功能模块 |
-| crazy_functions\__init__.py | 定义一些奇特的数学函数等 |
-| crazy_functions\下载arxiv论文翻译摘要.py | 下载 Arxiv 论文并翻译摘要的功能模块 |
-| crazy_functions\代码重写为全英文_多线程.py | 将Python程序中所有中文转化为英文的功能模块 |
-| crazy_functions\总结word文档.py | 解析 docx 和 doc 格式的文件,生成文章片段的中英文概述的功能模块 |
-
-## 根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, theme.py, toolbox.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py, crazy_functions\代码重写为全英文_多线程.py, crazy_functions\总结word文档.py, crazy_functions\批量Markdown翻译.py, crazy_functions\批量总结PDF文档.py, crazy_functions\批量总结PDF文档pdfminer.py, crazy_functions\批量翻译PDF文档_多线程.py, crazy_functions\理解PDF文档内容.py, crazy_functions\生成函数注释.py, crazy_functions\解析项目源代码.py, crazy_functions\询问多个大语言模型.py, crazy_functions\读文章写摘要.py, crazy_functions\谷歌检索小助手.py, crazy_functions\高级功能函数模板.py, request_llm\bridge_all.py, request_llm\bridge_chatglm.py, request_llm\bridge_chatgpt.py, request_llm\bridge_tgui.py)。
-
-根据以上分析,整个程序是一个集成了多个有用工具和功能的文本处理和生成工具,提供了多种在不同场景下使用的功能,包括但不限于对话生成、文本摘要、PDF文件批量处理、代码翻译和实用工具等。主要的Python模块包括"toolbox.py"、"config.py"、"core_functional.py"和"crazy_functional.py"等,并且还使用了许多第三方库和模块实现相关功能。以下是每个程序文件的功能:
-
-| 文件名 | 文件功能 |
-| --- | --- |
-| check_proxy.py | 用于检查代理的正确性和可用性 |
-| colorful.py | 包含不同预设置颜色的常量,并用于多种UI元素 |
-| config.py | 用于全局配置的类 |
-| config_private.py | 与config.py文件一起使用的另一个配置文件,用于更改私密信息 |
-| core_functional.py | 包含一些TextFunctional类和基础功能函数 |
-| crazy_functional.py | 包含大量高级功能函数和实验性的功能函数 |
-| main.py | 程序的主入口,包含GUI主窗口和主要的UI管理功能 |
-| theme.py | 包含一些预设置主题的颜色 |
-| toolbox.py | 提供了一些有用的工具函数 |
-| crazy_functions\crazy_utils.py | 包含一些用于实现高级功能的辅助函数 |
-| crazy_functions\Latex全文润色.py | 实现了对LaTeX文件中全文的润色和格式化功能 |
-| crazy_functions\Latex全文翻译.py | 实现了对LaTeX文件中的内容进行翻译的功能 |
-| crazy_functions\_\_init\_\_.py | 用于导入crazy_functional.py中的功能函数 |
-| crazy_functions\下载arxiv论文翻译摘要.py | 从Arxiv上下载论文并提取重要信息 |
-| crazy_functions\代码重写为全英文_多线程.py | 针对中文Python文件,将其翻译为全英文 |
-| crazy_functions\总结word文档.py | 提取Word文件的重要内容来生成摘要 |
-| crazy_functions\批量Markdown翻译.py | 批量翻译Markdown文件 |
-| crazy_functions\批量总结PDF文档.py | 批量从PDF文件中提取摘要 |
-| crazy_functions\批量总结PDF文档pdfminer.py | 批量从PDF文件中提取摘要 |
-| crazy_functions\批量翻译PDF文档_多线程.py | 批量翻译PDF文件 |
-| crazy_functions\理解PDF文档内容.py | 批量分析PDF文件并提取摘要 |
-| crazy_functions\生成函数注释.py | 自动生成Python文件中函数的注释 |
-| crazy_functions\解析项目源代码.py | 解析并分析给定项目的源代码 |
-| crazy_functions\询问多个大语言模型.py | 向多个大语言模型询问输入文本并进行处理 |
-| crazy_functions\读文献写摘要.py | 根据用户输入读取文献内容并生成摘要 |
-| crazy_functions\谷歌检索小助手.py | 利用谷歌学术检索用户提供的论文信息并提取相关信息 |
-| crazy_functions\高级功能函数模板.py | 实现高级功能的模板函数 |
-| request_llm\bridge_all.py | 处理与LLM的交互 |
-| request_llm\bridge_chatglm.py | 使用ChatGLM模型进行聊天 |
-| request_llm\bridge_chatgpt.py | 实现对话生成的各项功能 |
-| request_llm\bridge_tgui.py | 在Websockets中与用户进行交互并生成文本输出 |
-
diff --git a/spaces/Flux9665/SpeechCloning/app.py b/spaces/Flux9665/SpeechCloning/app.py
deleted file mode 100644
index 0fc145d458c676d3dc679ba5841ffcdedf063860..0000000000000000000000000000000000000000
--- a/spaces/Flux9665/SpeechCloning/app.py
+++ /dev/null
@@ -1,276 +0,0 @@
-import os
-
-import gradio as gr
-import numpy as np
-import soundfile
-import soundfile as sf
-import torch
-from tqdm import tqdm
-
-os.system("git clone --branch v2.5 https://github.com/DigitalPhonetics/IMS-Toucan.git toucan_codebase")
-os.system("mv toucan_codebase/* .")
-
-from run_model_downloader import download_models
-
-download_models()
-
-from Preprocessing.TextFrontend import ArticulatoryCombinedTextFrontend
-from Preprocessing.AudioPreprocessor import AudioPreprocessor
-from TrainingInterfaces.Text_to_Spectrogram.AutoAligner.Aligner import Aligner
-from TrainingInterfaces.Text_to_Spectrogram.FastSpeech2.DurationCalculator import DurationCalculator
-from InferenceInterfaces.UtteranceCloner import UtteranceCloner
-from Preprocessing.articulatory_features import get_feature_to_index_lookup
-
-
-def float2pcm(sig, dtype='int16'):
- """
- https://gist.github.com/HudsonHuang/fbdf8e9af7993fe2a91620d3fb86a182
- """
- sig = np.asarray(sig)
- if sig.dtype.kind != 'f':
- raise TypeError("'sig' must be a float array")
- dtype = np.dtype(dtype)
- if dtype.kind not in 'iu':
- raise TypeError("'dtype' must be an integer type")
- i = np.iinfo(dtype)
- abs_max = 2 ** (i.bits - 1)
- offset = i.min + abs_max
- return (sig * abs_max + offset).clip(i.min, i.max).astype(dtype)
-
-
-class TTS_Interface:
-
- def __init__(self):
- self.device = "cuda" if torch.cuda.is_available() else "cpu"
-
- self.utterance_cloner = UtteranceCloner(model_id="Meta", device=self.device)
- self.speaker_path_lookup = {
- "Voice 1": "reference_audios/voice_1.flac",
- "Voice 2": "reference_audios/voice_2.wav",
- "Voice 3": "reference_audios/voice_3.wav",
- }
- self.acoustic_model = Aligner()
- self.acoustic_model.load_state_dict(torch.load("Models/Aligner/aligner.pt", map_location='cpu')["asr_model"])
- self.acoustic_model = self.acoustic_model.to(self.device)
- self.dc = DurationCalculator(reduction_factor=1)
- self.tf = ArticulatoryCombinedTextFrontend(language="en")
- example_audio, sr = soundfile.read("reference_audios/clone_me_5.wav")
- self.ap = AudioPreprocessor(input_sr=sr, output_sr=16000, )
-
- ## finetune aligner
- steps = 10
- tokens = list() # we need an ID sequence for training rather than a sequence of phonological features
- for vector in self.tf.string_to_tensor(
- "Betty Botter bought some butter, but she said the butters bitter. If I put it in my batter, it will make my batter bitter. But a bit of better butter will make my batter better."):
- if vector[get_feature_to_index_lookup()["word-boundary"]] == 0:
- # we don't include word boundaries when performing alignment, since they are not always present in audio.
- for phone in self.tf.phone_to_vector:
- if vector.numpy().tolist()[13:] == self.tf.phone_to_vector[phone][13:]:
- # the first 12 dimensions are for modifiers, so we ignore those when trying to find the phoneme in the ID lookup
- tokens.append(self.tf.phone_to_id[phone])
- # this is terribly inefficient, but it's fine
- break
- tokens = torch.LongTensor(tokens).squeeze().to(self.device)
- tokens_len = torch.LongTensor([len(tokens)]).to(self.device)
- mel = self.ap.audio_to_mel_spec_tensor(example_audio, normalize=True).transpose(0, 1).unsqueeze(0).to(self.device)
- mel.requires_grad = True
- mel_len = torch.LongTensor([len(mel[0])]).to(self.device)
- # actual fine-tuning starts here
- optim_asr = torch.optim.SGD(self.acoustic_model.parameters(), lr=0.1)
- self.acoustic_model.train()
- for _ in tqdm(list(range(steps))):
- pred = self.acoustic_model(mel)
- loss = self.acoustic_model.ctc_loss(pred.transpose(0, 1).log_softmax(2), tokens, mel_len, tokens_len)
- optim_asr.zero_grad()
- loss.backward()
- torch.nn.utils.clip_grad_norm_(self.acoustic_model.parameters(), 1.0)
- optim_asr.step()
- self.acoustic_model.eval()
- ## done finetuning
-
- reference_audio = "reference_audios/clone_me_5.wav"
- prompt = "Betty Botter bought some butter, but she said the butters bitter. If I put it in my batter, it will make my batter bitter. But a bit of better butter will make my batter better."
- text_list = prompt.replace(".", ".|").replace("?", "?|").replace("!", "!|").split("|")
- # we don't split on the punctuation marks because we want to retain them.
-
- self.split_audio(reference_audio, text_list)
- # at this point, split_1.wav, split_2.wav and split_3.wav should exist.
-
- self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_1.flac")
- self.part_1_voice_1 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_1.wav",
- reference_transcription=text_list[0],
- clone_speaker_identity=False,
- lang="en")
- self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_2.wav")
- self.part_1_voice_2 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_1.wav",
- reference_transcription=text_list[0],
- clone_speaker_identity=False,
- lang="en")
- self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_3.wav")
- self.part_1_voice_3 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_1.wav",
- reference_transcription=text_list[0],
- clone_speaker_identity=False,
- lang="en")
-
- self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_1.flac")
- self.part_2_voice_1 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_2.wav",
- reference_transcription=text_list[1],
- clone_speaker_identity=False,
- lang="en")
- self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_2.wav")
- self.part_2_voice_2 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_2.wav",
- reference_transcription=text_list[1],
- clone_speaker_identity=False,
- lang="en")
- self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_3.wav")
- self.part_2_voice_3 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_2.wav",
- reference_transcription=text_list[1],
- clone_speaker_identity=False,
- lang="en")
-
- self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_1.flac")
- self.part_3_voice_1 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_3.wav",
- reference_transcription=text_list[2],
- clone_speaker_identity=False,
- lang="en")
- self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_2.wav")
- self.part_3_voice_2 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_3.wav",
- reference_transcription=text_list[2],
- clone_speaker_identity=False,
- lang="en")
- self.utterance_cloner.tts.set_utterance_embedding("reference_audios/voice_3.wav")
- self.part_3_voice_3 = self.utterance_cloner.clone_utterance(path_to_reference_audio="split_3.wav",
- reference_transcription=text_list[2],
- clone_speaker_identity=False,
- lang="en")
-
- def read(self, _, speaker_1, speaker_2, speaker_3):
- reference_audio = "reference_audios/clone_me_5.wav"
-
- if speaker_1 == "Voice 1":
- part_1 = self.part_1_voice_1
- elif speaker_1 == "Voice 2":
- part_1 = self.part_1_voice_2
- elif speaker_1 == "Voice 3":
- part_1 = self.part_1_voice_3
-
- if speaker_2 == "Voice 1":
- part_2 = self.part_2_voice_1
- elif speaker_2 == "Voice 2":
- part_2 = self.part_2_voice_2
- elif speaker_2 == "Voice 3":
- part_2 = self.part_2_voice_3
-
- if speaker_3 == "Voice 1":
- part_3 = self.part_3_voice_1
- elif speaker_3 == "Voice 2":
- part_3 = self.part_3_voice_2
- elif speaker_3 == "Voice 3":
- part_3 = self.part_3_voice_3
-
- return "alignment.png", \
- reference_audio, \
- self.speaker_path_lookup["Voice 1"], \
- self.speaker_path_lookup["Voice 2"], \
- self.speaker_path_lookup["Voice 3"], \
- (24000, float2pcm(torch.cat([torch.tensor(part_1), torch.tensor(part_2), torch.tensor(part_3)], dim=0).numpy()))
-
- def split_audio(self, path_to_audio, text_list):
- # extract audio
- audio, sr = sf.read(path_to_audio)
- ap = AudioPreprocessor(input_sr=sr, output_sr=16000, melspec_buckets=80, hop_length=256, n_fft=1024, cut_silence=False)
- norm_wave = ap.audio_to_wave_tensor(normalize=True, audio=audio)
- melspec = ap.audio_to_mel_spec_tensor(audio=norm_wave, normalize=False, explicit_sampling_rate=16000).transpose(0, 1)
-
- # extract phonemes
- lines = list()
- self.tf.use_word_boundaries = False # this causes problems when splitting otherwise
- for segment in text_list:
- if segment.strip() != "":
- lines.append(self.tf.string_to_tensor(segment, handle_missing=False).squeeze())
- self.tf.use_word_boundaries = True
-
- # postprocess phonemes: [~ sentence ~ #] --> [sentence ~] except for the first one, which is [~ sentence ~]
- processed_lines = list()
- for index, line in enumerate(lines):
- if index == 0:
- processed_lines.append(line[:-1])
- else:
- processed_lines.append(line[1:-1])
- lines = processed_lines
- joined_phonemes = torch.cat(lines, dim=0)
-
- # get durations of each phone in audio as average of an ensemble
- alignment_paths = list()
- ensemble_of_durations = list()
- for ensemble in range(1):
- alignment_paths.append(self.acoustic_model.inference(mel=melspec.to(self.device),
- tokens=joined_phonemes.to(self.device),
- save_img_for_debug="alignment.png" if ensemble == 0 else None,
- return_ctc=False))
- for alignment_path in alignment_paths:
- ensemble_of_durations.append(self.dc(torch.LongTensor(alignment_path), vis=None).squeeze())
- durations = list()
- for i, _ in enumerate(ensemble_of_durations[0]):
- duration_of_phone = list()
- for ensemble_member in ensemble_of_durations:
- duration_of_phone.append(ensemble_member.squeeze()[i])
- durations.append(sum(duration_of_phone) / len(duration_of_phone))
-
- # cut audio according to duration sum of each line in transcript
- line_lens = [len(x) for x in lines]
- index = 0
- segment_durations = list()
- for num_phones in line_lens:
- segment_durations.append(sum(durations[index: index + num_phones]))
- index += num_phones
- spec_to_wave_factor = len(norm_wave) / sum(segment_durations)
- wave_segment_lens = [int(x * spec_to_wave_factor) for x in segment_durations]
- start_index = 0
- wave_segments = list()
- for index, segment_len in enumerate(wave_segment_lens):
- if index == len(wave_segment_lens) - 1:
- wave_segments.append(norm_wave[start_index:])
- else:
- wave_segments.append(norm_wave[start_index: start_index + segment_len])
- start_index += segment_len
-
- # write the audio segments into new files
- for index, wave_segment in enumerate(wave_segments):
- sf.write(f"split_{index + 1}.wav", wave_segment, 16000)
-
-
-meta_model = TTS_Interface()
-article = "
This is still a work in progress, models will be exchanged for better ones as soon as they are done. More diverse training data can help with more exact cloning. For example we are still trying to incorporate more singing data.
- )
-}
diff --git a/spaces/KPCGD/bingo/src/lib/bots/bing/sr.ts b/spaces/KPCGD/bingo/src/lib/bots/bing/sr.ts
deleted file mode 100644
index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000
--- a/spaces/KPCGD/bingo/src/lib/bots/bing/sr.ts
+++ /dev/null
@@ -1,106 +0,0 @@
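-// Pick the first available SpeechRecognition implementation (standard or vendor-prefixed); undefined when there is no window object (e.g. server-side).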
-// @ts-ignore
-const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? (
- // @ts-ignore
- window.SpeechRecognition ||
- window.webkitSpeechRecognition ||
- // @ts-ignore
- window.mozSpeechRecognition ||
- // @ts-ignore
- window.msSpeechRecognition ||
- // @ts-ignore
- window.oSpeechRecognition
-) as typeof webkitSpeechRecognition : undefined
-
-type subscriber = (msg: string, command?: string) => void
-
-export class SR {
- recognition?: SpeechRecognition
- onchange?: subscriber
- transcript: boolean = false
- listening: boolean = false
- private commandsRe?: RegExp
- constructor(commands: string[]) {
- this.recognition = SpeechRecognitionPolyfill ? new SpeechRecognitionPolyfill() : undefined
- if (!this.recognition) {
- return
- }
- this.configuration('zh-CN')
- if (commands.length) {
- this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`)
- }
- this.recognition.onresult = this.speechRecognition
- this.recognition.onerror = (err) => {
- console.log('err', err.error)
- this.stop()
- }
- this.recognition.onend = () => {
- if (this.recognition && this.listening) {
- this.recognition.start()
- }
- }
- }
-
- speechRecognition = (event: SpeechRecognitionEvent) => {
- if (!this.listening) return
- for (var i = event.resultIndex; i < event.results.length; i++) {
- let result = event.results[i]
- if (result.isFinal) {
- var alt = result[0]
- const text = alt.transcript.trim()
- if (this.commandsRe && this.commandsRe.test(text)) {
- return this.onchange?.('', RegExp.$1)
- }
- if (!this.transcript) return
- this.onchange?.(text)
- }
- }
- }
-
- private configuration = async (lang: string = 'zh-CN') => {
- return new Promise((resolve) => {
- if (this.recognition) {
- this.recognition.continuous = true
- this.recognition.lang = lang
- this.recognition.onstart = resolve
- }
- })
- }
-
- start = async () => {
- if (this.recognition && !this.listening) {
- await this.recognition.start()
- this.transcript = true
- this.listening = true
- }
- }
-
- stop = () => {
- if (this.recognition) {
- this.recognition.stop()
- this.transcript = false
- this.listening = false
- }
- }
-
-
- pause = () => {
- if (this.recognition) {
- this.transcript = false
- }
- }
-
- resume = () => {
- if (this.recognition) {
- this.transcript = true
- }
- }
-
- abort = () => {
- if (this.recognition && this.transcript) {
- this.recognition.abort()
- this.transcript = false
- this.listening = false
- }
- }
-}
-
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/backups.py b/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/backups.py
deleted file mode 100644
index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/backups.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import os
-import shutil
-import hashlib
-import time
-import base64
-
-
-LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
-WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
-GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
-def import_google_drive_backup():
- print("Importing Google Drive backup...")
- weights_exist = False
- for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH):
- for filename in files:
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')):
- backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
- print(f'Imported file from Google Drive backup: {filename}')
- elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'):
- weights_exist = True
- weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights')))
- weights_folderpath = os.path.dirname(weights_filepath)
- if not os.path.exists(weights_folderpath):
- os.makedirs(weights_folderpath)
- print(f'Created weights folder: {weights_folderpath}', flush=True)
- shutil.copy2(filepath, weights_filepath) # copy file with metadata
- print(f'Imported file from weights: {filename}')
- if weights_exist:
- print("Copied weights from Google Drive backup to local weights folder.")
- else:
- print("No weights found in Google Drive backup.")
- print("Google Drive backup import completed.")
-
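-# Content-hash helper (MD5 over file chunks); note the backup loop below detects changes by file modification time rather than by hash.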
-def get_md5_hash(file_path):
- hash_md5 = hashlib.md5()
- with open(file_path, "rb") as f:
- for chunk in iter(lambda: f.read(4096), b""):
- hash_md5.update(chunk)
- return hash_md5.hexdigest()
-
-def copy_weights_folder_to_drive():
- destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights')
- try:
- if not os.path.exists(destination_folder):
- os.makedirs(destination_folder)
-
- num_copied = 0
- for filename in os.listdir(WEIGHTS_FOLDER):
- if filename.endswith('.pth'):
- source_file = os.path.join(WEIGHTS_FOLDER, filename)
- destination_file = os.path.join(destination_folder, filename)
- if not os.path.exists(destination_file):
- shutil.copy2(source_file, destination_file)
- num_copied += 1
- print(f"Copied {filename} to Google Drive!")
-
- if num_copied == 0:
- print("No new finished models found for copying.")
- else:
- print(f"Finished copying {num_copied} files to Google Drive!")
-
- except Exception as e:
- print(f"An error occurred while copying weights: {str(e)}")
- # You can log the error or take appropriate actions here.
-
-def backup_files():
- print("\nStarting backup loop...")
- last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt')
- fully_updated = False # boolean to track if all files are up to date
-
- while True:
- try:
- updated = False # flag to check if any files were updated
- last_backup_timestamps = {}
-
- try:
- with open(last_backup_timestamps_path, 'r') as f:
- last_backup_timestamps = dict(line.strip().split(':') for line in f)
- except FileNotFoundError:
- pass # File does not exist yet, which is fine
-
- for root, dirs, files in os.walk(LOGS_FOLDER):
- for filename in files:
- if filename != 'last_backup_timestamps.txt':
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
- # check if file has changed since last backup
- last_backup_timestamp = last_backup_timestamps.get(filepath)
- current_timestamp = os.path.getmtime(filepath)
- if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp:
- shutil.copy2(filepath, backup_filepath) # copy file with metadata
- last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp
- if last_backup_timestamp is None:
- print(f'Backed up file: {filename}')
- else:
- print(f'Updating backed up file: {filename}')
- updated = True
- fully_updated = False # if a file is updated, all files are not up to date
-
- # check if any files were deleted in Colab and delete them from the backup drive
- for filepath in list(last_backup_timestamps.keys()):
- if not os.path.exists(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- if os.path.exists(backup_filepath):
- os.remove(backup_filepath)
- print(f'Deleted file: {filepath}')
- del last_backup_timestamps[filepath]
- updated = True
- fully_updated = False # if a file is deleted, all files are not up to date
-
- if not updated and not fully_updated:
- print("Files are up to date.")
- fully_updated = True # if all files are up to date, set the boolean to True
- copy_weights_folder_to_drive()
- sleep_time = 15
- else:
- sleep_time = 0.1
-
- with open(last_backup_timestamps_path, 'w') as f:
- for filepath, timestamp in last_backup_timestamps.items():
- f.write(f'{filepath}:{timestamp}\n')
-
- time.sleep(sleep_time) # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups
-
- except Exception as e:
- print(f"An error occurred: {str(e)}")
- # You can log the error or take appropriate actions here.
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/option.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/option.py
deleted file mode 100644
index f66c600b84e0404c7937bacf8653776ce9be74c0..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg2mel/train/option.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Default parameters which will be imported by solver
-default_hparas = {
- 'GRAD_CLIP': 5.0, # Grad. clip threshold
- 'PROGRESS_STEP': 100, # Std. output refresh freq.
- # Decode steps for objective validation (step = ratio*input_txt_len)
- 'DEV_STEP_RATIO': 1.2,
- # Number of examples (alignment/text) to show in tensorboard
- 'DEV_N_EXAMPLE': 4,
- 'TB_FLUSH_FREQ': 180 # Update frequency of tensorboard (secs)
-}
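-
-# A minimal sketch of how a solver might consume these defaults (`user_config` below is a
-# hypothetical user-supplied dict, not something defined in this module):
-#
-#     hparas = {**default_hparas, **user_config}  # user values override the defaults
-#     grad_clip = hparas['GRAD_CLIP']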
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/guidml.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/guidml.py
deleted file mode 100644
index f883e25cd2c981d8a469ff5d965a2dceeb2d963e..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/gui/guidml.py
+++ /dev/null
@@ -1,710 +0,0 @@
-"""
-0416后的更新:
- 引入config中half
- 重建npy而不用填写
- v2支持
- 无f0模型支持
- 修复
-
- int16:
- 增加无索引支持
- f0算法改harvest(怎么看就只有这个会影响CPU占用),但是不这么改效果不好
-"""
-import os, sys, traceback, re
-
-import json
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from assets.configs.config import Config
-
-Config = Config()
-
-import torch_directml
-import PySimpleGUI as sg
-import sounddevice as sd
-import noisereduce as nr
-import numpy as np
-from fairseq import checkpoint_utils
-import librosa, torch, pyworld, faiss, time, threading
-import torch.nn.functional as F
-import torchaudio.transforms as tat
-import scipy.signal as signal
-
-
-# import matplotlib.pyplot as plt
-from lib.infer.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from assets.i18n.i18n import I18nAuto
-
-i18n = I18nAuto()
-device = torch_directml.device(torch_directml.default_device())
-current_dir = os.getcwd()
-
-
-class RVC:
- def __init__(
- self, key, hubert_path, pth_path, index_path, npy_path, index_rate
- ) -> None:
- """
- 初始化
- """
- try:
- self.f0_up_key = key
- self.time_step = 160 / 16000 * 1000
- self.f0_min = 50
- self.f0_max = 1100
- self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
- self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
- self.sr = 16000
- self.window = 160
- if index_rate != 0:
- self.index = faiss.read_index(index_path)
- # self.big_npy = np.load(npy_path)
- self.big_npy = self.index.reconstruct_n(0, self.index.ntotal)
- print("index search enabled")
- self.index_rate = index_rate
- model_path = hubert_path
- print("load model(s) from {}".format(model_path))
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- [model_path],
- suffix="",
- )
- self.model = models[0]
- self.model = self.model.to(device)
- if Config.is_half:
- self.model = self.model.half()
- else:
- self.model = self.model.float()
- self.model.eval()
- cpt = torch.load(pth_path, map_location="cpu")
- self.tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- self.if_f0 = cpt.get("f0", 1)
- self.version = cpt.get("version", "v1")
- if self.version == "v1":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif self.version == "v2":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del self.net_g.enc_q
- print(self.net_g.load_state_dict(cpt["weight"], strict=False))
- self.net_g.eval().to(device)
- if Config.is_half:
- self.net_g = self.net_g.half()
- else:
- self.net_g = self.net_g.float()
- except:
- print(traceback.format_exc())
-
- def get_f0(self, x, f0_up_key, inp_f0=None):
- x_pad = 1
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 frames per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
- f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
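-        # Coarse f0 quantization: convert Hz to mel (1127 * ln(1 + f0/700)), then map the
-        # voiced range [f0_mel_min, f0_mel_max] linearly onto bins 1-255; unvoiced frames
-        # (f0 = 0) end up in bin 1.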
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(np.int64)  # np.int was removed in NumPy >= 1.24
- return f0_coarse, f0bak # 1-0
-
- def infer(self, feats: torch.Tensor) -> np.ndarray:
- """
- 推理函数
- """
- audio = feats.clone().cpu().numpy()
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- if Config.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- inputs = {
- "source": feats.to(device),
- "padding_mask": padding_mask.to(device),
- "output_layer": 9 if self.version == "v1" else 12,
- }
- torch.cuda.synchronize()
- with torch.no_grad():
- logits = self.model.extract_features(**inputs)
- feats = (
- self.model.final_proj(logits[0]) if self.version == "v1" else logits[0]
- )
-
-        #### optional index-based feature retrieval
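-        # For every HuBERT frame, look up the k=8 nearest training features in the faiss
-        # index, weight them by inverse squared distance, and blend the retrieved features
-        # with the live ones according to index_rate.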
- try:
- if (
- hasattr(self, "index")
- and hasattr(self, "big_npy")
- and self.index_rate != 0
- ):
- npy = feats[0].cpu().numpy().astype("float32")
- score, ix = self.index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
- if Config.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate
- + (1 - self.index_rate) * feats
- )
- else:
- print("index search FAIL or disabled")
- except:
- traceback.print_exc()
- print("index search FAIL")
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- torch.cuda.synchronize()
- print(feats.shape)
- if self.if_f0 == 1:
- pitch, pitchf = self.get_f0(audio, self.f0_up_key)
-            p_len = min(feats.shape[1], 13000, pitch.shape[0])  # cap the length to avoid GPU out-of-memory
- else:
- pitch, pitchf = None, None
-            p_len = min(feats.shape[1], 13000)  # cap the length to avoid GPU out-of-memory
- torch.cuda.synchronize()
- # print(feats.shape,pitch.shape)
- feats = feats[:, :p_len, :]
- if self.if_f0 == 1:
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
- pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
- p_len = torch.LongTensor([p_len]).to(device)
- ii = 0 # sid
- sid = torch.LongTensor([ii]).to(device)
- with torch.no_grad():
- if self.if_f0 == 1:
- infered_audio = (
- self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
- .data.cpu()
- .float()
- )
- else:
- infered_audio = (
- self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float()
- )
- torch.cuda.synchronize()
- return infered_audio
-
-
-class GUIConfig:
- def __init__(self) -> None:
- self.hubert_path: str = ""
- self.pth_path: str = ""
- self.index_path: str = ""
- self.npy_path: str = ""
- self.pitch: int = 12
- self.samplerate: int = 44100
- self.block_time: float = 1.0 # s
- self.buffer_num: int = 1
- self.threhold: int = -30
- self.crossfade_time: float = 0.08
- self.extra_time: float = 0.04
- self.I_noise_reduce = False
- self.O_noise_reduce = False
- self.index_rate = 0.3
-
-
-class GUI:
- def __init__(self) -> None:
- self.config = GUIConfig()
- self.flag_vc = False
-
- self.launcher()
-
- def load(self):
- (
- input_devices,
- output_devices,
- input_devices_indices,
- output_devices_indices,
- ) = self.get_devices()
- try:
- with open("values1.json", "r") as j:
- data = json.load(j)
- except:
- with open("values1.json", "w") as j:
- data = {
- "pth_path": "",
- "index_path": "",
- "sg_input_device": input_devices[
- input_devices_indices.index(sd.default.device[0])
- ],
- "sg_output_device": output_devices[
- output_devices_indices.index(sd.default.device[1])
- ],
- "threhold": "-45",
- "pitch": "0",
- "index_rate": "0",
- "block_time": "1",
- "crossfade_length": "0.04",
- "extra_time": "1",
- }
- return data
-
- def launcher(self):
- data = self.load()
- sg.theme("LightBlue3")
- input_devices, output_devices, _, _ = self.get_devices()
- layout = [
- [
- sg.Frame(
- title=i18n("Load model"),
- layout=[
- [
- sg.Input(
- default_text="hubert_base.pt",
- key="hubert_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("Hubert Model"),
- initial_folder=os.path.join(os.getcwd()),
- file_types=(("pt files", "*.pt"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("pth_path", ""),
- key="pth_path",
- ),
- sg.FileBrowse(
- i18n("Select the .pth file"),
- initial_folder=os.path.join(os.getcwd(), "weights"),
- file_types=(("weight files", "*.pth"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("index_path", ""),
- key="index_path",
- ),
- sg.FileBrowse(
- i18n("Select the .index file"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=(("index files", "*.index"),),
- ),
- ],
- [
- sg.Input(
- default_text="你不需要填写这个You don't need write this.",
- key="npy_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("Select the .npy file"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=(("feature files", "*.npy"),),
- ),
- ],
- ],
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Input device")),
- sg.Combo(
- input_devices,
- key="sg_input_device",
- default_value=data.get("sg_input_device", ""),
- ),
- ],
- [
- sg.Text(i18n("Output device")),
- sg.Combo(
- output_devices,
- key="sg_output_device",
- default_value=data.get("sg_output_device", ""),
- ),
- ],
- ],
- title=i18n("Audio device (please use the same type of driver)"),
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Response threshold")),
- sg.Slider(
- range=(-60, 0),
- key="threhold",
- resolution=1,
- orientation="h",
- default_value=data.get("threhold", ""),
- ),
- ],
- [
- sg.Text(i18n("Pitch settings")),
- sg.Slider(
- range=(-24, 24),
- key="pitch",
- resolution=1,
- orientation="h",
- default_value=data.get("pitch", ""),
- ),
- ],
- [
- sg.Text(i18n("Index Rate")),
- sg.Slider(
- range=(0.0, 1.0),
- key="index_rate",
- resolution=0.01,
- orientation="h",
- default_value=data.get("index_rate", ""),
- ),
- ],
- ],
- title=i18n("General settings"),
- ),
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("Sample length")),
- sg.Slider(
- range=(0.1, 3.0),
- key="block_time",
- resolution=0.1,
- orientation="h",
- default_value=data.get("block_time", ""),
- ),
- ],
- [
- sg.Text(i18n("Fade length")),
- sg.Slider(
- range=(0.01, 0.15),
- key="crossfade_length",
- resolution=0.01,
- orientation="h",
- default_value=data.get("crossfade_length", ""),
- ),
- ],
- [
- sg.Text(i18n("Extra推理时长")),
- sg.Slider(
- range=(0.05, 3.00),
- key="extra_time",
- resolution=0.01,
- orientation="h",
- default_value=data.get("extra_time", ""),
- ),
- ],
- [
- sg.Checkbox(i18n("Input noise reduction"), key="I_noise_reduce"),
- sg.Checkbox(i18n("Output noise reduction"), key="O_noise_reduce"),
- ],
- ],
- title=i18n("Performance settings"),
- ),
- ],
- [
- sg.Button(i18n("开始音频Convert"), key="start_vc"),
- sg.Button(i18n("停止音频Convert"), key="stop_vc"),
- sg.Text(i18n("Inference time (ms):")),
- sg.Text("0", key="infer_time"),
- ],
- ]
- self.window = sg.Window("RVC - GUI", layout=layout)
- self.event_handler()
-
- def event_handler(self):
- while True:
- event, values = self.window.read()
- if event == sg.WINDOW_CLOSED:
- self.flag_vc = False
- exit()
- if event == "start_vc" and self.flag_vc == False:
- if self.set_values(values) == True:
- print("using_cuda:" + str(torch.cuda.is_available()))
- self.start_vc()
- settings = {
- "pth_path": values["pth_path"],
- "index_path": values["index_path"],
- "sg_input_device": values["sg_input_device"],
- "sg_output_device": values["sg_output_device"],
- "threhold": values["threhold"],
- "pitch": values["pitch"],
- "index_rate": values["index_rate"],
- "block_time": values["block_time"],
- "crossfade_length": values["crossfade_length"],
- "extra_time": values["extra_time"],
- }
- with open("values1.json", "w") as j:
- json.dump(settings, j)
- if event == "stop_vc" and self.flag_vc == True:
- self.flag_vc = False
-
- def set_values(self, values):
- if len(values["pth_path"].strip()) == 0:
- sg.popup(i18n("Select the pth file"))
- return False
- if len(values["index_path"].strip()) == 0:
- sg.popup(i18n("Select the index file"))
- return False
- pattern = re.compile("[^\x00-\x7F]+")
- if pattern.findall(values["hubert_path"]):
- sg.popup(i18n("The hubert model path must not contain Chinese characters"))
- return False
- if pattern.findall(values["pth_path"]):
- sg.popup(i18n("The pth file path must not contain Chinese characters."))
- return False
- if pattern.findall(values["index_path"]):
- sg.popup(i18n("The index file path must not contain Chinese characters."))
- return False
- self.set_devices(values["sg_input_device"], values["sg_output_device"])
- self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt")
- self.config.pth_path = values["pth_path"]
- self.config.index_path = values["index_path"]
- self.config.npy_path = values["npy_path"]
- self.config.threhold = values["threhold"]
- self.config.pitch = values["pitch"]
- self.config.block_time = values["block_time"]
- self.config.crossfade_time = values["crossfade_length"]
- self.config.extra_time = values["extra_time"]
- self.config.I_noise_reduce = values["I_noise_reduce"]
- self.config.O_noise_reduce = values["O_noise_reduce"]
- self.config.index_rate = values["index_rate"]
- return True
-
- def start_vc(self):
- torch.cuda.empty_cache()
- self.flag_vc = True
- self.block_frame = int(self.config.block_time * self.config.samplerate)
- self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate)
- self.sola_search_frame = int(0.012 * self.config.samplerate)
-        self.delay_frame = int(0.01 * self.config.samplerate)  # reserve a short lead-in (0.01 s)
- self.extra_frame = int(self.config.extra_time * self.config.samplerate)
- self.rvc = None
- self.rvc = RVC(
- self.config.pitch,
- self.config.hubert_path,
- self.config.pth_path,
- self.config.index_path,
- self.config.npy_path,
- self.config.index_rate,
- )
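-        # Rolling input buffer layout: [extra context | crossfade | SOLA search | newest block];
-        # each audio callback shifts it left by one block and appends the incoming frames.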
- self.input_wav: np.ndarray = np.zeros(
- self.extra_frame
- + self.crossfade_frame
- + self.sola_search_frame
- + self.block_frame,
- dtype="float32",
- )
- self.output_wav: torch.Tensor = torch.zeros(
- self.block_frame, device=device, dtype=torch.float32
- )
- self.sola_buffer: torch.Tensor = torch.zeros(
- self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_in_window: torch.Tensor = torch.linspace(
- 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_out_window: torch.Tensor = 1 - self.fade_in_window
- self.resampler1 = tat.Resample(
- orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32
- )
- self.resampler2 = tat.Resample(
- orig_freq=self.rvc.tgt_sr,
- new_freq=self.config.samplerate,
- dtype=torch.float32,
- )
- thread_vc = threading.Thread(target=self.soundinput)
- thread_vc.start()
-
- def soundinput(self):
- """
- 接受音频输入
- """
- with sd.Stream(
- channels=2,
- callback=self.audio_callback,
- blocksize=self.block_frame,
- samplerate=self.config.samplerate,
- dtype="float32",
- ):
- while self.flag_vc:
- time.sleep(self.config.block_time)
- print("Audio block passed.")
- print("ENDing VC")
-
- def audio_callback(
- self, indata: np.ndarray, outdata: np.ndarray, frames, times, status
- ):
- """
- 音频处理
- """
- start_time = time.perf_counter()
- indata = librosa.to_mono(indata.T)
- if self.config.I_noise_reduce:
- indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate)
-
- """noise gate"""
- frame_length = 2048
- hop_length = 1024
- rms = librosa.feature.rms(
- y=indata, frame_length=frame_length, hop_length=hop_length
- )
- db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold
- # print(rms.shape,db.shape,db)
- for i in range(db_threhold.shape[0]):
- if db_threhold[i]:
- indata[i * hop_length : (i + 1) * hop_length] = 0
- self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata)
-
- # infer
- print("input_wav:" + str(self.input_wav.shape))
- # print('infered_wav:'+str(infer_wav.shape))
- infer_wav: torch.Tensor = self.resampler2(
- self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav)))
- )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to(
- device
- )
- print("infer_wav:" + str(infer_wav.shape))
-
- # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC
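-        # Search, within sola_search_frame samples, for the offset where the newly inferred
-        # audio best lines up with the tail of the previous block: maximize the normalized
-        # cross-correlation between the stored sola_buffer and candidate windows of infer_wav.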
- cor_nom = F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame],
- self.sola_buffer[None, None, :],
- )
- cor_den = torch.sqrt(
- F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame]
- ** 2,
- torch.ones(1, 1, self.crossfade_frame, device=device),
- )
- + 1e-8
- )
- sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0])
- print("sola offset: " + str(int(sola_offset)))
-
- # crossfade
- self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame]
- self.output_wav[: self.crossfade_frame] *= self.fade_in_window
- self.output_wav[: self.crossfade_frame] += self.sola_buffer[:]
- if sola_offset < self.sola_search_frame:
- self.sola_buffer[:] = (
- infer_wav[
- -self.sola_search_frame
- - self.crossfade_frame
- + sola_offset : -self.sola_search_frame
- + sola_offset
- ]
- * self.fade_out_window
- )
- else:
- self.sola_buffer[:] = (
- infer_wav[-self.crossfade_frame :] * self.fade_out_window
- )
-
- if self.config.O_noise_reduce:
- outdata[:] = np.tile(
- nr.reduce_noise(
- y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate
- ),
- (2, 1),
- ).T
- else:
- outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy()
- total_time = time.perf_counter() - start_time
- self.window["infer_time"].update(int(total_time * 1000))
- print("infer time:" + str(total_time))
-
- def get_devices(self, update: bool = True):
- """获取设备列表"""
- if update:
- sd._terminate()
- sd._initialize()
- devices = sd.query_devices()
- hostapis = sd.query_hostapis()
- for hostapi in hostapis:
- for device_idx in hostapi["devices"]:
- devices[device_idx]["hostapi_name"] = hostapi["name"]
- input_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_output_channels"] > 0
- ]
- input_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_output_channels"] > 0
- ]
- return (
- input_devices,
- output_devices,
- input_devices_indices,
- output_devices_indices,
- )
-
- def set_devices(self, input_device, output_device):
- """设置输出设备"""
- (
- input_devices,
- output_devices,
- input_device_indices,
- output_device_indices,
- ) = self.get_devices()
- sd.default.device[0] = input_device_indices[input_devices.index(input_device)]
- sd.default.device[1] = output_device_indices[
- output_devices.index(output_device)
- ]
- print("input device:" + str(sd.default.device[0]) + ":" + str(input_device))
- print("output device:" + str(sd.default.device[1]) + ":" + str(output_device))
-
-
-gui = GUI()
diff --git a/spaces/Libra7578/Image-to-video/app.py b/spaces/Libra7578/Image-to-video/app.py
deleted file mode 100644
index 27939ccec58d8caa167c54383e5d7fbdb5114ad3..0000000000000000000000000000000000000000
--- a/spaces/Libra7578/Image-to-video/app.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from huggingface_hub import snapshot_download
-
-from modelscope.pipelines import pipeline
-from modelscope.outputs import OutputKeys
-import pathlib
-
-model_dir = pathlib.Path('weights')
-snapshot_download('damo-vilab/modelscope-damo-text-to-video-synthesis',
- repo_type='model', local_dir=model_dir)
-
-pipe = pipeline('text-to-video-synthesis', model_dir.as_posix())
-test_text = {
- 'text': 'A panda eating bamboo on a rock.',
- }
-output_video_path = pipe(test_text,)[OutputKeys.OUTPUT_VIDEO]
-print('output_video_path:', output_video_path)
-
diff --git "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" "b/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py"
deleted file mode 100644
index fc65f5c8e9337c91ae34d6ec1a98ad1accc1ea0c..0000000000000000000000000000000000000000
--- "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py"
+++ /dev/null
@@ -1,149 +0,0 @@
-from toolbox import update_ui, promote_file_to_downloadzone, gen_time_str
-from toolbox import CatchException, report_execption, write_results_to_file
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from .crazy_utils import read_and_clean_pdf_text
-from .crazy_utils import input_clipping
-
-
-
-def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- file_write_buffer = []
- for file_name in file_manifest:
- print('begin analysis on:', file_name)
-        ############################## <Step 0: split the PDF> ##################################
-        # Recursively split the PDF; each chunk (ideally a complete section such as the
-        # introduction or the experiments, split further if necessary) must stay under 2500 tokens.
-        file_content, page_one = read_and_clean_pdf_text(file_name) # (try to) split the PDF by section
- file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
- page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
-
- TOKEN_LIMIT_PER_FRAGMENT = 2500
-
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
- paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
- txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
- page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
- txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
-        # For better results, strip everything after the Introduction (if present)
- paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
-
-        ############################## <Step 1: extract high-value information from the abstract into history> ##################################
- final_results = []
- final_results.append(paper_meta)
-
-        ############################## <Step 2: iterate over the whole paper and distill key information> ##################################
-        i_say_show_user = f'首先你在中文语境下通读整篇论文。'; gpt_say = "[Local Message] 收到。"           # user-facing prompt
-        chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[])    # refresh the UI
-
- iteration_results = []
-        last_iteration_result = paper_meta  # the initial value is the abstract/metadata
- MAX_WORD_TOTAL = 4096 * 0.7
- n_fragment = len(paper_fragments)
-        if n_fragment >= 20: print('The paper is extremely long; the summary may fall short of expectations')
- for i in range(n_fragment):
- NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment
- i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i]}"
- i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i][:200]}"
-            gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user,  # i_say = the actual question sent to the model; i_say_show_user = the question shown to the user
-                                                                               llm_kwargs, chatbot,
-                                                                               history=["The main idea of the previous section is?", last_iteration_result],  # carry over the previous iteration's result
-                                                                               sys_prompt="Extract the main idea of this section with Chinese."  # prompt for this step
- )
- iteration_results.append(gpt_say)
- last_iteration_result = gpt_say
-
-        ############################## <Step 3: organize the history and produce the final summary> ##################################
- final_results.extend(iteration_results)
- final_results.append(f'Please conclude this paper discussed above。')
- # This prompt is from https://github.com/kaixindelele/ChatPaper/blob/main/chat_paper.py
- NUM_OF_WORD = 1000
- i_say = """
-1. Mark the title of the paper (with Chinese translation)
-2. list all the authors' names (use English)
-3. mark the first author's affiliation (output Chinese translation only)
-4. mark the keywords of this article (use English)
-5. link to the paper, Github code link (if available, fill in Github:None if not)
-6. summarize according to the following four points.Be sure to use Chinese answers (proper nouns need to be marked in English)
- - (1):What is the research background of this article?
- - (2):What are the past methods? What are the problems with them? Is the approach well motivated?
- - (3):What is the research methodology proposed in this paper?
- - (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals?
-Follow the format of the output that follows:
-1. Title: xxx\n\n
-2. Authors: xxx\n\n
-3. Affiliation: xxx\n\n
-4. Keywords: xxx\n\n
-5. Urls: xxx or xxx , xxx \n\n
-6. Summary: \n\n
- - (1):xxx;\n
- - (2):xxx;\n
- - (3):xxx;\n
- - (4):xxx.\n\n
-Be sure to use Chinese answers (proper nouns need to be marked in English), statements as concise and academic as possible,
-do not have too much repetitive information, numerical values using the original numbers.
- """
- # This prompt is from https://github.com/kaixindelele/ChatPaper/blob/main/chat_paper.py
- file_write_buffer.extend(final_results)
- i_say, final_results = input_clipping(i_say, final_results, max_token_limit=2000)
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say, inputs_show_user='开始最终总结',
- llm_kwargs=llm_kwargs, chatbot=chatbot, history=final_results,
- sys_prompt= f"Extract the main idea of this paper with less than {NUM_OF_WORD} Chinese characters"
- )
- final_results.append(gpt_say)
- file_write_buffer.extend([i_say, gpt_say])
-        ############################## <Step 4: enforce a token cap> ##################################
- _, final_results = input_clipping("", final_results, max_token_limit=3200)
-        yield from update_ui(chatbot=chatbot, history=final_results) # note: the history is replaced here
-
- res = write_results_to_file(file_write_buffer, file_name=gen_time_str())
- promote_file_to_downloadzone(res.split('\t')[-1], chatbot=chatbot)
-        yield from update_ui(chatbot=chatbot, history=final_results) # refresh the UI
-
-
-@CatchException
-def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- import glob, os
-
-    # Basic info: what the plugin does and who contributed it
- chatbot.append([
- "函数插件功能?",
- "批量总结PDF文档。函数插件贡献者: ValeriaWong,Eralien"])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # Try to import dependencies; if any are missing, suggest how to install them
- try:
- import fitz
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Clear the history to avoid overflowing the input
- history = []
-
-    # Validate the input argument; exit early if none was given
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Build the list of files to process
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
-
-    # If no files were found
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或.pdf文件: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Start the actual task
- yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
diff --git a/spaces/MUmairAB/English-to-French/README.md b/spaces/MUmairAB/English-to-French/README.md
deleted file mode 100644
index 4d2749f56236bf33ce8d0ecc812daaf46061ed42..0000000000000000000000000000000000000000
--- a/spaces/MUmairAB/English-to-French/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: English To French
-emoji: 🏢
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MVV/3dTopDenoising/models/__init__.py b/spaces/MVV/3dTopDenoising/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Mahiruoshi/BangDream-Bert-VITS2/bert_gen.py b/spaces/Mahiruoshi/BangDream-Bert-VITS2/bert_gen.py
deleted file mode 100644
index 25cd7d97bafa02c514d0e1a34621546eac10da53..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/BangDream-Bert-VITS2/bert_gen.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import torch
-from multiprocessing import Pool
-import commons
-import utils
-from tqdm import tqdm
-from text import cleaned_text_to_sequence, get_bert
-import argparse
-import torch.multiprocessing as mp
-
-
-def process_line(line):
- rank = mp.current_process()._identity
- rank = rank[0] if len(rank) > 0 else 0
- if torch.cuda.is_available():
- gpu_id = rank % torch.cuda.device_count()
- device = torch.device(f"cuda:{gpu_id}")
- wav_path, _, language_str, text, phones, tone, word2ph = line.strip().split("|")
- phone = phones.split(" ")
- tone = [int(i) for i in tone.split(" ")]
- word2ph = [int(i) for i in word2ph.split(" ")]
- word2ph = [i for i in word2ph]
- phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
-
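-    # A blank token (0) is interspersed around every symbol below, so each word's phone
-    # count doubles and the first word absorbs the single extra leading blank.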
- phone = commons.intersperse(phone, 0)
- tone = commons.intersperse(tone, 0)
- language = commons.intersperse(language, 0)
- for i in range(len(word2ph)):
- word2ph[i] = word2ph[i] * 2
- word2ph[0] += 1
-
- bert_path = wav_path.replace(".wav", ".bert.pt")
-
- try:
- bert = torch.load(bert_path)
- assert bert.shape[-1] == len(phone)
- except Exception:
- bert = get_bert(text, word2ph, language_str, device)
- assert bert.shape[-1] == len(phone)
- torch.save(bert, bert_path)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("-c", "--config", type=str, default="configs/config.json")
- parser.add_argument("--num_processes", type=int, default=2)
- args = parser.parse_args()
- config_path = args.config
- hps = utils.get_hparams_from_file(config_path)
- lines = []
- with open(hps.data.training_files, encoding="utf-8") as f:
- lines.extend(f.readlines())
-
- with open(hps.data.validation_files, encoding="utf-8") as f:
- lines.extend(f.readlines())
-
- num_processes = args.num_processes
- with Pool(processes=num_processes) as pool:
- for _ in tqdm(pool.imap_unordered(process_line, lines), total=len(lines)):
- pass
diff --git a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/app.py b/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/app.py
deleted file mode 100644
index 81d4a717300e3ae7151392717786d1916be0a8c6..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/app.py
+++ /dev/null
@@ -1,162 +0,0 @@
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-logging.getLogger('urllib3').setLevel(logging.WARNING)
-from text import text_to_sequence
-import numpy as np
-from scipy.io import wavfile
-import torch
-import json
-import commons
-import utils
-import sys
-import pathlib
-import onnxruntime as ort
-import gradio as gr
-import argparse
-import time
-import os
-from scipy.io.wavfile import write
-
-def is_japanese(string):
- for ch in string:
- if ord(ch) > 0x3040 and ord(ch) < 0x30FF:
- return True
- return False
-
-def is_english(string):
- import re
- pattern = re.compile('^[A-Za-z0-9.,:;!?()_*"\' ]+$')
- if pattern.fullmatch(string):
- return True
- else:
- return False
-
-def to_numpy(tensor: torch.Tensor):
- return tensor.detach().cpu().numpy() if tensor.requires_grad \
- else tensor.detach().numpy()
-
-def get_symbols_from_json(path):
- assert os.path.isfile(path)
- with open(path, 'r') as f:
- data = json.load(f)
- return data['symbols']
-
-def sle(language,text):
- text = text.replace('\n','。').replace(' ',',')
- if language == "中文":
- tts_input1 = "[ZH]" + text + "[ZH]"
- return tts_input1
- elif language == "自动":
- tts_input1 = f"[JA]{text}[JA]" if is_japanese(text) else f"[ZH]{text}[ZH]"
- return tts_input1
- elif language == "日文":
- tts_input1 = "[JA]" + text + "[JA]"
- return tts_input1
- elif language == "英文":
- tts_input1 = "[EN]" + text + "[EN]"
- return tts_input1
- elif language == "手动":
- return text
-
-def get_text(text,hps_ms):
- text_norm = text_to_sequence(text,hps_ms.symbols,hps_ms.data.text_cleaners)
- if hps_ms.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
-def create_tts_fn(ort_sess, speaker_id):
- def tts_fn(text , language, n_scale= 0.667,n_scale_w = 0.8, l_scale = 1 ):
- text =sle(language,text)
- seq = text_to_sequence(text,hps.symbols, cleaner_names=hps.data.text_cleaners)
- if hps.data.add_blank:
- seq = commons.intersperse(seq, 0)
- with torch.no_grad():
- x = np.array([seq], dtype=np.int64)
- x_len = np.array([x.shape[1]], dtype=np.int64)
- sid = np.array([speaker_id], dtype=np.int64)
- scales = np.array([n_scale, n_scale_w, l_scale], dtype=np.float32)
- scales.resize(1, 3)
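-                    # scales packs [noise_scale, noise_scale_w, length_scale] for the exported ONNX
-                    # VITS graph (naming assumed from the n_scale/n_scale_w/l_scale arguments above).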
- ort_inputs = {
- 'input': x,
- 'input_lengths': x_len,
- 'scales': scales,
- 'sid': sid
- }
- t1 = time.time()
- audio = np.squeeze(ort_sess.run(None, ort_inputs))
- audio *= 32767.0 / max(0.01, np.max(np.abs(audio))) * 0.6
- audio = np.clip(audio, -32767.0, 32767.0)
- t2 = time.time()
- spending_time = "推理时间:"+str(t2-t1)+"s"
- print(spending_time)
- return (hps.data.sampling_rate, audio)
- return tts_fn
-
-
-if __name__ == '__main__':
- symbols = get_symbols_from_json('checkpoints/ShojoKageki/config.json')
- hps = utils.get_hparams_from_file('checkpoints/ShojoKageki/config.json')
- device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
- models = []
- schools = ["ShojoKageki-Nijigasaki","ShojoKageki","Nijigasaki"]
- lan = ["中文","日文","自动","手动"]
- with open("checkpoints/info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for i in models_info:
- school = models_info[i]
- speakers = school["speakers"]
- checkpoint = school["checkpoint"]
- phone_dict = {
- symbol: i for i, symbol in enumerate(symbols)
- }
- ort_sess = ort.InferenceSession(checkpoint)
- content = []
- for j in speakers:
- sid = int(speakers[j]['sid'])
- title = school
- example = speakers[j]['speech']
- name = speakers[j]["name"]
- content.append((sid, name, title, example, create_tts_fn(ort_sess, sid)))
- models.append(content)
-
- with gr.Blocks() as app:
- gr.Markdown(
- "#
vits-models\n"
- )
- with gr.Tabs():
- for i in schools:
- with gr.TabItem(i):
- for (sid, name, title, example, tts_fn) in models[schools.index(i)]:
- with gr.TabItem(name):
-                        '''
-                        with gr.Row():
-                            gr.Markdown(
-                                ''
-                            )
-                        '''
- input2 = gr.Dropdown(label="Language", choices=lan, value="自动", interactive=True)
- with gr.Column():
- input1 = gr.TextArea(label="Text", value=example)
- input4 = gr.Slider(minimum=0, maximum=1.0, label="更改噪声比例(noise scale),以控制情感", value=0.667)
- input5 = gr.Slider(minimum=0, maximum=1.0, label="更改噪声偏差(noise scale w),以控制音素长短", value=0.8)
- input6 = gr.Slider(minimum=0.1, maximum=10, label="duration", value=1)
- btnVC = gr.Button("Submit")
- output1 = gr.Audio(label="采样率22050")
-
- btnVC.click(tts_fn, inputs=[input1, input2, input4, input5, input6], outputs=[output1])
- app.launch()
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/blip_inference.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/blip_inference.py
deleted file mode 100644
index a5f0114a9e79e4ffc9c1e8e940cffee12861006a..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/blip_inference.py
+++ /dev/null
@@ -1,75 +0,0 @@
-from PIL import Image
-from transformers import Blip2Processor, Blip2ForConditionalGeneration
-
-import torch
-import os
-from glob import glob
-import argparse
-from glob import glob
-
-from BLIP2.blip_video_model import Blip2ForVideoConditionalGeneration as Blip2ForConditionalGeneration
-
-from termcolor import colored, cprint
-
-parser = argparse.ArgumentParser()
-parser.add_argument("-d", "--data_root", type=str, required=True)
-parser.add_argument("-fn" , "--frame_num", type=int, default=8)
-parser.add_argument("-fps" , "--frame_rate", type=int, default=1)
-args = parser.parse_args()
-
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-
-# Salesforce/blip2-flan-t5-xxl
-# Salesforce/blip2-opt-6.7b
-blip2_version = "Salesforce/blip2-flan-t5-xl"
-# blip2_version = "Salesforce/blip2-opt-6.7b"
-
-weight_dtype = torch.bfloat16 if "flan" in blip2_version else torch.float16
-# weight_dtype = torch.float16
-
-processor = Blip2Processor.from_pretrained(blip2_version)
-model = Blip2ForConditionalGeneration.from_pretrained(
- blip2_version, torch_dtype=weight_dtype
-)
-model.to(device)
-
-
-if not os.path.isdir(args.data_root):
- image_list = [args.data_root]
-else:
- # ipdb.set_trace()
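-    # Sample args.frame_num frames from the sorted listing, taking every args.frame_rate-th file.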
- all_image_list = sorted(glob(os.path.join(args.data_root, "*.jpg"))) + sorted(glob(os.path.join(args.data_root, "*.png")))
- image_list = [all_image_list[f] for f in range(0, args.frame_num*args.frame_rate, args.frame_rate)]
- assert len(image_list) == args.frame_num
-
-
-images = []
-for image_path in image_list:
- image = Image.open(image_path).convert("RGB")
- images.append(image)
-
-def blip2_call(prompt=None, max_new_tokens=20):
- inputs = processor(images, text=prompt, return_tensors="pt").to(device, weight_dtype)
- generated_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
- generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
- if prompt is not None:
- cprint(prompt, "red")
- else:
- cprint("No prompt", "red")
-
- print(generated_text)
-
-
-## prompt captioning
-prompt = "this is a video of"
-
-print("Captioning")
-blip2_call(prompt, 20)
-
-
-prompt = "Question: what is the protagonist in this video? Answer: "
-
-blip2_call(prompt, 10)
-
-
diff --git a/spaces/Marne/MockingBird/mockingbirdforuse/__init__.py b/spaces/Marne/MockingBird/mockingbirdforuse/__init__.py
deleted file mode 100644
index 339b6b696e3657372501f54d29cf2aca84d67d84..0000000000000000000000000000000000000000
--- a/spaces/Marne/MockingBird/mockingbirdforuse/__init__.py
+++ /dev/null
@@ -1,120 +0,0 @@
-import re
-import librosa
-import numpy as np
-from io import BytesIO
-from pathlib import Path
-from scipy.io import wavfile
-from typing import List, Literal, Optional
-
-from .encoder.inference import Encoder, preprocess_wav
-from .synthesizer.inference import Synthesizer
-from .vocoder.hifigan.inference import HifiGanVocoder
-from .vocoder.wavernn.inference import WaveRNNVocoder
-from .log import logger
-
-
-def process_text(text: str) -> List[str]:
- punctuation = "!,。、,?!," # punctuate and split/clean text
- processed_texts = []
- text = re.sub(r"[{}]+".format(punctuation), "\n", text)
- for processed_text in text.split("\n"):
- if processed_text:
- processed_texts.append(processed_text.strip())
- return processed_texts
-
-
-class MockingBird:
- def __init__(self):
- self.encoder: Optional[Encoder] = None
- self.gan_vocoder: Optional[HifiGanVocoder] = None
- self.rnn_vocoder: Optional[WaveRNNVocoder] = None
- self.synthesizer: Optional[Synthesizer] = None
-
- def load_model(
- self,
- encoder_path: Path,
- gan_vocoder_path: Optional[Path] = None,
- rnn_vocoder_path: Optional[Path] = None,
- ):
- """
-        Set the Encoder and Vocoder model paths.
-
-        Args:
-            encoder_path (Path): path to the Encoder model
-            gan_vocoder_path (Path): path to the HifiGan vocoder model; optional, required when the HifiGan vocoder is used
-            rnn_vocoder_path (Path): path to the WaveRNN vocoder model; optional, required when the WaveRNN vocoder is used
- """
- self.encoder = Encoder(encoder_path)
- if gan_vocoder_path:
- self.gan_vocoder = HifiGanVocoder(gan_vocoder_path)
- if rnn_vocoder_path:
- self.rnn_vocoder = WaveRNNVocoder(rnn_vocoder_path)
-
- def set_synthesizer(self, synthesizer_path: Path):
- """
-        Set the Synthesizer model path.
-
-        Args:
-            synthesizer_path (Path): path to the Synthesizer model
- """
- self.synthesizer = Synthesizer(synthesizer_path)
- logger.info(f"using synthesizer model: {synthesizer_path}")
-
- def synthesize(
- self,
- text: str,
- input_wav: Path,
- vocoder_type: Literal["HifiGan", "WaveRNN"] = "HifiGan",
- style_idx: int = 0,
- min_stop_token: int = 5,
- steps: int = 1000,
- ) -> BytesIO:
- """
-        Synthesize speech.
-
-        Args:
-            text (str): target text
-            input_wav (Path): path to the reference recording
-            vocoder_type (HifiGan / WaveRNN): vocoder to use, defaults to HifiGan
-            style_idx (int, optional): style, range -1 to 9, defaults to 0
-            min_stop_token (int, optional): accuracy, range 3 to 9, defaults to 5
-            steps (int, optional): maximum sentence length, range 200 to 2000, defaults to 1000
- """
- if not self.encoder:
- raise Exception("Please set encoder path first")
-
- if not self.synthesizer:
- raise Exception("Please set synthesizer path first")
-
- # Load input wav
- wav, sample_rate = librosa.load(input_wav)
-
- encoder_wav = preprocess_wav(wav, sample_rate)
- embed, _, _ = self.encoder.embed_utterance(encoder_wav, return_partials=True)
-
- # Load input text
- texts = process_text(text)
-
- # synthesize and vocode
- embeds = [embed] * len(texts)
- specs = self.synthesizer.synthesize_spectrograms(
- texts,
- embeds,
- style_idx=style_idx,
- min_stop_token=min_stop_token,
- steps=steps,
- )
- spec = np.concatenate(specs, axis=1)
- if vocoder_type == "WaveRNN":
- if not self.rnn_vocoder:
- raise Exception("Please set wavernn vocoder path first")
- wav, sample_rate = self.rnn_vocoder.infer_waveform(spec)
- else:
- if not self.gan_vocoder:
- raise Exception("Please set hifigan vocoder path first")
- wav, sample_rate = self.gan_vocoder.infer_waveform(spec)
-
- # Return cooked wav
- out = BytesIO()
- wavfile.write(out, sample_rate, wav.astype(np.float32))
- return out
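-
-# A minimal usage sketch (the checkpoint and audio paths below are hypothetical,
-# included only to illustrate the call order):
-#
-#     mb = MockingBird()
-#     mb.load_model(Path("encoder.pt"), gan_vocoder_path=Path("g_hifigan.pt"))
-#     mb.set_synthesizer(Path("synthesizer.pt"))
-#     wav_io = mb.synthesize("你好,世界", Path("reference.wav"), vocoder_type="HifiGan")
-#     Path("output.wav").write_bytes(wav_io.getvalue())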
diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/modules/modules.py b/spaces/MashiroSA/sovits-emu-voice-transform/modules/modules.py
deleted file mode 100644
index 54290fd207b25e93831bd21005990ea137e6b50e..0000000000000000000000000000000000000000
--- a/spaces/MashiroSA/sovits-emu-voice-transform/modules/modules.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import modules.commons as commons
-from modules.commons import init_weights, get_padding
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 0."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
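-        # Affine coupling: split the channels in half, predict (m, logs) from x0, and map
-        # x1 -> x1 * exp(logs) + m (inverted when reverse=True); the log-determinant is just
-        # sum(logs) because the Jacobian is triangular.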
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/resnet.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/resnet.py
deleted file mode 100644
index 1cb3ac057ee2d52c46fc94685b5d4e698aad8d5f..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/resnet.py
+++ /dev/null
@@ -1,316 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import logging
-
-import torch.nn as nn
-import torch.utils.checkpoint as cp
-
-from .utils import constant_init, kaiming_init
-
-
-def conv3x3(in_planes, out_planes, stride=1, dilation=1):
- """3x3 convolution with padding."""
- return nn.Conv2d(
- in_planes,
- out_planes,
- kernel_size=3,
- stride=stride,
- padding=dilation,
- dilation=dilation,
- bias=False)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self,
- inplanes,
- planes,
- stride=1,
- dilation=1,
- downsample=None,
- style='pytorch',
- with_cp=False):
- super(BasicBlock, self).__init__()
- assert style in ['pytorch', 'caffe']
- self.conv1 = conv3x3(inplanes, planes, stride, dilation)
- self.bn1 = nn.BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- self.bn2 = nn.BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
- self.dilation = dilation
- assert not with_cp
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self,
- inplanes,
- planes,
- stride=1,
- dilation=1,
- downsample=None,
- style='pytorch',
- with_cp=False):
- """Bottleneck block.
-
- If style is "pytorch", the stride-two layer is the 3x3 conv layer, if
- it is "caffe", the stride-two layer is the first 1x1 conv layer.
- """
- super(Bottleneck, self).__init__()
- assert style in ['pytorch', 'caffe']
- if style == 'pytorch':
- conv1_stride = 1
- conv2_stride = stride
- else:
- conv1_stride = stride
- conv2_stride = 1
- self.conv1 = nn.Conv2d(
- inplanes, planes, kernel_size=1, stride=conv1_stride, bias=False)
- self.conv2 = nn.Conv2d(
- planes,
- planes,
- kernel_size=3,
- stride=conv2_stride,
- padding=dilation,
- dilation=dilation,
- bias=False)
-
- self.bn1 = nn.BatchNorm2d(planes)
- self.bn2 = nn.BatchNorm2d(planes)
- self.conv3 = nn.Conv2d(
- planes, planes * self.expansion, kernel_size=1, bias=False)
- self.bn3 = nn.BatchNorm2d(planes * self.expansion)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
- self.dilation = dilation
- self.with_cp = with_cp
-
- def forward(self, x):
-
- def _inner_forward(x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
-
- return out
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(_inner_forward, x)
- else:
- out = _inner_forward(x)
-
- out = self.relu(out)
-
- return out
-
-
-def make_res_layer(block,
- inplanes,
- planes,
- blocks,
- stride=1,
- dilation=1,
- style='pytorch',
- with_cp=False):
- downsample = None
- if stride != 1 or inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(
- inplanes,
- planes * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False),
- nn.BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(
- block(
- inplanes,
- planes,
- stride,
- dilation,
- downsample,
- style=style,
- with_cp=with_cp))
- inplanes = planes * block.expansion
- for _ in range(1, blocks):
- layers.append(
- block(inplanes, planes, 1, dilation, style=style, with_cp=with_cp))
-
- return nn.Sequential(*layers)
-
-
-class ResNet(nn.Module):
- """ResNet backbone.
-
- Args:
- depth (int): Depth of resnet, from {18, 34, 50, 101, 152}.
- num_stages (int): Resnet stages, normally 4.
- strides (Sequence[int]): Strides of the first block of each stage.
- dilations (Sequence[int]): Dilation of each stage.
- out_indices (Sequence[int]): Output from which stages.
- style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two
- layer is the 3x3 conv layer, otherwise the stride-two layer is
- the first 1x1 conv layer.
- frozen_stages (int): Stages to be frozen (all param fixed). -1 means
- not freezing any parameters.
- bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze
- running stats (mean and var).
- bn_frozen (bool): Whether to freeze weight and bias of BN layers.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed.
- """
-
- arch_settings = {
- 18: (BasicBlock, (2, 2, 2, 2)),
- 34: (BasicBlock, (3, 4, 6, 3)),
- 50: (Bottleneck, (3, 4, 6, 3)),
- 101: (Bottleneck, (3, 4, 23, 3)),
- 152: (Bottleneck, (3, 8, 36, 3))
- }
-
- def __init__(self,
- depth,
- num_stages=4,
- strides=(1, 2, 2, 2),
- dilations=(1, 1, 1, 1),
- out_indices=(0, 1, 2, 3),
- style='pytorch',
- frozen_stages=-1,
- bn_eval=True,
- bn_frozen=False,
- with_cp=False):
- super(ResNet, self).__init__()
- if depth not in self.arch_settings:
- raise KeyError(f'invalid depth {depth} for resnet')
- assert num_stages >= 1 and num_stages <= 4
- block, stage_blocks = self.arch_settings[depth]
- stage_blocks = stage_blocks[:num_stages]
- assert len(strides) == len(dilations) == num_stages
- assert max(out_indices) < num_stages
-
- self.out_indices = out_indices
- self.style = style
- self.frozen_stages = frozen_stages
- self.bn_eval = bn_eval
- self.bn_frozen = bn_frozen
- self.with_cp = with_cp
-
- self.inplanes = 64
- self.conv1 = nn.Conv2d(
- 3, 64, kernel_size=7, stride=2, padding=3, bias=False)
- self.bn1 = nn.BatchNorm2d(64)
- self.relu = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- self.res_layers = []
- for i, num_blocks in enumerate(stage_blocks):
- stride = strides[i]
- dilation = dilations[i]
- planes = 64 * 2**i
- res_layer = make_res_layer(
- block,
- self.inplanes,
- planes,
- num_blocks,
- stride=stride,
- dilation=dilation,
- style=self.style,
- with_cp=with_cp)
- self.inplanes = planes * block.expansion
- layer_name = f'layer{i + 1}'
- self.add_module(layer_name, res_layer)
- self.res_layers.append(layer_name)
-
- self.feat_dim = block.expansion * 64 * 2**(len(stage_blocks) - 1)
-
- def init_weights(self, pretrained=None):
- if isinstance(pretrained, str):
- logger = logging.getLogger()
- from ..runner import load_checkpoint
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, nn.BatchNorm2d):
- constant_init(m, 1)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x = self.maxpool(x)
- outs = []
- for i, layer_name in enumerate(self.res_layers):
- res_layer = getattr(self, layer_name)
- x = res_layer(x)
- if i in self.out_indices:
- outs.append(x)
- if len(outs) == 1:
- return outs[0]
- else:
- return tuple(outs)
-
- def train(self, mode=True):
- super(ResNet, self).train(mode)
- if self.bn_eval:
- for m in self.modules():
- if isinstance(m, nn.BatchNorm2d):
- m.eval()
- if self.bn_frozen:
- for params in m.parameters():
- params.requires_grad = False
- if mode and self.frozen_stages >= 0:
- for param in self.conv1.parameters():
- param.requires_grad = False
- for param in self.bn1.parameters():
- param.requires_grad = False
- self.bn1.eval()
- self.bn1.weight.requires_grad = False
- self.bn1.bias.requires_grad = False
- for i in range(1, self.frozen_stages + 1):
- mod = getattr(self, f'layer{i}')
- mod.eval()
- for param in mod.parameters():
- param.requires_grad = False
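
As a usage sketch for the backbone above (assuming the module is importable from its original path; adjust the import to wherever this resnet.py lives): a depth-50 ResNet uses Bottleneck blocks with expansion 4, so the four stages produce 256/512/1024/2048-channel feature maps at overall strides 4/8/16/32.

import torch
from annotator.uniformer.mmcv.cnn.resnet import ResNet  # assumed import path

backbone = ResNet(depth=50, num_stages=4, out_indices=(2, 3), frozen_stages=1)
backbone.init_weights(pretrained=None)   # Kaiming init for convs, constant init for BN
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))
for f in feats:
    print(tuple(f.shape))   # expected (1, 1024, 14, 14) and (1, 2048, 7, 7)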
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/seg/sampler/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/seg/sampler/__init__.py
deleted file mode 100644
index 332b242c03d1c5e80d4577df442a9a037b1816e1..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/core/seg/sampler/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .base_pixel_sampler import BasePixelSampler
-from .ohem_pixel_sampler import OHEMPixelSampler
-
-__all__ = ['BasePixelSampler', 'OHEMPixelSampler']
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/loading.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/loading.py
deleted file mode 100644
index d3692ae91f19b9c7ccf6023168788ff42c9e93e3..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/loading.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import os.path as osp
-
-import annotator.uniformer.mmcv as mmcv
-import numpy as np
-
-from ..builder import PIPELINES
-
-
-@PIPELINES.register_module()
-class LoadImageFromFile(object):
- """Load an image from file.
-
- Required keys are "img_prefix" and "img_info" (a dict that must contain the
- key "filename"). Added or updated keys are "filename", "img", "img_shape",
- "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`),
- "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1).
-
- Args:
- to_float32 (bool): Whether to convert the loaded image to a float32
- numpy array. If set to False, the loaded image is an uint8 array.
- Defaults to False.
- color_type (str): The flag argument for :func:`mmcv.imfrombytes`.
- Defaults to 'color'.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details.
- Defaults to ``dict(backend='disk')``.
- imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default:
- 'cv2'
- """
-
- def __init__(self,
- to_float32=False,
- color_type='color',
- file_client_args=dict(backend='disk'),
- imdecode_backend='cv2'):
- self.to_float32 = to_float32
- self.color_type = color_type
- self.file_client_args = file_client_args.copy()
- self.file_client = None
- self.imdecode_backend = imdecode_backend
-
- def __call__(self, results):
- """Call functions to load image and get image meta information.
-
- Args:
- results (dict): Result dict from :obj:`mmseg.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded image and meta information.
- """
-
- if self.file_client is None:
- self.file_client = mmcv.FileClient(**self.file_client_args)
-
- if results.get('img_prefix') is not None:
- filename = osp.join(results['img_prefix'],
- results['img_info']['filename'])
- else:
- filename = results['img_info']['filename']
- img_bytes = self.file_client.get(filename)
- img = mmcv.imfrombytes(
- img_bytes, flag=self.color_type, backend=self.imdecode_backend)
- if self.to_float32:
- img = img.astype(np.float32)
-
- results['filename'] = filename
- results['ori_filename'] = results['img_info']['filename']
- results['img'] = img
- results['img_shape'] = img.shape
- results['ori_shape'] = img.shape
- # Set initial values for default meta_keys
- results['pad_shape'] = img.shape
- results['scale_factor'] = 1.0
- num_channels = 1 if len(img.shape) < 3 else img.shape[2]
- results['img_norm_cfg'] = dict(
- mean=np.zeros(num_channels, dtype=np.float32),
- std=np.ones(num_channels, dtype=np.float32),
- to_rgb=False)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(to_float32={self.to_float32},'
- repr_str += f"color_type='{self.color_type}',"
- repr_str += f"imdecode_backend='{self.imdecode_backend}')"
- return repr_str
-
-
-@PIPELINES.register_module()
-class LoadAnnotations(object):
- """Load annotations for semantic segmentation.
-
- Args:
- reduce_zero_label (bool): Whether reduce all label value by 1.
- Usually used for datasets where 0 is background label.
- Default: False.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details.
- Defaults to ``dict(backend='disk')``.
- imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default:
- 'pillow'
- """
-
- def __init__(self,
- reduce_zero_label=False,
- file_client_args=dict(backend='disk'),
- imdecode_backend='pillow'):
- self.reduce_zero_label = reduce_zero_label
- self.file_client_args = file_client_args.copy()
- self.file_client = None
- self.imdecode_backend = imdecode_backend
-
- def __call__(self, results):
- """Call function to load multiple types annotations.
-
- Args:
- results (dict): Result dict from :obj:`mmseg.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded semantic segmentation annotations.
- """
-
- if self.file_client is None:
- self.file_client = mmcv.FileClient(**self.file_client_args)
-
- if results.get('seg_prefix', None) is not None:
- filename = osp.join(results['seg_prefix'],
- results['ann_info']['seg_map'])
- else:
- filename = results['ann_info']['seg_map']
- img_bytes = self.file_client.get(filename)
- gt_semantic_seg = mmcv.imfrombytes(
- img_bytes, flag='unchanged',
- backend=self.imdecode_backend).squeeze().astype(np.uint8)
- # modify if custom classes
- if results.get('label_map', None) is not None:
- for old_id, new_id in results['label_map'].items():
- gt_semantic_seg[gt_semantic_seg == old_id] = new_id
- # reduce zero_label
- if self.reduce_zero_label:
- # avoid using underflow conversion
- gt_semantic_seg[gt_semantic_seg == 0] = 255
- gt_semantic_seg = gt_semantic_seg - 1
- gt_semantic_seg[gt_semantic_seg == 254] = 255
- results['gt_semantic_seg'] = gt_semantic_seg
- results['seg_fields'].append('gt_semantic_seg')
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(reduce_zero_label={self.reduce_zero_label},'
- repr_str += f"imdecode_backend='{self.imdecode_backend}')"
- return repr_str
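
A small usage sketch for the loading steps above, assuming the vendored mmcv/mmseg packages are importable at these paths (adjust otherwise): write a dummy image to disk, then run LoadImageFromFile on a pipeline-style results dict and inspect the keys it adds.

import numpy as np
import annotator.uniformer.mmcv as mmcv                     # assumed import path
from annotator.uniformer.mmseg.datasets.pipelines.loading import LoadImageFromFile

mmcv.imwrite(np.zeros((8, 8, 3), dtype=np.uint8), "/tmp/demo.png")   # tiny dummy image

results = {"img_prefix": "/tmp", "img_info": {"filename": "demo.png"}}
results = LoadImageFromFile(to_float32=True)(results)
print(results["img"].dtype, results["img_shape"], results["scale_factor"])
# expected: float32 (8, 8, 3) 1.0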
diff --git a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/vl_utils.py b/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/vl_utils.py
deleted file mode 100644
index c91bb02f584398f08a28e6b7719e2b99f6e28616..0000000000000000000000000000000000000000
--- a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/util/vl_utils.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import os
-import random
-from typing import List
-
-import torch
-
-
-def create_positive_map_from_span(tokenized, token_span, max_text_len=256):
- """construct a map such that positive_map[i,j] = True iff box i is associated to token j
- Input:
- - tokenized:
- - input_ids: Tensor[1, ntokens]
- - attention_mask: Tensor[1, ntokens]
- - token_span: list with length num_boxes.
- - each item: [start_idx, end_idx]
- """
- positive_map = torch.zeros((len(token_span), max_text_len), dtype=torch.float)
- for j, tok_list in enumerate(token_span):
- for (beg, end) in tok_list:
- beg_pos = tokenized.char_to_token(beg)
- end_pos = tokenized.char_to_token(end - 1)
- if beg_pos is None:
- try:
- beg_pos = tokenized.char_to_token(beg + 1)
- if beg_pos is None:
- beg_pos = tokenized.char_to_token(beg + 2)
- except:
- beg_pos = None
- if end_pos is None:
- try:
- end_pos = tokenized.char_to_token(end - 2)
- if end_pos is None:
- end_pos = tokenized.char_to_token(end - 3)
- except:
- end_pos = None
- if beg_pos is None or end_pos is None:
- continue
-
- assert beg_pos is not None and end_pos is not None
- if os.environ.get("SHILONG_DEBUG_ONLY_ONE_POS", None) == "TRUE":
- positive_map[j, beg_pos] = 1
- break
- else:
- positive_map[j, beg_pos : end_pos + 1].fill_(1)
-
- return positive_map / (positive_map.sum(-1)[:, None] + 1e-6)
-
-
-def build_captions_and_token_span(cat_list, force_lowercase):
- """
- Return:
- captions: str
- cat2tokenspan: dict
- {
- 'dog': [[0, 2]],
- ...
- }
- """
-
- cat2tokenspan = {}
- captions = ""
- for catname in cat_list:
- class_name = catname
- if force_lowercase:
- class_name = class_name.lower()
- if "/" in class_name:
- class_name_list: List = class_name.strip().split("/")
- class_name_list.append(class_name)
- class_name: str = random.choice(class_name_list)
-
- tokens_positive_i = []
- subnamelist = [i.strip() for i in class_name.strip().split(" ")]
- for subname in subnamelist:
- if len(subname) == 0:
- continue
- if len(captions) > 0:
- captions = captions + " "
-            start_idx = len(captions)
-            end_idx = start_idx + len(subname)
-            tokens_positive_i.append([start_idx, end_idx])
- captions = captions + subname
-
- if len(tokens_positive_i) > 0:
- captions = captions + " ."
- cat2tokenspan[class_name] = tokens_positive_i
-
- return captions, cat2tokenspan
-
-
-def build_id2posspan_and_caption(category_dict: dict):
- """Build id2pos_span and caption from category_dict
-
- Args:
- category_dict (dict): category_dict
- """
- cat_list = [item["name"].lower() for item in category_dict]
- id2catname = {item["id"]: item["name"].lower() for item in category_dict}
- caption, cat2posspan = build_captions_and_token_span(cat_list, force_lowercase=True)
- id2posspan = {catid: cat2posspan[catname] for catid, catname in id2catname.items()}
- return id2posspan, caption
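
For reference, build_captions_and_token_span joins the category names into a single caption with " ." separators and records the character span of every word, and create_positive_map_from_span then converts those character spans into a row-normalised phrase-to-token matrix via the tokenizer's char_to_token. A short sketch, assuming this module is importable at its original path; the tokenizer step is left commented because it needs a concrete HuggingFace fast tokenizer:

from groundingdino.util.vl_utils import build_captions_and_token_span

caption, cat2span = build_captions_and_token_span(["dog", "black cat"], force_lowercase=True)
print(caption)    # "dog . black cat ."
print(cat2span)   # {'dog': [[0, 3]], 'black cat': [[6, 11], [12, 15]]}

# With a tokenizer that supports char_to_token (e.g. a HuggingFace fast tokenizer):
# tokenized = tokenizer(caption, return_tensors="pt")
# positive_map = create_positive_map_from_span(
#     tokenized, [cat2span[name] for name in ["dog", "black cat"]])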
diff --git a/spaces/Miuzarte/SUI-svc-3.0/preprocess_hubert_f0.py b/spaces/Miuzarte/SUI-svc-3.0/preprocess_hubert_f0.py
deleted file mode 100644
index 9a770d6bba5f93c850f35327ac18a400ce8d50fc..0000000000000000000000000000000000000000
--- a/spaces/Miuzarte/SUI-svc-3.0/preprocess_hubert_f0.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import os
-import argparse
-
-import torch
-import json
-from glob import glob
-
-from pyworld import pyworld
-from tqdm import tqdm
-from scipy.io import wavfile
-
-import utils
-from mel_processing import mel_spectrogram_torch
-#import h5py
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-
-import parselmouth
-import librosa
-import numpy as np
-def stft(y):
- return librosa.stft(
- y=y,
- n_fft=1280,
- hop_length=320,
- win_length=1280,
- )
-
-def energy(y):
- # Extract energy
- S = librosa.magphase(stft(y))[0]
- e = np.sqrt(np.sum(S ** 2, axis=0)) # np.linalg.norm(S, axis=0)
- return e.squeeze() # (Number of frames) => (654,)
-
-def get_energy(path, p_len=None):
- wav, sr = librosa.load(path, 48000)
- e = energy(wav)
- if p_len is None:
- p_len = wav.shape[0] // 320
-    assert e.shape[0] - p_len < 2, (e.shape[0], p_len)
- e = e[: p_len]
- return e
-
-
-
-def get_f0(path,p_len=None, f0_up_key=0):
- x, _ = librosa.load(path, 48000)
- if p_len is None:
- p_len = x.shape[0]//320
- else:
- assert abs(p_len-x.shape[0]//320) < 3, (path, p_len, x.shape)
- time_step = 320 / 48000 * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
- f0 = parselmouth.Sound(x, 48000).to_pitch_ac(
- time_step=time_step / 1000, voicing_threshold=0.6,
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
-
- pad_size=(p_len - len(f0) + 1) // 2
- if(pad_size>0 or p_len - len(f0) - pad_size>0):
- f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant')
-
- f0bak = f0.copy()
- f0 *= pow(2, f0_up_key / 12)
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-    f0_coarse = np.rint(f0_mel).astype(int)  # builtin int: np.int was removed in NumPy 1.24+
- return f0_coarse, f0bak
-
-def resize2d(x, target_len):
- source = np.array(x)
- source[source<0.001] = np.nan
- target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source)
- res = np.nan_to_num(target)
- return res
-
-def compute_f0(path, c_len):
- x, sr = librosa.load(path, sr=48000)
- f0, t = pyworld.dio(
- x.astype(np.double),
- fs=sr,
- f0_ceil=800,
- frame_period=1000 * 320 / sr,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, 48000)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- assert abs(c_len - x.shape[0]//320) < 3, (c_len, f0.shape)
-
- return None, resize2d(f0, c_len)
-
-
-def process(filename):
- print(filename)
- save_name = filename+".soft.pt"
- if not os.path.exists(save_name):
-        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-        wav, _ = librosa.load(filename, sr=16000)
-        wav = torch.from_numpy(wav).unsqueeze(0).to(device)
- c = utils.get_hubert_content(hmodel, wav)
- torch.save(c.cpu(), save_name)
- else:
- c = torch.load(save_name)
- f0path = filename+".f0.npy"
- if not os.path.exists(f0path):
- cf0, f0 = compute_f0(filename, c.shape[-1] * 3)
- np.save(f0path, f0)
-
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--in_dir", type=str, default="dataset/48k", help="path to input dir")
- args = parser.parse_args()
-
- print("Loading hubert for content...")
- hmodel = utils.get_hubert_model(0 if torch.cuda.is_available() else None)
- print("Loaded hubert.")
-
- filenames = glob(f'{args.in_dir}/*/*.wav', recursive=True)#[:10]
-
- for filename in tqdm(filenames):
- process(filename)
-
\ No newline at end of file
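
The coarse pitch target produced by get_f0 above is just a mel-style compression of the pitch track into integer bins 1..255; the self-contained sketch below reproduces that mapping on a few made-up frame values (unvoiced frames, where f0 == 0, stay at bin 1).

import numpy as np

f0_min, f0_max = 50.0, 1100.0
f0_mel_min = 1127 * np.log(1 + f0_min / 700)
f0_mel_max = 1127 * np.log(1 + f0_max / 700)

f0 = np.array([0.0, 100.0, 220.0, 440.0, 1000.0])          # Hz, one value per frame
f0_mel = 1127 * np.log(1 + f0 / 700)
f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1
f0_mel = np.clip(f0_mel, 1, 255)
f0_coarse = np.rint(f0_mel).astype(int)
print(f0_coarse)   # bin 1 for the unvoiced frame, then monotonically increasing bins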
diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/label_map_util.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/label_map_util.py
deleted file mode 100644
index e258e3ab57fbe0de3aeb664e64f5df5a6dc5111d..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_planning/label_map_util.py
+++ /dev/null
@@ -1,181 +0,0 @@
-# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""Label map utility functions."""
-
-import logging
-
-import tensorflow as tf
-from google.protobuf import text_format
-import string_int_label_map_pb2
-
-
-def _validate_label_map(label_map):
- """Checks if a label map is valid.
-
- Args:
- label_map: StringIntLabelMap to validate.
-
- Raises:
- ValueError: if label map is invalid.
- """
- for item in label_map.item:
- if item.id < 0:
- raise ValueError('Label map ids should be >= 0.')
- if (item.id == 0 and item.name != 'background' and
- item.display_name != 'background'):
- raise ValueError('Label map id 0 is reserved for the background label')
-
-
-def create_category_index(categories):
- """Creates dictionary of COCO compatible categories keyed by category id.
-
- Args:
- categories: a list of dicts, each of which has the following keys:
- 'id': (required) an integer id uniquely identifying this category.
- 'name': (required) string representing category name
- e.g., 'cat', 'dog', 'pizza'.
-
- Returns:
- category_index: a dict containing the same entries as categories, but keyed
- by the 'id' field of each category.
- """
- category_index = {}
- for cat in categories:
- category_index[cat['id']] = cat
- return category_index
-
-
-def get_max_label_map_index(label_map):
- """Get maximum index in label map.
-
- Args:
- label_map: a StringIntLabelMapProto
-
- Returns:
- an integer
- """
- return max([item.id for item in label_map.item])
-
-
-def convert_label_map_to_categories(label_map,
- max_num_classes,
- use_display_name=True):
- """Loads label map proto and returns categories list compatible with eval.
-
- This function loads a label map and returns a list of dicts, each of which
- has the following keys:
- 'id': (required) an integer id uniquely identifying this category.
- 'name': (required) string representing category name
- e.g., 'cat', 'dog', 'pizza'.
-  We only allow a class into the list if its id - label_id_offset is
- between 0 (inclusive) and max_num_classes (exclusive).
- If there are several items mapping to the same id in the label map,
- we will only keep the first one in the categories list.
-
- Args:
- label_map: a StringIntLabelMapProto or None. If None, a default categories
- list is created with max_num_classes categories.
- max_num_classes: maximum number of (consecutive) label indices to include.
- use_display_name: (boolean) choose whether to load 'display_name' field
- as category name. If False or if the display_name field does not exist,
- uses 'name' field as category names instead.
- Returns:
- categories: a list of dictionaries representing all possible categories.
- """
- categories = []
- list_of_ids_already_added = []
- if not label_map:
- label_id_offset = 1
- for class_id in range(max_num_classes):
- categories.append({
- 'id': class_id + label_id_offset,
- 'name': 'category_{}'.format(class_id + label_id_offset)
- })
- return categories
- for item in label_map.item:
- if not 0 < item.id <= max_num_classes:
- logging.info('Ignore item %d since it falls outside of requested '
- 'label range.', item.id)
- continue
- if use_display_name and item.HasField('display_name'):
- name = item.display_name
- else:
- name = item.name
- if item.id not in list_of_ids_already_added:
- list_of_ids_already_added.append(item.id)
- categories.append({'id': item.id, 'name': name})
- return categories
-
-
-def load_labelmap(path):
- """Loads label map proto.
-
- Args:
- path: path to StringIntLabelMap proto text file.
- Returns:
- a StringIntLabelMapProto
- """
- with tf.gfile.GFile(path, 'r') as fid:
- label_map_string = fid.read()
- label_map = string_int_label_map_pb2.StringIntLabelMap()
- try:
- text_format.Merge(label_map_string, label_map)
- except text_format.ParseError:
- label_map.ParseFromString(label_map_string)
- _validate_label_map(label_map)
- return label_map
-
-
-def get_label_map_dict(label_map_path, use_display_name=False):
- """Reads a label map and returns a dictionary of label names to id.
-
- Args:
- label_map_path: path to label_map.
- use_display_name: whether to use the label map items' display names as keys.
-
- Returns:
- A dictionary mapping label names to id.
- """
- label_map = load_labelmap(label_map_path)
- label_map_dict = {}
- for item in label_map.item:
- if use_display_name:
- label_map_dict[item.display_name] = item.id
- else:
- label_map_dict[item.name] = item.id
- return label_map_dict
-
-
-def create_category_index_from_labelmap(label_map_path):
- """Reads a label map and returns a category index.
-
- Args:
- label_map_path: Path to `StringIntLabelMap` proto text file.
-
- Returns:
- A category index, which is a dictionary that maps integer ids to dicts
- containing categories, e.g.
- {1: {'id': 1, 'name': 'dog'}, 2: {'id': 2, 'name': 'cat'}, ...}
- """
- label_map = load_labelmap(label_map_path)
- max_num_classes = max(item.id for item in label_map.item)
- categories = convert_label_map_to_categories(label_map, max_num_classes)
- return create_category_index(categories)
-
-
-def create_class_agnostic_category_index():
- """Creates a category index with a single `object` class."""
- return {1: {'id': 1, 'name': 'object'}}
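
A quick usage sketch for the utilities above (assuming TensorFlow and the compiled string_int_label_map_pb2 proto are available so the module imports cleanly): when no label map is passed, convert_label_map_to_categories falls back to consecutively numbered placeholder categories, which is handy for smoke tests.

import label_map_util   # the module above, assumed to be on the Python path

categories = label_map_util.convert_label_map_to_categories(None, max_num_classes=3)
print(categories)
# [{'id': 1, 'name': 'category_1'}, {'id': 2, 'name': 'category_2'}, {'id': 3, 'name': 'category_3'}]

index = label_map_util.create_category_index(categories)
print(index[2]['name'])   # 'category_2'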
diff --git a/spaces/Not-Grim-Refer/huggingface-transformers-agents/app.py b/spaces/Not-Grim-Refer/huggingface-transformers-agents/app.py
deleted file mode 100644
index 2dc7fdc9bcfb37002a1be112a27a5d4788b0f5b3..0000000000000000000000000000000000000000
--- a/spaces/Not-Grim-Refer/huggingface-transformers-agents/app.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import gradio as gr
-
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Conversation, pipeline
-
-# Candidate models and tokenizers (note: several of these identifiers are placeholders rather than valid Hugging Face Hub model IDs)
-models_and_tokenizers = [
- ("EleutherAI/gpt-neo-2.7B", AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B"), AutoModelForSeq2SeqLM.from_pretrained("EleutherAI/gpt-neo-2.7B")),
- ("Bard", AutoTokenizer.from_pretrained("bard"), AutoModelForSeq2SeqLM.from_pretrained("bard")),
- ("Turing NLG", AutoTokenizer.from_pretrained("Turing NLG"), AutoModelForSeq2SeqLM.from_pretrained("Turing NLG")),
- ("GPT-3", AutoTokenizer.from_pretrained("gpt-3"), AutoModelForSeq2SeqLM.from_pretrained("gpt-3")),
- ("GPT-J", AutoTokenizer.from_pretrained("gpt-j"), AutoModelForSeq2SeqLM.from_pretrained("gpt-j")),
-]
-
-# Create the conversational pipeline
-conversational_pipeline = pipeline("conversational", model=models_and_tokenizers[0][2], tokenizer=models_and_tokenizers[0][1])
-
-# Define a function to handle conversation with multiple models
-def handle_conversation(models, prompt):
- responses = []
-    for name, tokenizer, model in models:
- conversation = Conversation(prompt)
- response = pipeline("conversational", model=model, tokenizer=tokenizer)(conversation)
- responses.append(response.generated_responses[-1])
- return responses
-
-# Replace the following line with the user's input code snippet
-user_code = """
-def reverse_prompt_engineer(code):
- # TODO: Reverse prompt engineer the code
- return None
-"""
-
-# Use the handle_conversation function to get responses from multiple models
-responses = handle_conversation(models_and_tokenizers, f"Now I want you to reverse prompt engineer the {user_code}. Give me a single prompt that would create a similar output.")
-print(responses)
-
-# Instruct the user how to use the tool
-print("To use this tool, simply paste your code snippet into the `user_code` variable and then run the code. The tool will then generate a prompt that can be used to create similar code.")
-
-# Create the interface
-app = gr.Interface(
-    fn=lambda prompt: "\n\n".join(handle_conversation(models_and_tokenizers, prompt)),
- inputs="text",
- outputs="text",
- title="Reverse Prompt Engineer",
- description="Generate a prompt that can be used to create similar code.",
-)
-
-app.launch()
\ No newline at end of file
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/audio_finetuning.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/audio_finetuning.py
deleted file mode 100644
index 4ef87c604f00581f03075e9ebe10a43dd51d6e45..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/audio_finetuning.py
+++ /dev/null
@@ -1,346 +0,0 @@
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-import logging
-import os
-import torch
-import json
-
-from argparse import Namespace
-from dataclasses import dataclass, field
-from typing import Optional, Any
-
-from fairseq.data import AddTargetDataset, Dictionary, encoders
-from fairseq.tasks.audio_pretraining import AudioPretrainingTask, AudioPretrainingConfig
-from fairseq.dataclass import FairseqDataclass
-from fairseq.dataclass.configs import GenerationConfig
-from fairseq.data.text_compressor import TextCompressor, TextCompressionLevel
-
-from . import register_task
-from .. import utils
-from ..logging import metrics
-
-
-logger = logging.getLogger(__name__)
-
-
-class LabelEncoder(object):
- def __init__(self, dictionary):
- self.dictionary = dictionary
-
- def __call__(self, label):
- return self.dictionary.encode_line(
- label, append_eos=False, add_if_not_exist=False
- )
-
-
-def label_len_fn(label):
- return len(label.split(" "))
-
-
-@dataclass
-class AudioFinetuningConfig(AudioPretrainingConfig):
- # Options for reporting WER metrics during validation. Only applicable to
- # Seq2Seq models during fine-tuning
- eval_wer: bool = field(
- default=False, metadata={"help": "compute WER for Seq2Seq models"}
- )
- eval_wer_config: GenerationConfig = field(
- default_factory=lambda: GenerationConfig(),
- metadata={"help": "beam search config for evaluating wer during training"},
- )
- eval_wer_tokenizer: Any = field(
- default=None,
- metadata={"help": "tokenizer config for evaluating wer during training"},
- )
- eval_wer_post_process: str = field(
- default="letter",
- metadata={
- "help": "remove BPE tokens before scoring (can be sentencepiece, letter, and more)"
- },
- )
- eval_bleu: bool = field(
- default=False, metadata={"help": "evaluation with BLEU scores"}
- )
- eval_bleu_detok: Optional[str] = field(
- default=None, metadata={
- "help": "detokenize before computing BLEU (e.g., 'moses'); "
- "required if using --eval-bleu; use 'space' to disable "
- "detokenization; see fairseq.data.encoders for other options"
- }
- )
- eval_bleu_detok_args: str = field(
- default="{}",
- metadata={"help": "args for building the tokenizer, if needed"}
- )
- eval_tokenized_bleu: bool = field(
- default=False,
- metadata={"help": "compute tokenized BLEU instead of sacrebleu"}
- )
- eval_bleu_remove_bpe: Optional[str] = field(
- default=None, metadata={"help": "remove BPE before computing BLEU"}
- )
- eval_bleu_args: str = field(
- default="{}",
- metadata={"help": "generation args for BLUE scoring, e.g., "
- "'{\"beam\": 4, \"lenpen\": 0.6}'"}
- )
- eval_bleu_print_samples: bool = field(
- default=False,
- metadata={"help": "print sample generations during validation"}
- )
- autoregressive: bool = field(
- default=False,
- metadata={
- "help": "required for autoregressive decoders (like seq2seq models); "
- "adds 'prev_output_tokens' to input and appends eos to target"
- },
- )
-
-
-@register_task("audio_finetuning", dataclass=AudioFinetuningConfig)
-class AudioFinetuningTask(AudioPretrainingTask):
- """ """
-
- cfg: AudioFinetuningConfig
-
- def __init__(
- self,
- cfg: AudioFinetuningConfig,
- ):
- super().__init__(cfg)
- self.blank_symbol = ""
-
- self.state.add_factory("target_dictionary", self.load_target_dictionary)
-
- def load_target_dictionary(self):
- if self.cfg.labels:
- dict_path = os.path.join(self.cfg.data, f"dict.{self.cfg.labels}.txt")
- return Dictionary.load(dict_path)
- return None
-
- def load_dataset(self, split: str, task_cfg: AudioFinetuningConfig = None, **kwargs):
- super().load_dataset(split, task_cfg, **kwargs)
-
- task_cfg = task_cfg or self.cfg
- assert task_cfg.labels is not None
- text_compression_level = getattr(
- TextCompressionLevel, str(self.cfg.text_compression_level)
- )
- data_path = self.cfg.data
- label_path = os.path.join(data_path, f"{split}.{task_cfg.labels}")
- skipped_indices = getattr(self.datasets[split], "skipped_indices", set())
- text_compressor = TextCompressor(level=text_compression_level)
- with open(label_path, "r") as f:
- labels = [
- text_compressor.compress(l)
- for i, l in enumerate(f) if i not in skipped_indices
- ]
-
- assert len(labels) == len(self.datasets[split]), (
- f"labels length ({len(labels)}) and dataset length "
- f"({len(self.datasets[split])}) do not match"
- )
-
- process_label = LabelEncoder(self.target_dictionary)
-
- self.datasets[split] = AddTargetDataset(
- self.datasets[split],
- labels,
- pad=self.target_dictionary.pad(),
- eos=self.target_dictionary.eos(),
- batch_targets=True,
- process_label=process_label,
- label_len_fn=label_len_fn,
- add_to_input=task_cfg.get("autoregressive", False),
- text_compression_level=text_compression_level
- )
-
- @property
- def target_dictionary(self):
- """Return the :class:`~fairseq.data.Dictionary` for the language
- model."""
- return self.state.target_dictionary
-
- def valid_step(self, sample, model, criterion):
- loss, sample_size, logging_output = super().valid_step(sample, model, criterion)
- if self.cfg.eval_wer and self.cfg.autoregressive:
- metrics = self._inference_with_wer(self.sequence_generator, sample, model)
- logging_output["_num_char_errors"] = metrics["num_char_errors"]
- logging_output["_num_chars"] = metrics["num_chars"]
- logging_output["_num_word_errors"] = metrics["num_word_errors"]
- logging_output["_num_words"] = metrics["num_words"]
- if self.cfg.eval_bleu and self.cfg.autoregressive:
- metrics = self._inference_with_bleu(self.sequence_generator, sample, model)
- logging_output['_bleu_sys_len'] = metrics.sys_len
- logging_output['_bleu_ref_len'] = metrics.ref_len
- # we split counts into separate entries so that they can be
- # summed efficiently across workers using fast-stat-sync
- assert len(metrics.counts) == 4
- for i in range(4):
- logging_output[f"_bleu_counts_{i}"] = metrics.counts[i]
- logging_output[f"_bleu_totals_{i}"] = metrics.totals[i]
- return loss, sample_size, logging_output
-
- def build_model(self, model_cfg: FairseqDataclass):
- model = super().build_model(model_cfg)
-
- if self.cfg.eval_wer and self.cfg.autoregressive:
- self.sequence_generator = self.build_generator(
- [model],
- self.cfg.eval_wer_config,
- )
- if self.cfg.eval_wer_tokenizer:
- self.tokenizer = encoders.build_tokenizer(self.cfg.eval_wer_tokenizer)
- else:
- self.tokenizer = None
- if self.cfg.eval_bleu and self.cfg.autoregressive:
- assert self.cfg.eval_bleu_detok is not None, (
- '--eval-bleu-detok is required if using --eval-bleu; '
- 'try --eval-bleu-detok=moses (or --eval-bleu-detok=space '
- 'to disable detokenization, e.g., when using sentencepiece)'
- )
- detok_args = json.loads(self.cfg.eval_bleu_detok_args)
- self.tokenizer = encoders.build_tokenizer(
- Namespace(tokenizer=self.cfg.eval_bleu_detok, **detok_args)
- )
- gen_args = json.loads(self.cfg.eval_bleu_args)
- gen_args = Namespace(**gen_args)
- self.sequence_generator = self.build_generator([model], gen_args)
-
- return model
-
- def _inference_with_wer(self, generator, sample, model):
- import editdistance
-
- def decode(toks):
- s = self.target_dictionary.string(
- toks.int().cpu(),
- self.cfg.eval_wer_post_process,
- escape_unk=True,
- )
- if self.tokenizer:
- s = self.tokenizer.decode(s)
- return s
-
- num_word_errors, num_char_errors = 0, 0
- num_chars, num_words = 0, 0
- gen_out = self.inference_step(generator, [model], sample, None)
- for i in range(len(gen_out)):
- hyp = decode(gen_out[i][0]["tokens"])
- ref = decode(
- utils.strip_pad(sample["target"][i], self.target_dictionary.pad()),
- )
- num_char_errors += editdistance.eval(hyp, ref)
- num_chars += len(ref)
- hyp_words = hyp.split()
- ref_words = ref.split()
- num_word_errors += editdistance.eval(hyp_words, ref_words)
- num_words += len(ref_words)
-
- return {
- "num_char_errors": num_char_errors,
- "num_chars": num_chars,
- "num_word_errors": num_word_errors,
- "num_words": num_words,
- }
-
- def _inference_with_bleu(self, generator, sample, model):
- import sacrebleu
-
- def decode(toks, is_ref):
- s = self.target_dictionary.string(
- toks.int().cpu(),
- self.cfg.eval_bleu_remove_bpe,
-                # The default unknown string in fairseq is `<unk>`, but
- # this is tokenized by sacrebleu as `< unk >`, inflating
- # BLEU scores. Instead, we use a somewhat more verbose
- # alternative that is unlikely to appear in the real
- # reference, but doesn't get split into multiple tokens.
- unk_string=(
- "UNKNOWNTOKENINREF" if is_ref else "UNKNOWNTOKENINHYP"
- ),
- )
- if self.tokenizer:
- s = self.tokenizer.decode(s)
- return s
-
- gen_out = self.inference_step(generator, [model], sample)
- hyps, refs = [], []
- for i in range(len(gen_out)):
- hyps.append(decode(gen_out[i][0]['tokens'], is_ref=False))
- refs.append(
- decode(
- utils.strip_pad(
- sample['target'][i],
- self.target_dictionary.pad()
- ),
- is_ref=True, # don't count as matches to the hypo
- )
- )
- if self.cfg.eval_bleu_print_samples:
- logger.info('H-{} {}'.format(sample["id"][0], hyps[0]))
- logger.info('T-{} {}'.format(sample["id"][0], refs[0]))
-
- eval_tokenization = 'none' if self.cfg.eval_tokenized_bleu else '13a'
- return sacrebleu.corpus_bleu(hyps, [refs], tokenize=eval_tokenization)
-
- def reduce_metrics(self, logging_outputs, criterion):
- super().reduce_metrics(logging_outputs, criterion)
-
- if self.cfg.eval_wer:
- zero = torch.scalar_tensor(0.0)
- num_char_errors = sum(
- log.get("_num_char_errors", zero) for log in logging_outputs
- )
- num_chars = sum(log.get("_num_chars", zero) for log in logging_outputs)
- num_word_errors = sum(
- log.get("_num_word_errors", zero) for log in logging_outputs
- )
- num_words = sum(log.get("_num_words", zero) for log in logging_outputs)
- metrics.log_scalar("_num_char_errors", num_char_errors)
- metrics.log_scalar("_num_chars", num_chars)
- metrics.log_scalar("_num_word_errors", num_word_errors)
- metrics.log_scalar("_num_words", num_words)
- if num_chars > 0:
- metrics.log_derived(
- "uer",
- lambda meters: meters["_num_char_errors"].sum
- * 100.0
- / meters["_num_chars"].sum
- if meters["_num_chars"].sum > 0
- else float("nan"),
- )
- if num_words > 0:
- metrics.log_derived(
- "wer",
- lambda meters: meters["_num_word_errors"].sum
- * 100.0
- / meters["_num_words"].sum
- if meters["_num_words"].sum > 0
- else float("nan"),
- )
- if self.cfg.eval_bleu:
- len_keys = ["_bleu_sys_len", "_bleu_ref_len"]
- count_keys = [f"_bleu_counts_{i}" for i in range(4)]
- total_keys = [f"_bleu_totals_{i}" for i in range(4)]
- for k in len_keys + count_keys + total_keys:
- metrics.log_scalar(
- k, sum(log.get(k, 0) for log in logging_outputs)
- )
-
- import sacrebleu
- metrics.log_derived(
- 'bleu',
- lambda meters: sacrebleu.compute_bleu(
- correct=[meters[k].sum for k in count_keys],
- total=[meters[k].sum for k in total_keys],
- sys_len=meters['_bleu_sys_len'].sum,
- ref_len=meters['_bleu_ref_len'].sum,
- smooth_method="exp"
- ).score
- )
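
For reference, the eval_bleu_args and eval_bleu_detok_args fields above are plain JSON strings; build_model parses them with json.loads and wraps them in a Namespace before building the sequence generator, so a config like the one sketched below is all that is needed to change beam-search behaviour during validation.

import json
from argparse import Namespace

eval_bleu_args = '{"beam": 4, "lenpen": 0.6}'     # same convention as the help text above
gen_args = Namespace(**json.loads(eval_bleu_args))
print(gen_args.beam, gen_args.lenpen)             # 4 0.6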
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/scripts/binarize_manifest.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/scripts/binarize_manifest.sh
deleted file mode 100644
index 6f201bdb524fad51a69d8c45889eaa1578efc62d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/scripts/binarize_manifest.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/usr/bin/env bash
-
-# usage: bash binarize_manifest.sh <dest_dir> <train_split_prefix> <valid_split_prefix> <fairseq_root>
-
-DEST_DIR=$1
-TRAIN_SPLIT=$2
-VALID_SPLIT=$3
-FAIRSEQ_ROOT=$4
-
-mkdir -p $DEST_DIR
-
-# split file path and lengths into separate files
-cut -f1 $TRAIN_SPLIT.tsv > $DEST_DIR/train_fnames.txt
-cut -f1 $VALID_SPLIT.tsv > $DEST_DIR/valid_fnames.txt
-cut -f2 $TRAIN_SPLIT.tsv > $DEST_DIR/train.lengths
-cut -f2 $VALID_SPLIT.tsv > $DEST_DIR/valid.lengths
-
-# copy root directory
-head -1 $TRAIN_SPLIT.tsv > $DEST_DIR/train.root
-head -1 $VALID_SPLIT.tsv > $DEST_DIR/valid.root
-
-# remove root directory
-sed -i '1d' $DEST_DIR/train_fnames.txt
-sed -i '1d' $DEST_DIR/valid_fnames.txt
-sed -i '1d' $DEST_DIR/train.lengths
-sed -i '1d' $DEST_DIR/valid.lengths
-
-# insert spaces between characters
-sed -i -e 's/\(.\)/\1 /g' $DEST_DIR/train_fnames.txt
-sed -i -e 's/\(.\)/\1 /g' $DEST_DIR/valid_fnames.txt
-
-# run preprocessor
-PYTHONPATH=$FAIRSEQ_ROOT python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $DEST_DIR/train_fnames.txt --validpref $DEST_DIR/valid_fnames.txt --workers 60 --only-source --destdir $DEST_DIR
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/legacy/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/legacy/__init__.py
deleted file mode 100644
index 9bd5c72b5e9d7f67fb7e4ef10808d7ec08967ff4..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/legacy/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .block_pair_dataset import BlockPairDataset
-from .masked_lm_dataset import MaskedLMDataset
-from .masked_lm_dictionary import BertDictionary, MaskedLMDictionary
-
-
-__all__ = [
- "BertDictionary",
- "BlockPairDataset",
- "MaskedLMDataset",
- "MaskedLMDictionary",
-]
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/speech_to_text/berard.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/speech_to_text/berard.py
deleted file mode 100644
index c505e3acaa84e5f3263ccbfaf9556f77123f09fc..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/speech_to_text/berard.py
+++ /dev/null
@@ -1,606 +0,0 @@
-#!/usr/bin/env python3
-
-from ast import literal_eval
-from typing import List, Tuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import checkpoint_utils, utils
-from fairseq.data.data_utils import lengths_to_padding_mask
-from fairseq.models import (
- FairseqEncoder,
- FairseqEncoderDecoderModel,
- FairseqIncrementalDecoder,
- register_model,
- register_model_architecture,
-)
-
-
-@register_model("s2t_berard")
-class BerardModel(FairseqEncoderDecoderModel):
- """Implementation of a model similar to https://arxiv.org/abs/1802.04200
-
- Paper title: End-to-End Automatic Speech Translation of Audiobooks
- An implementation is available in tensorflow at
- https://github.com/eske/seq2seq
- Relevant files in this implementation are the config
- (https://github.com/eske/seq2seq/blob/master/config/LibriSpeech/AST.yaml)
- and the model code
- (https://github.com/eske/seq2seq/blob/master/translate/models.py).
- The encoder and decoder try to be close to the original implementation.
- The attention is an MLP as in Bahdanau et al.
- (https://arxiv.org/abs/1409.0473).
- There is no state initialization by averaging the encoder outputs.
- """
-
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- @staticmethod
- def add_args(parser):
- parser.add_argument(
- "--input-layers",
- type=str,
- metavar="EXPR",
- help="List of linear layer dimensions. These "
- "layers are applied to the input features and "
- "are followed by tanh and possibly dropout.",
- )
- parser.add_argument(
- "--dropout",
- type=float,
- metavar="D",
- help="Dropout probability to use in the encoder/decoder. "
- "Note that this parameters control dropout in various places, "
- "there is no fine-grained control for dropout for embeddings "
- "vs LSTM layers for example.",
- )
- parser.add_argument(
- "--in-channels",
- type=int,
- metavar="N",
- help="Number of encoder input channels. " "Typically value is 1.",
- )
- parser.add_argument(
- "--conv-layers",
- type=str,
- metavar="EXPR",
- help="List of conv layers " "(format: (channels, kernel, stride)).",
- )
- parser.add_argument(
- "--num-blstm-layers",
- type=int,
- metavar="N",
- help="Number of encoder bi-LSTM layers.",
- )
- parser.add_argument(
- "--lstm-size", type=int, metavar="N", help="LSTM hidden size."
- )
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="Embedding dimension of the decoder target tokens.",
- )
- parser.add_argument(
- "--decoder-hidden-dim",
- type=int,
- metavar="N",
- help="Decoder LSTM hidden dimension.",
- )
- parser.add_argument(
- "--decoder-num-layers",
- type=int,
- metavar="N",
- help="Number of decoder LSTM layers.",
- )
- parser.add_argument(
- "--attention-dim",
- type=int,
- metavar="N",
- help="Hidden layer dimension in MLP attention.",
- )
- parser.add_argument(
- "--output-layer-dim",
- type=int,
- metavar="N",
- help="Hidden layer dim for linear layer prior to output projection.",
- )
- parser.add_argument(
- "--load-pretrained-encoder-from",
- type=str,
- metavar="STR",
- help="model to take encoder weights from (for initialization)",
- )
- parser.add_argument(
- "--load-pretrained-decoder-from",
- type=str,
- metavar="STR",
- help="model to take decoder weights from (for initialization)",
- )
-
- @classmethod
- def build_encoder(cls, args, task):
- encoder = BerardEncoder(
- input_layers=literal_eval(args.input_layers),
- conv_layers=literal_eval(args.conv_layers),
- in_channels=args.input_channels,
- input_feat_per_channel=args.input_feat_per_channel,
- num_blstm_layers=args.num_blstm_layers,
- lstm_size=args.lstm_size,
- dropout=args.dropout,
- )
- if getattr(args, "load_pretrained_encoder_from", None):
- encoder = checkpoint_utils.load_pretrained_component_from_model(
- component=encoder, checkpoint=args.load_pretrained_encoder_from
- )
- return encoder
-
- @classmethod
- def build_decoder(cls, args, task):
- decoder = LSTMDecoder(
- dictionary=task.target_dictionary,
- embed_dim=args.decoder_embed_dim,
- num_layers=args.decoder_num_layers,
- hidden_size=args.decoder_hidden_dim,
- dropout=args.dropout,
- encoder_output_dim=2 * args.lstm_size, # bidirectional
- attention_dim=args.attention_dim,
- output_layer_dim=args.output_layer_dim,
- )
- if getattr(args, "load_pretrained_decoder_from", None):
- decoder = checkpoint_utils.load_pretrained_component_from_model(
- component=decoder, checkpoint=args.load_pretrained_decoder_from
- )
- return decoder
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
- encoder = cls.build_encoder(args, task)
- decoder = cls.build_decoder(args, task)
-
- return cls(encoder, decoder)
-
- def get_normalized_probs(self, net_output, log_probs, sample=None):
- # net_output['encoder_out'] is a (B, T, D) tensor
- lprobs = super().get_normalized_probs(net_output, log_probs, sample)
- # lprobs is a (B, T, D) tensor
- lprobs.batch_first = True
- return lprobs
-
-
-class BerardEncoder(FairseqEncoder):
- def __init__(
- self,
- input_layers: List[int],
- conv_layers: List[Tuple[int]],
- in_channels: int,
- input_feat_per_channel: int,
- num_blstm_layers: int,
- lstm_size: int,
- dropout: float,
- ):
- """
- Args:
- input_layers: list of linear layer dimensions. These layers are
- applied to the input features and are followed by tanh and
- possibly dropout.
- conv_layers: list of conv2d layer configurations. A configuration is
- a tuple (out_channels, conv_kernel_size, stride).
- in_channels: number of input channels.
- input_feat_per_channel: number of input features per channel. These
- are speech features, typically 40 or 80.
- num_blstm_layers: number of bidirectional LSTM layers.
- lstm_size: size of the LSTM hidden (and cell) size.
- dropout: dropout probability. Dropout can be applied after the
- linear layers and LSTM layers but not to the convolutional
- layers.
- """
- super().__init__(None)
-
- self.input_layers = nn.ModuleList()
- in_features = input_feat_per_channel
- for out_features in input_layers:
- if dropout > 0:
- self.input_layers.append(
- nn.Sequential(
- nn.Linear(in_features, out_features), nn.Dropout(p=dropout)
- )
- )
- else:
- self.input_layers.append(nn.Linear(in_features, out_features))
- in_features = out_features
-
- self.in_channels = in_channels
- self.input_dim = input_feat_per_channel
- self.conv_kernel_sizes_and_strides = []
- self.conv_layers = nn.ModuleList()
- lstm_input_dim = input_layers[-1]
- for conv_layer in conv_layers:
- out_channels, conv_kernel_size, conv_stride = conv_layer
- self.conv_layers.append(
- nn.Conv2d(
- in_channels,
- out_channels,
- conv_kernel_size,
- stride=conv_stride,
- padding=conv_kernel_size // 2,
- )
- )
- self.conv_kernel_sizes_and_strides.append((conv_kernel_size, conv_stride))
- in_channels = out_channels
- lstm_input_dim //= conv_stride
-
- lstm_input_dim *= conv_layers[-1][0]
- self.lstm_size = lstm_size
- self.num_blstm_layers = num_blstm_layers
- self.lstm = nn.LSTM(
- input_size=lstm_input_dim,
- hidden_size=lstm_size,
- num_layers=num_blstm_layers,
- dropout=dropout,
- bidirectional=True,
- )
- self.output_dim = 2 * lstm_size # bidirectional
- if dropout > 0:
- self.dropout = nn.Dropout(p=dropout)
- else:
- self.dropout = None
-
- def forward(self, src_tokens, src_lengths=None, **kwargs):
- """
- Args
- src_tokens: padded tensor (B, T, C * feat)
- src_lengths: tensor of original lengths of input utterances (B,)
- """
- bsz, max_seq_len, _ = src_tokens.size()
- # (B, C, T, feat)
- x = (
- src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim)
- .transpose(1, 2)
- .contiguous()
- )
-
- for input_layer in self.input_layers:
- x = input_layer(x)
- x = torch.tanh(x)
-
- for conv_layer in self.conv_layers:
- x = conv_layer(x)
-
- bsz, _, output_seq_len, _ = x.size()
-
- # (B, C, T, feat) -> (B, T, C, feat) -> (T, B, C, feat) ->
- # (T, B, C * feat)
- x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1)
-
- input_lengths = src_lengths.clone()
- for k, s in self.conv_kernel_sizes_and_strides:
- p = k // 2
- input_lengths = (input_lengths.float() + 2 * p - k) / s + 1
- input_lengths = input_lengths.floor().long()
-
- packed_x = nn.utils.rnn.pack_padded_sequence(x, input_lengths)
-
- h0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_()
- c0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_()
- packed_outs, _ = self.lstm(packed_x, (h0, c0))
-
- # unpack outputs and apply dropout
- x, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_outs)
- if self.dropout is not None:
- x = self.dropout(x)
-
- encoder_padding_mask = (
- lengths_to_padding_mask(output_lengths).to(src_tokens.device).t()
- )
-
- return {
- "encoder_out": x, # (T, B, C)
- "encoder_padding_mask": encoder_padding_mask, # (T, B)
- }
-
- def reorder_encoder_out(self, encoder_out, new_order):
- encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select(
- 1, new_order
- )
- encoder_out["encoder_padding_mask"] = encoder_out[
- "encoder_padding_mask"
- ].index_select(1, new_order)
- return encoder_out
-
-
-class MLPAttention(nn.Module):
- """The original attention from Badhanau et al. (2014)
-
- https://arxiv.org/abs/1409.0473, based on a Multi-Layer Perceptron.
- The attention score between position i in the encoder and position j in the
- decoder is: alpha_ij = V_a * tanh(W_ae * enc_i + W_ad * dec_j + b_a)
- """
-
- def __init__(self, decoder_hidden_state_dim, context_dim, attention_dim):
- super().__init__()
-
- self.context_dim = context_dim
- self.attention_dim = attention_dim
- # W_ae and b_a
- self.encoder_proj = nn.Linear(context_dim, self.attention_dim, bias=True)
- # W_ad
- self.decoder_proj = nn.Linear(
- decoder_hidden_state_dim, self.attention_dim, bias=False
- )
- # V_a
- self.to_scores = nn.Linear(self.attention_dim, 1, bias=False)
-
- def forward(self, decoder_state, source_hids, encoder_padding_mask):
- """The expected input dimensions are:
- decoder_state: bsz x decoder_hidden_state_dim
- source_hids: src_len x bsz x context_dim
- encoder_padding_mask: src_len x bsz
- """
- src_len, bsz, _ = source_hids.size()
- # (src_len*bsz) x context_dim (to feed through linear)
- flat_source_hids = source_hids.view(-1, self.context_dim)
- # (src_len*bsz) x attention_dim
- encoder_component = self.encoder_proj(flat_source_hids)
- # src_len x bsz x attention_dim
- encoder_component = encoder_component.view(src_len, bsz, self.attention_dim)
- # 1 x bsz x attention_dim
- decoder_component = self.decoder_proj(decoder_state).unsqueeze(0)
- # Sum with broadcasting and apply the non linearity
- # src_len x bsz x attention_dim
- hidden_att = torch.tanh(
- (decoder_component + encoder_component).view(-1, self.attention_dim)
- )
-        # Project onto the reals to get attention scores (src_len x bsz)
- attn_scores = self.to_scores(hidden_att).view(src_len, bsz)
-
- # Mask + softmax (src_len x bsz)
- if encoder_padding_mask is not None:
- attn_scores = (
- attn_scores.float()
- .masked_fill_(encoder_padding_mask, float("-inf"))
- .type_as(attn_scores)
- ) # FP16 support: cast to float and back
- # srclen x bsz
- normalized_masked_attn_scores = F.softmax(attn_scores, dim=0)
-
- # Sum weighted sources (bsz x context_dim)
- attn_weighted_context = (
- source_hids * normalized_masked_attn_scores.unsqueeze(2)
- ).sum(dim=0)
-
- return attn_weighted_context, normalized_masked_attn_scores
-
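# Shape sketch for the MLP (Bahdanau-style) attention above, written standalone so it
# does not need fairseq: scores are produced per (source position, batch) pair, masked
# positions would be set to -inf, and the softmax over source positions turns the
# encoder states into a weighted context vector. Sizes below are made up for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

src_len, bsz, ctx_dim, dec_dim, att_dim = 5, 2, 6, 4, 3
encoder_proj = nn.Linear(ctx_dim, att_dim, bias=True)    # W_ae, b_a
decoder_proj = nn.Linear(dec_dim, att_dim, bias=False)   # W_ad
to_scores = nn.Linear(att_dim, 1, bias=False)            # V_a

source_hids = torch.randn(src_len, bsz, ctx_dim)          # encoder outputs
decoder_state = torch.randn(bsz, dec_dim)                 # current decoder hidden state

hidden_att = torch.tanh(encoder_proj(source_hids) + decoder_proj(decoder_state).unsqueeze(0))
attn_scores = F.softmax(to_scores(hidden_att).squeeze(-1), dim=0)      # (src_len, bsz)
context = (source_hids * attn_scores.unsqueeze(2)).sum(dim=0)          # (bsz, ctx_dim)
assert context.shape == (bsz, ctx_dim)
assert torch.allclose(attn_scores.sum(dim=0), torch.ones(bsz), atol=1e-6)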
-
-class LSTMDecoder(FairseqIncrementalDecoder):
- def __init__(
- self,
- dictionary,
- embed_dim,
- num_layers,
- hidden_size,
- dropout,
- encoder_output_dim,
- attention_dim,
- output_layer_dim,
- ):
- """
- Args:
- dictionary: target text dictionary.
- embed_dim: embedding dimension for target tokens.
- num_layers: number of LSTM layers.
- hidden_size: hidden size for LSTM layers.
- dropout: dropout probability. Dropout can be applied to the
- embeddings, the LSTM layers, and the context vector.
- encoder_output_dim: encoder output dimension (hidden size of
- encoder LSTM).
- attention_dim: attention dimension for MLP attention.
- output_layer_dim: size of the linear layer prior to output
- projection.
- """
- super().__init__(dictionary)
- self.num_layers = num_layers
- self.hidden_size = hidden_size
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- self.embed_tokens = nn.Embedding(num_embeddings, embed_dim, padding_idx)
- if dropout > 0:
- self.dropout = nn.Dropout(p=dropout)
- else:
- self.dropout = None
-
- self.layers = nn.ModuleList()
- for layer_id in range(num_layers):
- input_size = embed_dim if layer_id == 0 else encoder_output_dim
- self.layers.append(
- nn.LSTMCell(input_size=input_size, hidden_size=hidden_size)
- )
-
- self.context_dim = encoder_output_dim
- self.attention = MLPAttention(
- decoder_hidden_state_dim=hidden_size,
- context_dim=encoder_output_dim,
- attention_dim=attention_dim,
- )
-
- self.deep_output_layer = nn.Linear(
- hidden_size + encoder_output_dim + embed_dim, output_layer_dim
- )
- self.output_projection = nn.Linear(output_layer_dim, num_embeddings)
-
- def forward(
- self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs
- ):
- encoder_padding_mask = encoder_out["encoder_padding_mask"]
- encoder_outs = encoder_out["encoder_out"]
-
- if incremental_state is not None:
- prev_output_tokens = prev_output_tokens[:, -1:]
- bsz, seqlen = prev_output_tokens.size()
-
- srclen = encoder_outs.size(0)
-
- # embed tokens
- embeddings = self.embed_tokens(prev_output_tokens)
- x = embeddings
- if self.dropout is not None:
- x = self.dropout(x)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- # initialize previous states (or get from cache during incremental
- # generation)
- cached_state = utils.get_incremental_state(
- self, incremental_state, "cached_state"
- )
- if cached_state is not None:
- prev_hiddens, prev_cells = cached_state
- else:
- prev_hiddens = [encoder_out["encoder_out"].mean(dim=0)] * self.num_layers
- prev_cells = [x.new_zeros(bsz, self.hidden_size)] * self.num_layers
-
- attn_scores = x.new_zeros(bsz, srclen)
- attention_outs = []
- outs = []
- for j in range(seqlen):
- input = x[j, :, :]
- attention_out = None
- for i, layer in enumerate(self.layers):
- # the previous state is one layer below except for the bottom
- # layer where the previous state is the state emitted by the
- # top layer
- hidden, cell = layer(
- input,
- (
- prev_hiddens[(i - 1) % self.num_layers],
- prev_cells[(i - 1) % self.num_layers],
- ),
- )
- if self.dropout is not None:
- hidden = self.dropout(hidden)
- prev_hiddens[i] = hidden
- prev_cells[i] = cell
- if attention_out is None:
- attention_out, attn_scores = self.attention(
- hidden, encoder_outs, encoder_padding_mask
- )
- if self.dropout is not None:
- attention_out = self.dropout(attention_out)
- attention_outs.append(attention_out)
- input = attention_out
-
- # collect the output of the top layer
- outs.append(hidden)
-
- # cache previous states (no-op except during incremental generation)
- utils.set_incremental_state(
- self, incremental_state, "cached_state", (prev_hiddens, prev_cells)
- )
-
- # collect outputs across time steps
- x = torch.cat(outs, dim=0).view(seqlen, bsz, self.hidden_size)
- attention_outs_concat = torch.cat(attention_outs, dim=0).view(
- seqlen, bsz, self.context_dim
- )
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
- attention_outs_concat = attention_outs_concat.transpose(0, 1)
-
- # concat LSTM output, attention output and embedding
- # before output projection
- x = torch.cat((x, attention_outs_concat, embeddings), dim=2)
- x = self.deep_output_layer(x)
- x = torch.tanh(x)
- if self.dropout is not None:
- x = self.dropout(x)
- # project back to size of vocabulary
- x = self.output_projection(x)
-
- # to return the full attn_scores tensor, we need to fix the decoder
- # to account for subsampling input frames
- # return x, attn_scores
- return x, None
-
- def reorder_incremental_state(self, incremental_state, new_order):
- super().reorder_incremental_state(incremental_state, new_order)
- cached_state = utils.get_incremental_state(
- self, incremental_state, "cached_state"
- )
- if cached_state is None:
- return
-
- def reorder_state(state):
- if isinstance(state, list):
- return [reorder_state(state_i) for state_i in state]
- return state.index_select(0, new_order)
-
- new_state = tuple(map(reorder_state, cached_state))
- utils.set_incremental_state(self, incremental_state, "cached_state", new_state)
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard")
-def berard(args):
- """The original version: "End-to-End Automatic Speech Translation of
- Audiobooks" (https://arxiv.org/abs/1802.04200)
- """
- args.input_layers = getattr(args, "input_layers", "[256, 128]")
- args.conv_layers = getattr(args, "conv_layers", "[(16, 3, 2), (16, 3, 2)]")
- args.num_blstm_layers = getattr(args, "num_blstm_layers", 3)
- args.lstm_size = getattr(args, "lstm_size", 256)
- args.dropout = getattr(args, "dropout", 0.2)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 128)
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 2)
- args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 512)
- args.attention_dim = getattr(args, "attention_dim", 512)
- args.output_layer_dim = getattr(args, "output_layer_dim", 128)
- args.load_pretrained_encoder_from = getattr(
- args, "load_pretrained_encoder_from", None
- )
- args.load_pretrained_decoder_from = getattr(
- args, "load_pretrained_decoder_from", None
- )
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_256_3_3")
-def berard_256_3_3(args):
- """Used in
- * "Harnessing Indirect Training Data for End-to-End Automatic Speech
- Translation: Tricks of the Trade" (https://arxiv.org/abs/1909.06515)
- * "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"
- (https://arxiv.org/pdf/2002.01320.pdf)
- * "Self-Supervised Representations Improve End-to-End Speech Translation"
- (https://arxiv.org/abs/2006.12124)
- """
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 3)
- berard(args)
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_3_2")
-def berard_512_3_2(args):
- args.num_blstm_layers = getattr(args, "num_blstm_layers", 3)
- args.lstm_size = getattr(args, "lstm_size", 512)
- args.dropout = getattr(args, "dropout", 0.3)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 2)
- args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024)
- args.attention_dim = getattr(args, "attention_dim", 512)
- args.output_layer_dim = getattr(args, "output_layer_dim", 256)
- berard(args)
-
-
-@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_5_3")
-def berard_512_5_3(args):
- args.num_blstm_layers = getattr(args, "num_blstm_layers", 5)
- args.lstm_size = getattr(args, "lstm_size", 512)
- args.dropout = getattr(args, "dropout", 0.3)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
- args.decoder_num_layers = getattr(args, "decoder_num_layers", 3)
- args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024)
- args.attention_dim = getattr(args, "attention_dim", 512)
- args.output_layer_dim = getattr(args, "output_layer_dim", 256)
- berard(args)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py
deleted file mode 100644
index 50683e6d7c8c0db5b8f019e5f7f5fb8c6dfd9f66..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py
+++ /dev/null
@@ -1,585 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import copy
-
-import torch.nn as nn
-from fairseq import checkpoint_utils
-from fairseq import utils
-from fairseq.data.data_utils import lengths_to_padding_mask
-from fairseq.models import (
- register_model,
- register_model_architecture,
- FairseqEncoder,
-)
-from fairseq.models.speech_to_text import XMTransformerModel, Wav2VecEncoderWithAdaptor
-from fairseq.models.speech_to_text.xm_transformer import (
- set_default_adaptor_args,
- set_default_w2v_encoder_args,
-)
-from fairseq.models.transformer import TransformerEncoder, TransformerDecoder
-from fairseq.models.wav2vec import TransformerSentenceEncoderLayer
-from fairseq.utils import safe_hasattr
-
-from .s2t_dualinputtransformer import (
- DualInputS2TTransformerModel,
- TransformerMultiInputDecoder,
- DualInputEncoder,
-)
-
-
-class TransformerSentenceEncoderLayerStd(TransformerSentenceEncoderLayer):
- def __init__(self, sent_enc_layer):
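-        # Deliberately skip TransformerSentenceEncoderLayer.__init__ and call the
-        # base nn.Module constructor instead: this wrapper re-uses the sub-modules
-        # of an already-constructed layer rather than building new ones.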
- super(TransformerSentenceEncoderLayer, self).__init__()
- self.embedding_dim = sent_enc_layer.embedding_dim
- self.dropout = sent_enc_layer.dropout
- self.activation_dropout = sent_enc_layer.activation_dropout
-
- # Initialize blocks
- self.activation_fn = sent_enc_layer.activation_fn
- self.self_attn = sent_enc_layer.self_attn
-
- self.dropout1 = sent_enc_layer.dropout1
- self.dropout2 = sent_enc_layer.dropout2
- self.dropout3 = sent_enc_layer.dropout3
-
- self.layer_norm_first = sent_enc_layer.layer_norm_first
-
- # layer norm associated with the self attention layer
- self.self_attn_layer_norm = sent_enc_layer.self_attn_layer_norm
- self.fc1 = sent_enc_layer.fc1
- self.fc2 = sent_enc_layer.fc2
-
- # layer norm associated with the position wise feed-forward NN
- self.final_layer_norm = sent_enc_layer.final_layer_norm
-
- def forward(
- self,
- x,
- self_attn_mask=None,
- self_attn_padding_mask=None,
- need_weights=None,
- att_args=None,
- ):
- x, attn = super().forward(
- x, self_attn_mask, self_attn_padding_mask, need_weights, att_args
- )
- return x
-
-
-# TODO retire SharedEncoder
-class SharedEncoder(FairseqEncoder):
- def __init__(self, wav2vec_enc, mbart_enc, adaptor, shared_layers):
- super().__init__(None)
- self.w2v_encoder = wav2vec_enc
- self.shared_layers = self.w2v_encoder.w2v_model.encoder.layers[-shared_layers:]
- self.w2v_encoder.w2v_model.encoder.layers = (
- self.w2v_encoder.w2v_model.encoder.layers[:-shared_layers]
- )
- self.adaptor = adaptor
- if self.shared_layers[-1].layer_norm_first:
- self.final_layer_norm = mbart_enc.layer_norm
- else:
- mbart_enc.layer_norm = None
- self.final_layer_norm = None
- shared_layer_from = len(mbart_enc.layers) - shared_layers
- if shared_layer_from < 0:
- shared_layer_from = 0
- for layer_id, layer in enumerate(self.shared_layers):
- mbart_enc.layers[
- shared_layer_from + layer_id
- ] = TransformerSentenceEncoderLayerStd(layer)
-
- def forward(self, src_tokens, src_lengths=None, **kwargs):
- padding_mask = lengths_to_padding_mask(src_lengths)
- if not padding_mask.any():
- padding_mask = None
-
- out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True)
- x = out["encoder_out"]
- enc_padding_mask = None
- if out["encoder_padding_mask"] is not None:
- enc_padding_mask = out["encoder_padding_mask"].transpose(
- 0, 1
- ) # T X B --> B X T
-
- x, enc_padding_mask = self.adaptor(x, enc_padding_mask)
- for layer in self.shared_layers:
- x, _ = layer(x, enc_padding_mask)
- if self.final_layer_norm is not None:
- x = self.final_layer_norm(x)
-
- return {
- "encoder_out": [x], # T x B x C
- "encoder_padding_mask": [enc_padding_mask]
- if enc_padding_mask is not None
- else [], # B x T
- "encoder_embedding": [], # B x T x C
- "encoder_states": [], # List[T x B x C]
- "src_tokens": [],
- "src_lengths": [],
- }
-
-
-class StackedWav2VecEncoderWithAdaptor(FairseqEncoder):
- def __init__(
- self,
- wav2vec_enc,
- mbart_enc_layers,
- mbart_layer_norm,
- adaptor,
- drop_w2v_layers=0,
- ):
- super().__init__(None)
- self.w2v_encoder = wav2vec_enc
- self.adaptor = adaptor
- self.mbart_encoder_layers = mbart_enc_layers
- self.final_layer_norm = mbart_layer_norm
- if drop_w2v_layers > 0:
- self.w2v_encoder.w2v_model.encoder.layers = (
- self.w2v_encoder.w2v_model.encoder.layers[:-drop_w2v_layers]
- )
-
- def forward(self, src_tokens, src_lengths=None, return_all_hiddens=False, **kwargs):
- padding_mask = lengths_to_padding_mask(src_lengths)
- if not padding_mask.any():
- padding_mask = None
-
- out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True)
- x = out["encoder_out"]
- enc_padding_mask = None
- if out["encoder_padding_mask"] is not None:
- enc_padding_mask = out["encoder_padding_mask"].transpose(
- 0, 1
- ) # T X B --> B X T
-
- x, enc_padding_mask = self.adaptor(x, enc_padding_mask)
- encoder_states = []
- for layer in self.mbart_encoder_layers:
- x = layer(x, enc_padding_mask)
- if return_all_hiddens:
- encoder_states.append(x)
- if self.final_layer_norm is not None:
- x = self.final_layer_norm(x)
-
- return {
- "encoder_out": [x], # T x B x C
- "encoder_padding_mask": [enc_padding_mask]
- if enc_padding_mask is not None
- else [], # B x T
- "encoder_embedding": [], # B x T x C
- "encoder_states": encoder_states, # List[T x B x C]
- "src_tokens": [],
- "src_lengths": [],
- }
-
- def reorder_encoder_out(self, encoder_out, new_order):
- new_encoder_out = (
- []
- if len(encoder_out["encoder_out"]) == 0
- else [x.index_select(1, new_order) for x in encoder_out["encoder_out"]]
- )
-
- new_encoder_padding_mask = (
- []
- if len(encoder_out["encoder_padding_mask"]) == 0
- else [
- x.index_select(0, new_order)
- for x in encoder_out["encoder_padding_mask"]
- ]
- )
-
- new_encoder_embedding = (
- []
- if len(encoder_out["encoder_embedding"]) == 0
- else [
- x.index_select(0, new_order) for x in encoder_out["encoder_embedding"]
- ]
- )
-
- encoder_states = encoder_out["encoder_states"]
- if len(encoder_states) > 0:
- for idx, state in enumerate(encoder_states):
- encoder_states[idx] = state.index_select(1, new_order)
-
- return {
- "encoder_out": new_encoder_out, # T x B x C
- "encoder_padding_mask": new_encoder_padding_mask, # B x T
- "encoder_embedding": new_encoder_embedding, # B x T x C
- "encoder_states": encoder_states, # List[T x B x C]
- "src_tokens": [], # B x T
- "src_lengths": [], # B x 1
- }
-
-
-# Note:
-# dual input transformer:
-# encoder: wav2vec for speech + mbart encoder for text
-# decoder: mbart decoder for text
-@register_model("dual_input_xm_transformer")
-class DualInputXMTransformerModel(DualInputS2TTransformerModel):
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- # wav2vec encoder
- Wav2VecEncoderWithAdaptor.add_args(parser)
- # add_decoder_args(parser)
- # mbart Transformer
- parser.add_argument(
- "--activation-fn",
- type=str,
- default="relu",
- choices=utils.get_available_activation_fns(),
- help="activation function to use",
- )
-
- parser.add_argument(
- "--mbart-dropout", type=float, metavar="D", help="dropout probability"
- )
- parser.add_argument(
- "--mbart-attention-dropout",
- type=float,
- metavar="D",
- help="dropout probability for attention weights",
- )
- parser.add_argument(
- "--mbart-activation-dropout",
- type=float,
- metavar="D",
- help="dropout probability after activation in FFN.",
- )
-
- parser.add_argument(
- "--encoder-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--encoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--encoder-layers", type=int, metavar="N", help="num encoder layers"
- )
- parser.add_argument(
- "--encoder-attention-heads",
- type=int,
- metavar="N",
- help="num encoder attention heads",
- )
- parser.add_argument(
- "--encoder-normalize-before",
- action="store_true",
- help="apply layernorm before each encoder block",
- )
-
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension",
- )
- parser.add_argument(
- "--decoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--decoder-layers", type=int, metavar="N", help="num decoder layers"
- )
- parser.add_argument(
- "--decoder-attention-heads",
- type=int,
- metavar="N",
- help="num decoder attention heads",
- )
- parser.add_argument(
- "--decoder-normalize-before",
- action="store_true",
- help="apply layernorm before each decoder block",
- )
- parser.add_argument(
- "--layernorm-embedding",
- action="store_true",
- help="add layernorm to embedding",
- )
- parser.add_argument(
- "--no-scale-embedding",
- action="store_true",
- help="if True, dont scale embeddings",
- )
- parser.add_argument(
- "--load-pretrained-mbart-from",
- type=str,
- metavar="STR",
-            help="model to take text encoder/decoder weights from (for initialization)",
- )
- # parser.add_argument("--finetune-w2v-params", type=str, metavar="STR",
- # help="comma-separated param strings to finetune.")
- parser.add_argument(
- "--finetune-mbart-decoder-params",
- type=str,
- metavar="STR",
- help="comma-separated param strings to finetune.",
- )
- parser.add_argument(
- "--finetune-mbart-encoder-params",
- type=str,
- metavar="STR",
- help="comma-separated param strings to finetune.",
- )
- parser.add_argument(
- "--skip-encoder-projection",
- action="store_true",
- help="skip the projection layer in encoder",
- )
-
- parser.add_argument(
- "--enc-grad-mult",
- type=float,
- metavar="V",
- default=1.0,
- help="multiply enc1 and enc2 gradient by V",
- )
- parser.add_argument(
- "--enc2-along-grad-mult",
- type=float,
- metavar="V",
- default=1.0,
- help="multiply enc2 gradient by V if only enc2 is used",
- )
- parser.add_argument(
- "--text-input-cost-ratio",
- type=float,
- default=1.0,
- metavar="V",
- help="text input cost ratio relative to speech input cost",
- )
- parser.add_argument(
- "--stack-w2v-mbart-encoder",
- action="store_true",
- help="stack w2v and mbart encoder",
- )
- parser.add_argument(
- "--stack-w2v-mbart-nonorm-encoder",
- action="store_true",
-            help="stack w2v and mbart encoder, dropping the final mbart encoder layer norm",
- )
-        parser.add_argument(
-            "--no-final-norm-decoder",
-            action="store_true",
-            help="remove the final layer norm from the mbart decoder",
-        )
- parser.add_argument(
- "--drop-w2v-layers",
- type=int,
- default=0,
- metavar="N",
- help="drop w2v encoder layers",
- )
-
- parser.add_argument(
- "--share-w2v-text-encoder",
- action="store_true",
- help="share w2v encoder layers with text encoder",
- )
- parser.add_argument(
- "--shared-w2v-layers",
- type=int,
- default=0,
- metavar="N",
- help="shared encoder layers from w2v encoder",
- )
-
- @classmethod
- def build_encoder(cls, args, task):
- _args = copy.deepcopy(args)
- _args.dropout = args.mbart_dropout
- _args.attention_dropout = args.mbart_attention_dropout
- _args.activation_dropout = args.mbart_activation_dropout
- _args.max_source_positions = 1024
- enc_emb = nn.Embedding(
- len(task.src_dict), _args.encoder_embed_dim, task.src_dict.pad()
- )
- text_encoder = TransformerEncoder(_args, task.src_dict, enc_emb)
- spch_encoder = Wav2VecEncoderWithAdaptor(args)
- if getattr(args, "load_pretrained_mbart_from", None):
- text_encoder = checkpoint_utils.load_pretrained_component_from_model(
- component=text_encoder, checkpoint=args.load_pretrained_mbart_from
- )
- if getattr(args, "stack_w2v_mbart_encoder", False):
- assert getattr(args, "share_w2v_text_encoder", False) is False
- spch_encoder = StackedWav2VecEncoderWithAdaptor(
- spch_encoder.w2v_encoder,
- text_encoder.layers,
- text_encoder.layer_norm,
- spch_encoder.adaptor,
- args.drop_w2v_layers,
- )
- elif getattr(args, "stack_w2v_mbart_nonorm_encoder", False):
- text_encoder.layer_norm = None
- spch_encoder = StackedWav2VecEncoderWithAdaptor(
- spch_encoder.w2v_encoder,
- text_encoder.layers,
- text_encoder.layer_norm,
- spch_encoder.adaptor,
- args.drop_w2v_layers,
- )
- elif getattr(args, "share_w2v_text_encoder", False):
- spch_encoder = SharedEncoder(
- spch_encoder.w2v_encoder,
- text_encoder,
- spch_encoder.adaptor,
- args.shared_w2v_layers,
- )
-
- for k, p in spch_encoder.named_parameters():
- # Freeze pretrained models by default
- if safe_hasattr(
- args, "finetune_w2v_params"
- ) and XMTransformerModel.finetune_params(args.finetune_w2v_params, k):
- p.requires_grad = True
- else:
- p.requires_grad = False
- for k, p in text_encoder.named_parameters():
- # Freeze pretrained models by default
- if safe_hasattr(
- args, "finetune_mbart_encoder_params"
- ) and XMTransformerModel.finetune_params(
- args.finetune_mbart_encoder_params, k
- ):
- p.requires_grad = True
- else:
- p.requires_grad = False
- cross_attentive_loss_before_last_layer = (
- 0 if getattr(args, "attentive_cost_regularization", 0.0) > 0.0 else -1
- )
- encoder = DualInputEncoder(
- args,
- spch_encoder,
- text_encoder,
- task.src_dict,
- cross_attentive_loss_before_last_layer,
- )
- return encoder
-
- @classmethod
- def build_decoder(cls, args, task):
- _args = copy.deepcopy(args)
- _args.dropout = args.mbart_dropout
- _args.attention_dropout = args.mbart_attention_dropout
- _args.activation_dropout = args.mbart_activation_dropout
- _args.max_target_positions = 1024
- dec_emb = nn.Embedding(
- len(task.tgt_dict), _args.encoder_embed_dim, task.tgt_dict.pad()
- )
- decoder = TransformerDecoder(_args, task.tgt_dict, dec_emb)
- if getattr(args, "load_pretrained_mbart_from", None):
- decoder = checkpoint_utils.load_pretrained_component_from_model(
- component=decoder, checkpoint=args.load_pretrained_mbart_from
- )
- if getattr(args, "no_final_norm_decoder", False):
- decoder.layer_norm = None
- for k, p in decoder.named_parameters():
- # Freeze pretrained models by default
- if safe_hasattr(
- args, "finetune_mbart_decoder_params"
- ) and XMTransformerModel.finetune_params(
- args.finetune_mbart_decoder_params, k
- ):
- p.requires_grad = True
- else:
- p.requires_grad = False
-
-        compute_cross_attentive_loss = (
-            getattr(args, "attentive_cost_regularization", 0.0) > 0.0
-        )
- cross_attentive_loss_without_norm = getattr(
- args, "attentive_cost_without_normalize", False
- )
- cross_attentive_loss_reverse = (
- False # getattr(args, "attentive_cost_reverse", False)
- )
- decoder = TransformerMultiInputDecoder(
- dictionary=task.target_dictionary,
- spch_decoder=decoder,
- text_decoder=decoder,
- compute_cross_attentive_loss=compute_cross_attentive_loss,
-            cross_attentive_loss_with_norm=not cross_attentive_loss_without_norm,
- cross_attentive_loss_reverse=cross_attentive_loss_reverse,
- )
- return decoder
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
- # make sure that all args are properly defaulted
- # (in case there are any new ones)
- dualinputxmtransformer_base(args)
-
- encoder = cls.build_encoder(args, task)
- decoder = cls.build_decoder(args, task)
- return cls(encoder, decoder)
-
-
-@register_model_architecture("dual_input_xm_transformer", "dualinputxmtransformer_base")
-def dualinputxmtransformer_base(args):
- # wav2vec encoder
- set_default_w2v_encoder_args(args)
- set_default_adaptor_args(args)
-
- # mbart model
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
- args.encoder_ffn_embed_dim = getattr(
- args, "encoder_ffn_embed_dim", 4 * args.encoder_embed_dim
- )
- args.encoder_layers = getattr(args, "encoder_layers", 12)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True)
- args.encoder_layerdrop = getattr(args, "encoder_layerdrop", 0)
- args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True)
-
- args.decoder_embed_path = getattr(args, "decoder_embed_path", None)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4 * 1024)
- args.decoder_layers = getattr(args, "decoder_layers", 12)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16)
- args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True)
- args.decoder_learned_pos = getattr(args, "decoder_learned_pos", True)
- args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0)
-
- args.adaptive_input = getattr(args, "adaptive_input", False)
-
- args.mbart_attention_dropout = getattr(args, "mbart_attention_dropout", 0.0)
- args.mbart_activation_dropout = getattr(args, "mbart_activation_dropout", 0.0)
- args.mbart_dropout = getattr(args, "mbart_dropout", 0.1)
- args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
- args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
- args.share_decoder_input_output_embed = getattr(
- args, "share_decoder_input_output_embed", True
- )
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
-
- args.decoder_output_dim = getattr(
- args, "decoder_output_dim", args.decoder_embed_dim
- )
- args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
-
- args.no_scale_embedding = getattr(args, "no_scale_embedding", False)
- args.quant_noise_pq = getattr(args, "quant_noise_pq", 0)
- args.layernorm_embedding = getattr(args, "layernorm_embedding", True)
-
- args.activation_fn = getattr(args, "activation_fn", "gelu")
- args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh")
- args.pooler_dropout = getattr(args, "pooler_dropout", 0.0)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/noising.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/noising.py
deleted file mode 100644
index 2b1cc347203bfbdc9f1cba29e2e36427b7b5be57..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/noising.py
+++ /dev/null
@@ -1,335 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-from fairseq.data import data_utils
-
-
-class WordNoising(object):
- """Generate a noisy version of a sentence, without changing words themselves."""
-
- def __init__(self, dictionary, bpe_cont_marker="@@", bpe_end_marker=None):
- self.dictionary = dictionary
- self.bpe_end = None
- if bpe_cont_marker:
- self.bpe_end = np.array(
- [
- not self.dictionary[i].endswith(bpe_cont_marker)
- for i in range(len(self.dictionary))
- ]
- )
- elif bpe_end_marker:
- self.bpe_end = np.array(
- [
- self.dictionary[i].endswith(bpe_end_marker)
- for i in range(len(self.dictionary))
- ]
- )
-
- self.get_word_idx = (
- self._get_bpe_word_idx if self.bpe_end is not None else self._get_token_idx
- )
-
- def noising(self, x, lengths, noising_prob=0.0):
- raise NotImplementedError()
-
- def _get_bpe_word_idx(self, x):
- """
- Given a list of BPE tokens, for every index in the tokens list,
- return the index of the word grouping that it belongs to.
- For example, for input x corresponding to ["how", "are", "y@@", "ou"],
- return [[0], [1], [2], [2]].
- """
- # x: (T x B)
- bpe_end = self.bpe_end[x]
-
- if x.size(0) == 1 and x.size(1) == 1:
- # Special case when we only have one word in x. If x = [[N]],
- # bpe_end is a scalar (bool) instead of a 2-dim array of bools,
- # which makes the sum operation below fail.
- return np.array([[0]])
-
- # do a reduce front sum to generate word ids
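-        # e.g. for ["how", "are", "y@@", "ou"], bpe_end (per column) is
-        # [1, 1, 0, 1], and the reversed cumulative sum below yields
-        # word_idx = [0, 1, 2, 2]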
- word_idx = bpe_end[::-1].cumsum(0)[::-1]
- word_idx = word_idx.max(0)[None, :] - word_idx
- return word_idx
-
- def _get_token_idx(self, x):
- """
-        Extend the noising functions so they can also be applied to non-BPE
-        tokens, e.g. words or characters.
- """
- x = torch.t(x)
- word_idx = np.array([range(len(x_i)) for x_i in x])
- return np.transpose(word_idx)
-
-
-class WordDropout(WordNoising):
- """Randomly drop input words. If not passing blank_idx (default is None),
- then dropped words will be removed. Otherwise, it will be replaced by the
- blank_idx."""
-
- def __init__(
- self,
- dictionary,
- default_dropout_prob=0.1,
- bpe_cont_marker="@@",
- bpe_end_marker=None,
- ):
- super().__init__(dictionary, bpe_cont_marker, bpe_end_marker)
- self.default_dropout_prob = default_dropout_prob
-
- def noising(self, x, lengths, dropout_prob=None, blank_idx=None):
- if dropout_prob is None:
- dropout_prob = self.default_dropout_prob
- # x: (T x B), lengths: B
- if dropout_prob == 0:
- return x, lengths
-
- assert 0 < dropout_prob < 1
-
- # be sure to drop entire words
- word_idx = self.get_word_idx(x)
- sentences = []
- modified_lengths = []
- for i in range(lengths.size(0)):
- # Since dropout probabilities need to apply over non-pad tokens,
-            # it is not trivial to generate the keep mask without considering
- # input lengths; otherwise, this could be done outside the loop
-
- # We want to drop whole words based on word_idx grouping
- num_words = max(word_idx[:, i]) + 1
-
- # ith example: [x0, x1, ..., eos, pad, ..., pad]
- # We should only generate keep probs for non-EOS tokens. Thus if the
- # input sentence ends in EOS, the last word idx is not included in
- # the dropout mask generation and we append True to always keep EOS.
- # Otherwise, just generate the dropout mask for all word idx
- # positions.
- has_eos = x[lengths[i] - 1, i] == self.dictionary.eos()
- if has_eos: # has eos?
- keep = np.random.rand(num_words - 1) >= dropout_prob
- keep = np.append(keep, [True]) # keep EOS symbol
- else:
- keep = np.random.rand(num_words) >= dropout_prob
-
- words = x[: lengths[i], i].tolist()
-
- # TODO: speed up the following loop
- # drop words from the input according to keep
- new_s = [
- w if keep[word_idx[j, i]] else blank_idx for j, w in enumerate(words)
- ]
- new_s = [w for w in new_s if w is not None]
- # we need to have at least one word in the sentence (more than the
- # start / end sentence symbols)
- if len(new_s) <= 1:
- # insert at beginning in case the only token left is EOS
- # EOS should be at end of list.
- new_s.insert(0, words[np.random.randint(0, len(words))])
- assert len(new_s) >= 1 and (
- not has_eos # Either don't have EOS at end or last token is EOS
- or (len(new_s) >= 2 and new_s[-1] == self.dictionary.eos())
- ), "New sentence is invalid."
- sentences.append(new_s)
- modified_lengths.append(len(new_s))
- # re-construct input
- modified_lengths = torch.LongTensor(modified_lengths)
- modified_x = torch.LongTensor(
- modified_lengths.max(), modified_lengths.size(0)
- ).fill_(self.dictionary.pad())
- for i in range(modified_lengths.size(0)):
- modified_x[: modified_lengths[i], i].copy_(torch.LongTensor(sentences[i]))
-
- return modified_x, modified_lengths
-
-
-class WordShuffle(WordNoising):
- """Shuffle words by no more than k positions."""
-
- def __init__(
- self,
- dictionary,
- default_max_shuffle_distance=3,
- bpe_cont_marker="@@",
- bpe_end_marker=None,
- ):
- super().__init__(dictionary, bpe_cont_marker, bpe_end_marker)
-        self.default_max_shuffle_distance = default_max_shuffle_distance
-
- def noising(self, x, lengths, max_shuffle_distance=None):
- if max_shuffle_distance is None:
- max_shuffle_distance = self.default_max_shuffle_distance
- # x: (T x B), lengths: B
- if max_shuffle_distance == 0:
- return x, lengths
-
-        # a max_shuffle_distance of 1 or less would leave the sequence unchanged
- assert max_shuffle_distance > 1
-
- # define noise word scores
- noise = np.random.uniform(
- 0,
- max_shuffle_distance,
- size=(x.size(0), x.size(1)),
- )
- noise[0] = -1 # do not move start sentence symbol
- # be sure to shuffle entire words
- word_idx = self.get_word_idx(x)
- x2 = x.clone()
- for i in range(lengths.size(0)):
- length_no_eos = lengths[i]
- if x[lengths[i] - 1, i] == self.dictionary.eos():
- length_no_eos = lengths[i] - 1
- # generate a random permutation
- scores = word_idx[:length_no_eos, i] + noise[word_idx[:length_no_eos, i], i]
- # ensure no reordering inside a word
- scores += 1e-6 * np.arange(length_no_eos.item())
- permutation = scores.argsort()
- # shuffle words
- x2[:length_no_eos, i].copy_(
- x2[:length_no_eos, i][torch.from_numpy(permutation)]
- )
- return x2, lengths
-
-
-class UnsupervisedMTNoising(WordNoising):
- """
- Implements the default configuration for noising in UnsupervisedMT
- (github.com/facebookresearch/UnsupervisedMT)
- """
-
- def __init__(
- self,
- dictionary,
- max_word_shuffle_distance,
- word_dropout_prob,
- word_blanking_prob,
- bpe_cont_marker="@@",
- bpe_end_marker=None,
- ):
- super().__init__(dictionary)
- self.max_word_shuffle_distance = max_word_shuffle_distance
- self.word_dropout_prob = word_dropout_prob
- self.word_blanking_prob = word_blanking_prob
-
- self.word_dropout = WordDropout(
- dictionary=dictionary,
- bpe_cont_marker=bpe_cont_marker,
- bpe_end_marker=bpe_end_marker,
- )
- self.word_shuffle = WordShuffle(
- dictionary=dictionary,
- bpe_cont_marker=bpe_cont_marker,
- bpe_end_marker=bpe_end_marker,
- )
-
- def noising(self, x, lengths):
- # 1. Word Shuffle
- noisy_src_tokens, noisy_src_lengths = self.word_shuffle.noising(
- x=x,
- lengths=lengths,
- max_shuffle_distance=self.max_word_shuffle_distance,
- )
- # 2. Word Dropout
- noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising(
- x=noisy_src_tokens,
- lengths=noisy_src_lengths,
- dropout_prob=self.word_dropout_prob,
- )
- # 3. Word Blanking
- noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising(
- x=noisy_src_tokens,
- lengths=noisy_src_lengths,
- dropout_prob=self.word_blanking_prob,
- blank_idx=self.dictionary.unk(),
- )
-
- return noisy_src_tokens
-
-
-class NoisingDataset(torch.utils.data.Dataset):
- def __init__(
- self,
- src_dataset,
- src_dict,
- seed,
- noiser=None,
- noising_class=UnsupervisedMTNoising,
- **kwargs
- ):
- """
- Wrap a :class:`~torch.utils.data.Dataset` and apply noise to the
- samples based on the supplied noising configuration.
-
- Args:
-            src_dataset (~torch.utils.data.Dataset): dataset to wrap. Typically
-                a LanguagePairDataset whose source side is the dataset to be
-                noised and whose target side is None. It should NOT contain
-                padding, so that src_lengths are calculated accurately by the
-                language_pair_dataset collate function. LanguagePairDataset is
-                used here so that its collater can be re-used to format batches
-                in the structure that SequenceGenerator expects.
- src_dict (~fairseq.data.Dictionary): source dictionary
- seed (int): seed to use when generating random noise
- noiser (WordNoising): a pre-initialized :class:`WordNoising`
- instance. If this is None, a new instance will be created using
- *noising_class* and *kwargs*.
- noising_class (class, optional): class to use to initialize a
- default :class:`WordNoising` instance.
- kwargs (dict, optional): arguments to initialize the default
- :class:`WordNoising` instance given by *noiser*.
- """
- self.src_dataset = src_dataset
- self.src_dict = src_dict
- self.seed = seed
- self.noiser = (
- noiser
- if noiser is not None
- else noising_class(
- dictionary=src_dict,
- **kwargs,
- )
- )
- self.sizes = src_dataset.sizes
-
- def __getitem__(self, index):
- """
-        Returns a single noisy sample. Multiple samples are fed to the collater
-        to create a noising dataset batch.
- """
- src_tokens = self.src_dataset[index]
- src_lengths = torch.LongTensor([len(src_tokens)])
- src_tokens = src_tokens.unsqueeze(0)
-
- # Transpose src tokens to fit expected shape of x in noising function
- # (batch size, sequence length) -> (sequence length, batch size)
- src_tokens_t = torch.t(src_tokens)
-
- with data_utils.numpy_seed(self.seed + index):
- noisy_src_tokens = self.noiser.noising(src_tokens_t, src_lengths)
-
- # Transpose back to expected src_tokens format
- # (sequence length, 1) -> (1, sequence length)
- noisy_src_tokens = torch.t(noisy_src_tokens)
- return noisy_src_tokens[0]
-
- def __len__(self):
- """
- The length of the noising dataset is the length of src.
- """
- return len(self.src_dataset)
-
- @property
- def supports_prefetch(self):
- return self.src_dataset.supports_prefetch
-
- def prefetch(self, indices):
- if self.src_dataset.supports_prefetch:
- self.src_dataset.prefetch(indices)
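-
-
-# Usage sketch (hypothetical names/values): wrap a token dataset so that every
-# __getitem__ call returns a freshly noised copy; the extra keyword arguments
-# are forwarded to UnsupervisedMTNoising:
-#
-#   noisy = NoisingDataset(
-#       src_dataset=token_dataset, src_dict=dictionary, seed=42,
-#       max_word_shuffle_distance=3, word_dropout_prob=0.1, word_blanking_prob=0.1,
-#   )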
diff --git a/spaces/OlaWod/FreeVC/utils.py b/spaces/OlaWod/FreeVC/utils.py
deleted file mode 100644
index 1bd5b6185af6c9f1c270b8ba345bfc36d059e081..0000000000000000000000000000000000000000
--- a/spaces/OlaWod/FreeVC/utils.py
+++ /dev/null
@@ -1,305 +0,0 @@
-import os
-import sys
-import glob
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-import torchvision.transforms.functional  # used by transform() and stretch() below
-from torch.nn import functional as F
-from commons import sequence_mask
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def get_cmodel(rank):
-    # WavLM/WavLMConfig come from the repo-local `wavlm` package (assumed layout)
-    from wavlm import WavLM, WavLMConfig
-
-    checkpoint = torch.load('wavlm/WavLM-Large.pt')
-    cfg = WavLMConfig(checkpoint['cfg'])
-    cmodel = WavLM(cfg).cuda(rank)
-    cmodel.load_state_dict(checkpoint['model'])
-    cmodel.eval()
-    return cmodel
-
-
-def get_content(cmodel, y):
- with torch.no_grad():
- c = cmodel.extract_features(y.squeeze(1))[0]
- c = c.transpose(1, 2)
- return c
-
-
-def get_vocoder(rank):
-    # repo-local vocoder package (assumed layout, provides AttrDict and Generator)
-    import hifigan
-
-    with open("hifigan/config.json", "r") as f:
-        config = json.load(f)
-    config = hifigan.AttrDict(config)
-    vocoder = hifigan.Generator(config)
-    ckpt = torch.load("hifigan/generator_v1")
-    vocoder.load_state_dict(ckpt["generator"])
-    vocoder.eval()
-    vocoder.remove_weight_norm()
-    vocoder.cuda(rank)
-    return vocoder
-
-
-def transform(mel, height): # 68-92
- #r = np.random.random()
- #rate = r * 0.3 + 0.85 # 0.85-1.15
- #height = int(mel.size(-2) * rate)
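-    # Vertically resize the mel to `height` frequency bins, then crop or pad back
-    # to the original bin count; padding rows are a lightly-noised copy of the
-    # last row.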
- tgt = torchvision.transforms.functional.resize(mel, (height, mel.size(-1)))
- if height >= mel.size(-2):
- return tgt[:, :mel.size(-2), :]
- else:
- silence = tgt[:,-1:,:].repeat(1,mel.size(-2)-height,1)
- silence += torch.randn_like(silence) / 10
- return torch.cat((tgt, silence), 1)
-
-
-def stretch(mel, width): # 0.5-2
- return torchvision.transforms.functional.resize(mel, (mel.size(-2), width))
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
-  new_state_dict = {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
-    except KeyError:
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logger.info("Loaded checkpoint '{}' (iteration {})" .format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info("Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path))
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save({'model': state_dict,
- 'iteration': iteration,
- 'optimizer': optimizer.state_dict(),
- 'learning_rate': learning_rate}, checkpoint_path)
-
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats='HWC')
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10,2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
-  data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
-  data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, required=True,
- help='Model name')
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
-  hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
-  hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
-    logger.warning("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
-      logger.warning("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
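-
-
-# Usage sketch (hypothetical values): nested dicts become nested HParams, so both
-# attribute- and item-style access work, e.g.
-#
-#   hps = HParams(**{"train": {"learning_rate": 2e-4}})
-#   hps.train.learning_rate == hps["train"]["learning_rate"]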
diff --git a/spaces/Omnibus/Video-Diffusion-WebUI/README.md b/spaces/Omnibus/Video-Diffusion-WebUI/README.md
deleted file mode 100644
index 269acace59ae1caff969f832b9a89b446ad250d3..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/Video-Diffusion-WebUI/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Video Diffusion WebUI
-emoji: 🏃
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.19.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-tags:
-- making-demos
-duplicated_from: ArtGAN/Video-Diffusion-WebUI
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/data/coco_keypoint.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/data/coco_keypoint.py
deleted file mode 100644
index b4ceb066faf696954244205dc75376b767071217..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/data/coco_keypoint.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from detectron2.data.detection_utils import create_keypoint_hflip_indices
-
-from .coco import dataloader
-
-dataloader.train.dataset.min_keypoints = 1
-dataloader.train.dataset.names = "keypoints_coco_2017_train"
-dataloader.test.dataset.names = "keypoints_coco_2017_val"
-
-dataloader.train.mapper.update(
- use_instance_mask=False,
- use_keypoint=True,
- keypoint_hflip_indices=create_keypoint_hflip_indices(dataloader.train.dataset.names),
-)
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/models/mask_rcnn_fpn.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/models/mask_rcnn_fpn.py
deleted file mode 100644
index 744d5306f5b0ba4cf508731bd790bad823b520fa..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/common/models/mask_rcnn_fpn.py
+++ /dev/null
@@ -1,93 +0,0 @@
-from detectron2.config import LazyCall as L
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.meta_arch import GeneralizedRCNN
-from detectron2.modeling.anchor_generator import DefaultAnchorGenerator
-from detectron2.modeling.backbone.fpn import LastLevelMaxPool
-from detectron2.modeling.backbone import BasicStem, FPN, ResNet
-from detectron2.modeling.box_regression import Box2BoxTransform
-from detectron2.modeling.matcher import Matcher
-from detectron2.modeling.poolers import ROIPooler
-from detectron2.modeling.proposal_generator import RPN, StandardRPNHead
-from detectron2.modeling.roi_heads import (
- StandardROIHeads,
- FastRCNNOutputLayers,
- MaskRCNNConvUpsampleHead,
- FastRCNNConvFCHead,
-)
-
-model = L(GeneralizedRCNN)(
- backbone=L(FPN)(
- bottom_up=L(ResNet)(
- stem=L(BasicStem)(in_channels=3, out_channels=64, norm="FrozenBN"),
- stages=L(ResNet.make_default_stages)(
- depth=50,
- stride_in_1x1=True,
- norm="FrozenBN",
- ),
- out_features=["res2", "res3", "res4", "res5"],
- ),
- in_features="${.bottom_up.out_features}",
- out_channels=256,
- top_block=L(LastLevelMaxPool)(),
- ),
- proposal_generator=L(RPN)(
- in_features=["p2", "p3", "p4", "p5", "p6"],
- head=L(StandardRPNHead)(in_channels=256, num_anchors=3),
- anchor_generator=L(DefaultAnchorGenerator)(
- sizes=[[32], [64], [128], [256], [512]],
- aspect_ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64],
- offset=0.0,
- ),
- anchor_matcher=L(Matcher)(
- thresholds=[0.3, 0.7], labels=[0, -1, 1], allow_low_quality_matches=True
- ),
- box2box_transform=L(Box2BoxTransform)(weights=[1.0, 1.0, 1.0, 1.0]),
- batch_size_per_image=256,
- positive_fraction=0.5,
- pre_nms_topk=(2000, 1000),
- post_nms_topk=(1000, 1000),
- nms_thresh=0.7,
- ),
- roi_heads=L(StandardROIHeads)(
- num_classes=80,
- batch_size_per_image=512,
- positive_fraction=0.25,
- proposal_matcher=L(Matcher)(
- thresholds=[0.5], labels=[0, 1], allow_low_quality_matches=False
- ),
- box_in_features=["p2", "p3", "p4", "p5"],
- box_pooler=L(ROIPooler)(
- output_size=7,
- scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32),
- sampling_ratio=0,
- pooler_type="ROIAlignV2",
- ),
- box_head=L(FastRCNNConvFCHead)(
- input_shape=ShapeSpec(channels=256, height=7, width=7),
- conv_dims=[],
- fc_dims=[1024, 1024],
- ),
- box_predictor=L(FastRCNNOutputLayers)(
- input_shape=ShapeSpec(channels=1024),
- test_score_thresh=0.05,
- box2box_transform=L(Box2BoxTransform)(weights=(10, 10, 5, 5)),
- num_classes="${..num_classes}",
- ),
- mask_in_features=["p2", "p3", "p4", "p5"],
- mask_pooler=L(ROIPooler)(
- output_size=14,
- scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32),
- sampling_ratio=0,
- pooler_type="ROIAlignV2",
- ),
- mask_head=L(MaskRCNNConvUpsampleHead)(
- input_shape=ShapeSpec(channels=256, width=14, height=14),
- num_classes="${..num_classes}",
- conv_dims=[256, 256, 256, 256, 256],
- ),
- ),
- pixel_mean=[103.530, 116.280, 123.675],
- pixel_std=[1.0, 1.0, 1.0],
- input_format="BGR",
-)
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/demo/demo.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/demo/demo.py
deleted file mode 100644
index 4baa8767f7b299f18253aadb15a9bac5b9cc07fc..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/demo/demo.py
+++ /dev/null
@@ -1,188 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import argparse
-import glob
-import multiprocessing as mp
-import numpy as np
-import os
-import tempfile
-import time
-import warnings
-import cv2
-import tqdm
-
-from detectron2.config import get_cfg
-from detectron2.data.detection_utils import read_image
-from detectron2.utils.logger import setup_logger
-
-from predictor import VisualizationDemo
-
-# constants
-WINDOW_NAME = "COCO detections"
-
-
-def setup_cfg(args):
- # load config from file and command-line arguments
- cfg = get_cfg()
- # To use demo for Panoptic-DeepLab, please uncomment the following two lines.
- # from detectron2.projects.panoptic_deeplab import add_panoptic_deeplab_config # noqa
- # add_panoptic_deeplab_config(cfg)
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- # Set score_threshold for builtin models
- cfg.MODEL.RETINANET.SCORE_THRESH_TEST = args.confidence_threshold
- cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args.confidence_threshold
- cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = args.confidence_threshold
- cfg.freeze()
- return cfg
-
-
-def get_parser():
- parser = argparse.ArgumentParser(description="Detectron2 demo for builtin configs")
- parser.add_argument(
- "--config-file",
- default="configs/quick_schedules/mask_rcnn_R_50_FPN_inference_acc_test.yaml",
- metavar="FILE",
- help="path to config file",
- )
- parser.add_argument("--webcam", action="store_true", help="Take inputs from webcam.")
- parser.add_argument("--video-input", help="Path to video file.")
- parser.add_argument(
- "--input",
- nargs="+",
- help="A list of space separated input images; "
- "or a single glob pattern such as 'directory/*.jpg'",
- )
- parser.add_argument(
- "--output",
- help="A file or directory to save output visualizations. "
- "If not given, will show output in an OpenCV window.",
- )
-
- parser.add_argument(
- "--confidence-threshold",
- type=float,
- default=0.5,
- help="Minimum score for instance predictions to be shown",
- )
- parser.add_argument(
- "--opts",
- help="Modify config options using the command-line 'KEY VALUE' pairs",
- default=[],
- nargs=argparse.REMAINDER,
- )
- return parser
-
-
-def test_opencv_video_format(codec, file_ext):
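-    # Probe whether this OpenCV build can encode `codec` into a `file_ext`
-    # container by writing a short throwaway clip to a temporary directory.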
- with tempfile.TemporaryDirectory(prefix="video_format_test") as dir:
- filename = os.path.join(dir, "test_file" + file_ext)
- writer = cv2.VideoWriter(
- filename=filename,
- fourcc=cv2.VideoWriter_fourcc(*codec),
- fps=float(30),
- frameSize=(10, 10),
- isColor=True,
- )
- [writer.write(np.zeros((10, 10, 3), np.uint8)) for _ in range(30)]
- writer.release()
- if os.path.isfile(filename):
- return True
- return False
-
-
-if __name__ == "__main__":
- mp.set_start_method("spawn", force=True)
- args = get_parser().parse_args()
- setup_logger(name="fvcore")
- logger = setup_logger()
- logger.info("Arguments: " + str(args))
-
- cfg = setup_cfg(args)
-
- demo = VisualizationDemo(cfg)
-
- if args.input:
- if len(args.input) == 1:
- args.input = glob.glob(os.path.expanduser(args.input[0]))
- assert args.input, "The input path(s) was not found"
- for path in tqdm.tqdm(args.input, disable=not args.output):
- # use PIL, to be consistent with evaluation
- img = read_image(path, format="BGR")
- start_time = time.time()
- predictions, visualized_output = demo.run_on_image(img)
- logger.info(
- "{}: {} in {:.2f}s".format(
- path,
- "detected {} instances".format(len(predictions["instances"]))
- if "instances" in predictions
- else "finished",
- time.time() - start_time,
- )
- )
-
- if args.output:
- if os.path.isdir(args.output):
- assert os.path.isdir(args.output), args.output
- out_filename = os.path.join(args.output, os.path.basename(path))
- else:
- assert len(args.input) == 1, "Please specify a directory with args.output"
- out_filename = args.output
- visualized_output.save(out_filename)
- else:
- cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
- cv2.imshow(WINDOW_NAME, visualized_output.get_image()[:, :, ::-1])
- if cv2.waitKey(0) == 27:
- break # esc to quit
- elif args.webcam:
- assert args.input is None, "Cannot have both --input and --webcam!"
- assert args.output is None, "output not yet supported with --webcam!"
- cam = cv2.VideoCapture(0)
- for vis in tqdm.tqdm(demo.run_on_video(cam)):
- cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
- cv2.imshow(WINDOW_NAME, vis)
- if cv2.waitKey(1) == 27:
- break # esc to quit
- cam.release()
- cv2.destroyAllWindows()
- elif args.video_input:
- video = cv2.VideoCapture(args.video_input)
- width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
- height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
- frames_per_second = video.get(cv2.CAP_PROP_FPS)
- num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
- basename = os.path.basename(args.video_input)
- codec, file_ext = (
- ("x264", ".mkv") if test_opencv_video_format("x264", ".mkv") else ("mp4v", ".mp4")
- )
-        if codec == "mp4v":
- warnings.warn("x264 codec not available, switching to mp4v")
- if args.output:
- if os.path.isdir(args.output):
- output_fname = os.path.join(args.output, basename)
- output_fname = os.path.splitext(output_fname)[0] + file_ext
- else:
- output_fname = args.output
- assert not os.path.isfile(output_fname), output_fname
- output_file = cv2.VideoWriter(
- filename=output_fname,
-            # some installations of opencv may not support x264 (due to its license);
-            # you can try another format (e.g. MPEG)
- fourcc=cv2.VideoWriter_fourcc(*codec),
- fps=float(frames_per_second),
- frameSize=(width, height),
- isColor=True,
- )
- assert os.path.isfile(args.video_input)
- for vis_frame in tqdm.tqdm(demo.run_on_video(video), total=num_frames):
- if args.output:
- output_file.write(vis_frame)
- else:
- cv2.namedWindow(basename, cv2.WINDOW_NORMAL)
- cv2.imshow(basename, vis_frame)
- if cv2.waitKey(1) == 27:
- break # esc to quit
- video.release()
- if args.output:
- output_file.release()
- else:
- cv2.destroyAllWindows()
diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/__init__.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/__init__.py
deleted file mode 100644
index 81ac34183d164e666d42b5481e7f7e83ad15c183..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import logging
-
-
-logger = logging.getLogger("base")
-
-
-def create_model(opt):
- model = opt["model"]
- if model == "image_base":
- from models.kernel_encoding.image_base_model import ImageBaseModel as M
- else:
- raise NotImplementedError("Model [{:s}] not recognized.".format(model))
- m = M(opt)
- logger.info("Model [{:s}] is created.".format(m.__class__.__name__))
- return m
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/control.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/control.go
deleted file mode 100644
index 69039befe7589f80e6a2e859e724aaf30bc62971..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/control.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/parser.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/parser.go
deleted file mode 100644
index 379fef982b440ee7d238efeacf0ef848e7ac122b..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/elisp/parser.go and /dev/null differ
diff --git a/spaces/Pixeled/dogcat/app.py b/spaces/Pixeled/dogcat/app.py
deleted file mode 100644
index f394165c0f651ead3c64ecbcb8cde550e15bff31..0000000000000000000000000000000000000000
--- a/spaces/Pixeled/dogcat/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from fastai.vision.all import *
-import gradio as gr
-
-learn = load_learner("model.pkl")
-
-categories = ("Cat","Dog")
-
-def classify_image(img):
- pred,idx,probs = learn.predict(img)
- return dict(zip(categories, map(float,probs)))
-
-image = gr.inputs.Image(shape=(192,192))
-label = gr.outputs.Label()
-
-examples = ["cat.jpg","dog.jpg"]
-
-intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples)
-intf.launch(inline=False)
\ No newline at end of file
diff --git a/spaces/Proveedy/dreambooth-trainingv15/train_dreambooth.py b/spaces/Proveedy/dreambooth-trainingv15/train_dreambooth.py
deleted file mode 100644
index f4ff135e549f0d6c72f733092f3df817cb178e01..0000000000000000000000000000000000000000
--- a/spaces/Proveedy/dreambooth-trainingv15/train_dreambooth.py
+++ /dev/null
@@ -1,889 +0,0 @@
-import argparse
-import itertools
-import math
-import os
-from pathlib import Path
-from typing import Optional
-import subprocess
-import sys
-import gc
-import random
-
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint
-from torch.utils.data import Dataset
-
-from accelerate import Accelerator
-from accelerate.logging import get_logger
-from accelerate.utils import set_seed
-from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.optimization import get_scheduler
-from huggingface_hub import HfFolder, Repository, whoami
-from PIL import Image
-from torchvision import transforms
-from tqdm.auto import tqdm
-from transformers import CLIPTextModel, CLIPTokenizer
-
-
-logger = get_logger(__name__)
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description="Simple example of a training script.")
- parser.add_argument(
- "--pretrained_model_name_or_path",
- type=str,
- default=None,
- #required=True,
- help="Path to pretrained model or model identifier from huggingface.co/models.",
- )
- parser.add_argument(
- "--tokenizer_name",
- type=str,
- default=None,
- help="Pretrained tokenizer name or path if not the same as model_name",
- )
- parser.add_argument(
- "--instance_data_dir",
- type=str,
- default=None,
- #required=True,
- help="A folder containing the training data of instance images.",
- )
- parser.add_argument(
- "--class_data_dir",
- type=str,
- default=None,
- #required=False,
- help="A folder containing the training data of class images.",
- )
- parser.add_argument(
- "--instance_prompt",
- type=str,
- default=None,
- help="The prompt with identifier specifying the instance",
- )
- parser.add_argument(
- "--class_prompt",
- type=str,
- default="",
- help="The prompt to specify images in the same class as provided instance images.",
- )
- parser.add_argument(
- "--with_prior_preservation",
- default=False,
- action="store_true",
- help="Flag to add prior preservation loss.",
- )
- parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.")
- parser.add_argument(
- "--num_class_images",
- type=int,
- default=100,
- help=(
- "Minimal class images for prior preservation loss. If not have enough images, additional images will be"
- " sampled with class_prompt."
- ),
- )
- parser.add_argument(
- "--output_dir",
- type=str,
- default="",
- help="The output directory where the model predictions and checkpoints will be written.",
- )
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
- parser.add_argument(
- "--resolution",
- type=int,
- default=512,
- help=(
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
- " resolution"
- ),
- )
- parser.add_argument(
- "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution"
- )
- parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder")
- parser.add_argument(
- "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader."
- )
- parser.add_argument(
- "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images."
- )
- parser.add_argument("--num_train_epochs", type=int, default=1)
- parser.add_argument(
- "--max_train_steps",
- type=int,
- default=None,
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument(
- "--gradient_checkpointing",
- action="store_true",
- help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
- )
- parser.add_argument(
- "--learning_rate",
- type=float,
- default=5e-6,
- help="Initial learning rate (after the potential warmup period) to use.",
- )
- parser.add_argument(
- "--scale_lr",
- action="store_true",
- default=False,
- help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
- )
- parser.add_argument(
- "--lr_scheduler",
- type=str,
- default="constant",
- help=(
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
- ' "constant", "constant_with_warmup"]'
- ),
- )
- parser.add_argument(
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
- )
- parser.add_argument(
- "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
- )
- parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
- parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
- parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
- parser.add_argument(
- "--hub_model_id",
- type=str,
- default=None,
- help="The name of the repository to keep in sync with the local `output_dir`.",
- )
- parser.add_argument(
- "--logging_dir",
- type=str,
- default="logs",
- help=(
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
- ),
- )
- parser.add_argument(
- "--mixed_precision",
- type=str,
- default="no",
- choices=["no", "fp16", "bf16"],
- help=(
- "Whether to use mixed precision. Choose"
- "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
- "and an Nvidia Ampere GPU."
- ),
- )
-
- parser.add_argument(
- "--save_n_steps",
- type=int,
- default=1,
- help=("Save the model every n global_steps"),
- )
-
-
- parser.add_argument(
- "--save_starting_step",
- type=int,
- default=1,
- help=("The step from which it starts saving intermediary checkpoints"),
- )
-
- parser.add_argument(
- "--stop_text_encoder_training",
- type=int,
- default=1000000,
- help=("The step at which the text_encoder is no longer trained"),
- )
-
-
- parser.add_argument(
- "--image_captions_filename",
- action="store_true",
- help="Get captions from filename",
- )
-
-
- parser.add_argument(
- "--dump_only_text_encoder",
- action="store_true",
- default=False,
- help="Dump only text encoder",
- )
-
- parser.add_argument(
- "--train_only_unet",
- action="store_true",
- default=False,
- help="Train only the unet",
- )
-
- parser.add_argument(
- "--cache_latents",
- action="store_true",
- default=False,
- help="Train only the unet",
- )
-
- parser.add_argument(
- "--Session_dir",
- type=str,
- default="",
- help="Current session directory",
- )
-
-
-
-
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
-
- args = parser.parse_args()
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
- if env_local_rank != -1 and env_local_rank != args.local_rank:
- args.local_rank = env_local_rank
-
- #if args.instance_data_dir is None:
- # raise ValueError("You must specify a train data directory.")
-
- #if args.with_prior_preservation:
- # if args.class_data_dir is None:
- # raise ValueError("You must specify a data directory for class images.")
- # if args.class_prompt is None:
- # raise ValueError("You must specify prompt for class images.")
-
- return args
-
-
-class DreamBoothDataset(Dataset):
- """
- A dataset to prepare the instance and class images with the prompts for fine-tuning the model.
- It pre-processes the images and tokenizes the prompts.
- """
-
- def __init__(
- self,
- instance_data_root,
- instance_prompt,
- tokenizer,
- args,
- class_data_root=None,
- class_prompt=None,
- size=512,
- center_crop=False,
- ):
- self.size = size
- self.center_crop = center_crop
- self.tokenizer = tokenizer
- self.image_captions_filename = None
-
- self.instance_data_root = Path(instance_data_root)
- if not self.instance_data_root.exists():
- raise ValueError("Instance images root doesn't exists.")
-
- self.instance_images_path = list(Path(instance_data_root).iterdir())
- self.num_instance_images = len(self.instance_images_path)
- self.instance_prompt = instance_prompt
- self._length = self.num_instance_images
-
- if args.image_captions_filename:
- self.image_captions_filename = True
-
- if class_data_root is not None:
- self.class_data_root = Path(class_data_root)
- self.class_data_root.mkdir(parents=True, exist_ok=True)
- self.class_images_path = list(self.class_data_root.iterdir())
- random.shuffle(self.class_images_path)
- self.num_class_images = len(self.class_images_path)
- self._length = max(self.num_class_images, self.num_instance_images)
- self.class_prompt = class_prompt
- else:
- self.class_data_root = None
-
- self.image_transforms = transforms.Compose(
- [
- transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR),
- transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size),
- transforms.ToTensor(),
- transforms.Normalize([0.5], [0.5]),
- ]
- )
-
- def __len__(self):
- return self._length
-
- def __getitem__(self, index):
- example = {}
- path = self.instance_images_path[index % self.num_instance_images]
- instance_image = Image.open(path)
- if not instance_image.mode == "RGB":
- instance_image = instance_image.convert("RGB")
-
- instance_prompt = self.instance_prompt
-
- if self.image_captions_filename:
- filename = Path(path).stem
- pt=''.join([i for i in filename if not i.isdigit()])
- pt=pt.replace("_"," ")
- pt=pt.replace("(","")
- pt=pt.replace(")","")
- pt=pt.replace("-","")
- instance_prompt = pt
- sys.stdout.write("\033[0;32m" + instance_prompt + "\033[0m")
- sys.stdout.flush()
-
-
- example["instance_images"] = self.image_transforms(instance_image)
- example["instance_prompt_ids"] = self.tokenizer(
- instance_prompt,
- padding="do_not_pad",
- truncation=True,
- max_length=self.tokenizer.model_max_length,
- ).input_ids
-
- if self.class_data_root:
- class_image = Image.open(self.class_images_path[index % self.num_class_images])
- if not class_image.mode == "RGB":
- class_image = class_image.convert("RGB")
- example["class_images"] = self.image_transforms(class_image)
- example["class_prompt_ids"] = self.tokenizer(
- self.class_prompt,
- padding="do_not_pad",
- truncation=True,
- max_length=self.tokenizer.model_max_length,
- ).input_ids
-
- return example
-
-
-
-class PromptDataset(Dataset):
- "A simple dataset to prepare the prompts to generate class images on multiple GPUs."
-
- def __init__(self, prompt, num_samples):
- self.prompt = prompt
- self.num_samples = num_samples
-
- def __len__(self):
- return self.num_samples
-
- def __getitem__(self, index):
- example = {}
- example["prompt"] = self.prompt
- example["index"] = index
- return example
-
-class LatentsDataset(Dataset):
- def __init__(self, latents_cache, text_encoder_cache):
- self.latents_cache = latents_cache
- self.text_encoder_cache = text_encoder_cache
-
- def __len__(self):
- return len(self.latents_cache)
-
- def __getitem__(self, index):
- return self.latents_cache[index], self.text_encoder_cache[index]
-
-def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
- if token is None:
- token = HfFolder.get_token()
- if organization is None:
- username = whoami(token)["name"]
- return f"{username}/{model_id}"
- else:
- return f"{organization}/{model_id}"
-
-def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict:
- """
- Starts from `starting_dict` and adds the key/value pairs from `updater_dict`, with values from
- `updater_dict` replacing those of `starting_dict` on key collisions.
-
- Equivalent to d = {**starting_dict, **updater_dict}: on a collision the right-hand dict wins.
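-
- Example (illustrative values): merge_two_dicts({"lr": 1e-4, "seed": 0}, {"lr": 5e-6})
- returns {"lr": 5e-6, "seed": 0}.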
-
- :param starting_dict:
- :param updater_dict:
- :return:
- """
- new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict
- new_dict.update(updater_dict) # overrides the copied values with keys and values of updater_dict
- return new_dict
-
-def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace:
- """
- Merges two argparse.Namespace objects; on key collisions the values from args2 override those from args1.
- ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x
- :param args1:
- :param args2:
- :return:
- """
- # - the merged args
- # The vars() function returns the __dict__ attribute of the given object, e.g. {field: value}.
- merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2))
- args = argparse.Namespace(**merged_key_values_for_namespace)
- return args
-
-def run_training(args_imported):
- args_default = parse_args()
- args = merge_args(args_default, args_imported)
- print(args)
- logging_dir = Path(args.output_dir, args.logging_dir)
- i=args.save_starting_step
- accelerator = Accelerator(
- gradient_accumulation_steps=args.gradient_accumulation_steps,
- mixed_precision=args.mixed_precision,
- log_with="tensorboard",
- logging_dir=logging_dir,
- )
-
- # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate
- # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models.
- # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate.
- if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1:
- raise ValueError(
- "Gradient accumulation is not supported when training the text encoder in distributed training. "
- "Please set gradient_accumulation_steps to 1. This feature will be supported in the future."
- )
-
- if args.seed is not None:
- set_seed(args.seed)
-
- if args.with_prior_preservation:
- class_images_dir = Path(args.class_data_dir)
- if not class_images_dir.exists():
- class_images_dir.mkdir(parents=True)
- cur_class_images = len(list(class_images_dir.iterdir()))
-
- if cur_class_images < args.num_class_images:
- torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path, torch_dtype=torch_dtype
- )
- pipeline.set_progress_bar_config(disable=True)
-
- num_new_images = args.num_class_images - cur_class_images
- logger.info(f"Number of class images to sample: {num_new_images}.")
-
- sample_dataset = PromptDataset(args.class_prompt, num_new_images)
- sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size)
-
- sample_dataloader = accelerator.prepare(sample_dataloader)
- pipeline.to(accelerator.device)
-
- for example in tqdm(
- sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process
- ):
- with torch.autocast("cuda"):
- images = pipeline(example["prompt"]).images
-
- for i, image in enumerate(images):
- image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg")
-
- del pipeline
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-
- # Handle the repository creation
- if accelerator.is_main_process:
- if args.push_to_hub:
- if args.hub_model_id is None:
- repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
- else:
- repo_name = args.hub_model_id
- repo = Repository(args.output_dir, clone_from=repo_name)
-
- with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
- if "step_*" not in gitignore:
- gitignore.write("step_*\n")
- if "epoch_*" not in gitignore:
- gitignore.write("epoch_*\n")
- elif args.output_dir is not None:
- os.makedirs(args.output_dir, exist_ok=True)
-
- # Load the tokenizer
- if args.tokenizer_name:
- tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name)
- elif args.pretrained_model_name_or_path:
- tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
-
- # Load models and create wrapper for stable diffusion
- if args.train_only_unet:
- if os.path.exists(str(args.output_dir+"/text_encoder_trained")):
- text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained")
- elif os.path.exists(str(args.output_dir+"/text_encoder")):
- text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder")
- else:
- text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
- else:
- text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
- vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
- unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet")
- if is_xformers_available():
- try:
- print("Enabling memory efficient attention with xformers...")
- unet.enable_xformers_memory_efficient_attention()
- except Exception as e:
- logger.warning(
- f"Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: {e}"
- )
- vae.requires_grad_(False)
- if not args.train_text_encoder:
- text_encoder.requires_grad_(False)
-
- if args.gradient_checkpointing:
- unet.enable_gradient_checkpointing()
- if args.train_text_encoder:
- text_encoder.gradient_checkpointing_enable()
-
- if args.scale_lr:
- args.learning_rate = (
- args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
- )
-
- # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs
- if args.use_8bit_adam:
- try:
- import bitsandbytes as bnb
- except ImportError:
- raise ImportError(
- "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
- )
-
- optimizer_class = bnb.optim.AdamW8bit
- else:
- optimizer_class = torch.optim.AdamW
-
- params_to_optimize = (
- itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters()
- )
- optimizer = optimizer_class(
- params_to_optimize,
- lr=args.learning_rate,
- betas=(args.adam_beta1, args.adam_beta2),
- weight_decay=args.adam_weight_decay,
- eps=args.adam_epsilon,
- )
-
- noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler")
-
- train_dataset = DreamBoothDataset(
- instance_data_root=args.instance_data_dir,
- instance_prompt=args.instance_prompt,
- class_data_root=args.class_data_dir if args.with_prior_preservation else None,
- class_prompt=args.class_prompt,
- tokenizer=tokenizer,
- size=args.resolution,
- center_crop=args.center_crop,
- args=args,
- )
-
- def collate_fn(examples):
- input_ids = [example["instance_prompt_ids"] for example in examples]
- pixel_values = [example["instance_images"] for example in examples]
-
- # Concat class and instance examples for prior preservation.
- # We do this to avoid doing two forward passes.
- if args.with_prior_preservation:
- input_ids += [example["class_prompt_ids"] for example in examples]
- pixel_values += [example["class_images"] for example in examples]
-
- pixel_values = torch.stack(pixel_values)
- pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
-
- input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids
-
- batch = {
- "input_ids": input_ids,
- "pixel_values": pixel_values,
- }
- return batch
-
- train_dataloader = torch.utils.data.DataLoader(
- train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn
- )
-
- # Scheduler and math around the number of training steps.
- overrode_max_train_steps = False
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if args.max_train_steps is None:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- overrode_max_train_steps = True
-
- lr_scheduler = get_scheduler(
- args.lr_scheduler,
- optimizer=optimizer,
- num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
- num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
- )
-
- if args.train_text_encoder:
- unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- unet, text_encoder, optimizer, train_dataloader, lr_scheduler
- )
- else:
- unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- unet, optimizer, train_dataloader, lr_scheduler
- )
-
- weight_dtype = torch.float32
- if args.mixed_precision == "fp16":
- weight_dtype = torch.float16
- elif args.mixed_precision == "bf16":
- weight_dtype = torch.bfloat16
-
- # Move text_encoder and vae to gpu.
- # For mixed precision training we cast the text_encoder and vae weights to half-precision
- # as these models are only used for inference, keeping weights in full precision is not required.
- vae.to(accelerator.device, dtype=weight_dtype)
- if not args.train_text_encoder:
- text_encoder.to(accelerator.device, dtype=weight_dtype)
-
-
- if args.cache_latents:
- latents_cache = []
- text_encoder_cache = []
- for batch in tqdm(train_dataloader, desc="Caching latents"):
- with torch.no_grad():
- batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype)
- batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True)
- latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist)
- if args.train_text_encoder:
- text_encoder_cache.append(batch["input_ids"])
- else:
- text_encoder_cache.append(text_encoder(batch["input_ids"])[0])
- train_dataset = LatentsDataset(latents_cache, text_encoder_cache)
- train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True)
-
- del vae
- #if not args.train_text_encoder:
- # del text_encoder
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if overrode_max_train_steps:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- # Afterwards we recalculate our number of training epochs
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
- # We need to initialize the trackers we use, and also store our configuration.
- # The trackers initialize automatically on the main process.
- if accelerator.is_main_process:
- accelerator.init_trackers("dreambooth", config=vars(args))
-
- def bar(prg):
- br='|'+'█' * prg + ' ' * (25-prg)+'|'
- return br
-
- # Train!
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(train_dataset)}")
- logger.info(f" Num batches each epoch = {len(train_dataloader)}")
- logger.info(f" Num Epochs = {args.num_train_epochs}")
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
- logger.info(f" Total optimization steps = {args.max_train_steps}")
- # Only show the progress bar once on each machine.
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
- global_step = 0
-
- for epoch in range(args.num_train_epochs):
- unet.train()
- if args.train_text_encoder:
- text_encoder.train()
- for step, batch in enumerate(train_dataloader):
- with accelerator.accumulate(unet):
- # Convert images to latent space
- with torch.no_grad():
- if args.cache_latents:
- latents_dist = batch[0][0]
- else:
- latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist
- latents = latents_dist.sample() * 0.18215
-
- # Sample noise that we'll add to the latents
- noise = torch.randn_like(latents)
- bsz = latents.shape[0]
- # Sample a random timestep for each image
- timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
- timesteps = timesteps.long()
-
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
-
- # Get the text embedding for conditioning
- if(args.cache_latents):
- if args.train_text_encoder:
- encoder_hidden_states = text_encoder(batch[0][1])[0]
- else:
- encoder_hidden_states = batch[0][1]
- else:
- encoder_hidden_states = text_encoder(batch["input_ids"])[0]
-
- # Predict the noise residual
- model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
-
- # Get the target for loss depending on the prediction type
- if noise_scheduler.config.prediction_type == "epsilon":
- target = noise
- elif noise_scheduler.config.prediction_type == "v_prediction":
- target = noise_scheduler.get_velocity(latents, noise, timesteps)
- else:
- raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
-
- if args.with_prior_preservation:
- # Chunk the noise and model_pred into two parts and compute the loss on each part separately.
- model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
- target, target_prior = torch.chunk(target, 2, dim=0)
-
- # Compute instance loss
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean()
-
- # Compute prior loss
- prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
-
- # Add the prior loss to the instance loss.
- loss = loss + args.prior_loss_weight * prior_loss
- else:
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
-
- accelerator.backward(loss)
- if accelerator.sync_gradients:
- params_to_clip = (
- itertools.chain(unet.parameters(), text_encoder.parameters())
- if args.train_text_encoder
- else unet.parameters()
- )
- accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
- optimizer.step()
- lr_scheduler.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- global_step += 1
-
- fll=round((global_step*100)/args.max_train_steps)
- fll=round(fll/4)
- pr=bar(fll)
-
- logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
- progress_bar.set_postfix(**logs)
- progress_bar.set_description_str("Progress:"+pr)
- accelerator.log(logs, step=global_step)
-
- if global_step >= args.max_train_steps:
- break
-
- if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30:
- if accelerator.is_main_process:
- print(" [0;32m" +" Freezing the text_encoder ..."+" [0m")
- frz_dir=args.output_dir + "/text_encoder_frozen"
- if os.path.exists(frz_dir):
- subprocess.call('rm -r '+ frz_dir, shell=True)
- os.mkdir(frz_dir)
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.text_encoder.save_pretrained(frz_dir)
-
- if args.save_n_steps >= 200:
- if global_step < args.max_train_steps and global_step+1==i:
- ckpt_name = "_step_" + str(global_step+1)
- save_dir = Path(args.output_dir+ckpt_name)
- save_dir=str(save_dir)
- save_dir=save_dir.replace(" ", "_")
- if not os.path.exists(save_dir):
- os.mkdir(save_dir)
- inst=save_dir[16:]
- inst=inst.replace(" ", "_")
- print(" [1;32mSAVING CHECKPOINT: "+args.Session_dir+"/"+inst+".ckpt")
- # Create the pipeline using the trained modules and save it.
- if accelerator.is_main_process:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.save_pretrained(save_dir)
- frz_dir=args.output_dir + "/text_encoder_frozen"
- if args.train_text_encoder and os.path.exists(frz_dir):
- subprocess.call('rm -r '+save_dir+'/text_encoder/*.*', shell=True)
- subprocess.call('cp -f '+frz_dir +'/*.* '+ save_dir+'/text_encoder', shell=True)
- chkpth=args.Session_dir+"/"+inst+".ckpt"
- subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True)
- subprocess.call('rm -r '+ save_dir, shell=True)
- i=i+args.save_n_steps
-
- accelerator.wait_for_everyone()
-
- # Create the pipeline using the trained modules and save it.
- if accelerator.is_main_process:
- if args.dump_only_text_encoder:
- txt_dir=args.output_dir + "/text_encoder_trained"
- if not os.path.exists(txt_dir):
- os.mkdir(txt_dir)
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.text_encoder.save_pretrained(txt_dir)
-
- elif args.train_only_unet:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.save_pretrained(args.output_dir)
- txt_dir=args.output_dir + "/text_encoder_trained"
- subprocess.call('rm -r '+txt_dir, shell=True)
-
- else:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- frz_dir=args.output_dir + "/text_encoder_frozen"
- pipeline.save_pretrained(args.output_dir)
- if args.train_text_encoder and os.path.exists(frz_dir):
- subprocess.call('mv -f '+frz_dir +'/*.* '+ args.output_dir+'/text_encoder', shell=True)
- subprocess.call('rm -r '+ frz_dir, shell=True)
-
- if args.push_to_hub:
- repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
-
- accelerator.end_training()
- del pipeline
- torch.cuda.empty_cache()
- gc.collect()
-if __name__ == "__main__":
- pass
- #main()
-
diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/open_images_helper.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/open_images_helper.py
deleted file mode 100644
index 8feb7c6e705fc165d2983303192aaa88f579b243..0000000000000000000000000000000000000000
--- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/open_images_helper.py
+++ /dev/null
@@ -1,379 +0,0 @@
-open_images_unify_categories_for_coco = {
- '/m/03bt1vf': '/m/01g317',
- '/m/04yx4': '/m/01g317',
- '/m/05r655': '/m/01g317',
- '/m/01bl7v': '/m/01g317',
- '/m/0cnyhnx': '/m/01xq0k1',
- '/m/01226z': '/m/018xm',
- '/m/05ctyq': '/m/018xm',
- '/m/058qzx': '/m/04ctx',
- '/m/06pcq': '/m/0l515',
- '/m/03m3pdh': '/m/02crq1',
- '/m/046dlr': '/m/01x3z',
- '/m/0h8mzrc': '/m/01x3z',
-}
-
-
-top_300_classes_plus_coco_compatibility = [
- ('Man', 1060962),
- ('Clothing', 986610),
- ('Tree', 748162),
- ('Woman', 611896),
- ('Person', 610294),
- ('Human face', 442948),
- ('Girl', 175399),
- ('Building', 162147),
- ('Car', 159135),
- ('Plant', 155704),
- ('Human body', 137073),
- ('Flower', 133128),
- ('Window', 127485),
- ('Human arm', 118380),
- ('House', 114365),
- ('Wheel', 111684),
- ('Suit', 99054),
- ('Human hair', 98089),
- ('Human head', 92763),
- ('Chair', 88624),
- ('Boy', 79849),
- ('Table', 73699),
- ('Jeans', 57200),
- ('Tire', 55725),
- ('Skyscraper', 53321),
- ('Food', 52400),
- ('Footwear', 50335),
- ('Dress', 50236),
- ('Human leg', 47124),
- ('Toy', 46636),
- ('Tower', 45605),
- ('Boat', 43486),
- ('Land vehicle', 40541),
- ('Bicycle wheel', 34646),
- ('Palm tree', 33729),
- ('Fashion accessory', 32914),
- ('Glasses', 31940),
- ('Bicycle', 31409),
- ('Furniture', 30656),
- ('Sculpture', 29643),
- ('Bottle', 27558),
- ('Dog', 26980),
- ('Snack', 26796),
- ('Human hand', 26664),
- ('Bird', 25791),
- ('Book', 25415),
- ('Guitar', 24386),
- ('Jacket', 23998),
- ('Poster', 22192),
- ('Dessert', 21284),
- ('Baked goods', 20657),
- ('Drink', 19754),
- ('Flag', 18588),
- ('Houseplant', 18205),
- ('Tableware', 17613),
- ('Airplane', 17218),
- ('Door', 17195),
- ('Sports uniform', 17068),
- ('Shelf', 16865),
- ('Drum', 16612),
- ('Vehicle', 16542),
- ('Microphone', 15269),
- ('Street light', 14957),
- ('Cat', 14879),
- ('Fruit', 13684),
- ('Fast food', 13536),
- ('Animal', 12932),
- ('Vegetable', 12534),
- ('Train', 12358),
- ('Horse', 11948),
- ('Flowerpot', 11728),
- ('Motorcycle', 11621),
- ('Fish', 11517),
- ('Desk', 11405),
- ('Helmet', 10996),
- ('Truck', 10915),
- ('Bus', 10695),
- ('Hat', 10532),
- ('Auto part', 10488),
- ('Musical instrument', 10303),
- ('Sunglasses', 10207),
- ('Picture frame', 10096),
- ('Sports equipment', 10015),
- ('Shorts', 9999),
- ('Wine glass', 9632),
- ('Duck', 9242),
- ('Wine', 9032),
- ('Rose', 8781),
- ('Tie', 8693),
- ('Butterfly', 8436),
- ('Beer', 7978),
- ('Cabinetry', 7956),
- ('Laptop', 7907),
- ('Insect', 7497),
- ('Goggles', 7363),
- ('Shirt', 7098),
- ('Dairy Product', 7021),
- ('Marine invertebrates', 7014),
- ('Cattle', 7006),
- ('Trousers', 6903),
- ('Van', 6843),
- ('Billboard', 6777),
- ('Balloon', 6367),
- ('Human nose', 6103),
- ('Tent', 6073),
- ('Camera', 6014),
- ('Doll', 6002),
- ('Coat', 5951),
- ('Mobile phone', 5758),
- ('Swimwear', 5729),
- ('Strawberry', 5691),
- ('Stairs', 5643),
- ('Goose', 5599),
- ('Umbrella', 5536),
- ('Cake', 5508),
- ('Sun hat', 5475),
- ('Bench', 5310),
- ('Bookcase', 5163),
- ('Bee', 5140),
- ('Computer monitor', 5078),
- ('Hiking equipment', 4983),
- ('Office building', 4981),
- ('Coffee cup', 4748),
- ('Curtain', 4685),
- ('Plate', 4651),
- ('Box', 4621),
- ('Tomato', 4595),
- ('Coffee table', 4529),
- ('Office supplies', 4473),
- ('Maple', 4416),
- ('Muffin', 4365),
- ('Cocktail', 4234),
- ('Castle', 4197),
- ('Couch', 4134),
- ('Pumpkin', 3983),
- ('Computer keyboard', 3960),
- ('Human mouth', 3926),
- ('Christmas tree', 3893),
- ('Mushroom', 3883),
- ('Swimming pool', 3809),
- ('Pastry', 3799),
- ('Lavender (Plant)', 3769),
- ('Football helmet', 3732),
- ('Bread', 3648),
- ('Traffic sign', 3628),
- ('Common sunflower', 3597),
- ('Television', 3550),
- ('Bed', 3525),
- ('Cookie', 3485),
- ('Fountain', 3484),
- ('Paddle', 3447),
- ('Bicycle helmet', 3429),
- ('Porch', 3420),
- ('Deer', 3387),
- ('Fedora', 3339),
- ('Canoe', 3338),
- ('Carnivore', 3266),
- ('Bowl', 3202),
- ('Human eye', 3166),
- ('Ball', 3118),
- ('Pillow', 3077),
- ('Salad', 3061),
- ('Beetle', 3060),
- ('Orange', 3050),
- ('Drawer', 2958),
- ('Platter', 2937),
- ('Elephant', 2921),
- ('Seafood', 2921),
- ('Monkey', 2915),
- ('Countertop', 2879),
- ('Watercraft', 2831),
- ('Helicopter', 2805),
- ('Kitchen appliance', 2797),
- ('Personal flotation device', 2781),
- ('Swan', 2739),
- ('Lamp', 2711),
- ('Boot', 2695),
- ('Bronze sculpture', 2693),
- ('Chicken', 2677),
- ('Taxi', 2643),
- ('Juice', 2615),
- ('Cowboy hat', 2604),
- ('Apple', 2600),
- ('Tin can', 2590),
- ('Necklace', 2564),
- ('Ice cream', 2560),
- ('Human beard', 2539),
- ('Coin', 2536),
- ('Candle', 2515),
- ('Cart', 2512),
- ('High heels', 2441),
- ('Weapon', 2433),
- ('Handbag', 2406),
- ('Penguin', 2396),
- ('Rifle', 2352),
- ('Violin', 2336),
- ('Skull', 2304),
- ('Lantern', 2285),
- ('Scarf', 2269),
- ('Saucer', 2225),
- ('Sheep', 2215),
- ('Vase', 2189),
- ('Lily', 2180),
- ('Mug', 2154),
- ('Parrot', 2140),
- ('Human ear', 2137),
- ('Sandal', 2115),
- ('Lizard', 2100),
- ('Kitchen & dining room table', 2063),
- ('Spider', 1977),
- ('Coffee', 1974),
- ('Goat', 1926),
- ('Squirrel', 1922),
- ('Cello', 1913),
- ('Sushi', 1881),
- ('Tortoise', 1876),
- ('Pizza', 1870),
- ('Studio couch', 1864),
- ('Barrel', 1862),
- ('Cosmetics', 1841),
- ('Moths and butterflies', 1841),
- ('Convenience store', 1817),
- ('Watch', 1792),
- ('Home appliance', 1786),
- ('Harbor seal', 1780),
- ('Luggage and bags', 1756),
- ('Vehicle registration plate', 1754),
- ('Shrimp', 1751),
- ('Jellyfish', 1730),
- ('French fries', 1723),
- ('Egg (Food)', 1698),
- ('Football', 1697),
- ('Musical keyboard', 1683),
- ('Falcon', 1674),
- ('Candy', 1660),
- ('Medical equipment', 1654),
- ('Eagle', 1651),
- ('Dinosaur', 1634),
- ('Surfboard', 1630),
- ('Tank', 1628),
- ('Grape', 1624),
- ('Lion', 1624),
- ('Owl', 1622),
- ('Ski', 1613),
- ('Waste container', 1606),
- ('Frog', 1591),
- ('Sparrow', 1585),
- ('Rabbit', 1581),
- ('Pen', 1546),
- ('Sea lion', 1537),
- ('Spoon', 1521),
- ('Sink', 1512),
- ('Teddy bear', 1507),
- ('Bull', 1495),
- ('Sofa bed', 1490),
- ('Dragonfly', 1479),
- ('Brassiere', 1478),
- ('Chest of drawers', 1472),
- ('Aircraft', 1466),
- ('Human foot', 1463),
- ('Pig', 1455),
- ('Fork', 1454),
- ('Antelope', 1438),
- ('Tripod', 1427),
- ('Tool', 1424),
- ('Cheese', 1422),
- ('Lemon', 1397),
- ('Hamburger', 1393),
- ('Dolphin', 1390),
- ('Mirror', 1390),
- ('Marine mammal', 1387),
- ('Giraffe', 1385),
- ('Snake', 1368),
- ('Gondola', 1364),
- ('Wheelchair', 1360),
- ('Piano', 1358),
- ('Cupboard', 1348),
- ('Banana', 1345),
- ('Trumpet', 1335),
- ('Lighthouse', 1333),
- ('Invertebrate', 1317),
- ('Carrot', 1268),
- ('Sock', 1260),
- ('Tiger', 1241),
- ('Camel', 1224),
- ('Parachute', 1224),
- ('Bathroom accessory', 1223),
- ('Earrings', 1221),
- ('Headphones', 1218),
- ('Skirt', 1198),
- ('Skateboard', 1190),
- ('Sandwich', 1148),
- ('Saxophone', 1141),
- ('Goldfish', 1136),
- ('Stool', 1104),
- ('Traffic light', 1097),
- ('Shellfish', 1081),
- ('Backpack', 1079),
- ('Sea turtle', 1078),
- ('Cucumber', 1075),
- ('Tea', 1051),
- ('Toilet', 1047),
- ('Roller skates', 1040),
- ('Mule', 1039),
- ('Bust', 1031),
- ('Broccoli', 1030),
- ('Crab', 1020),
- ('Oyster', 1019),
- ('Cannon', 1012),
- ('Zebra', 1012),
- ('French horn', 1008),
- ('Grapefruit', 998),
- ('Whiteboard', 997),
- ('Zucchini', 997),
- ('Crocodile', 992),
-
- ('Clock', 960),
- ('Wall clock', 958),
-
- ('Doughnut', 869),
- ('Snail', 868),
-
- ('Baseball glove', 859),
-
- ('Panda', 830),
- ('Tennis racket', 830),
-
- ('Pear', 652),
-
- ('Bagel', 617),
- ('Oven', 616),
- ('Ladybug', 615),
- ('Shark', 615),
- ('Polar bear', 614),
- ('Ostrich', 609),
-
- ('Hot dog', 473),
- ('Microwave oven', 467),
- ('Fire hydrant', 20),
- ('Stop sign', 20),
- ('Parking meter', 20),
- ('Bear', 20),
- ('Flying disc', 20),
- ('Snowboard', 20),
- ('Tennis ball', 20),
- ('Kite', 20),
- ('Baseball bat', 20),
- ('Kitchen knife', 20),
- ('Knife', 20),
- ('Submarine sandwich', 20),
- ('Computer mouse', 20),
- ('Remote control', 20),
- ('Toaster', 20),
- ('Sink', 20),
- ('Refrigerator', 20),
- ('Alarm clock', 20),
- ('Wall clock', 20),
- ('Scissors', 20),
- ('Hair dryer', 20),
- ('Toothbrush', 20),
- ('Suitcase', 20)
-]
diff --git a/spaces/Qrstud/gpt/README.md b/spaces/Qrstud/gpt/README.md
deleted file mode 100644
index e60635d86c3ec00b3dccddee02e30724705775d7..0000000000000000000000000000000000000000
--- a/spaces/Qrstud/gpt/README.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Chatgpt Python
-emoji: 📚
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
-license: gpl-3.0
-python_version: 3.8.9
-tags:
-- chatgpt
-- gpt3
-- gpt-3
-- chatbot
-- chat
-- bot
-- chatgpt3
-- gpt
-duplicated_from: Damonho316/gpt
----
-
-# 🤖 ChatGPT-Python 🐍
-[GitHub repository](https://github.com/dylanmeca/ChatGPT-Python)
-[License](https://github.com/dylanmeca/ChatGPT-Python/blob/main/LICENSE)
-[Author](https://github.com/dylanmeca)
-[Python](https://www.python.org)
-
-ChatGPT-Python is software that lets you use ChatGPT from Python through a web interface, using the OpenAI API.
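-
-A minimal sketch of calling the OpenAI API directly from Python (separate from this project's web interface), assuming the pre-1.0 `openai` package and an `OPENAI_API_KEY` environment variable set by the user:
-
-```python
-import os
-import openai  # assumes the pre-1.0 client, e.g. pip install "openai<1"
-
-openai.api_key = os.environ["OPENAI_API_KEY"]
-
-# Send a single-turn chat request and print the reply text.
-response = openai.ChatCompletion.create(
-    model="gpt-3.5-turbo",
-    messages=[{"role": "user", "content": "Say hello from ChatGPT-Python"}],
-)
-print(response["choices"][0]["message"]["content"])
-```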
\ No newline at end of file
diff --git a/spaces/RMXK/RVC_HFF/Fixes/local_fixes.py b/spaces/RMXK/RVC_HFF/Fixes/local_fixes.py
deleted file mode 100644
index 8a418076eee6f65fe06eb0f607061796b839c1ee..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/Fixes/local_fixes.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import os
-import sys
-import time
-import shutil
-import requests
-import zipfile
-
-def insert_new_line(file_name, line_to_find, text_to_insert):
- lines = []
- with open(file_name, 'r', encoding='utf-8') as read_obj:
- lines = read_obj.readlines()
- already_exists = False
- with open(file_name + '.tmp', 'w', encoding='utf-8') as write_obj:
- for i in range(len(lines)):
- write_obj.write(lines[i])
- if lines[i].strip() == line_to_find:
- # If next line exists and starts with sys.path.append, skip
- if i+1 < len(lines) and lines[i+1].strip().startswith("sys.path.append"):
- print('It was already fixed! Skip adding a line...')
- already_exists = True
- break
- else:
- write_obj.write(text_to_insert + '\n')
- # If no existing sys.path.append line was found, replace the original file
- if not already_exists:
- os.replace(file_name + '.tmp', file_name)
- return True
- else:
- # If existing line was found, delete temporary file
- os.remove(file_name + '.tmp')
- return False
-
-def replace_in_file(file_name, old_text, new_text):
- with open(file_name, 'r', encoding='utf-8') as file:
- file_contents = file.read()
-
- if old_text in file_contents:
- file_contents = file_contents.replace(old_text, new_text)
- with open(file_name, 'w', encoding='utf-8') as file:
- file.write(file_contents)
- return True
-
- return False
-
-if __name__ == "__main__":
- current_path = os.getcwd()
- file_name = os.path.join(current_path, "infer", "modules", "train", "extract", "extract_f0_print.py")
- line_to_find = 'import numpy as np, logging'
- text_to_insert = "sys.path.append(r'" + current_path + "')"
-
-
- success_1 = insert_new_line(file_name, line_to_find, text_to_insert)
- if success_1:
- print('The first operation was successful!')
- else:
- print('The first operation was skipped because it was already fixed!')
-
- file_name = 'infer-web.py'
- old_text = 'with gr.Blocks(theme=gr.themes.Soft()) as app:'
- new_text = 'with gr.Blocks() as app:'
-
- success_2 = replace_in_file(file_name, old_text, new_text)
- if success_2:
- print('The second operation was successful!')
- else:
- print('The second operation was omitted because it was already fixed!')
-
- print('Local corrections successful! You should now be able to infer and train locally in Applio RVC Fork.')
-
- time.sleep(5)
-
-def find_torchcrepe_directory(directory):
- """
- Recursively searches for the topmost folder named 'torchcrepe' within a directory.
- Returns the path of the directory found or None if none is found.
- """
- for root, dirs, files in os.walk(directory):
- if 'torchcrepe' in dirs:
- return os.path.join(root, 'torchcrepe')
- return None
-
-def download_and_extract_torchcrepe():
- url = 'https://github.com/maxrmorrison/torchcrepe/archive/refs/heads/master.zip'
- temp_dir = 'temp_torchcrepe'
- destination_dir = os.getcwd()
-
- try:
- torchcrepe_dir_path = os.path.join(destination_dir, 'torchcrepe')
-
- if os.path.exists(torchcrepe_dir_path):
- print("Skipping the torchcrepe download. The folder already exists.")
- return
-
- # Download the file
- print("Starting torchcrepe download...")
- response = requests.get(url)
-
- # Raise an error if the GET request was unsuccessful
- response.raise_for_status()
- print("Download completed.")
-
- # Save the downloaded file
- zip_file_path = os.path.join(temp_dir, 'master.zip')
- os.makedirs(temp_dir, exist_ok=True)
- with open(zip_file_path, 'wb') as file:
- file.write(response.content)
- print(f"Zip file saved to {zip_file_path}")
-
- # Extract the zip file
- print("Extracting content...")
- with zipfile.ZipFile(zip_file_path, 'r') as zip_file:
- zip_file.extractall(temp_dir)
- print("Extraction completed.")
-
- # Locate the torchcrepe folder and move it to the destination directory
- torchcrepe_dir = find_torchcrepe_directory(temp_dir)
- if torchcrepe_dir:
- shutil.move(torchcrepe_dir, destination_dir)
- print(f"Moved the torchcrepe directory to {destination_dir}!")
- else:
- print("The torchcrepe directory could not be located.")
-
- except Exception as e:
- print("Torchcrepe not successfully downloaded", e)
-
- # Clean up temporary directory
- if os.path.exists(temp_dir):
- shutil.rmtree(temp_dir)
-
-# Run the function
-download_and_extract_torchcrepe()
-
-temp_dir = 'temp_torchcrepe'
-
-if os.path.exists(temp_dir):
- shutil.rmtree(temp_dir)
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/inspect.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/inspect.py
deleted file mode 100644
index a4e3599306e554cb62528cc6e807f319658d86bf..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/inspect.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import logging
-from optparse import Values
-from typing import Any, Dict, List
-
-from pip._vendor.packaging.markers import default_environment
-from pip._vendor.rich import print_json
-
-from pip import __version__
-from pip._internal.cli import cmdoptions
-from pip._internal.cli.req_command import Command
-from pip._internal.cli.status_codes import SUCCESS
-from pip._internal.metadata import BaseDistribution, get_environment
-from pip._internal.utils.compat import stdlib_pkgs
-from pip._internal.utils.urls import path_to_url
-
-logger = logging.getLogger(__name__)
-
-
-class InspectCommand(Command):
- """
- Inspect the content of a Python environment and produce a report in JSON format.
- """
-
- ignore_require_venv = True
- usage = """
- %prog [options]"""
-
- def add_options(self) -> None:
- self.cmd_opts.add_option(
- "--local",
- action="store_true",
- default=False,
- help=(
- "If in a virtualenv that has global access, do not list "
- "globally-installed packages."
- ),
- )
- self.cmd_opts.add_option(
- "--user",
- dest="user",
- action="store_true",
- default=False,
- help="Only output packages installed in user-site.",
- )
- self.cmd_opts.add_option(cmdoptions.list_path())
- self.parser.insert_option_group(0, self.cmd_opts)
-
- def run(self, options: Values, args: List[str]) -> int:
- logger.warning(
- "pip inspect is currently an experimental command. "
- "The output format may change in a future release without prior warning."
- )
-
- cmdoptions.check_list_path_option(options)
- dists = get_environment(options.path).iter_installed_distributions(
- local_only=options.local,
- user_only=options.user,
- skip=set(stdlib_pkgs),
- )
- output = {
- "version": "0",
- "pip_version": __version__,
- "installed": [self._dist_to_dict(dist) for dist in dists],
- "environment": default_environment(),
- # TODO tags? scheme?
- }
- print_json(data=output)
- return SUCCESS
-
- def _dist_to_dict(self, dist: BaseDistribution) -> Dict[str, Any]:
- res: Dict[str, Any] = {
- "metadata": dist.metadata_dict,
- "metadata_location": dist.info_location,
- }
- # direct_url. Note that we don't have download_info (as in the installation
- # report) since it is not recorded in installed metadata.
- direct_url = dist.direct_url
- if direct_url is not None:
- res["direct_url"] = direct_url.to_dict()
- else:
- # Emulate direct_url for legacy editable installs.
- editable_project_location = dist.editable_project_location
- if editable_project_location is not None:
- res["direct_url"] = {
- "url": path_to_url(editable_project_location),
- "dir_info": {
- "editable": True,
- },
- }
- # installer
- installer = dist.installer
- if dist.installer:
- res["installer"] = installer
- # requested
- if dist.installed_with_dist_info:
- res["requested"] = dist.requested
- return res
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/wheel.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/wheel.py
deleted file mode 100644
index 1afbd562c65832b985e0e5f350876cd723aa3b72..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/wheel.py
+++ /dev/null
@@ -1,203 +0,0 @@
-import logging
-import os
-import shutil
-from optparse import Values
-from typing import List
-
-from pip._internal.cache import WheelCache
-from pip._internal.cli import cmdoptions
-from pip._internal.cli.req_command import RequirementCommand, with_cleanup
-from pip._internal.cli.status_codes import SUCCESS
-from pip._internal.exceptions import CommandError
-from pip._internal.operations.build.build_tracker import get_build_tracker
-from pip._internal.req.req_install import (
- InstallRequirement,
- LegacySetupPyOptionsCheckMode,
- check_legacy_setup_py_options,
-)
-from pip._internal.utils.deprecation import deprecated
-from pip._internal.utils.misc import ensure_dir, normalize_path
-from pip._internal.utils.temp_dir import TempDirectory
-from pip._internal.wheel_builder import build, should_build_for_wheel_command
-
-logger = logging.getLogger(__name__)
-
-
-class WheelCommand(RequirementCommand):
- """
- Build Wheel archives for your requirements and dependencies.
-
- Wheel is a built-package format, and offers the advantage of not
- recompiling your software during every install. For more details, see the
- wheel docs: https://wheel.readthedocs.io/en/latest/
-
- 'pip wheel' uses the build system interface as described here:
- https://pip.pypa.io/en/stable/reference/build-system/
-
- """
-
- usage = """
- %prog [options] <requirement specifier> ...
- %prog [options] -r <requirements file> ...
- %prog [options] [-e] <vcs project url> ...
- %prog [options] [-e] <local project path> ...
- %prog [options] <archive url/path> ..."""
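- # Illustrative invocations (the paths are examples only):
- #   pip wheel -r requirements.txt -w ./wheelhouse
- #   pip wheel --no-deps .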
-
- def add_options(self) -> None:
-
- self.cmd_opts.add_option(
- "-w",
- "--wheel-dir",
- dest="wheel_dir",
- metavar="dir",
- default=os.curdir,
- help=(
- "Build wheels into , where the default is the "
- "current working directory."
- ),
- )
- self.cmd_opts.add_option(cmdoptions.no_binary())
- self.cmd_opts.add_option(cmdoptions.only_binary())
- self.cmd_opts.add_option(cmdoptions.prefer_binary())
- self.cmd_opts.add_option(cmdoptions.no_build_isolation())
- self.cmd_opts.add_option(cmdoptions.use_pep517())
- self.cmd_opts.add_option(cmdoptions.no_use_pep517())
- self.cmd_opts.add_option(cmdoptions.check_build_deps())
- self.cmd_opts.add_option(cmdoptions.constraints())
- self.cmd_opts.add_option(cmdoptions.editable())
- self.cmd_opts.add_option(cmdoptions.requirements())
- self.cmd_opts.add_option(cmdoptions.src())
- self.cmd_opts.add_option(cmdoptions.ignore_requires_python())
- self.cmd_opts.add_option(cmdoptions.no_deps())
- self.cmd_opts.add_option(cmdoptions.progress_bar())
-
- self.cmd_opts.add_option(
- "--no-verify",
- dest="no_verify",
- action="store_true",
- default=False,
- help="Don't verify if built wheel is valid.",
- )
-
- self.cmd_opts.add_option(cmdoptions.config_settings())
- self.cmd_opts.add_option(cmdoptions.build_options())
- self.cmd_opts.add_option(cmdoptions.global_options())
-
- self.cmd_opts.add_option(
- "--pre",
- action="store_true",
- default=False,
- help=(
- "Include pre-release and development versions. By default, "
- "pip only finds stable versions."
- ),
- )
-
- self.cmd_opts.add_option(cmdoptions.require_hashes())
-
- index_opts = cmdoptions.make_option_group(
- cmdoptions.index_group,
- self.parser,
- )
-
- self.parser.insert_option_group(0, index_opts)
- self.parser.insert_option_group(0, self.cmd_opts)
-
- @with_cleanup
- def run(self, options: Values, args: List[str]) -> int:
- session = self.get_default_session(options)
-
- finder = self._build_package_finder(options, session)
- wheel_cache = WheelCache(options.cache_dir, options.format_control)
-
- options.wheel_dir = normalize_path(options.wheel_dir)
- ensure_dir(options.wheel_dir)
-
- build_tracker = self.enter_context(get_build_tracker())
-
- directory = TempDirectory(
- delete=not options.no_clean,
- kind="wheel",
- globally_managed=True,
- )
-
- reqs = self.get_requirements(args, options, finder, session)
- check_legacy_setup_py_options(
- options, reqs, LegacySetupPyOptionsCheckMode.WHEEL
- )
-
- if "no-binary-enable-wheel-cache" in options.features_enabled:
- # TODO: remove format_control from WheelCache when the deprecation cycle
- # is over
- wheel_cache = WheelCache(options.cache_dir)
- else:
- if options.format_control.no_binary:
- deprecated(
- reason=(
- "--no-binary currently disables reading from "
- "the cache of locally built wheels. In the future "
- "--no-binary will not influence the wheel cache."
- ),
- replacement="to use the --no-cache-dir option",
- feature_flag="no-binary-enable-wheel-cache",
- issue=11453,
- gone_in="23.1",
- )
- wheel_cache = WheelCache(options.cache_dir, options.format_control)
-
- preparer = self.make_requirement_preparer(
- temp_build_dir=directory,
- options=options,
- build_tracker=build_tracker,
- session=session,
- finder=finder,
- download_dir=options.wheel_dir,
- use_user_site=False,
- verbosity=self.verbosity,
- )
-
- resolver = self.make_resolver(
- preparer=preparer,
- finder=finder,
- options=options,
- wheel_cache=wheel_cache,
- ignore_requires_python=options.ignore_requires_python,
- use_pep517=options.use_pep517,
- )
-
- self.trace_basic_info(finder)
-
- requirement_set = resolver.resolve(reqs, check_supported_wheels=True)
-
- reqs_to_build: List[InstallRequirement] = []
- for req in requirement_set.requirements.values():
- if req.is_wheel:
- preparer.save_linked_requirement(req)
- elif should_build_for_wheel_command(req):
- reqs_to_build.append(req)
-
- # build wheels
- build_successes, build_failures = build(
- reqs_to_build,
- wheel_cache=wheel_cache,
- verify=(not options.no_verify),
- build_options=options.build_options or [],
- global_options=options.global_options or [],
- )
- for req in build_successes:
- assert req.link and req.link.is_wheel
- assert req.local_file_path
- # copy from cache to target directory
- try:
- shutil.copy(req.local_file_path, options.wheel_dir)
- except OSError as e:
- logger.warning(
- "Building wheel for %s failed: %s",
- req.name,
- e,
- )
- build_failures.append(req)
- if len(build_failures) != 0:
- raise CommandError("Failed to build one or more wheels")
-
- return SUCCESS
diff --git a/spaces/Realcat/image-matching-webui/third_party/r2d2/datasets/imgfolder.py b/spaces/Realcat/image-matching-webui/third_party/r2d2/datasets/imgfolder.py
deleted file mode 100644
index 40168f00e8ad177f3d94f75578dba2e640944c4c..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/r2d2/datasets/imgfolder.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright 2019-present NAVER Corp.
-# CC BY-NC-SA 3.0
-# Available only for non-commercial use
-
-import os, pdb
-
-from .dataset import Dataset
-from .pair_dataset import SyntheticPairDataset
-
-
-class ImgFolder(Dataset):
- """load all images in a folder (no recursion)."""
-
- def __init__(self, root, imgs=None, exts=(".jpg", ".png", ".ppm")):
- Dataset.__init__(self)
- self.root = root
- self.imgs = imgs or [f for f in os.listdir(root) if f.endswith(exts)]
- self.nimg = len(self.imgs)
-
- def get_key(self, idx):
- return self.imgs[idx]
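-
-
-# Illustrative usage (the folder path is hypothetical):
-#   dataset = ImgFolder("datasets/my_images")
-#   print(dataset.nimg, dataset.get_key(0))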
diff --git a/spaces/Ricecake123/RVC-demo/docs/Changelog_CN.md b/spaces/Ricecake123/RVC-demo/docs/Changelog_CN.md
deleted file mode 100644
index eb67ba574ef10fb980292baec0cd0f9e7582f851..0000000000000000000000000000000000000000
--- a/spaces/Ricecake123/RVC-demo/docs/Changelog_CN.md
+++ /dev/null
@@ -1,80 +0,0 @@
-### 2023-06-18 update
-- Added two new v2 pretrained models for 32k and 48k
-- Fixed an inference error with non-f0 models
-- For training sets longer than one hour, the index-building step now automatically shrinks the features with kmeans to speed up index training, adding and querying
-- Added a bonus voice-to-guitar toy repository
-- Data preprocessing now removes outlier slices
-- Added an onnx export tab
-
-Failed experiments:
-- ~~Adding a temporal dimension to feature retrieval: abandoned, no noticeable effect~~
-- ~~Optional PCAR dimensionality reduction for feature retrieval: abandoned, kmeans already shrinks large datasets, and on small datasets the reduction takes longer than the matching time it saves~~
-- ~~onnx inference support (with an inference-only mini package): abandoned, generating nsf still requires pytorch~~
-- ~~Random augmentation of the input (pitch, gender, eq, noise, etc.) during training: abandoned, no noticeable effect~~
-
-todolist:
-- Investigate integrating a small vocoder
-- Support crepe for pitch extraction on the training set
-- Sync crepe precision support with RVC-config
-- Hook up the F0 editor
-
-
-### 2023-05-28 update
-- Added a v2 jupyter notebook and a Korean changelog, plus some environment dependencies
-- Added a protection mode for breathing, voiceless consonants and sibilants
-- Support crepe-full inference
-- UVR5 vocal/accompaniment separation gains 3 de-delay models and an MDX-Net de-reverb model; added the HP3 vocal extraction model
-- Index names now include the version and the experiment name
-- Added an audio export format option for vocal/accompaniment separation and batch inference export
-- Dropped training of 32k models
-
-### 2023-05-13 update
-- Removed the leftover lib.infer_pack and uvr5_pack from the old runtime inside the one-click package
-- Fixed the fake multiprocessing bug in training set preprocessing
-- Added an optional median filter for harvest pitch extraction to reduce muted artifacts, with an adjustable filter radius
-- Added post-processing resampling for exported audio
-- The n_cpu process count for training now covers "data preprocessing and f0 extraction" instead of "f0 extraction only"
-- Automatically detect index paths under the logs folder and offer them in a drop-down list
-- Added a "FAQ" tab (see also the github rvc wiki)
-- Inference now caches the pitch for input audio at the same path (purpose: with harvest pitch extraction the whole pipeline goes through a long, repeated pitch extraction step, and without the cache, users experimenting with different timbres, indexes or median filter radii would wait painfully long after the first test)
-
-### 2023-05-14 update
-- Mix in a volume envelope aligned with the input (can ease the "silent input, faint noisy output" problem; not recommended when the input has loud background noise, and off by default (a value of 1 counts as off))
-- Support saving extracted small models at a chosen interval (very useful if you want to compare inference at different epochs without keeping every large checkpoint and manually extracting a small model from the ckpt each time)
-- Fixed browser connection errors caused by a system-wide proxy on the server by setting environment variables
-- Support v2 pretrained models (currently only the 40k version is released for testing; the other two sample rates are not fully trained yet)
-- Clamp excessive volume above 1 before inference
-- Fine-tuned data preprocessing parameters
-
-
-### 2023-04-09 update
-- Corrected training parameters to raise average GPU utilization: A100 from at most 25% to about 90%, V100 from about 50% to about 90%, 2060S from about 60% to about 85%, P40 from about 25% to about 95%; training speed improves significantly
-- Changed the meaning of batch_size from total batch_size to batch_size per GPU
-- Changed total_epoch: maximum raised from 100 to 1000; default raised from 10 to 20
-- Fixed broken inference caused by ckpt extraction misdetecting whether the model uses pitch
-- Fixed distributed training saving a ckpt once per rank
-- Filter out nan features during feature extraction
-- Fixed silent input producing random consonants or noise (old models need a rebuilt training set and retraining)
-
-### 2023-04-16 update
-- Added a local real-time voice changing mini GUI, launched by double-clicking go-realtime-gui.bat
-- Filter out frequency bands below 50Hz in both training and inference
-- Lowered the minimum pyworld pitch for training and inference from the default 80 to 50, so low male voices between 50-80Hz no longer go mute
-- The WebUI switches language based on the system locale (currently supports en_US, ja_JP, zh_CN, zh_HK, zh_SG, zh_TW; unsupported locales default to en_US)
-- Fixed GPU detection for some cards (e.g. V100-16G and P4 were not recognized)
-
-### 2023-04-28 update
-- Upgraded the faiss index settings for higher speed and quality
-- Removed the total_npy dependency; shared models no longer need a total_npy file
-- Lifted the restriction on 16-series GPUs; 4GB-VRAM GPUs get a 4GB inference preset
-- Fixed a UVR5 vocal/accompaniment separation bug for some audio formats
-- The real-time voice changing mini GUI now supports non-40k models and models without pitch guidance
-
-### Future plans:
-Features:
-- Support a multi-speaker training tab (up to 4 speakers)
-
-Base models:
-- Collect breathing wav files into the training set to fix the issue of breaths turning into distorted electronic noise
-- We are training base models with an added singing training set and will release them in the future
-
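
The 2023-05-13 entry above mentions optionally median-filtering the harvest pitch contour with an adjustable radius. As a rough, standalone illustration only (not the project's actual implementation), assuming SciPy is available:

```python
import numpy as np
from scipy.signal import medfilt

def smooth_f0(f0: np.ndarray, radius: int = 3) -> np.ndarray:
    """Median-filter a pitch contour; the kernel size 2*radius+1 is always odd."""
    return medfilt(f0, kernel_size=2 * radius + 1)

# Toy contour with one spurious spike that the filter removes
f0 = np.array([100.0, 101.0, 102.0, 250.0, 103.0, 104.0, 105.0])
print(smooth_f0(f0, radius=1))
```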
diff --git a/spaces/Ritori/TTS_Yui/hifi-gan/README.md b/spaces/Ritori/TTS_Yui/hifi-gan/README.md
deleted file mode 100644
index 4c63381f7c41874fe97c13f40b92d6bb9413e751..0000000000000000000000000000000000000000
--- a/spaces/Ritori/TTS_Yui/hifi-gan/README.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
-
-### Jungil Kong, Jaehyeon Kim, Jaekyoung Bae
-
-In our [paper](https://arxiv.org/abs/2010.05646),
-we proposed HiFi-GAN: a GAN-based model capable of generating high fidelity speech efficiently.
-We provide our implementation and pretrained models as open source in this repository.
-
-**Abstract :**
-Several recent works on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms.
-Although such methods improve the sampling efficiency and memory usage,
-their sample quality has not yet reached that of autoregressive and flow-based generative models.
-In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis.
-As speech audio consists of sinusoidal signals with various periods,
-we demonstrate that modeling the periodic patterns of audio is crucial for enhancing sample quality.
-A subjective human evaluation (mean opinion score, MOS) of a single speaker dataset indicates that our proposed method
-demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than
-real-time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen
-speakers and end-to-end speech synthesis. Finally, a small footprint version of HiFi-GAN generates samples 13.4 times
-faster than real-time on CPU with comparable quality to an autoregressive counterpart.
-
-Visit our [demo website](https://jik876.github.io/hifi-gan-demo/) for audio samples.
-
-
-## Pre-requisites
-1. Python >= 3.6
-2. Clone this repository.
-3. Install the Python requirements. Please refer to [requirements.txt](requirements.txt).
-4. Download and extract the [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/).
-Then move all wav files to `LJSpeech-1.1/wavs`.
-
-
-## Training
-```
-python train.py --config config_v1.json
-```
-To train V2 or V3 Generator, replace `config_v1.json` with `config_v2.json` or `config_v3.json`.
-Checkpoints and a copy of the configuration file are saved in the `cp_hifigan` directory by default.
-You can change the path by adding `--checkpoint_path` option.
-
-Validation loss during training with V1 generator.
-
-
-## Pretrained Model
-You can also use pretrained models we provide.
-[Download pretrained models](https://drive.google.com/drive/folders/1-eEYTB5Av9jNql0WGBlRoi-WH2J7bp5Y?usp=sharing)
-Details of each folder are as follows:
-
-|Folder Name|Generator|Dataset|Fine-Tuned|
-|------|---|---|---|
-|LJ_V1|V1|LJSpeech|No|
-|LJ_V2|V2|LJSpeech|No|
-|LJ_V3|V3|LJSpeech|No|
-|LJ_FT_T2_V1|V1|LJSpeech|Yes ([Tacotron2](https://github.com/NVIDIA/tacotron2))|
-|LJ_FT_T2_V2|V2|LJSpeech|Yes ([Tacotron2](https://github.com/NVIDIA/tacotron2))|
-|LJ_FT_T2_V3|V3|LJSpeech|Yes ([Tacotron2](https://github.com/NVIDIA/tacotron2))|
-|VCTK_V1|V1|VCTK|No|
-|VCTK_V2|V2|VCTK|No|
-|VCTK_V3|V3|VCTK|No|
-|UNIVERSAL_V1|V1|Universal|No|
-
-We provide the universal model with discriminator weights that can be used as a base for transfer learning to other datasets.
-
-## Fine-Tuning
-1. Generate mel-spectrograms in numpy format using [Tacotron2](https://github.com/NVIDIA/tacotron2) with teacher-forcing.
-The file name of the generated mel-spectrogram should match the audio file and the extension should be `.npy`.
-Example:
- ```
- Audio File : LJ001-0001.wav
- Mel-Spectrogram File : LJ001-0001.npy
- ```
-2. Create `ft_dataset` folder and copy the generated mel-spectrogram files into it.
-3. Run the following command.
- ```
- python train.py --fine_tuning True --config config_v1.json
- ```
- For other command line options, please refer to the training section.
-
-
-## Inference from wav file
-1. Make a `test_files` directory and copy wav files into it.
-2. Run the following command.
- ```
- python inference.py --checkpoint_file [generator checkpoint file path]
- ```
-Generated wav files are saved in `generated_files` by default.
-You can change the path by adding `--output_dir` option.
-
-
-## Inference for end-to-end speech synthesis
-1. Make a `test_mel_files` directory and copy generated mel-spectrogram files into it.
-You can generate mel-spectrograms using [Tacotron2](https://github.com/NVIDIA/tacotron2),
-[Glow-TTS](https://github.com/jaywalnut310/glow-tts) and so forth.
-2. Run the following command.
- ```
- python inference_e2e.py --checkpoint_file [generator checkpoint file path]
- ```
-Generated wav files are saved in `generated_files_from_mel` by default.
-You can change the path by adding `--output_dir` option.
-
-
-## Acknowledgements
-We referred to [WaveGlow](https://github.com/NVIDIA/waveglow), [MelGAN](https://github.com/descriptinc/melgan-neurips)
-and [Tacotron2](https://github.com/NVIDIA/tacotron2) to implement this.
-
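
Following the Fine-Tuning section above, it can help to verify that every audio file has a matching mel-spectrogram before training. A small helper along those lines (not part of this repository; directory names follow the README):

```python
import os

def check_finetune_pairs(wav_dir="LJSpeech-1.1/wavs", mel_dir="ft_dataset"):
    """Report wav files that have no matching .npy mel-spectrogram."""
    wavs = {os.path.splitext(f)[0] for f in os.listdir(wav_dir) if f.endswith(".wav")}
    mels = {os.path.splitext(f)[0] for f in os.listdir(mel_dir) if f.endswith(".npy")}
    missing = sorted(wavs - mels)
    if missing:
        print(f"{len(missing)} wav files lack a mel-spectrogram, e.g. {missing[:5]}")
    else:
        print("Every wav file has a matching .npy mel-spectrogram.")

# check_finetune_pairs()  # run from the repository root
```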
diff --git a/spaces/RoAr777/fer/app.py b/spaces/RoAr777/fer/app.py
deleted file mode 100644
index 6fc412b5fa7f6f469ab5316880c8ff527a0f908e..0000000000000000000000000000000000000000
--- a/spaces/RoAr777/fer/app.py
+++ /dev/null
@@ -1,48 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-Created on Sun Dec 25 08:38:00 2022
-
-@author: ROSHAN
-"""
-
-import tensorflow as tf
-import gradio as gr
-import numpy as np
-import cv2
-cls=['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']
-model = tf.keras.models.load_model("56fer.h5")
-face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
-def show(img):
-    # Gradio passes an RGB image; convert to BGR for OpenCV
-    img = img[:, :, ::-1].copy()
-
-    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
-    faces = face_cascade.detectMultiScale(gray, 1.5, 1)
-    if len(faces) == 0:
-        # No face detected: return an empty label mapping
-        return {}
-    # Use the first detected face
-    x, y, w, h = faces[0]
-    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 255, 0), 2)
-    # Sharpen the grayscale image before cropping the face region
-    sharp_kernel = np.array([[0, -1, 0],
-                             [-1, 5, -1],
-                             [0, -1, 0]])
-    sharp_img = cv2.filter2D(src=gray, ddepth=-1, kernel=sharp_kernel)
-    crop_img = sharp_img[y:y + h, x:x + w]
-
-    # Resize the face crop to the 48x48 input the model expects and normalize
-    npa = cv2.resize(crop_img, (48, 48)) / 255.0
-    predictions = model.predict(npa.reshape(-1, 48, 48, 1))
-    score = tf.nn.softmax(predictions[0])
-    confidences = {cls[i]: float(score[i]) for i in range(len(cls))}
-    return confidences
-demo = gr.Interface(
- fn=show,
- inputs="image",
- outputs=gr.outputs.Label(num_top_classes=7),
-)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/hungarian_assigner.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/hungarian_assigner.py
deleted file mode 100644
index e10cc14afac4ddfcb9395c1a250ece1fbfe3263c..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/bbox/assigners/hungarian_assigner.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import torch
-
-from ..builder import BBOX_ASSIGNERS
-from ..match_costs import build_match_cost
-from ..transforms import bbox_cxcywh_to_xyxy
-from .assign_result import AssignResult
-from .base_assigner import BaseAssigner
-
-try:
- from scipy.optimize import linear_sum_assignment
-except ImportError:
- linear_sum_assignment = None
-
-
-@BBOX_ASSIGNERS.register_module()
-class HungarianAssigner(BaseAssigner):
- """Computes one-to-one matching between predictions and ground truth.
-
- This class computes an assignment between the targets and the predictions
- based on the costs. The costs are weighted sum of three components:
- classification cost, regression L1 cost and regression iou cost. The
- targets don't include the no_object, so generally there are more
- predictions than targets. After the one-to-one matching, the un-matched
- are treated as backgrounds. Thus each query prediction will be assigned
- with `0` or a positive integer indicating the ground truth index:
-
- - 0: negative sample, no assigned gt
- - positive integer: positive sample, index (1-based) of assigned gt
-
-    Args:
-        cls_cost (dict, optional): The config for the classification cost.
-            Default dict(type='ClassificationCost', weight=1.).
-        reg_cost (dict, optional): The config for the regression L1 cost.
-            Default dict(type='BBoxL1Cost', weight=1.0).
-        iou_cost (dict, optional): The config for the regression IoU cost.
-            Default dict(type='IoUCost', iou_mode='giou', weight=1.0).
-    """
-
- def __init__(self,
- cls_cost=dict(type='ClassificationCost', weight=1.),
- reg_cost=dict(type='BBoxL1Cost', weight=1.0),
- iou_cost=dict(type='IoUCost', iou_mode='giou', weight=1.0)):
- self.cls_cost = build_match_cost(cls_cost)
- self.reg_cost = build_match_cost(reg_cost)
- self.iou_cost = build_match_cost(iou_cost)
-
- def assign(self,
- bbox_pred,
- cls_pred,
- gt_bboxes,
- gt_labels,
- img_meta,
- gt_bboxes_ignore=None,
- eps=1e-7):
- """Computes one-to-one matching based on the weighted costs.
-
- This method assign each query prediction to a ground truth or
- background. The `assigned_gt_inds` with -1 means don't care,
- 0 means negative sample, and positive number is the index (1-based)
- of assigned gt.
- The assignment is done in the following steps, the order matters.
-
- 1. assign every prediction to -1
- 2. compute the weighted costs
- 3. do Hungarian matching on CPU based on the costs
- 4. assign all to 0 (background) first, then for each matched pair
- between predictions and gts, treat this prediction as foreground
- and assign the corresponding gt index (plus 1) to it.
-
- Args:
- bbox_pred (Tensor): Predicted boxes with normalized coordinates
- (cx, cy, w, h), which are all in range [0, 1]. Shape
- [num_query, 4].
- cls_pred (Tensor): Predicted classification logits, shape
- [num_query, num_class].
- gt_bboxes (Tensor): Ground truth boxes with unnormalized
- coordinates (x1, y1, x2, y2). Shape [num_gt, 4].
- gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,).
- img_meta (dict): Meta information for current image.
- gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are
- labelled as `ignored`. Default None.
- eps (int | float, optional): A value added to the denominator for
- numerical stability. Default 1e-7.
-
- Returns:
- :obj:`AssignResult`: The assigned result.
- """
- assert gt_bboxes_ignore is None, \
- 'Only case when gt_bboxes_ignore is None is supported.'
- num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0)
-
- # 1. assign -1 by default
- assigned_gt_inds = bbox_pred.new_full((num_bboxes, ),
- -1,
- dtype=torch.long)
- assigned_labels = bbox_pred.new_full((num_bboxes, ),
- -1,
- dtype=torch.long)
- if num_gts == 0 or num_bboxes == 0:
- # No ground truth or boxes, return empty assignment
- if num_gts == 0:
- # No ground truth, assign all to background
- assigned_gt_inds[:] = 0
- return AssignResult(
- num_gts, assigned_gt_inds, None, labels=assigned_labels)
- img_h, img_w, _ = img_meta['img_shape']
- factor = gt_bboxes.new_tensor([img_w, img_h, img_w,
- img_h]).unsqueeze(0)
-
- # 2. compute the weighted costs
- # classification and bboxcost.
- cls_cost = self.cls_cost(cls_pred, gt_labels)
- # regression L1 cost
- normalize_gt_bboxes = gt_bboxes / factor
- reg_cost = self.reg_cost(bbox_pred, normalize_gt_bboxes)
- # regression iou cost, defaultly giou is used in official DETR.
- bboxes = bbox_cxcywh_to_xyxy(bbox_pred) * factor
- iou_cost = self.iou_cost(bboxes, gt_bboxes)
- # weighted sum of above three costs
- cost = cls_cost + reg_cost + iou_cost
-
- # 3. do Hungarian matching on CPU using linear_sum_assignment
- cost = cost.detach().cpu()
- if linear_sum_assignment is None:
- raise ImportError('Please run "pip install scipy" '
- 'to install scipy first.')
- matched_row_inds, matched_col_inds = linear_sum_assignment(cost)
- matched_row_inds = torch.from_numpy(matched_row_inds).to(
- bbox_pred.device)
- matched_col_inds = torch.from_numpy(matched_col_inds).to(
- bbox_pred.device)
-
- # 4. assign backgrounds and foregrounds
- # assign all indices to backgrounds first
- assigned_gt_inds[:] = 0
- # assign foregrounds based on matching results
- assigned_gt_inds[matched_row_inds] = matched_col_inds + 1
- assigned_labels[matched_row_inds] = gt_labels[matched_col_inds]
- return AssignResult(
- num_gts, assigned_gt_inds, None, labels=assigned_labels)
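
As a standalone illustration of step 3 in `assign` above, SciPy's `linear_sum_assignment` finds the minimum-cost one-to-one matching on a toy cost matrix (the values are made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# 4 predictions (rows) x 2 ground-truth boxes (columns)
cost = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
    [0.5, 0.5],
    [0.7, 0.3],
])
matched_rows, matched_cols = linear_sum_assignment(cost)
print(matched_rows, matched_cols)                  # each gt column is matched exactly once
print("total cost:", cost[matched_rows, matched_cols].sum())
```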
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_sampler.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_sampler.py
deleted file mode 100644
index a802a74fd6c3610d9ae178e6201f47423eca7ad1..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_sampler.py
+++ /dev/null
@@ -1,177 +0,0 @@
-from typing import List
-
-import torch
-from torch import nn as nn
-
-from annotator.uniformer.mmcv.runner import force_fp32
-from .furthest_point_sample import (furthest_point_sample,
- furthest_point_sample_with_dist)
-
-
-def calc_square_dist(point_feat_a, point_feat_b, norm=True):
- """Calculating square distance between a and b.
-
- Args:
- point_feat_a (Tensor): (B, N, C) Feature vector of each point.
- point_feat_b (Tensor): (B, M, C) Feature vector of each point.
- norm (Bool, optional): Whether to normalize the distance.
- Default: True.
-
- Returns:
- Tensor: (B, N, M) Distance between each pair points.
- """
- num_channel = point_feat_a.shape[-1]
- # [bs, n, 1]
- a_square = torch.sum(point_feat_a.unsqueeze(dim=2).pow(2), dim=-1)
- # [bs, 1, m]
- b_square = torch.sum(point_feat_b.unsqueeze(dim=1).pow(2), dim=-1)
-
- corr_matrix = torch.matmul(point_feat_a, point_feat_b.transpose(1, 2))
-
- dist = a_square + b_square - 2 * corr_matrix
- if norm:
- dist = torch.sqrt(dist) / num_channel
- return dist
-
-
-def get_sampler_cls(sampler_type):
- """Get the type and mode of points sampler.
-
- Args:
- sampler_type (str): The type of points sampler.
- The valid value are "D-FPS", "F-FPS", or "FS".
-
- Returns:
- class: Points sampler type.
- """
- sampler_mappings = {
- 'D-FPS': DFPSSampler,
- 'F-FPS': FFPSSampler,
- 'FS': FSSampler,
- }
- try:
- return sampler_mappings[sampler_type]
- except KeyError:
- raise KeyError(
- f'Supported `sampler_type` are {sampler_mappings.keys()}, but got \
- {sampler_type}')
-
-
-class PointsSampler(nn.Module):
- """Points sampling.
-
- Args:
- num_point (list[int]): Number of sample points.
- fps_mod_list (list[str], optional): Type of FPS method, valid mod
- ['F-FPS', 'D-FPS', 'FS'], Default: ['D-FPS'].
- F-FPS: using feature distances for FPS.
- D-FPS: using Euclidean distances of points for FPS.
- FS: using F-FPS and D-FPS simultaneously.
- fps_sample_range_list (list[int], optional):
- Range of points to apply FPS. Default: [-1].
- """
-
- def __init__(self,
- num_point: List[int],
- fps_mod_list: List[str] = ['D-FPS'],
- fps_sample_range_list: List[int] = [-1]):
- super().__init__()
- # FPS would be applied to different fps_mod in the list,
- # so the length of the num_point should be equal to
- # fps_mod_list and fps_sample_range_list.
- assert len(num_point) == len(fps_mod_list) == len(
- fps_sample_range_list)
- self.num_point = num_point
- self.fps_sample_range_list = fps_sample_range_list
- self.samplers = nn.ModuleList()
- for fps_mod in fps_mod_list:
- self.samplers.append(get_sampler_cls(fps_mod)())
- self.fp16_enabled = False
-
- @force_fp32()
- def forward(self, points_xyz, features):
- """
- Args:
- points_xyz (Tensor): (B, N, 3) xyz coordinates of the features.
- features (Tensor): (B, C, N) Descriptors of the features.
-
- Returns:
- Tensor: (B, npoint, sample_num) Indices of sampled points.
- """
- indices = []
- last_fps_end_index = 0
-
- for fps_sample_range, sampler, npoint in zip(
- self.fps_sample_range_list, self.samplers, self.num_point):
- assert fps_sample_range < points_xyz.shape[1]
-
- if fps_sample_range == -1:
- sample_points_xyz = points_xyz[:, last_fps_end_index:]
- if features is not None:
- sample_features = features[:, :, last_fps_end_index:]
- else:
- sample_features = None
- else:
- sample_points_xyz = \
- points_xyz[:, last_fps_end_index:fps_sample_range]
- if features is not None:
- sample_features = features[:, :, last_fps_end_index:
- fps_sample_range]
- else:
- sample_features = None
-
- fps_idx = sampler(sample_points_xyz.contiguous(), sample_features,
- npoint)
-
- indices.append(fps_idx + last_fps_end_index)
- last_fps_end_index += fps_sample_range
- indices = torch.cat(indices, dim=1)
-
- return indices
-
-
-class DFPSSampler(nn.Module):
- """Using Euclidean distances of points for FPS."""
-
- def __init__(self):
- super().__init__()
-
- def forward(self, points, features, npoint):
- """Sampling points with D-FPS."""
- fps_idx = furthest_point_sample(points.contiguous(), npoint)
- return fps_idx
-
-
-class FFPSSampler(nn.Module):
- """Using feature distances for FPS."""
-
- def __init__(self):
- super().__init__()
-
- def forward(self, points, features, npoint):
- """Sampling points with F-FPS."""
- assert features is not None, \
- 'feature input to FFPS_Sampler should not be None'
- features_for_fps = torch.cat([points, features.transpose(1, 2)], dim=2)
- features_dist = calc_square_dist(
- features_for_fps, features_for_fps, norm=False)
- fps_idx = furthest_point_sample_with_dist(features_dist, npoint)
- return fps_idx
-
-
-class FSSampler(nn.Module):
- """Using F-FPS and D-FPS simultaneously."""
-
- def __init__(self):
- super().__init__()
-
- def forward(self, points, features, npoint):
- """Sampling points with FS_Sampling."""
- assert features is not None, \
- 'feature input to FS_Sampler should not be None'
- ffps_sampler = FFPSSampler()
- dfps_sampler = DFPSSampler()
- fps_idx_ffps = ffps_sampler(points, features, npoint)
- fps_idx_dfps = dfps_sampler(points, features, npoint)
- fps_idx = torch.cat([fps_idx_ffps, fps_idx_dfps], dim=1)
- return fps_idx
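
For intuition about the D-FPS mode above (independent of the CUDA `furthest_point_sample` op), a naive farthest point sampling loop over a single point cloud might look like this:

```python
import torch

def naive_fps(points: torch.Tensor, npoint: int) -> torch.Tensor:
    """Greedy farthest point sampling on (N, 3) points; returns sampled indices."""
    n = points.shape[0]
    selected = torch.zeros(npoint, dtype=torch.long)
    dist = torch.full((n,), float("inf"))  # distance to the nearest selected point
    farthest = 0  # start from an arbitrary point
    for i in range(npoint):
        selected[i] = farthest
        d = torch.sum((points - points[farthest]) ** 2, dim=1)
        dist = torch.minimum(dist, d)
        farthest = int(torch.argmax(dist))
    return selected

print(naive_fps(torch.rand(1024, 3), 16))
```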
diff --git a/spaces/Ron0420/EfficientNetV2_Deepfakes_Video_Detector/README.md b/spaces/Ron0420/EfficientNetV2_Deepfakes_Video_Detector/README.md
deleted file mode 100644
index 60c605b88ac603544946c4e6ef07ac4a97c85b27..0000000000000000000000000000000000000000
--- a/spaces/Ron0420/EfficientNetV2_Deepfakes_Video_Detector/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Deepfakes_Video_Detector
-emoji: 🔥
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/SantoshKumar/SD-H5-AR-VR-IOT/README.md b/spaces/SantoshKumar/SD-H5-AR-VR-IOT/README.md
deleted file mode 100644
index cf3f8d98b20b52d747b638f3351abd6d208f489c..0000000000000000000000000000000000000000
--- a/spaces/SantoshKumar/SD-H5-AR-VR-IOT/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: SD H5 AR VR IOT
-emoji: 💩
-colorFrom: purple
-colorTo: purple
-sdk: static
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets.py b/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets.py
deleted file mode 100644
index 5da3948c2f2e9edcc3cdac49bdf9f738e403de40..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets.py
+++ /dev/null
@@ -1,123 +0,0 @@
-from . import layers
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import spec_utils
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 16)
- self.stg1_high_band_net = BaseASPPNet(2, 16)
-
- self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(8, 16)
-
- self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(16, 32)
-
- self.out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(16, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(16, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
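
To make the `aggressiveness` branch in `forward` above concrete, here is a self-contained sketch of the same frequency-dependent mask sharpening applied to a random mask (shapes and values are arbitrary):

```python
import torch

def sharpen_mask(mask: torch.Tensor, split_bin: int, value: float) -> torch.Tensor:
    """Raise the mask to a power > 1, more strongly above split_bin, so soft
    values are pushed toward 0 and the separation becomes more aggressive."""
    out = mask.clone()
    out[:, :, :split_bin] = torch.pow(out[:, :, :split_bin], 1 + value / 3)
    out[:, :, split_bin:] = torch.pow(out[:, :, split_bin:], 1 + value)
    return out

mask = torch.rand(1, 2, 1024, 100)  # (batch, channels, freq_bins, frames)
print(sharpen_mask(mask, split_bin=85, value=0.3).shape)
```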
diff --git a/spaces/SeyedAli/Image-Similarity/src/model/similarity_interface.py b/spaces/SeyedAli/Image-Similarity/src/model/similarity_interface.py
deleted file mode 100644
index 318cdd972c2d2f758bd9b3dfdbb92cc9dfb28bee..0000000000000000000000000000000000000000
--- a/spaces/SeyedAli/Image-Similarity/src/model/similarity_interface.py
+++ /dev/null
@@ -1,3 +0,0 @@
-class SimilarityInterface:
-    def extract_feature(self, img):
- return []
\ No newline at end of file
diff --git a/spaces/Snowling/White-box-Cartoonization/wbc/guided_filter.py b/spaces/Snowling/White-box-Cartoonization/wbc/guided_filter.py
deleted file mode 100644
index fd019d145efc7f308cd96de90f4e7b648f6820b4..0000000000000000000000000000000000000000
--- a/spaces/Snowling/White-box-Cartoonization/wbc/guided_filter.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import tensorflow as tf
-import numpy as np
-
-
-
-
-def tf_box_filter(x, r):
- k_size = int(2*r+1)
- ch = x.get_shape().as_list()[-1]
- weight = 1/(k_size**2)
- box_kernel = weight*np.ones((k_size, k_size, ch, 1))
- box_kernel = np.array(box_kernel).astype(np.float32)
- output = tf.nn.depthwise_conv2d(x, box_kernel, [1, 1, 1, 1], 'SAME')
- return output
-
-
-
-def guided_filter(x, y, r, eps=1e-2):
-
- x_shape = tf.shape(x)
- #y_shape = tf.shape(y)
-
- N = tf_box_filter(tf.ones((1, x_shape[1], x_shape[2], 1), dtype=x.dtype), r)
-
- mean_x = tf_box_filter(x, r) / N
- mean_y = tf_box_filter(y, r) / N
- cov_xy = tf_box_filter(x * y, r) / N - mean_x * mean_y
- var_x = tf_box_filter(x * x, r) / N - mean_x * mean_x
-
- A = cov_xy / (var_x + eps)
- b = mean_y - A * mean_x
-
- mean_A = tf_box_filter(A, r) / N
- mean_b = tf_box_filter(b, r) / N
-
- output = mean_A * x + mean_b
-
- return output
-
-
-
-def fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-8):
-
- #assert lr_x.shape.ndims == 4 and lr_y.shape.ndims == 4 and hr_x.shape.ndims == 4
-
- lr_x_shape = tf.shape(lr_x)
- #lr_y_shape = tf.shape(lr_y)
- hr_x_shape = tf.shape(hr_x)
-
- N = tf_box_filter(tf.ones((1, lr_x_shape[1], lr_x_shape[2], 1), dtype=lr_x.dtype), r)
-
- mean_x = tf_box_filter(lr_x, r) / N
- mean_y = tf_box_filter(lr_y, r) / N
- cov_xy = tf_box_filter(lr_x * lr_y, r) / N - mean_x * mean_y
- var_x = tf_box_filter(lr_x * lr_x, r) / N - mean_x * mean_x
-
- A = cov_xy / (var_x + eps)
- b = mean_y - A * mean_x
-
- mean_A = tf.image.resize_images(A, hr_x_shape[1: 3])
- mean_b = tf.image.resize_images(b, hr_x_shape[1: 3])
-
- output = mean_A * hr_x + mean_b
-
- return output
-
-
-if __name__ == '__main__':
- import cv2
- from tqdm import tqdm
-
- input_photo = tf.placeholder(tf.float32, [1, None, None, 3])
- #input_superpixel = tf.placeholder(tf.float32, [16, 256, 256, 3])
- output = guided_filter(input_photo, input_photo, 5, eps=1)
- image = cv2.imread('output_figure1/cartoon2.jpg')
- image = image/127.5 - 1
- image = np.expand_dims(image, axis=0)
-
- config = tf.ConfigProto()
- config.gpu_options.allow_growth = True
- sess = tf.Session(config=config)
- sess.run(tf.global_variables_initializer())
-
- out = sess.run(output, feed_dict={input_photo: image})
- out = (np.squeeze(out)+1)*127.5
- out = np.clip(out, 0, 255).astype(np.uint8)
- cv2.imwrite('output_figure1/cartoon2_filter.jpg', out)
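
The TensorFlow code above is the classic guided filter. The same arithmetic in plain NumPy on a single-channel image, with SciPy's `uniform_filter` standing in for the box filter (border handling differs slightly, so treat this as a sketch):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter_np(x, y, r, eps=1e-2):
    """Edge-preserving smoothing of y, guided by x (both 2-D float arrays)."""
    box = lambda a: uniform_filter(a, size=2 * r + 1, mode="nearest")
    mean_x, mean_y = box(x), box(y)
    cov_xy = box(x * y) - mean_x * mean_y
    var_x = box(x * x) - mean_x * mean_x
    A = cov_xy / (var_x + eps)
    b = mean_y - A * mean_x
    return box(A) * x + box(b)

img = np.random.rand(64, 64).astype(np.float32)
print(guided_filter_np(img, img, r=2).shape)
```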
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/capture.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/capture.py
deleted file mode 100644
index 97b6336688b058e689ac6aa410c51a041377014b..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/capture.py
+++ /dev/null
@@ -1,170 +0,0 @@
-# encoding: utf-8
-"""IO capturing utilities."""
-
-# Copyright (c) IPython Development Team.
-# Distributed under the terms of the Modified BSD License.
-
-
-import sys
-from io import StringIO
-
-#-----------------------------------------------------------------------------
-# Classes and functions
-#-----------------------------------------------------------------------------
-
-
-class RichOutput(object):
- def __init__(self, data=None, metadata=None, transient=None, update=False):
- self.data = data or {}
- self.metadata = metadata or {}
- self.transient = transient or {}
- self.update = update
-
- def display(self):
- from IPython.display import publish_display_data
- publish_display_data(data=self.data, metadata=self.metadata,
- transient=self.transient, update=self.update)
-
- def _repr_mime_(self, mime):
- if mime not in self.data:
- return
- data = self.data[mime]
- if mime in self.metadata:
- return data, self.metadata[mime]
- else:
- return data
-
- def _repr_mimebundle_(self, include=None, exclude=None):
- return self.data, self.metadata
-
- def _repr_html_(self):
- return self._repr_mime_("text/html")
-
- def _repr_latex_(self):
- return self._repr_mime_("text/latex")
-
- def _repr_json_(self):
- return self._repr_mime_("application/json")
-
- def _repr_javascript_(self):
- return self._repr_mime_("application/javascript")
-
- def _repr_png_(self):
- return self._repr_mime_("image/png")
-
- def _repr_jpeg_(self):
- return self._repr_mime_("image/jpeg")
-
- def _repr_svg_(self):
- return self._repr_mime_("image/svg+xml")
-
-
-class CapturedIO(object):
- """Simple object for containing captured stdout/err and rich display StringIO objects
-
- Each instance `c` has three attributes:
-
- - ``c.stdout`` : standard output as a string
- - ``c.stderr`` : standard error as a string
- - ``c.outputs``: a list of rich display outputs
-
- Additionally, there's a ``c.show()`` method which will print all of the
- above in the same order, and can be invoked simply via ``c()``.
- """
-
- def __init__(self, stdout, stderr, outputs=None):
- self._stdout = stdout
- self._stderr = stderr
- if outputs is None:
- outputs = []
- self._outputs = outputs
-
- def __str__(self):
- return self.stdout
-
- @property
- def stdout(self):
- "Captured standard output"
- if not self._stdout:
- return ''
- return self._stdout.getvalue()
-
- @property
- def stderr(self):
- "Captured standard error"
- if not self._stderr:
- return ''
- return self._stderr.getvalue()
-
- @property
- def outputs(self):
- """A list of the captured rich display outputs, if any.
-
- If you have a CapturedIO object ``c``, these can be displayed in IPython
- using::
-
- from IPython.display import display
- for o in c.outputs:
- display(o)
- """
- return [ RichOutput(**kargs) for kargs in self._outputs ]
-
- def show(self):
- """write my output to sys.stdout/err as appropriate"""
- sys.stdout.write(self.stdout)
- sys.stderr.write(self.stderr)
- sys.stdout.flush()
- sys.stderr.flush()
- for kargs in self._outputs:
- RichOutput(**kargs).display()
-
- __call__ = show
-
-
-class capture_output(object):
- """context manager for capturing stdout/err"""
- stdout = True
- stderr = True
- display = True
-
- def __init__(self, stdout=True, stderr=True, display=True):
- self.stdout = stdout
- self.stderr = stderr
- self.display = display
- self.shell = None
-
- def __enter__(self):
- from IPython.core.getipython import get_ipython
- from IPython.core.displaypub import CapturingDisplayPublisher
- from IPython.core.displayhook import CapturingDisplayHook
-
- self.sys_stdout = sys.stdout
- self.sys_stderr = sys.stderr
-
- if self.display:
- self.shell = get_ipython()
- if self.shell is None:
- self.save_display_pub = None
- self.display = False
-
- stdout = stderr = outputs = None
- if self.stdout:
- stdout = sys.stdout = StringIO()
- if self.stderr:
- stderr = sys.stderr = StringIO()
- if self.display:
- self.save_display_pub = self.shell.display_pub
- self.shell.display_pub = CapturingDisplayPublisher()
- outputs = self.shell.display_pub.outputs
- self.save_display_hook = sys.displayhook
- sys.displayhook = CapturingDisplayHook(shell=self.shell,
- outputs=outputs)
-
- return CapturedIO(stdout, stderr, outputs)
-
- def __exit__(self, exc_type, exc_value, traceback):
- sys.stdout = self.sys_stdout
- sys.stderr = self.sys_stderr
- if self.display and self.shell:
- self.shell.display_pub = self.save_display_pub
- sys.displayhook = self.save_display_hook
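
A minimal usage example of the `capture_output` context manager defined above; plain stdout/stderr capture works outside IPython as well, while rich outputs are only captured inside an IPython session:

```python
from IPython.utils.capture import capture_output

with capture_output() as captured:
    print("hello from the captured block")

print("captured stdout:", repr(captured.stdout))
captured.show()  # replays stdout/stderr (and any rich outputs)
```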
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_decorators.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_decorators.py
deleted file mode 100644
index 38e871deb7e27ba896d15304378708f50e5ea0ba..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_decorators.py
+++ /dev/null
@@ -1,10 +0,0 @@
-from IPython.utils import decorators
-
-def test_flag_calls():
- @decorators.flag_calls
- def f():
- pass
-
- assert not f.called
- f()
- assert f.called
\ No newline at end of file
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_streams.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_streams.py
deleted file mode 100644
index 4fa7ccc9ffe0e750a1b5a4164970ed4de9c93b2b..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_streams.py
+++ /dev/null
@@ -1,203 +0,0 @@
-from __future__ import annotations
-
-from abc import abstractmethod
-from typing import Any, Callable, Generic, TypeVar, Union
-
-from .._core._exceptions import EndOfStream
-from .._core._typedattr import TypedAttributeProvider
-from ._resources import AsyncResource
-from ._tasks import TaskGroup
-
-T_Item = TypeVar("T_Item")
-T_co = TypeVar("T_co", covariant=True)
-T_contra = TypeVar("T_contra", contravariant=True)
-
-
-class UnreliableObjectReceiveStream(
- Generic[T_co], AsyncResource, TypedAttributeProvider
-):
- """
- An interface for receiving objects.
-
- This interface makes no guarantees that the received messages arrive in the order in which they
- were sent, or that no messages are missed.
-
- Asynchronously iterating over objects of this type will yield objects matching the given type
- parameter.
- """
-
- def __aiter__(self) -> UnreliableObjectReceiveStream[T_co]:
- return self
-
- async def __anext__(self) -> T_co:
- try:
- return await self.receive()
- except EndOfStream:
- raise StopAsyncIteration
-
- @abstractmethod
- async def receive(self) -> T_co:
- """
- Receive the next item.
-
- :raises ~anyio.ClosedResourceError: if the receive stream has been explicitly
- closed
- :raises ~anyio.EndOfStream: if this stream has been closed from the other end
- :raises ~anyio.BrokenResourceError: if this stream has been rendered unusable
- due to external causes
- """
-
-
-class UnreliableObjectSendStream(
- Generic[T_contra], AsyncResource, TypedAttributeProvider
-):
- """
- An interface for sending objects.
-
- This interface makes no guarantees that the messages sent will reach the recipient(s) in the
- same order in which they were sent, or at all.
- """
-
- @abstractmethod
- async def send(self, item: T_contra) -> None:
- """
- Send an item to the peer(s).
-
- :param item: the item to send
- :raises ~anyio.ClosedResourceError: if the send stream has been explicitly
- closed
- :raises ~anyio.BrokenResourceError: if this stream has been rendered unusable
- due to external causes
- """
-
-
-class UnreliableObjectStream(
- UnreliableObjectReceiveStream[T_Item], UnreliableObjectSendStream[T_Item]
-):
- """
- A bidirectional message stream which does not guarantee the order or reliability of message
- delivery.
- """
-
-
-class ObjectReceiveStream(UnreliableObjectReceiveStream[T_co]):
- """
- A receive message stream which guarantees that messages are received in the same order in
- which they were sent, and that no messages are missed.
- """
-
-
-class ObjectSendStream(UnreliableObjectSendStream[T_contra]):
- """
- A send message stream which guarantees that messages are delivered in the same order in which
- they were sent, without missing any messages in the middle.
- """
-
-
-class ObjectStream(
- ObjectReceiveStream[T_Item],
- ObjectSendStream[T_Item],
- UnreliableObjectStream[T_Item],
-):
- """
- A bidirectional message stream which guarantees the order and reliability of message delivery.
- """
-
- @abstractmethod
- async def send_eof(self) -> None:
- """
- Send an end-of-file indication to the peer.
-
- You should not try to send any further data to this stream after calling this method.
- This method is idempotent (does nothing on successive calls).
- """
-
-
-class ByteReceiveStream(AsyncResource, TypedAttributeProvider):
- """
- An interface for receiving bytes from a single peer.
-
- Iterating this byte stream will yield a byte string of arbitrary length, but no more than
- 65536 bytes.
- """
-
- def __aiter__(self) -> ByteReceiveStream:
- return self
-
- async def __anext__(self) -> bytes:
- try:
- return await self.receive()
- except EndOfStream:
- raise StopAsyncIteration
-
- @abstractmethod
- async def receive(self, max_bytes: int = 65536) -> bytes:
- """
- Receive at most ``max_bytes`` bytes from the peer.
-
- .. note:: Implementors of this interface should not return an empty :class:`bytes` object,
- and users should ignore them.
-
- :param max_bytes: maximum number of bytes to receive
- :return: the received bytes
- :raises ~anyio.EndOfStream: if this stream has been closed from the other end
- """
-
-
-class ByteSendStream(AsyncResource, TypedAttributeProvider):
- """An interface for sending bytes to a single peer."""
-
- @abstractmethod
- async def send(self, item: bytes) -> None:
- """
- Send the given bytes to the peer.
-
- :param item: the bytes to send
- """
-
-
-class ByteStream(ByteReceiveStream, ByteSendStream):
- """A bidirectional byte stream."""
-
- @abstractmethod
- async def send_eof(self) -> None:
- """
- Send an end-of-file indication to the peer.
-
- You should not try to send any further data to this stream after calling this method.
- This method is idempotent (does nothing on successive calls).
- """
-
-
-#: Type alias for all unreliable bytes-oriented receive streams.
-AnyUnreliableByteReceiveStream = Union[
- UnreliableObjectReceiveStream[bytes], ByteReceiveStream
-]
-#: Type alias for all unreliable bytes-oriented send streams.
-AnyUnreliableByteSendStream = Union[UnreliableObjectSendStream[bytes], ByteSendStream]
-#: Type alias for all unreliable bytes-oriented streams.
-AnyUnreliableByteStream = Union[UnreliableObjectStream[bytes], ByteStream]
-#: Type alias for all bytes-oriented receive streams.
-AnyByteReceiveStream = Union[ObjectReceiveStream[bytes], ByteReceiveStream]
-#: Type alias for all bytes-oriented send streams.
-AnyByteSendStream = Union[ObjectSendStream[bytes], ByteSendStream]
-#: Type alias for all bytes-oriented streams.
-AnyByteStream = Union[ObjectStream[bytes], ByteStream]
-
-
-class Listener(Generic[T_co], AsyncResource, TypedAttributeProvider):
- """An interface for objects that let you accept incoming connections."""
-
- @abstractmethod
- async def serve(
- self,
- handler: Callable[[T_co], Any],
- task_group: TaskGroup | None = None,
- ) -> None:
- """
- Accept incoming connections as they come in and start tasks to handle them.
-
- :param handler: a callable that will be used to handle each accepted connection
- :param task_group: the task group that will be used to start tasks for handling each
- accepted connection (if omitted, an ad-hoc task group will be created)
- """
diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/vision.cpp b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/vision.cpp
deleted file mode 100644
index 4a08821e0121a77556aa7a263ec8ebfa928b13b6..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/vision.cpp
+++ /dev/null
@@ -1,21 +0,0 @@
-/*!
-**************************************************************************************************
-* Deformable DETR
-* Copyright (c) 2020 SenseTime. All Rights Reserved.
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-/*!
-* Copyright (c) Facebook, Inc. and its affiliates.
-* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
-*/
-
-#include "ms_deform_attn.h"
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("ms_deform_attn_forward", &ms_deform_attn_forward, "ms_deform_attn_forward");
- m.def("ms_deform_attn_backward", &ms_deform_attn_backward, "ms_deform_attn_backward");
-}
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/fcn_hr18.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/fcn_hr18.py
deleted file mode 100644
index c3e299bc89ada56ca14bbffcbdb08a586b8ed9e9..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/models/fcn_hr18.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://msra/hrnetv2_w18',
- backbone=dict(
- type='HRNet',
- norm_cfg=norm_cfg,
- norm_eval=False,
- extra=dict(
- stage1=dict(
- num_modules=1,
- num_branches=1,
- block='BOTTLENECK',
- num_blocks=(4, ),
- num_channels=(64, )),
- stage2=dict(
- num_modules=1,
- num_branches=2,
- block='BASIC',
- num_blocks=(4, 4),
- num_channels=(18, 36)),
- stage3=dict(
- num_modules=4,
- num_branches=3,
- block='BASIC',
- num_blocks=(4, 4, 4),
- num_channels=(18, 36, 72)),
- stage4=dict(
- num_modules=3,
- num_branches=4,
- block='BASIC',
- num_blocks=(4, 4, 4, 4),
- num_channels=(18, 36, 72, 144)))),
- decode_head=dict(
- type='FCNHead',
- in_channels=[18, 36, 72, 144],
- in_index=(0, 1, 2, 3),
- channels=sum([18, 36, 72, 144]),
- input_transform='resize_concat',
- kernel_size=1,
- num_convs=1,
- concat_input=False,
- dropout_ratio=-1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/TH5314/newbing/src/components/tone-selector.tsx b/spaces/TH5314/newbing/src/components/tone-selector.tsx
deleted file mode 100644
index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000
--- a/spaces/TH5314/newbing/src/components/tone-selector.tsx
+++ /dev/null
@@ -1,43 +0,0 @@
-import React from 'react'
-import { BingConversationStyle } from '@/lib/bots/bing/types'
-import { cn } from '@/lib/utils'
-
-type ToneItem = {
- type: BingConversationStyle,
- name: string
-}
-
-const ToneList: ToneItem[] = [
- { name: '有创造力', type: BingConversationStyle.Creative },
- { name: '更平衡', type: BingConversationStyle.Balanced },
- { name: '更精确', type: BingConversationStyle.Precise }
-]
-
-interface ToneSelectorProps {
- type: BingConversationStyle | ''
- onChange?: (type: BingConversationStyle) => void
-}
-
-export function ToneSelector({ type, onChange }: ToneSelectorProps) {
-  // NOTE: the original JSX markup was mangled during extraction; the elements
-  // and class names below are a minimal, hypothetical reconstruction.
-  return (
-    <div className="tone-selector">
-      <div className="tone-selector-title">选择对话样式</div>
-      <ul className="tone-list">
-        {
-          ToneList.map(tone => (
-            <li key={tone.type} onClick={() => onChange?.(tone.type)}>
-              <button type="button" className={cn('tone-item', { active: type === tone.type })}>
-                {tone.name}
-              </button>
-            </li>
-          ))
-        }
-      </ul>
-    </div>
-  )
-}
diff --git a/spaces/Taithrah/Minimal/app.py b/spaces/Taithrah/Minimal/app.py
deleted file mode 100644
index 0fd61cfee77ee26b8a016a8cc6a851866f3386ad..0000000000000000000000000000000000000000
--- a/spaces/Taithrah/Minimal/app.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import time
-
-import gradio as gr
-from gradio.themes.utils.theme_dropdown import create_theme_dropdown
-
-dropdown, js = create_theme_dropdown()
-
-with gr.Blocks(theme='Taithrah/Minimal') as demo:
- with gr.Row().style(equal_height=True):
- with gr.Column(scale=10):
- gr.Markdown(
- """
- # Theme preview: `Minimal`
- To use this theme, set `theme='Taithrah/Minimal'` in `gr.Blocks()` or `gr.Interface()`.
- You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version
- of this theme.
- """
- )
- with gr.Column(scale=3):
- with gr.Box():
- dropdown.render()
- toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True)
-
- dropdown.change(None, dropdown, None, _js=js)
- toggle_dark.click(
- None,
- _js="""
- () => {
- document.body.classList.toggle('dark');
- }
- """,
- )
-
- name = gr.Textbox(
- label="Name",
- info="Full name, including middle name. No special characters.",
- placeholder="John Doe",
- value="John Doe",
- interactive=True,
- )
-
- with gr.Row():
- slider1 = gr.Slider(label="Slider 1")
- slider2 = gr.Slider(label="Slider 2")
- gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group")
-
- with gr.Row():
- with gr.Column(variant="panel", scale=1):
- gr.Markdown("## Panel 1")
- radio = gr.Radio(
- ["A", "B", "C"],
- label="Radio",
- info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.",
- )
- drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False)
- drop_2 = gr.Dropdown(
- ["Option A", "Option B", "Option C"],
- multiselect=True,
- value=["Option A"],
- label="Dropdown",
- interactive=True,
- )
- check = gr.Checkbox(label="Go")
- with gr.Column(variant="panel", scale=2):
- img = gr.Image(
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/header-image.jpg",
- label="Image",
- ).style(height=320)
- with gr.Row():
- go_btn = gr.Button("Go", label="Primary Button", variant="primary")
- clear_btn = gr.Button(
- "Clear", label="Secondary Button", variant="secondary"
- )
-
- def go(*args):
- time.sleep(3)
- return "https://gradio-static-files.s3.us-west-2.amazonaws.com/header-image.jpgjpg"
-
- go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go")
-
- def clear():
- time.sleep(0.2)
- return None
-
- clear_btn.click(clear, None, img)
-
- with gr.Row():
- btn1 = gr.Button("Button 1").style(size="sm")
- btn2 = gr.UploadButton().style(size="sm")
- stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style(
- size="sm"
- )
-
- with gr.Row():
- gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe")
- gr.JSON(
- value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON"
- )
- gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1})
- gr.File()
- with gr.Row():
- gr.ColorPicker()
- gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4")
- gr.Gallery(
- [
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg",
- "lion",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png",
- "logo",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg",
- "tower",
- ),
- ]
- ).style(height="200px", grid=2)
-
- with gr.Row():
- with gr.Column(scale=2):
- chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot")
- chat_btn = gr.Button("Add messages")
-
- def chat(history):
- time.sleep(2)
- yield [["How are you?", "I am good."]]
-
- chat_btn.click(
- lambda history: history
- + [["How are you?", "I am good."]]
- + (time.sleep(2) or []),
- chatbot,
- chatbot,
- )
- with gr.Column(scale=1):
- with gr.Accordion("Advanced Settings"):
- gr.Markdown("Hello")
- gr.Number(label="Chatbot control 1")
- gr.Number(label="Chatbot control 2")
- gr.Number(label="Chatbot control 3")
-
-
-if __name__ == "__main__":
- demo.queue().launch()
\ No newline at end of file
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/install_lib.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/install_lib.py
deleted file mode 100644
index 2e9d8757a582b1dcdb47a34c35c6cfb3ed23ba90..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/install_lib.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import os
-import sys
-from itertools import product, starmap
-import distutils.command.install_lib as orig
-
-
-class install_lib(orig.install_lib):
- """Don't add compiled flags to filenames of non-Python files"""
-
- def run(self):
- self.build()
- outfiles = self.install()
- if outfiles is not None:
- # always compile, in case we have any extension stubs to deal with
- self.byte_compile(outfiles)
-
- def get_exclusions(self):
- """
- Return a collections.Sized collections.Container of paths to be
- excluded for single_version_externally_managed installations.
- """
- all_packages = (
- pkg
- for ns_pkg in self._get_SVEM_NSPs()
- for pkg in self._all_packages(ns_pkg)
- )
-
- excl_specs = product(all_packages, self._gen_exclusion_paths())
- return set(starmap(self._exclude_pkg_path, excl_specs))
-
- def _exclude_pkg_path(self, pkg, exclusion_path):
- """
- Given a package name and exclusion path within that package,
- compute the full exclusion path.
- """
- parts = pkg.split('.') + [exclusion_path]
- return os.path.join(self.install_dir, *parts)
-
- @staticmethod
- def _all_packages(pkg_name):
- """
- >>> list(install_lib._all_packages('foo.bar.baz'))
- ['foo.bar.baz', 'foo.bar', 'foo']
- """
- while pkg_name:
- yield pkg_name
- pkg_name, sep, child = pkg_name.rpartition('.')
-
- def _get_SVEM_NSPs(self):
- """
- Get namespace packages (list) but only for
- single_version_externally_managed installations and empty otherwise.
- """
- # TODO: is it necessary to short-circuit here? i.e. what's the cost
- # if get_finalized_command is called even when namespace_packages is
- # False?
- if not self.distribution.namespace_packages:
- return []
-
- install_cmd = self.get_finalized_command('install')
- svem = install_cmd.single_version_externally_managed
-
- return self.distribution.namespace_packages if svem else []
-
- @staticmethod
- def _gen_exclusion_paths():
- """
- Generate file paths to be excluded for namespace packages (bytecode
- cache files).
- """
- # always exclude the package module itself
- yield '__init__.py'
-
- yield '__init__.pyc'
- yield '__init__.pyo'
-
- if not hasattr(sys, 'implementation'):
- return
-
- base = os.path.join(
- '__pycache__', '__init__.' + sys.implementation.cache_tag)
- yield base + '.pyc'
- yield base + '.pyo'
- yield base + '.opt-1.pyc'
- yield base + '.opt-2.pyc'
-
- def copy_tree(
- self, infile, outfile,
- preserve_mode=1, preserve_times=1, preserve_symlinks=0, level=1
- ):
- assert preserve_mode and preserve_times and not preserve_symlinks
- exclude = self.get_exclusions()
-
- if not exclude:
- return orig.install_lib.copy_tree(self, infile, outfile)
-
- # Exclude namespace package __init__.py* files from the output
-
- from setuptools.archive_util import unpack_directory
- from distutils import log
-
- outfiles = []
-
- def pf(src, dst):
- if dst in exclude:
- log.warn("Skipping installation of %s (namespace package)",
- dst)
- return False
-
- log.info("copying %s -> %s", src, os.path.dirname(dst))
- outfiles.append(dst)
- return dst
-
- unpack_directory(infile, outfile, pf)
- return outfiles
-
- def get_outputs(self):
- outputs = orig.install_lib.get_outputs(self)
- exclude = self.get_exclusions()
- if exclude:
- return [f for f in outputs if f not in exclude]
- return outputs
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/tags.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/tags.py
deleted file mode 100644
index b9094d798e6f0e4c78be3bd6137201e21bf2b12c..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/tags.py
+++ /dev/null
@@ -1,151 +0,0 @@
-from __future__ import annotations
-
-import itertools
-import os
-from collections.abc import Iterable
-
-from ..wheelfile import WheelFile
-from .pack import read_tags, set_build_number
-
-
-def _compute_tags(original_tags: Iterable[str], new_tags: str | None) -> set[str]:
- """Add or replace tags. Supports dot-separated tags"""
- if new_tags is None:
- return set(original_tags)
-
- if new_tags.startswith("+"):
- return {*original_tags, *new_tags[1:].split(".")}
-
- if new_tags.startswith("-"):
- return set(original_tags) - set(new_tags[1:].split("."))
-
- return set(new_tags.split("."))
-
-
-def tags(
- wheel: str,
- python_tags: str | None = None,
- abi_tags: str | None = None,
- platform_tags: str | None = None,
- build_tag: str | None = None,
- remove: bool = False,
-) -> str:
- """Change the tags on a wheel file.
-
- The tags are left unchanged if they are not specified. To specify "none",
- use ["none"]. To append to the previous tags, a tag should start with a
- "+". If a tag starts with "-", it will be removed from existing tags.
- Processing is done left to right.
-
-    :param wheel: The path to the wheel file
- :param python_tags: The Python tags to set
- :param abi_tags: The ABI tags to set
- :param platform_tags: The platform tags to set
- :param build_tag: The build tag to set
- :param remove: Remove the original wheel
- """
- with WheelFile(wheel, "r") as f:
- assert f.filename, f"{f.filename} must be available"
-
- wheel_info = f.read(f.dist_info_path + "/WHEEL")
-
- original_wheel_name = os.path.basename(f.filename)
- namever = f.parsed_filename.group("namever")
- build = f.parsed_filename.group("build")
- original_python_tags = f.parsed_filename.group("pyver").split(".")
- original_abi_tags = f.parsed_filename.group("abi").split(".")
- original_plat_tags = f.parsed_filename.group("plat").split(".")
-
- tags, existing_build_tag = read_tags(wheel_info)
-
- impls = {tag.split("-")[0] for tag in tags}
- abivers = {tag.split("-")[1] for tag in tags}
- platforms = {tag.split("-")[2] for tag in tags}
-
- if impls != set(original_python_tags):
- msg = f"Wheel internal tags {impls!r} != filename tags {original_python_tags!r}"
- raise AssertionError(msg)
-
- if abivers != set(original_abi_tags):
- msg = f"Wheel internal tags {abivers!r} != filename tags {original_abi_tags!r}"
- raise AssertionError(msg)
-
- if platforms != set(original_plat_tags):
- msg = (
- f"Wheel internal tags {platforms!r} != filename tags {original_plat_tags!r}"
- )
- raise AssertionError(msg)
-
- if existing_build_tag != build:
- msg = (
- f"Incorrect filename '{build}' "
- f"& *.dist-info/WHEEL '{existing_build_tag}' build numbers"
- )
- raise AssertionError(msg)
-
- # Start changing as needed
- if build_tag is not None:
- build = build_tag
-
- final_python_tags = sorted(_compute_tags(original_python_tags, python_tags))
- final_abi_tags = sorted(_compute_tags(original_abi_tags, abi_tags))
- final_plat_tags = sorted(_compute_tags(original_plat_tags, platform_tags))
-
- final_tags = [
- namever,
- ".".join(final_python_tags),
- ".".join(final_abi_tags),
- ".".join(final_plat_tags),
- ]
- if build:
- final_tags.insert(1, build)
-
- final_wheel_name = "-".join(final_tags) + ".whl"
-
- if original_wheel_name != final_wheel_name:
- tags = [
- f"{a}-{b}-{c}"
- for a, b, c in itertools.product(
- final_python_tags, final_abi_tags, final_plat_tags
- )
- ]
-
- original_wheel_path = os.path.join(
- os.path.dirname(f.filename), original_wheel_name
- )
- final_wheel_path = os.path.join(os.path.dirname(f.filename), final_wheel_name)
-
- with WheelFile(original_wheel_path, "r") as fin, WheelFile(
- final_wheel_path, "w"
- ) as fout:
- fout.comment = fin.comment # preserve the comment
- for item in fin.infolist():
- if item.filename == f.dist_info_path + "/RECORD":
- continue
- if item.filename == f.dist_info_path + "/WHEEL":
- content = fin.read(item)
- content = set_tags(content, tags)
- content = set_build_number(content, build)
- fout.writestr(item, content)
- else:
- fout.writestr(item, fin.read(item))
-
- if remove:
- os.remove(original_wheel_path)
-
- return final_wheel_name
-
-
-def set_tags(in_string: bytes, tags: Iterable[str]) -> bytes:
- """Set the tags in the .dist-info/WHEEL file contents.
-
- :param in_string: The string to modify.
- :param tags: The tags to set.
- """
-
- lines = [line for line in in_string.splitlines() if not line.startswith(b"Tag:")]
- for tag in tags:
- lines.append(b"Tag: " + tag.encode("ascii"))
- in_string = b"\r\n".join(lines) + b"\r\n"
-
- return in_string
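
To illustrate the "+"/"-" tag syntax documented in `tags()` above, a few calls to the private `_compute_tags` helper (importing a private helper is for illustration only; the import path assumes the module layout shown in this file):

```python
from wheel.cli.tags import _compute_tags

assert _compute_tags(["py2", "py3"], None) == {"py2", "py3"}          # unchanged
assert _compute_tags(["py2", "py3"], "+py4") == {"py2", "py3", "py4"}  # append
assert _compute_tags(["py2", "py3"], "-py2") == {"py3"}                # remove
assert _compute_tags(["py2", "py3"], "cp311") == {"cp311"}             # full replacement
print("tag arithmetic behaves as documented")
```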
diff --git a/spaces/Terminus0501/vits-uma-genshin-honkai/attentions.py b/spaces/Terminus0501/vits-uma-genshin-honkai/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/Terminus0501/vits-uma-genshin-honkai/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query / math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so that the total adds up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add zeros at the beginning that will skew the elements after the reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
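
Editorial note (not part of the deleted file above): the `FFN` block keeps the sequence length constant by padding before each stride-1 `Conv1d`, left-only in causal mode and split across both sides otherwise. A minimal sketch of that behaviour, assuming only `torch` and using plain `F.pad` in place of `commons.convert_pad_shape`:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 10)  # [batch, channels, time]
k = 3                      # kernel_size

causal = F.pad(x, (k - 1, 0))            # pad only on the left: no lookahead
same = F.pad(x, ((k - 1) // 2, k // 2))  # split the padding across both sides

conv = torch.nn.Conv1d(8, 8, k)          # stride 1, no built-in padding
print(conv(causal).shape, conv(same).shape)  # both: torch.Size([1, 8, 10])
```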
diff --git a/spaces/ThankGod/face-id/README.md b/spaces/ThankGod/face-id/README.md
deleted file mode 100644
index 74d466d728d38b212137c33043333a453a8cd0be..0000000000000000000000000000000000000000
--- a/spaces/ThankGod/face-id/README.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: face id demo app
-emoji: 🤓
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.1.7
-app_file: app.py
-pinned: false
----
-
-[Try the image classification demo here](https://huggingface.co/spaces/ThankGod/image-classifier)
-
-## Credits
-- Hugging Face 🤗 for hosting this demo.
-- Hugging Face Transformers for the image classification model.
-- Gradio for the beautiful visualization dashboards.
-
-## References
-- https://gradio.app/
-- https://huggingface.co/
diff --git a/spaces/Tirendaz/pytorch_cat_vs_dog/README.md b/spaces/Tirendaz/pytorch_cat_vs_dog/README.md
deleted file mode 100644
index 7999d003ab79d64284e87c5f275769c29066bd1b..0000000000000000000000000000000000000000
--- a/spaces/Tirendaz/pytorch_cat_vs_dog/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Pytorch Cat Vs Dog
-emoji: 🏢
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/VIPLab/Caption-Anything/caption_anything/text_refiner/text_refiner.py b/spaces/VIPLab/Caption-Anything/caption_anything/text_refiner/text_refiner.py
deleted file mode 100644
index 3af34334eb24df07a22278f7e9956f9097be04b3..0000000000000000000000000000000000000000
--- a/spaces/VIPLab/Caption-Anything/caption_anything/text_refiner/text_refiner.py
+++ /dev/null
@@ -1,86 +0,0 @@
-from langchain.llms.openai import OpenAI
-import torch
-from PIL import Image, ImageDraw, ImageOps
-from transformers import pipeline, BlipProcessor, BlipForConditionalGeneration, BlipForQuestionAnswering
-import pdb
-
-class TextRefiner:
- def __init__(self, device, api_key=""):
- print(f"Initializing TextRefiner to {device}")
- self.llm = OpenAI(model_name="gpt-3.5-turbo", temperature=0, openai_api_key=api_key)
- self.prompt_tag = {
- "imagination": {"True": "could",
- "False": "could not"}
- }
- self.short_prompts = {
- "length": "around {length} words",
- "sentiment": "of {sentiment} sentiment",
- "language": "in {language}",
- }
-
- self.long_prompts = {
- "imagination": "The new sentence could extend the original description by using your imagination to create additional details, or think about what might have happened before or after the scene in the image, but should not conflict with the original sentence",
- }
-
- self.wiki_prompts = "I want you to act as a Wikipedia page. I will give you a sentence and you will parse the single main object in the sentence and provide a summary of that object in the format of a Wikipedia page. Your summary should be informative and factual, covering the most important aspects of the object. Start your summary with an introductory paragraph that gives an overview of the object. The overall length of the response should be around 100 words. You should not describe the parsing process and only provide the final summary. The sentence is \"{query}\"."
-
- self.control_prompts = "As a text reviser, you will convert an image description into a new sentence or long paragraph. The new text is {prompts}. {long_prompts} The sentence is \"{query}\" (give me the revised sentence only)"
-
- def parse(self, response):
- out = response.strip()
- return out
-
- def parse2(self, response):
- out = response.strip()
- return out
-
- def prepare_input(self, query, short_prompts, long_prompts):
- input = self.control_prompts.format(**{'prompts': ', '.join(short_prompts), 'long_prompts': '. '.join(long_prompts), 'query': query})
- print('prompt: ', input)
- return input
-
- def inference(self, query: str, controls: dict, context: list=[], enable_wiki=False):
- """
- query: the caption of the region of interest, generated by captioner
- controls: a dict of control signals, e.g., {"length": 5, "sentiment": "positive"}
- """
- prompts = []
- long_prompts = []
- for control, value in controls.items():
- # if control in self.prompt_tag:
- # value = self.prompt_tag[control][value]
- if control in self.short_prompts:
- prompts.append(self.short_prompts[control].format(**{control: value}))
- else:
- if value in [True, "True", "true"]:
- long_prompts.append(self.long_prompts[control])
- input = self.prepare_input(query, prompts, long_prompts)
- response = self.llm(input)
- response = self.parse(response)
-
- response_wiki = ""
- if enable_wiki:
- tmp_configs = {"query": query}
- prompt_wiki = self.wiki_prompts.format(**tmp_configs)
- response_wiki = self.llm(prompt_wiki)
- response_wiki = self.parse2(response_wiki)
- out = {
- 'raw_caption': query,
- 'caption': response,
- 'wiki': response_wiki
- }
- print(out)
- return out
-
-if __name__ == "__main__":
- model = TextRefiner(device='cpu')
- controls = {
- "length": "30",
- "sentiment": "negative",
- # "imagination": "True",
- "imagination": "False",
- "language": "English",
- }
- # model.inference(query='a dog is sitting on a brown bench', controls=controls)
- model.inference(query='a cat is sleeping', controls=controls)
-
\ No newline at end of file
diff --git a/spaces/XGBooster/WhisperingDiffusion/README.md b/spaces/XGBooster/WhisperingDiffusion/README.md
deleted file mode 100644
index 3917873db2d9d69ce1a7220c51dc38f70fc46b92..0000000000000000000000000000000000000000
--- a/spaces/XGBooster/WhisperingDiffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: WhisperingDiffusion
-emoji: ⚡
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/XzJosh/Ava-Bert-VITS2/modules.py b/spaces/XzJosh/Ava-Bert-VITS2/modules.py
deleted file mode 100644
index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Ava-Bert-VITS2/modules.py
+++ /dev/null
@@ -1,452 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-from attentions import Encoder
-
-LRELU_SLOPE = 0.1
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
-
-
-class TransformerCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- n_layers,
- n_heads,
- p_dropout=0,
- filter_channels=0,
- mean_only=False,
- wn_sharing_parameter=None,
- gin_channels = 0
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
diff --git a/spaces/XzJosh/Bekki-Bert-VITS2/text/chinese.py b/spaces/XzJosh/Bekki-Bert-VITS2/text/chinese.py
deleted file mode 100644
index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Bekki-Bert-VITS2/text/chinese.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import os
-import re
-
-import cn2an
-from pypinyin import lazy_pinyin, Style
-
-from text import symbols
-from text.symbols import punctuation
-from text.tone_sandhi import ToneSandhi
-
-current_file_path = os.path.dirname(__file__)
-pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in
- open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()}
-
-import jieba.posseg as psg
-
-
-rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- '$': '.',
- '“': "'",
- '”': "'",
- '‘': "'",
- '’': "'",
- '(': "'",
- ')': "'",
- '(': "'",
- ')': "'",
- '《': "'",
- '》': "'",
- '【': "'",
- '】': "'",
- '[': "'",
- ']': "'",
- '—': "-",
- '~': "-",
- '~': "-",
- '「': "'",
- '」': "'",
-
-}
-
-tone_modifier = ToneSandhi()
-
-def replace_punctuation(text):
- text = text.replace("嗯", "恩").replace("呣","母")
- pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys()))
-
- replaced_text = pattern.sub(lambda x: rep_map[x.group()], text)
-
- replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text)
-
- return replaced_text
-
-def g2p(text):
- pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation))
- sentences = [i for i in re.split(pattern, text) if i.strip()!='']
- phones, tones, word2ph = _g2p(sentences)
- assert sum(word2ph) == len(phones)
- assert len(word2ph) == len(text)  # This assertion can occasionally fail; wrap the call in try/except if needed.
- phones = ['_'] + phones + ["_"]
- tones = [0] + tones + [0]
- word2ph = [1] + word2ph + [1]
- return phones, tones, word2ph
-
-
-def _get_initials_finals(word):
- initials = []
- finals = []
- orig_initials = lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.INITIALS)
- orig_finals = lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for c, v in zip(orig_initials, orig_finals):
- initials.append(c)
- finals.append(v)
- return initials, finals
-
-
-def _g2p(segments):
- phones_list = []
- tones_list = []
- word2ph = []
- for seg in segments:
- pinyins = []
- # Remove all English words from the sentence
- seg = re.sub('[a-zA-Z]+', '', seg)
- seg_cut = psg.lcut(seg)
- initials = []
- finals = []
- seg_cut = tone_modifier.pre_merge_for_modify(seg_cut)
- for word, pos in seg_cut:
- if pos == 'eng':
- continue
- sub_initials, sub_finals = _get_initials_finals(word)
- sub_finals = tone_modifier.modified_tone(word, pos,
- sub_finals)
- initials.append(sub_initials)
- finals.append(sub_finals)
-
- # assert len(sub_initials) == len(sub_finals) == len(word)
- initials = sum(initials, [])
- finals = sum(finals, [])
- #
- for c, v in zip(initials, finals):
- raw_pinyin = c+v
- # NOTE: post process for pypinyin outputs
- # we discriminate i, ii and iii
- if c == v:
- assert c in punctuation
- phone = [c]
- tone = '0'
- word2ph.append(1)
- else:
- v_without_tone = v[:-1]
- tone = v[-1]
-
- pinyin = c+v_without_tone
- assert tone in '12345'
-
- if c:
- # syllable with an initial consonant
- v_rep_map = {
- "uei": 'ui',
- 'iou': 'iu',
- 'uen': 'un',
- }
- if v_without_tone in v_rep_map.keys():
- pinyin = c+v_rep_map[v_without_tone]
- else:
- # zero-initial syllable
- pinyin_rep_map = {
- 'ing': 'ying',
- 'i': 'yi',
- 'in': 'yin',
- 'u': 'wu',
- }
- if pinyin in pinyin_rep_map.keys():
- pinyin = pinyin_rep_map[pinyin]
- else:
- single_rep_map = {
- 'v': 'yu',
- 'e': 'e',
- 'i': 'y',
- 'u': 'w',
- }
- if pinyin[0] in single_rep_map.keys():
- pinyin = single_rep_map[pinyin[0]]+pinyin[1:]
-
- assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin)
- phone = pinyin_to_symbol_map[pinyin].split(' ')
- word2ph.append(len(phone))
-
- phones_list += phone
- tones_list += [int(tone)] * len(phone)
- return phones_list, tones_list, word2ph
-
-
-
-def text_normalize(text):
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
- for number in numbers:
- text = text.replace(number, cn2an.an2cn(number), 1)
- text = replace_punctuation(text)
- return text
-
-def get_bert_feature(text, word2ph):
- from text import chinese_bert
- return chinese_bert.get_bert_feature(text, word2ph)
-
-if __name__ == '__main__':
- from text.chinese_bert import get_bert_feature
- text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏"
- text = text_normalize(text)
- print(text)
- phones, tones, word2ph = g2p(text)
- bert = get_bert_feature(text, word2ph)
-
- print(phones, tones, word2ph, bert.shape)
-
-
-# # Example usage
-# text = "这是一个示例文本:,你好!这是一个测试...."
-# print(g2p_paddle(text)) # Output: 这是一个示例文本你好这是一个测试
diff --git a/spaces/XzJosh/Jiaran-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/XzJosh/Jiaran-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md
deleted file mode 100644
index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Jiaran-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-language:
-- zh
-tags:
-- bert
-license: "apache-2.0"
----
-
-# Please use 'Bert' related functions to load this model!
-
-## Chinese BERT with Whole Word Masking
-To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
-
-**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
-Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
-
-This repository is developed based on https://github.com/google-research/bert
-
-You may also be interested in:
-- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
-- Chinese MacBERT: https://github.com/ymcui/MacBERT
-- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
-- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
-- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
-
-More resources by HFL: https://github.com/ymcui/HFL-Anthology
-
-## Citation
-If you find the technical report or resources useful, please cite the following technical report in your paper.
-- Primary: https://arxiv.org/abs/2004.13922
-```
-@inproceedings{cui-etal-2020-revisiting,
- title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
- author = "Cui, Yiming and
- Che, Wanxiang and
- Liu, Ting and
- Qin, Bing and
- Wang, Shijin and
- Hu, Guoping",
- booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
- month = nov,
- year = "2020",
- address = "Online",
- publisher = "Association for Computational Linguistics",
- url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
- pages = "657--668",
-}
-```
-- Secondary: https://arxiv.org/abs/1906.08101
-```
-@article{chinese-bert-wwm,
- title={Pre-Training with Whole Word Masking for Chinese BERT},
- author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
- journal={arXiv preprint arXiv:1906.08101},
- year={2019}
- }
-```
\ No newline at end of file
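
Editorial note (not part of the deleted README above): a minimal loading sketch consistent with the "use Bert-related functions" instruction in that README, assuming the transformers library and the public hfl/chinese-roberta-wwm-ext-large checkpoint (a local path such as the directory this README sat in should work the same way).

```python
from transformers import BertTokenizer, BertModel

# Load with Bert* classes as the README asks (not the RoBERTa classes),
# since the checkpoint ships BERT-style weights and vocabulary.
tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext-large")

inputs = tokenizer("使用整词掩码的中文预训练模型", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # [1, sequence_length, 1024] for the large model
```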
diff --git a/spaces/XzJosh/LAPLACE-Bert-VITS2/monotonic_align/__init__.py b/spaces/XzJosh/LAPLACE-Bert-VITS2/monotonic_align/__init__.py
deleted file mode 100644
index 75603d26cf2b8d6196f5a68a89f9e49d8e519bc8..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/LAPLACE-Bert-VITS2/monotonic_align/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from numpy import zeros, int32, float32
-from torch import from_numpy
-
-from .core import maximum_path_jit
-
-def maximum_path(neg_cent, mask):
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(float32)
- path = zeros(neg_cent.shape, dtype=int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
- maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
- return from_numpy(path).to(device=device, dtype=dtype)
diff --git a/spaces/Yilin98/Whisper-Small-Swedish/README.md b/spaces/Yilin98/Whisper-Small-Swedish/README.md
deleted file mode 100644
index 6724dad17fc072cf51e792034768532bf749be11..0000000000000000000000000000000000000000
--- a/spaces/Yilin98/Whisper-Small-Swedish/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Whisper Small Swedish
-emoji: 😻
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/models/yolo.py b/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/models/yolo.py
deleted file mode 100644
index fa05fcf9a8d9c747bccb6474a7dafd0ee7fd7810..0000000000000000000000000000000000000000
--- a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/models/yolo.py
+++ /dev/null
@@ -1,357 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-YOLO-specific modules
-
-Usage:
- $ python models/yolo.py --cfg yolov5s.yaml
-"""
-
-import argparse
-import contextlib
-import os
-import platform
-import sys
-from copy import deepcopy
-from pathlib import Path
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[1] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-if platform.system() != 'Windows':
- ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
-
-from models.common import *
-from models.experimental import *
-from utils.autoanchor import check_anchor_order
-from utils.general import LOGGER, check_version, check_yaml, make_divisible, print_args
-from utils.plots import feature_visualization
-from utils.torch_utils import (fuse_conv_and_bn, initialize_weights, model_info, profile, scale_img, select_device,
- time_sync)
-
-try:
- import thop # for FLOPs computation
-except ImportError:
- thop = None
-
-
-class Detect(nn.Module):
- stride = None # strides computed during build
- dynamic = False # force grid reconstruction
- export = False # export mode
-
- def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer
- super().__init__()
- self.nc = nc # number of classes
- self.no = nc + 5 # number of outputs per anchor
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.empty(1)] * self.nl # init grid
- self.anchor_grid = [torch.empty(1)] * self.nl # init anchor grid
- self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
- self.inplace = inplace # use inplace ops (e.g. slice assignment)
-
- def forward(self, x):
- z = [] # inference output
- for i in range(self.nl):
- x[i] = self.m[i](x[i]) # conv
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
-
- y = x[i].sigmoid()
- if self.inplace:
- y[..., 0:2] = (y[..., 0:2] * 2 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
- xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0
- xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy
- wh = (wh * 2) ** 2 * self.anchor_grid[i] # wh
- y = torch.cat((xy, wh, conf), 4)
- z.append(y.view(bs, -1, self.no))
-
- return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x)
-
- def _make_grid(self, nx=20, ny=20, i=0, torch_1_10=check_version(torch.__version__, '1.10.0')):
- d = self.anchors[i].device
- t = self.anchors[i].dtype
- shape = 1, self.na, ny, nx, 2 # grid shape
- y, x = torch.arange(ny, device=d, dtype=t), torch.arange(nx, device=d, dtype=t)
- yv, xv = torch.meshgrid(y, x, indexing='ij') if torch_1_10 else torch.meshgrid(y, x) # torch>=0.7 compatibility
- grid = torch.stack((xv, yv), 2).expand(shape) - 0.5 # add grid offset, i.e. y = 2.0 * x - 0.5
- anchor_grid = (self.anchors[i] * self.stride[i]).view((1, self.na, 1, 1, 2)).expand(shape)
- return grid, anchor_grid
-
-
-class BaseModel(nn.Module):
- # YOLOv5 base model
- def forward(self, x, profile=False, visualize=False):
- return self._forward_once(x, profile, visualize) # single-scale inference, train
-
- def _forward_once(self, x, profile=False, visualize=False):
- y, dt = [], [] # outputs
- for m in self.model:
- if m.f != -1: # if not from previous layer
- x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
- if profile:
- self._profile_one_layer(m, x, dt)
- x = m(x) # run
- y.append(x if m.i in self.save else None) # save output
- if visualize:
- feature_visualization(x, m.type, m.i, save_dir=visualize)
- return x
-
- def _profile_one_layer(self, m, x, dt):
- c = m == self.model[-1] # is final layer, copy input as inplace fix
- o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs
- t = time_sync()
- for _ in range(10):
- m(x.copy() if c else x)
- dt.append((time_sync() - t) * 100)
- if m == self.model[0]:
- LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} module")
- LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}')
- if c:
- LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total")
-
- def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
- LOGGER.info('Fusing layers... ')
- for m in self.model.modules():
- if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'):
- m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
- delattr(m, 'bn') # remove batchnorm
- m.forward = m.forward_fuse # update forward
- self.info()
- return self
-
- def info(self, verbose=False, img_size=640): # print model information
- model_info(self, verbose, img_size)
-
- def _apply(self, fn):
- # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
- self = super()._apply(fn)
- m = self.model[-1] # Detect()
- if isinstance(m, Detect):
- m.stride = fn(m.stride)
- m.grid = list(map(fn, m.grid))
- if isinstance(m.anchor_grid, list):
- m.anchor_grid = list(map(fn, m.anchor_grid))
- return self
-
-
-class DetectionModel(BaseModel):
- # YOLOv5 detection model
- def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes
- super().__init__()
- if isinstance(cfg, dict):
- self.yaml = cfg # model dict
- else: # is *.yaml
- import yaml # for torch hub
- self.yaml_file = Path(cfg).name
- with open(cfg, encoding='ascii', errors='ignore') as f:
- self.yaml = yaml.safe_load(f) # model dict
-
- # Define model
- ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels
- if nc and nc != self.yaml['nc']:
- LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
- self.yaml['nc'] = nc # override yaml value
- if anchors:
- LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}')
- self.yaml['anchors'] = round(anchors) # override yaml value
- self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist
- self.names = [str(i) for i in range(self.yaml['nc'])] # default names
- self.inplace = self.yaml.get('inplace', True)
-
- # Build strides, anchors
- m = self.model[-1] # Detect()
- if isinstance(m, Detect):
- s = 256 # 2x min stride
- m.inplace = self.inplace
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.empty(1, ch, s, s))]) # forward
- check_anchor_order(m) # must be in pixel-space (not grid-space)
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_biases() # only run once
-
- # Init weights, biases
- initialize_weights(self)
- self.info()
- LOGGER.info('')
-
- def forward(self, x, augment=False, profile=False, visualize=False):
- if augment:
- return self._forward_augment(x) # augmented inference, None
- return self._forward_once(x, profile, visualize) # single-scale inference, train
-
- def _forward_augment(self, x):
- img_size = x.shape[-2:] # height, width
- s = [1, 0.83, 0.67] # scales
- f = [None, 3, None] # flips (2-ud, 3-lr)
- y = [] # outputs
- for si, fi in zip(s, f):
- xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
- yi = self._forward_once(xi)[0] # forward
- # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save
- yi = self._descale_pred(yi, fi, si, img_size)
- y.append(yi)
- y = self._clip_augmented(y) # clip augmented tails
- return torch.cat(y, 1), None # augmented inference, train
-
- def _descale_pred(self, p, flips, scale, img_size):
- # de-scale predictions following augmented inference (inverse operation)
- if self.inplace:
- p[..., :4] /= scale # de-scale
- if flips == 2:
- p[..., 1] = img_size[0] - p[..., 1] # de-flip ud
- elif flips == 3:
- p[..., 0] = img_size[1] - p[..., 0] # de-flip lr
- else:
- x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale
- if flips == 2:
- y = img_size[0] - y # de-flip ud
- elif flips == 3:
- x = img_size[1] - x # de-flip lr
- p = torch.cat((x, y, wh, p[..., 4:]), -1)
- return p
-
- def _clip_augmented(self, y):
- # Clip YOLOv5 augmented inference tails
- nl = self.model[-1].nl # number of detection layers (P3-P5)
- g = sum(4 ** x for x in range(nl)) # grid points
- e = 1 # exclude layer count
- i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices
- y[0] = y[0][:, :-i] # large
- i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices
- y[-1] = y[-1][:, i:] # small
- return y
-
- def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
- m = self.model[-1] # Detect() module
- for mi, s in zip(m.m, m.stride): # from
- b = mi.bias.view(m.na, -1).detach() # conv.bias(255) to (3,85)
- b[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
-
-
-Model = DetectionModel # retain YOLOv5 'Model' class for backwards compatibility
-
-
-class ClassificationModel(BaseModel):
- # YOLOv5 classification model
- def __init__(self, cfg=None, model=None, nc=1000, cutoff=10): # yaml, model, number of classes, cutoff index
- super().__init__()
- self._from_detection_model(model, nc, cutoff) if model is not None else self._from_yaml(cfg)
-
- def _from_detection_model(self, model, nc=1000, cutoff=10):
- # Create a YOLOv5 classification model from a YOLOv5 detection model
- if isinstance(model, DetectMultiBackend):
- model = model.model # unwrap DetectMultiBackend
- model.model = model.model[:cutoff] # backbone
- m = model.model[-1] # last layer
- ch = m.conv.in_channels if hasattr(m, 'conv') else m.cv1.conv.in_channels # ch into module
- c = Classify(ch, nc) # Classify()
- c.i, c.f, c.type = m.i, m.f, 'models.common.Classify' # index, from, type
- model.model[-1] = c # replace
- self.model = model.model
- self.stride = model.stride
- self.save = []
- self.nc = nc
-
- def _from_yaml(self, cfg):
- # Create a YOLOv5 classification model from a *.yaml file
- self.model = None
-
-
-def parse_model(d, ch): # model_dict, input_channels(3)
- LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}")
- anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
- na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
- no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
-
- layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
- for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
- m = eval(m) if isinstance(m, str) else m # eval strings
- for j, a in enumerate(args):
- with contextlib.suppress(NameError):
- args[j] = eval(a) if isinstance(a, str) else a # eval strings
-
- n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain
- if m in (Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
- BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x):
- c1, c2 = ch[f], args[0]
- if c2 != no: # if not output
- c2 = make_divisible(c2 * gw, 8)
-
- args = [c1, c2, *args[1:]]
- if m in [BottleneckCSP, C3, C3TR, C3Ghost, C3x]:
- args.insert(2, n) # number of repeats
- n = 1
- elif m is nn.BatchNorm2d:
- args = [ch[f]]
- elif m is Concat:
- c2 = sum(ch[x] for x in f)
- elif m is Detect:
- args.append([ch[x] for x in f])
- if isinstance(args[1], int): # number of anchors
- args[1] = [list(range(args[1] * 2))] * len(f)
- elif m is Contract:
- c2 = ch[f] * args[0] ** 2
- elif m is Expand:
- c2 = ch[f] // args[0] ** 2
- else:
- c2 = ch[f]
-
- m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module
- t = str(m)[8:-2].replace('__main__.', '') # module type
- np = sum(x.numel() for x in m_.parameters()) # number params
- m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
- LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print
- save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
- layers.append(m_)
- if i == 0:
- ch = []
- ch.append(c2)
- return nn.Sequential(*layers), sorted(save)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml')
- parser.add_argument('--batch-size', type=int, default=1, help='total batch size for all GPUs')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--profile', action='store_true', help='profile model speed')
- parser.add_argument('--line-profile', action='store_true', help='profile model speed layer by layer')
- parser.add_argument('--test', action='store_true', help='test all yolo*.yaml')
- opt = parser.parse_args()
- opt.cfg = check_yaml(opt.cfg) # check YAML
- print_args(vars(opt))
- device = select_device(opt.device)
-
- # Create model
- im = torch.rand(opt.batch_size, 3, 640, 640).to(device)
- model = Model(opt.cfg).to(device)
-
- # Options
- if opt.line_profile: # profile layer by layer
- model(im, profile=True)
-
- elif opt.profile: # profile forward-backward
- results = profile(input=im, ops=[model], n=3)
-
- elif opt.test: # test all models
- for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'):
- try:
- _ = Model(cfg)
- except Exception as e:
- print(f'Error in {cfg}: {e}')
-
- else: # report fused model summary
- model.fuse()
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/iter_based_runner.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/iter_based_runner.py
deleted file mode 100644
index 1df4de8c0285669dec9b014dfd1f3dd1600f0831..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/iter_based_runner.py
+++ /dev/null
@@ -1,273 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-import platform
-import shutil
-import time
-import warnings
-
-import torch
-from torch.optim import Optimizer
-
-import annotator.uniformer.mmcv as mmcv
-from .base_runner import BaseRunner
-from .builder import RUNNERS
-from .checkpoint import save_checkpoint
-from .hooks import IterTimerHook
-from .utils import get_host_info
-
-
-class IterLoader:
-
- def __init__(self, dataloader):
- self._dataloader = dataloader
- self.iter_loader = iter(self._dataloader)
- self._epoch = 0
-
- @property
- def epoch(self):
- return self._epoch
-
- def __next__(self):
- try:
- data = next(self.iter_loader)
- except StopIteration:
- self._epoch += 1
- if hasattr(self._dataloader.sampler, 'set_epoch'):
- self._dataloader.sampler.set_epoch(self._epoch)
- time.sleep(2) # Prevent possible deadlock during epoch transition
- self.iter_loader = iter(self._dataloader)
- data = next(self.iter_loader)
-
- return data
-
- def __len__(self):
- return len(self._dataloader)
-
-
-@RUNNERS.register_module()
-class IterBasedRunner(BaseRunner):
- """Iteration-based Runner.
-
- This runner train models iteration by iteration.
- """
-
- def train(self, data_loader, **kwargs):
- self.model.train()
- self.mode = 'train'
- self.data_loader = data_loader
- self._epoch = data_loader.epoch
- data_batch = next(data_loader)
- self.call_hook('before_train_iter')
- outputs = self.model.train_step(data_batch, self.optimizer, **kwargs)
- if not isinstance(outputs, dict):
- raise TypeError('model.train_step() must return a dict')
- if 'log_vars' in outputs:
- self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])
- self.outputs = outputs
- self.call_hook('after_train_iter')
- self._inner_iter += 1
- self._iter += 1
-
- @torch.no_grad()
- def val(self, data_loader, **kwargs):
- self.model.eval()
- self.mode = 'val'
- self.data_loader = data_loader
- data_batch = next(data_loader)
- self.call_hook('before_val_iter')
- outputs = self.model.val_step(data_batch, **kwargs)
- if not isinstance(outputs, dict):
- raise TypeError('model.val_step() must return a dict')
- if 'log_vars' in outputs:
- self.log_buffer.update(outputs['log_vars'], outputs['num_samples'])
- self.outputs = outputs
- self.call_hook('after_val_iter')
- self._inner_iter += 1
-
- def run(self, data_loaders, workflow, max_iters=None, **kwargs):
- """Start running.
-
- Args:
- data_loaders (list[:obj:`DataLoader`]): Dataloaders for training
- and validation.
- workflow (list[tuple]): A list of (phase, iters) to specify the
- running order and iterations. E.g., [('train', 10000),
- ('val', 1000)] means running 10000 iterations for training and
- 1000 iterations for validation, iteratively.
- """
- assert isinstance(data_loaders, list)
- assert mmcv.is_list_of(workflow, tuple)
- assert len(data_loaders) == len(workflow)
- if max_iters is not None:
- warnings.warn(
- 'setting max_iters in run is deprecated, '
- 'please set max_iters in runner_config', DeprecationWarning)
- self._max_iters = max_iters
- assert self._max_iters is not None, (
- 'max_iters must be specified during instantiation')
-
- work_dir = self.work_dir if self.work_dir is not None else 'NONE'
- self.logger.info('Start running, host: %s, work_dir: %s',
- get_host_info(), work_dir)
- self.logger.info('Hooks will be executed in the following order:\n%s',
- self.get_hook_info())
- self.logger.info('workflow: %s, max: %d iters', workflow,
- self._max_iters)
- self.call_hook('before_run')
-
- iter_loaders = [IterLoader(x) for x in data_loaders]
-
- self.call_hook('before_epoch')
-
- while self.iter < self._max_iters:
- for i, flow in enumerate(workflow):
- self._inner_iter = 0
- mode, iters = flow
- if not isinstance(mode, str) or not hasattr(self, mode):
- raise ValueError(
- 'runner has no method named "{}" to run a workflow'.
- format(mode))
- iter_runner = getattr(self, mode)
- for _ in range(iters):
- if mode == 'train' and self.iter >= self._max_iters:
- break
- iter_runner(iter_loaders[i], **kwargs)
-
- time.sleep(1) # wait for some hooks like loggers to finish
- self.call_hook('after_epoch')
- self.call_hook('after_run')
-
- def resume(self,
- checkpoint,
- resume_optimizer=True,
- map_location='default'):
- """Resume model from checkpoint.
-
- Args:
- checkpoint (str): Checkpoint to resume from.
- resume_optimizer (bool, optional): Whether to resume the optimizer(s)
- if the checkpoint file includes optimizer(s). Defaults to True.
- map_location (str, optional): Same as :func:`torch.load`.
- Defaults to 'default'.
- """
- if map_location == 'default':
- device_id = torch.cuda.current_device()
- checkpoint = self.load_checkpoint(
- checkpoint,
- map_location=lambda storage, loc: storage.cuda(device_id))
- else:
- checkpoint = self.load_checkpoint(
- checkpoint, map_location=map_location)
-
- self._epoch = checkpoint['meta']['epoch']
- self._iter = checkpoint['meta']['iter']
- self._inner_iter = checkpoint['meta']['iter']
- if 'optimizer' in checkpoint and resume_optimizer:
- if isinstance(self.optimizer, Optimizer):
- self.optimizer.load_state_dict(checkpoint['optimizer'])
- elif isinstance(self.optimizer, dict):
- for k in self.optimizer.keys():
- self.optimizer[k].load_state_dict(
- checkpoint['optimizer'][k])
- else:
- raise TypeError(
- 'Optimizer should be dict or torch.optim.Optimizer '
- f'but got {type(self.optimizer)}')
-
- self.logger.info(f'resumed from epoch: {self.epoch}, iter {self.iter}')
-
- def save_checkpoint(self,
- out_dir,
- filename_tmpl='iter_{}.pth',
- meta=None,
- save_optimizer=True,
- create_symlink=True):
- """Save checkpoint to file.
-
- Args:
- out_dir (str): Directory to save checkpoint files.
- filename_tmpl (str, optional): Checkpoint file template.
- Defaults to 'iter_{}.pth'.
- meta (dict, optional): Metadata to be saved in checkpoint.
- Defaults to None.
- save_optimizer (bool, optional): Whether to save the optimizer.
- Defaults to True.
- create_symlink (bool, optional): Whether to create a symlink to the
- latest checkpoint file. Defaults to True.
- """
- if meta is None:
- meta = {}
- elif not isinstance(meta, dict):
- raise TypeError(
- f'meta should be a dict or None, but got {type(meta)}')
- if self.meta is not None:
- meta.update(self.meta)
- # Note: meta.update(self.meta) should be done before
- # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise
- # there will be problems with resumed checkpoints.
- # More details in https://github.com/open-mmlab/mmcv/pull/1108
- meta.update(epoch=self.epoch + 1, iter=self.iter)
-
- filename = filename_tmpl.format(self.iter + 1)
- filepath = osp.join(out_dir, filename)
- optimizer = self.optimizer if save_optimizer else None
- save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta)
- # in some environments, `os.symlink` is not supported, you may need to
- # set `create_symlink` to False
- if create_symlink:
- dst_file = osp.join(out_dir, 'latest.pth')
- if platform.system() != 'Windows':
- mmcv.symlink(filename, dst_file)
- else:
- shutil.copy(filepath, dst_file)
-
- def register_training_hooks(self,
- lr_config,
- optimizer_config=None,
- checkpoint_config=None,
- log_config=None,
- momentum_config=None,
- custom_hooks_config=None):
- """Register default hooks for iter-based training.
-
- Checkpoint hook, optimizer stepper hook and logger hooks will be set to
- `by_epoch=False` by default.
-
- Default hooks include:
-
- +----------------------+-------------------------+
- | Hooks | Priority |
- +======================+=========================+
- | LrUpdaterHook | VERY_HIGH (10) |
- +----------------------+-------------------------+
- | MomentumUpdaterHook | HIGH (30) |
- +----------------------+-------------------------+
- | OptimizerStepperHook | ABOVE_NORMAL (40) |
- +----------------------+-------------------------+
- | CheckpointSaverHook | NORMAL (50) |
- +----------------------+-------------------------+
- | IterTimerHook | LOW (70) |
- +----------------------+-------------------------+
- | LoggerHook(s) | VERY_LOW (90) |
- +----------------------+-------------------------+
- | CustomHook(s) | defaults to NORMAL (50) |
- +----------------------+-------------------------+
-
- If custom hooks have the same priority as default hooks, custom hooks
- will be triggered after default hooks.
- """
- if checkpoint_config is not None:
- checkpoint_config.setdefault('by_epoch', False)
- if lr_config is not None:
- lr_config.setdefault('by_epoch', False)
- if log_config is not None:
- for info in log_config['hooks']:
- info.setdefault('by_epoch', False)
- super(IterBasedRunner, self).register_training_hooks(
- lr_config=lr_config,
- momentum_config=momentum_config,
- optimizer_config=optimizer_config,
- checkpoint_config=checkpoint_config,
- log_config=log_config,
- timer_config=IterTimerHook(),
- custom_hooks_config=custom_hooks_config)
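-
-
-if __name__ == '__main__':
-    # Editor's illustrative sketch, not part of the original mmcv module: a toy model exposing
-    # the `train_step` interface the runner expects, run through a short iteration-based
-    # workflow. The BaseRunner keyword arguments (model/optimizer/work_dir/logger/max_iters)
-    # and the 'fixed' lr policy are assumed from upstream mmcv and may differ in this copy.
-    import logging
-
-    from torch.utils.data import DataLoader, TensorDataset
-
-    class ToyModel(torch.nn.Module):
-        def __init__(self):
-            super().__init__()
-            self.fc = torch.nn.Linear(4, 2)
-
-        def train_step(self, data_batch, optimizer, **kwargs):
-            x, y = data_batch
-            loss = torch.nn.functional.mse_loss(self.fc(x), y)
-            return dict(loss=loss, log_vars=dict(loss=loss.item()), num_samples=x.size(0))
-
-    loader = DataLoader(TensorDataset(torch.randn(16, 4), torch.randn(16, 2)), batch_size=4)
-    model = ToyModel()
-    runner = IterBasedRunner(
-        model=model,
-        optimizer=torch.optim.SGD(model.parameters(), lr=0.01),
-        work_dir='./work_dir',
-        logger=logging.getLogger('iter_runner_demo'),
-        max_iters=8)
-    runner.register_training_hooks(lr_config=dict(policy='fixed'))
-    runner.run([loader], [('train', 8)])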
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/reppoints_detector.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/reppoints_detector.py
deleted file mode 100644
index a5f6be31e14488e4b8a006b7142a82c872388d82..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/reppoints_detector.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class RepPointsDetector(SingleStageDetector):
- """RepPoints: Point Set Representation for Object Detection.
-
- This detector is the implementation of:
- - RepPoints detector (https://arxiv.org/pdf/1904.11490)
- """
-
- def __init__(self,
- backbone,
- neck,
- bbox_head,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(RepPointsDetector,
- self).__init__(backbone, neck, bbox_head, train_cfg, test_cfg,
- pretrained)
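-
-
-# Editor's illustrative sketch of how this registered class is typically selected from an mmdet
-# config dict; the backbone/neck/head settings below are placeholders, not values taken from this
-# repository:
-#
-#   model = dict(
-#       type='RepPointsDetector',
-#       backbone=dict(type='ResNet', depth=50),
-#       neck=dict(type='FPN', in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5),
-#       bbox_head=dict(type='RepPointsHead', num_classes=80, in_channels=256))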
diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/config.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/config.py
deleted file mode 100644
index 091d790e963959c326917688ee267e6a4ec136d1..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/config.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import os
-
-SMPL_DATA_PATH = "./body_models/smpl"
-
-SMPL_KINTREE_PATH = os.path.join(SMPL_DATA_PATH, "kintree_table.pkl")
-SMPL_MODEL_PATH = os.path.join(SMPL_DATA_PATH, "SMPL_NEUTRAL.pkl")
-JOINT_REGRESSOR_TRAIN_EXTRA = os.path.join(SMPL_DATA_PATH, 'J_regressor_extra.npy')
-
-ROT_CONVENTION_TO_ROT_NUMBER = {
- 'legacy': 23,
- 'no_hands': 21,
- 'full_hands': 51,
- 'mitten_hands': 33,
-}
-
-GENDERS = ['neutral', 'male', 'female']
-NUM_BETAS = 10
\ No newline at end of file
diff --git a/spaces/agueroooooooooo/Transport_Mode_Detector/data_loader.py b/spaces/agueroooooooooo/Transport_Mode_Detector/data_loader.py
deleted file mode 100644
index 936b6edb989de7d7e5922fd1dc21745b1780658e..0000000000000000000000000000000000000000
--- a/spaces/agueroooooooooo/Transport_Mode_Detector/data_loader.py
+++ /dev/null
@@ -1,155 +0,0 @@
-import random
-from operator import itemgetter
-
-from data_enrich import DataEnrich
-
-
-class DataLoader:
-
- label_mapping = {
- 'car': 0,
- 'walk': 1,
- 'bus': 2,
- 'train': 3,
- 'subway': 4,
- 'bike': 5,
- 'run': 6,
- 'boat': 7,
- 'airplane': 8,
- 'motorcycle': 9,
- 'taxi': 10
- }
-
- fields_to_feed = ["dist", "speed", "accel", "timedelta", "jerk", "bearing", "bearing_rate"]
- labels_to_remove = ["boat", "motorcycle", "airplane", "run", "bike"]
-
-
- def __init__(self, test_ratio=0.2, val_ratio=0.1, batchsize=4, read_from_pickle=True):
- de = DataEnrich()
- self._raw = de.get_enriched_data(read_from_pickle)
- self._test_ratio = test_ratio
- self._val_ratio = val_ratio
- self._batchsize = batchsize
-
- def _remove_traj_containing_labels(self):
- cleaned = []
- for elem in self._raw:
- if len(elem) == 0:
- continue
- if all(x not in list(elem["label"]) for x in self.labels_to_remove):
- cleaned.append(elem)
- self._raw = cleaned
-
- def _merge_labels(self, target_label, label_to_remove):
- for elem in self._raw:
- if label_to_remove in list(elem["label"]):
- elem["label"] = elem["label"].replace(to_replace=label_to_remove, value=target_label)
-
- def _labels_to_int_repr(self):
- for elem in self._raw:
- elem["label"] = elem["label"].apply(lambda x: self.label_mapping[x])
-
- def _get_split_indices(self, traj):
- train_size = int((1 - self._test_ratio) * len(traj))
- val_size = len(traj) - int((1 - self._val_ratio) * len(traj))
-
- indices = [x for x in range(len(traj))]
-
- indices_for_training = random.sample(indices, train_size)
- indices_for_validation = random.sample(indices_for_training, val_size)
- indices_for_training = set(indices_for_training) - set(indices_for_validation)
- indices_for_testing = set(indices) - indices_for_training - set(indices_for_validation)  # keep validation samples out of the test split
- indices_for_testing = list(indices_for_testing)
-
- return list(indices_for_training), list(indices_for_testing), list(indices_for_validation)
-
- def _set_splitted_data(self, traj, labels):
-
- i_train, i_test, i_val = self._get_split_indices(traj)
-
- random.shuffle(i_train)
-
- self.test_data = list(itemgetter(*i_test)(traj))
- self.val_data = list(itemgetter(*i_val)(traj))
- self.train_data = list(itemgetter(*i_train)(traj))
- self.test_labels = list(itemgetter(*i_test)(labels))
- self.val_labels = list(itemgetter(*i_val)(labels))
- self.train_labels = list(itemgetter(*i_train)(labels))
-
- def _split_too_long_traj(self, traj, labels, max_points):
- if len(traj) > max_points*2:
- splitted_traj, splitted_labels = [],[]
- num_subsets = len(traj) // max_points
- print("Splitting trajectory with length ", len(traj), "in ", num_subsets, "trajectories")
- for i in range(num_subsets):
- end_pointer = len(traj)-1 if ((i+1)*max_points)+max_points > len(traj) else (i*max_points)+max_points
- traj_subset = traj[i*max_points:end_pointer]
- labels_subset = labels[i*max_points:end_pointer]
- assert len(traj_subset) == len(labels_subset)
- splitted_traj.append(traj_subset)
- splitted_labels.append(labels_subset)
- return splitted_traj, splitted_labels
- return [traj], [labels]
-
- def prepare_data(self):
- trajs = []
- labels = []
-
- self._remove_traj_containing_labels()
- self._merge_labels("car", "taxi")
- self._labels_to_int_repr()
-
- for elem in self._raw:
- assert len(elem) > 0
- data_ = elem[self.fields_to_feed].values.tolist()
- label_ = elem["label"].values.tolist()
- data_, label_ = self._split_too_long_traj(data_, label_, 350)
- trajs.extend(data_)
- labels.extend(label_)
-
- self._set_splitted_data(trajs, labels)
-
- def batches(self):
- for i in range(0, len(self.train_data), self._batchsize):
-
- if len(self.train_data[i:i + self._batchsize]) < self._batchsize:
- break # drop last incomplete batch
-
- labels_sorted = sorted(self.train_labels[i:i + self._batchsize:], key=len, reverse=True)
- train_sorted = sorted(self.train_data[i:i + self._batchsize:], key=len, reverse=True)
- for p in range(len(labels_sorted)):
- assert len(labels_sorted[p]) == len(train_sorted[p])
- yield train_sorted, labels_sorted
-
- def val_batches(self):
- for i in range(0, len(self.val_data), self._batchsize):
-
- if len(self.val_data[i:i + self._batchsize]) < self._batchsize:
- break # drop last incomplete batch
-
- labels_sorted = sorted(self.val_labels[i:i + self._batchsize:], key=len, reverse=True)
- val_sorted = sorted(self.val_data[i:i + self._batchsize:], key=len, reverse=True)
- for p in range(len(labels_sorted)):
- assert len(labels_sorted[p]) == len(val_sorted[p])
- yield val_sorted, labels_sorted
-
- def test_batches(self):
- for i in range(0, len(self.test_data), self._batchsize):
-
- if len(self.test_data[i:i + self._batchsize]) < self._batchsize:
- break # drop last incomplete batch
-
- labels_sorted = sorted(self.test_labels[i:i + self._batchsize:], key=len, reverse=True)
- test_sorted = sorted(self.test_data[i:i + self._batchsize:], key=len, reverse=True)
- for p in range(len(labels_sorted)):
- assert len(labels_sorted[p]) == len(test_sorted[p])
- yield test_sorted, labels_sorted
-
- def get_train_size(self):
- return len(self.train_data)
-
- def get_val_size(self):
- return len(self.val_data)
-
- def get_test_size(self):
- return len(self.test_data)
\ No newline at end of file
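-
-
-if __name__ == "__main__":
-    # Editor's illustrative sketch, not part of the original module: it assumes the enriched
-    # trajectory pickle that DataEnrich expects is available locally.
-    loader = DataLoader(test_ratio=0.2, val_ratio=0.1, batchsize=4, read_from_pickle=True)
-    loader.prepare_data()
-    print("train/val/test sizes:", loader.get_train_size(), loader.get_val_size(), loader.get_test_size())
-    for batch, labels in loader.batches():
-        print("first batch:", len(batch), "trajectories, longest has", len(batch[0]), "points")
-        break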
diff --git a/spaces/aijack/jojo/e4e/models/discriminator.py b/spaces/aijack/jojo/e4e/models/discriminator.py
deleted file mode 100644
index 16bf3722c7f2e35cdc9bd177a33ed0975e67200d..0000000000000000000000000000000000000000
--- a/spaces/aijack/jojo/e4e/models/discriminator.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from torch import nn
-
-
-class LatentCodesDiscriminator(nn.Module):
- def __init__(self, style_dim, n_mlp):
- super().__init__()
-
- self.style_dim = style_dim
-
- layers = []
- for i in range(n_mlp-1):
- layers.append(
- nn.Linear(style_dim, style_dim)
- )
- layers.append(nn.LeakyReLU(0.2))
- layers.append(nn.Linear(512, 1))
- self.mlp = nn.Sequential(*layers)
-
- def forward(self, w):
- return self.mlp(w)
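-
-
-if __name__ == "__main__":
-    # Editor's illustrative sketch, not part of the original module: score a batch of latent
-    # codes. The final nn.Linear above hard-codes an input width of 512, so style_dim is set
-    # to 512 here.
-    import torch
-
-    discriminator = LatentCodesDiscriminator(style_dim=512, n_mlp=4)
-    w = torch.randn(8, 512)
-    print(discriminator(w).shape)  # torch.Size([8, 1])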
diff --git a/spaces/akhaliq/lama/saicinpainting/training/modules/spatial_transform.py b/spaces/akhaliq/lama/saicinpainting/training/modules/spatial_transform.py
deleted file mode 100644
index 2de024ba08c549605a08b64d096f1f0db7b7722a..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/lama/saicinpainting/training/modules/spatial_transform.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from kornia.geometry.transform import rotate
-
-
-class LearnableSpatialTransformWrapper(nn.Module):
- def __init__(self, impl, pad_coef=0.5, angle_init_range=80, train_angle=True):
- super().__init__()
- self.impl = impl
- self.angle = torch.rand(1) * angle_init_range
- if train_angle:
- self.angle = nn.Parameter(self.angle, requires_grad=True)
- self.pad_coef = pad_coef
-
- def forward(self, x):
- if torch.is_tensor(x):
- return self.inverse_transform(self.impl(self.transform(x)), x)
- elif isinstance(x, tuple):
- x_trans = tuple(self.transform(elem) for elem in x)
- y_trans = self.impl(x_trans)
- return tuple(self.inverse_transform(elem, orig_x) for elem, orig_x in zip(y_trans, x))
- else:
- raise ValueError(f'Unexpected input type {type(x)}')
-
- def transform(self, x):
- height, width = x.shape[2:]
- pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef)
- x_padded = F.pad(x, [pad_w, pad_w, pad_h, pad_h], mode='reflect')
- x_padded_rotated = rotate(x_padded, angle=self.angle.to(x_padded))
- return x_padded_rotated
-
- def inverse_transform(self, y_padded_rotated, orig_x):
- height, width = orig_x.shape[2:]
- pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef)
-
- y_padded = rotate(y_padded_rotated, angle=-self.angle.to(y_padded_rotated))
- y_height, y_width = y_padded.shape[2:]
- y = y_padded[:, :, pad_h : y_height - pad_h, pad_w : y_width - pad_w]
- return y
-
-
-if __name__ == '__main__':
- layer = LearnableSpatialTransformWrapper(nn.Identity())
- x = torch.arange(2* 3 * 15 * 15).view(2, 3, 15, 15).float()
- y = layer(x)
- assert x.shape == y.shape
- assert torch.allclose(x[:, :, 1:, 1:][:, :, :-1, :-1], y[:, :, 1:, 1:][:, :, :-1, :-1])
- print('all ok')
diff --git a/spaces/akhaliq/webui-orangemixs/app.py b/spaces/akhaliq/webui-orangemixs/app.py
deleted file mode 100644
index 1cc31288c1f4279e7e87097a28d1853c41b787cf..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/webui-orangemixs/app.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import os
-from subprocess import getoutput
-
-gpu_info = getoutput('nvidia-smi')
-if("A10G" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl")
-elif("T4" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
-
-os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui")
-os.chdir("/home/user/app/stable-diffusion-webui")
-
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py")
-os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''')
-os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py")
-os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-
-# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header----------------------------
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py")
-os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-# ---------------------------------------------------------------------------------------------------------------------------------------------------
-
-if "IS_SHARED_UI" in os.environ:
- os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/")
-
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json")
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json")
-
- os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}")
- os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}")
- os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}")
-
- os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding")
-else:
- # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py")
- os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py")
-
- # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME")
- #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study")
- os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser")
- os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui")
-
- # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt")
- os.system(f"wget -q https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_sfw.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/AbyssOrangeMix2_sfw.ckpt")
- #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt")
- #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt")
- #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt")
- #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt")
- #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt")
-
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt")
-
- #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt")
- #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml")
-
- os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt")
- os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml")
-
- os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test")
-
\ No newline at end of file
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/treewalkers/etree.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/treewalkers/etree.py
deleted file mode 100644
index 837b27ec486924eb9ccef53c6a5d578bd787aefd..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/treewalkers/etree.py
+++ /dev/null
@@ -1,131 +0,0 @@
-from __future__ import absolute_import, division, unicode_literals
-
-from collections import OrderedDict
-import re
-
-from pip._vendor.six import string_types
-
-from . import base
-from .._utils import moduleFactoryFactory
-
-tag_regexp = re.compile("{([^}]*)}(.*)")
-
-
-def getETreeBuilder(ElementTreeImplementation):
- ElementTree = ElementTreeImplementation
- ElementTreeCommentType = ElementTree.Comment("asd").tag
-
- class TreeWalker(base.NonRecursiveTreeWalker): # pylint:disable=unused-variable
- """Given the particular ElementTree representation, this implementation,
- to avoid using recursion, returns "nodes" as tuples with the following
- content:
-
- 1. The current element
-
- 2. The index of the element relative to its parent
-
- 3. A stack of ancestor elements
-
- 4. A flag "text", "tail" or None to indicate if the current node is a
- text node; either the text or tail of the current element (1)
- """
- def getNodeDetails(self, node):
- if isinstance(node, tuple): # It might be the root Element
- elt, _, _, flag = node
- if flag in ("text", "tail"):
- return base.TEXT, getattr(elt, flag)
- else:
- node = elt
-
- if not(hasattr(node, "tag")):
- node = node.getroot()
-
- if node.tag in ("DOCUMENT_ROOT", "DOCUMENT_FRAGMENT"):
- return (base.DOCUMENT,)
-
- elif node.tag == "<!DOCTYPE>":
- return (base.DOCTYPE, node.text,
- node.get("publicId"), node.get("systemId"))
-
- elif node.tag == ElementTreeCommentType:
- return base.COMMENT, node.text
-
- else:
- assert isinstance(node.tag, string_types), type(node.tag)
- # This is assumed to be an ordinary element
- match = tag_regexp.match(node.tag)
- if match:
- namespace, tag = match.groups()
- else:
- namespace = None
- tag = node.tag
- attrs = OrderedDict()
- for name, value in list(node.attrib.items()):
- match = tag_regexp.match(name)
- if match:
- attrs[(match.group(1), match.group(2))] = value
- else:
- attrs[(None, name)] = value
- return (base.ELEMENT, namespace, tag,
- attrs, len(node) or node.text)
-
- def getFirstChild(self, node):
- if isinstance(node, tuple):
- element, key, parents, flag = node
- else:
- element, key, parents, flag = node, None, [], None
-
- if flag in ("text", "tail"):
- return None
- else:
- if element.text:
- return element, key, parents, "text"
- elif len(element):
- parents.append(element)
- return element[0], 0, parents, None
- else:
- return None
-
- def getNextSibling(self, node):
- if isinstance(node, tuple):
- element, key, parents, flag = node
- else:
- return None
-
- if flag == "text":
- if len(element):
- parents.append(element)
- return element[0], 0, parents, None
- else:
- return None
- else:
- if element.tail and flag != "tail":
- return element, key, parents, "tail"
- elif key < len(parents[-1]) - 1:
- return parents[-1][key + 1], key + 1, parents, None
- else:
- return None
-
- def getParentNode(self, node):
- if isinstance(node, tuple):
- element, key, parents, flag = node
- else:
- return None
-
- if flag == "text":
- if not parents:
- return element
- else:
- return element, key, parents, None
- else:
- parent = parents.pop()
- if not parents:
- return parent
- else:
- assert list(parents[-1]).count(parent) == 1
- return parent, list(parents[-1]).index(parent), parents, None
-
- return locals()
-
-
-getETreeModule = moduleFactoryFactory(getETreeBuilder)
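-
-
-if __name__ == "__main__":
-    # Editor's illustrative sketch, not part of the vendored html5lib module: walk a small
-    # ElementTree fragment and print the resulting token stream.
-    import xml.etree.ElementTree as ElementTree
-
-    etree_walker = getETreeModule(ElementTree)
-    fragment = ElementTree.fromstring("<div><p>hello</p></div>")
-    for token in etree_walker.TreeWalker(fragment):
-        print(token)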
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/modeline.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/modeline.py
deleted file mode 100644
index 047d86d6be6a25ae26fba26d959bf2a1609f7dc8..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/modeline.py
+++ /dev/null
@@ -1,43 +0,0 @@
-"""
- pygments.modeline
- ~~~~~~~~~~~~~~~~~
-
- A simple modeline parser (based on pymodeline).
-
- :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import re
-
-__all__ = ['get_filetype_from_buffer']
-
-
-modeline_re = re.compile(r'''
- (?: vi | vim | ex ) (?: [<=>]? \d* )? :
- .* (?: ft | filetype | syn | syntax ) = ( [^:\s]+ )
-''', re.VERBOSE)
-
-
-def get_filetype_from_line(l):
- m = modeline_re.search(l)
- if m:
- return m.group(1)
-
-
-def get_filetype_from_buffer(buf, max_lines=5):
- """
- Scan the buffer for modelines and return filetype if one is found.
- """
- lines = buf.splitlines()
- for l in lines[-1:-max_lines-1:-1]:
- ret = get_filetype_from_line(l)
- if ret:
- return ret
- for i in range(max_lines, -1, -1):
- if i < len(lines):
- ret = get_filetype_from_line(lines[i])
- if ret:
- return ret
-
- return None
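-
-
-if __name__ == "__main__":
-    # Editor's illustrative sketch, not part of the vendored Pygments module: detect the
-    # filetype declared in a Vim modeline near the end of a buffer.
-    sample = "#!/usr/bin/env python\n# vim: set ft=python :\nprint('hello')\n"
-    print(get_filetype_from_buffer(sample))  # python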
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_stack.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_stack.py
deleted file mode 100644
index 194564e761ddae165b39ef6598877e2e3820af0a..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_stack.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from typing import List, TypeVar
-
-T = TypeVar("T")
-
-
-class Stack(List[T]):
- """A small shim over builtin list."""
-
- @property
- def top(self) -> T:
- """Get top of stack."""
- return self[-1]
-
- def push(self, item: T) -> None:
- """Push an item on to the stack (append in stack nomenclature)."""
- self.append(item)
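-
-
-if __name__ == "__main__":
-    # Editor's illustrative sketch, not part of the vendored Rich module: the shim behaves
-    # like a plain list with stack-flavoured accessors.
-    stack: Stack[int] = Stack()
-    stack.push(1)
-    stack.push(2)
-    print(stack.top)  # 2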
diff --git a/spaces/ali-ghamdan/deoldify/fastai/torch_core.py b/spaces/ali-ghamdan/deoldify/fastai/torch_core.py
deleted file mode 100644
index 6b089e09e4e08c2b6d50b70ef3223fadae2f48cb..0000000000000000000000000000000000000000
--- a/spaces/ali-ghamdan/deoldify/fastai/torch_core.py
+++ /dev/null
@@ -1,430 +0,0 @@
-"Utility functions to help deal with tensors"
-from .imports.torch import *
-from .core import *
-from collections import OrderedDict
-from torch.nn.parallel import DistributedDataParallel
-
-AffineMatrix = Tensor
-BoolOrTensor = Union[bool,Tensor]
-FloatOrTensor = Union[float,Tensor]
-IntOrTensor = Union[int,Tensor]
-ItemsList = Collection[Union[Tensor,ItemBase,'ItemsList',float,int]]
-LambdaFunc = Callable[[Tensor],Tensor]
-LayerFunc = Callable[[nn.Module],None]
-ModuleList = Collection[nn.Module]
-NPArray = np.ndarray
-OptOptimizer = Optional[optim.Optimizer]
-ParamList = Collection[nn.Parameter]
-Rank0Tensor = NewType('OneEltTensor', Tensor)
-SplitFunc = Callable[[nn.Module], List[nn.Module]]
-SplitFuncOrIdxList = Union[Callable, Collection[ModuleList]]
-TensorOrNumber = Union[Tensor,Number]
-TensorOrNumList = Collection[TensorOrNumber]
-TensorImage = Tensor
-TensorImageSize = Tuple[int,int,int]
-Tensors = Union[Tensor, Collection['Tensors']]
-Weights = Dict[str,Tensor]
-
-AffineFunc = Callable[[KWArgs], AffineMatrix]
-HookFunc = Callable[[nn.Module, Tensors, Tensors], Any]
-LogitTensorImage = TensorImage
-LossFunction = Callable[[Tensor, Tensor], Rank0Tensor]
-MetricFunc = Callable[[Tensor,Tensor],TensorOrNumber]
-MetricFuncList = Collection[MetricFunc]
-MetricsList = Collection[TensorOrNumber]
-OptLossFunc = Optional[LossFunction]
-OptMetrics = Optional[MetricsList]
-OptSplitFunc = Optional[SplitFunc]
-PixelFunc = Callable[[TensorImage, ArgStar, KWArgs], TensorImage]
-
-LightingFunc = Callable[[LogitTensorImage, ArgStar, KWArgs], LogitTensorImage]
-
-fastai_types = {
- AnnealFunc:'AnnealFunc', ArgStar:'ArgStar', BatchSamples:'BatchSamples',
- FilePathList:'FilePathList', Floats:'Floats', ImgLabel:'ImgLabel', ImgLabels:'ImgLabels', KeyFunc:'KeyFunc',
- KWArgs:'KWArgs', ListOrItem:'ListOrItem', ListRules:'ListRules', ListSizes:'ListSizes',
- NPArrayableList:'NPArrayableList', NPArrayList:'NPArrayList', NPArrayMask:'NPArrayMask', NPImage:'NPImage',
- OptDataFrame:'OptDataFrame', OptListOrItem:'OptListOrItem', OptRange:'OptRange', OptStrTuple:'OptStrTuple',
- OptStats:'OptStats', PathOrStr:'PathOrStr', PBar:'PBar', Point:'Point', Points:'Points', Sizes:'Sizes',
- SplitArrayList:'SplitArrayList', StartOptEnd:'StartOptEnd', StrList:'StrList', Tokens:'Tokens',
- OptStrList:'OptStrList', AffineMatrix:'AffineMatrix', BoolOrTensor:'BoolOrTensor', FloatOrTensor:'FloatOrTensor',
- IntOrTensor:'IntOrTensor', ItemsList:'ItemsList', LambdaFunc:'LambdaFunc',
- LayerFunc:'LayerFunc', ModuleList:'ModuleList', OptOptimizer:'OptOptimizer', ParamList:'ParamList',
- Rank0Tensor:'Rank0Tensor', SplitFunc:'SplitFunc', SplitFuncOrIdxList:'SplitFuncOrIdxList',
- TensorOrNumber:'TensorOrNumber', TensorOrNumList:'TensorOrNumList', TensorImage:'TensorImage',
- TensorImageSize:'TensorImageSize', Tensors:'Tensors', Weights:'Weights', AffineFunc:'AffineFunc',
- HookFunc:'HookFunc', LogitTensorImage:'LogitTensorImage', LossFunction:'LossFunction', MetricFunc:'MetricFunc',
- MetricFuncList:'MetricFuncList', MetricsList:'MetricsList', OptLossFunc:'OptLossFunc', OptMetrics:'OptMetrics',
- OptSplitFunc:'OptSplitFunc', PixelFunc:'PixelFunc', LightingFunc:'LightingFunc', IntsOrStrs:'IntsOrStrs',
- PathLikeOrBinaryStream:'PathLikeOrBinaryStream'
-}
-
-bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
-bias_types = (nn.Linear, nn.Conv1d, nn.Conv2d, nn.Conv3d, nn.ConvTranspose1d, nn.ConvTranspose2d, nn.ConvTranspose3d)
-def is_pool_type(l:Callable): return re.search(r'Pool[123]d$', l.__class__.__name__)
-no_wd_types = bn_types + (nn.LayerNorm,)
-defaults.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
-AdamW = partial(optim.Adam, betas=(0.9,0.99))
-
-#Monkey-patch `torch.cuda.set_device` so that it updates `defaults.device`
-_old_torch_cuda_set_device = torch.cuda.set_device
-def _new_torch_cuda_set_device(device):
- _old_torch_cuda_set_device(device)
- defaults.device = torch.device('cuda', device) if isinstance(device, int) else device
-torch.cuda.set_device = _new_torch_cuda_set_device
-
-def tensor(x:Any, *rest)->Tensor:
- "Like `torch.as_tensor`, but handle lists too, and can pass multiple vector elements directly."
- if len(rest): x = (x,)+rest
- # XXX: Pytorch bug in dataloader using num_workers>0; TODO: create repro and report
- if is_listy(x) and len(x)==0: return tensor(0)
- res = torch.tensor(x) if is_listy(x) else as_tensor(x)
- if res.dtype is torch.int32:
- warn('Tensor is int32: upgrading to int64; for better performance use int64 input')
- return res.long()
- return res
-
-class Module(nn.Module, metaclass=PrePostInitMeta):
- "Same as `nn.Module`, but no need for subclasses to call `super().__init__`"
- def __pre_init__(self): super().__init__()
- def __init__(self): pass
-
-def np_address(x:np.ndarray)->int:
- "Address of `x` in memory."
- return x.__array_interface__['data'][0]
-
-def to_detach(b:Tensors, cpu:bool=True):
- "Recursively detach lists of tensors in `b `; put them on the CPU if `cpu=True`."
- def _inner(x, cpu=True):
- if not isinstance(x,Tensor): return x
- x = x.detach()
- return x.cpu() if cpu else x
- return recurse(_inner, b, cpu=cpu)
-
-def to_data(b:ItemsList):
- "Recursively map lists of items in `b ` to their wrapped data."
- return recurse(lambda x: x.data if isinstance(x,ItemBase) else x, b)
-
-def to_cpu(b:ItemsList):
- "Recursively map lists of tensors in `b ` to the cpu."
- return recurse(lambda x: x.cpu() if isinstance(x,Tensor) else x, b)
-
-def to_half(b:Collection[Tensor])->Collection[Tensor]:
- "Recursively map lists of tensors in `b ` to FP16."
- return recurse(lambda x: x.half() if x.dtype not in [torch.int64, torch.int32, torch.int16] else x, b)
-
-def to_float(b:Collection[Tensor])->Collection[Tensor]:
- "Recursively map lists of tensors in `b ` to FP16."
- return recurse(lambda x: x.float() if x.dtype not in [torch.int64, torch.int32, torch.int16] else x, b)
-
-def to_device(b:Tensors, device:torch.device):
- "Recursively put `b` on `device`."
- device = ifnone(device, defaults.device)
- return recurse(lambda x: x.to(device, non_blocking=True), b)
-
-def data_collate(batch:ItemsList)->Tensor:
- "Convert `batch` items to tensor data."
- return torch.utils.data.dataloader.default_collate(to_data(batch))
-
-def requires_grad(m:nn.Module, b:Optional[bool]=None)->Optional[bool]:
- "If `b` is not set return `requires_grad` of first param, else set `requires_grad` on all params as `b`"
- ps = list(m.parameters())
- if not ps: return None
- if b is None: return ps[0].requires_grad
- for p in ps: p.requires_grad=b
-
-def trainable_params(m:nn.Module)->ParamList:
- "Return list of trainable params in `m`."
- res = filter(lambda p: p.requires_grad, m.parameters())
- return res
-
-def children(m:nn.Module)->ModuleList:
- "Get children of `m`."
- return list(m.children())
-
-def num_children(m:nn.Module)->int:
- "Get number of children modules in `m`."
- return len(children(m))
-
-def range_children(m:nn.Module)->Iterator[int]:
- "Return iterator of len of children of `m`."
- return range(num_children(m))
-
-class ParameterModule(Module):
- "Register a lone parameter `p` in a module."
- def __init__(self, p:nn.Parameter): self.val = p
- def forward(self, x): return x
-
-def children_and_parameters(m:nn.Module):
- "Return the children of `m` and its direct parameters not registered in modules."
- children = list(m.children())
- children_p = sum([[id(p) for p in c.parameters()] for c in m.children()],[])
- for p in m.parameters():
- if id(p) not in children_p: children.append(ParameterModule(p))
- return children
-
-def flatten_model(m:nn.Module):
- if num_children(m):
- mapped = map(flatten_model,children_and_parameters(m))
- return sum(mapped,[])
- else:
- return [m]
-
-#flatten_model = lambda m: sum(map(flatten_model,children_and_parameters(m)),[]) if num_children(m) else [m]
-
-def first_layer(m:nn.Module)->nn.Module:
- "Retrieve first layer in a module `m`."
- return flatten_model(m)[0]
-
-def last_layer(m:nn.Module)->nn.Module:
- "Retrieve last layer in a module `m`."
- return flatten_model(m)[-1]
-
-def split_model_idx(model:nn.Module, idxs:Collection[int])->ModuleList:
- "Split `model` according to the indexes in `idxs`."
- layers = flatten_model(model)
- if idxs[0] != 0: idxs = [0] + idxs
- if idxs[-1] != len(layers): idxs.append(len(layers))
- return [nn.Sequential(*layers[i:j]) for i,j in zip(idxs[:-1],idxs[1:])]
-
-def split_model(model:nn.Module=None, splits:Collection[Union[nn.Module,ModuleList]]=None):
- "Split `model` according to the layers in `splits`."
- splits = listify(splits)
- if isinstance(splits[0], nn.Module):
- layers = flatten_model(model)
- idxs = [layers.index(first_layer(s)) for s in splits]
- return split_model_idx(model, idxs)
- return [nn.Sequential(*s) for s in splits]
-
-def get_param_groups(layer_groups:Collection[nn.Module])->List[List[nn.Parameter]]:
- return [sum([list(trainable_params(c)) for c in l.children()], []) for l in layer_groups]
-
-def split_no_wd_params(layer_groups:Collection[nn.Module])->List[List[nn.Parameter]]:
- "Separate the parameters in `layer_groups` between `no_wd_types` and bias (`bias_types`) from the rest."
- split_params = []
- for l in layer_groups:
- l1,l2 = [],[]
- for c in l.children():
- if isinstance(c, no_wd_types): l2 += list(trainable_params(c))
- elif isinstance(c, bias_types):
- bias = c.bias if hasattr(c, 'bias') else None
- l1 += [p for p in trainable_params(c) if not (p is bias)]
- if bias is not None: l2.append(bias)
- else: l1 += list(trainable_params(c))
- #Since we scan the children separately, we might get duplicates (tied weights). We need to preserve the order
- #for the optimizer load of state_dict
- l1,l2 = uniqueify(l1),uniqueify(l2)
- split_params += [l1, l2]
- return split_params
-
-def set_bn_eval(m:nn.Module)->None:
- "Set bn layers in eval mode for all recursive children of `m`."
- for l in m.children():
- if isinstance(l, bn_types) and not next(l.parameters()).requires_grad:
- l.eval()
- set_bn_eval(l)
-
-def batch_to_half(b:Collection[Tensor])->Collection[Tensor]:
- "Set the input of batch `b` to half precision."
- return [to_half(b[0]), b[1]]
-
-def bn2float(module:nn.Module)->nn.Module:
- "If `module` is batchnorm don't use half precision."
- if isinstance(module, torch.nn.modules.batchnorm._BatchNorm): module.float()
- for child in module.children(): bn2float(child)
- return module
-
-def model2half(model:nn.Module)->nn.Module:
- "Convert `model` to half precision except the batchnorm layers."
- return bn2float(model.half())
-
-def init_default(m:nn.Module, func:LayerFunc=nn.init.kaiming_normal_)->nn.Module:
- "Initialize `m` weights with `func` and set `bias` to 0."
- if func:
- if hasattr(m, 'weight'): func(m.weight)
- if hasattr(m, 'bias') and hasattr(m.bias, 'data'): m.bias.data.fill_(0.)
- return m
-
-def cond_init(m:nn.Module, init_func:LayerFunc):
- "Initialize the non-batchnorm layers of `m` with `init_func`."
- if (not isinstance(m, bn_types)) and requires_grad(m): init_default(m, init_func)
-
-def apply_leaf(m:nn.Module, f:LayerFunc):
- "Apply `f` to children of `m`."
- c = children(m)
- if isinstance(m, nn.Module): f(m)
- for l in c: apply_leaf(l,f)
-
-def apply_init(m, init_func:LayerFunc):
- "Initialize all non-batchnorm layers of `m` with `init_func`."
- apply_leaf(m, partial(cond_init, init_func=init_func))
-
-def in_channels(m:nn.Module) -> List[int]:
- "Return the shape of the first weight layer in `m`."
- for l in flatten_model(m):
- if hasattr(l, 'weight'): return l.weight.shape[1]
- raise Exception('No weight layer')
-
-class ModelOnCPU():
- "A context manager to evaluate `model` on the CPU inside."
- def __init__(self, model:nn.Module): self.model = model
- def __enter__(self):
- self.device = one_param(self.model).device
- return self.model.cpu()
- def __exit__(self, type, value, traceback):
- self.model = self.model.to(self.device)
-
-class NoneReduceOnCPU():
- "A context manager to evaluate `loss_func` with none reduce and weights on the CPU inside."
- def __init__(self, loss_func:LossFunction):
- self.loss_func,self.device,self.old_red = loss_func,None,None
-
- def __enter__(self):
- if hasattr(self.loss_func, 'weight') and self.loss_func.weight is not None:
- self.device = self.loss_func.weight.device
- self.loss_func.weight = self.loss_func.weight.cpu()
- if hasattr(self.loss_func, 'reduction'):
- self.old_red = getattr(self.loss_func, 'reduction')
- setattr(self.loss_func, 'reduction', 'none')
- return self.loss_func
- else: return partial(self.loss_func, reduction='none')
-
- def __exit__(self, type, value, traceback):
- if self.device is not None: self.loss_func.weight = self.loss_func.weight.to(self.device)
- if self.old_red is not None: setattr(self.loss_func, 'reduction', self.old_red)
-
-def model_type(dtype):
- "Return the torch type corresponding to `dtype`."
- return (torch.float32 if np.issubdtype(dtype, np.floating) else
- torch.int64 if np.issubdtype(dtype, np.integer)
- else None)
-
-def np2model_tensor(a):
- "Tranform numpy array `a` to a tensor of the same type."
- dtype = model_type(a.dtype)
- res = as_tensor(a)
- if not dtype: return res
- return res.type(dtype)
-
-def _pca(x, k=2):
- "Compute PCA of `x` with `k` dimensions."
- x = x-torch.mean(x,0)
- U,S,V = torch.svd(x.t())
- return torch.mm(x,U[:,:k])
-torch.Tensor.pca = _pca
-
-def trange_of(x):
- "Create a tensor from `range_of(x)`."
- return torch.arange(len(x))
-
-def to_np(x):
- "Convert a tensor to a numpy array."
- return x.data.cpu().numpy()
-
-# monkey patching to allow matplotlib to plot tensors
-def tensor__array__(self, dtype=None):
- res = to_np(self)
- if dtype is None: return res
- else: return res.astype(dtype, copy=False)
-Tensor.__array__ = tensor__array__
-Tensor.ndim = property(lambda x: len(x.shape))
-
-def grab_idx(x,i,batch_first:bool=True):
- "Grab the `i`-th batch in `x`, `batch_first` stating the batch dimension."
- if batch_first: return ([o[i].cpu() for o in x] if is_listy(x) else x[i].cpu())
- else: return ([o[:,i].cpu() for o in x] if is_listy(x) else x[:,i].cpu())
-
-def logit(x:Tensor)->Tensor:
- "Logit of `x`, clamped to avoid inf."
- x = x.clamp(1e-7, 1-1e-7)
- return -(1/x-1).log()
-
-def logit_(x:Tensor)->Tensor:
- "Inplace logit of `x`, clamped to avoid inf"
- x.clamp_(1e-7, 1-1e-7)
- return (x.reciprocal_().sub_(1)).log_().neg_()
-
-def set_all_seed(seed:int)->None:
- "Sets the seeds for all pseudo random generators in fastai lib"
- np.random.seed(seed)
- torch.manual_seed(seed)
- random.seed(seed)
-
-def uniform(low:Number, high:Number=None, size:Optional[List[int]]=None)->FloatOrTensor:
- "Draw 1 or shape=`size` random floats from uniform dist: min=`low`, max=`high`."
- if high is None: high=low
- return random.uniform(low,high) if size is None else torch.FloatTensor(*listify(size)).uniform_(low,high)
-
-def log_uniform(low, high, size:Optional[List[int]]=None)->FloatOrTensor:
- "Draw 1 or shape=`size` random floats from uniform dist: min=log(`low`), max=log(`high`)."
- res = uniform(log(low), log(high), size)
- return exp(res) if size is None else res.exp_()
-
-def rand_bool(p:float, size:Optional[List[int]]=None)->BoolOrTensor:
- "Draw 1 or shape=`size` random booleans (`True` occuring with probability `p`)."
- return uniform(0,1,size)
IntOrTensor:
- "Generate int or tensor `size` of ints between `low` and `high` (included)."
- return random.randint(low,high) if size is None else torch.randint(low,high+1,size)
-
-def one_param(m: nn.Module)->Tensor:
- "Return the first parameter of `m`."
- return next(m.parameters())
-
-def try_int(o:Any)->Any:
- "Try to convert `o` to int, default to `o` if not possible."
- # NB: single-item rank-1 array/tensor can be converted to int, but we don't want to do this
- if isinstance(o, (np.ndarray,Tensor)): return o if o.ndim else int(o)
- if isinstance(o, collections.abc.Sized) or getattr(o,'__array_interface__',False): return o
- try: return int(o)
- except: return o
-
-def get_model(model:nn.Module):
- "Return the model maybe wrapped inside `model`."
- return model.module if isinstance(model, (DistributedDataParallel, nn.DataParallel)) else model
-
-def flatten_check(out:Tensor, targ:Tensor) -> Tensor:
- "Check that `out` and `targ` have the same number of elements and flatten them."
- out,targ = out.contiguous().view(-1),targ.contiguous().view(-1)
- assert len(out) == len(targ), f"Expected output and target to have the same number of elements but got {len(out)} and {len(targ)}."
- return out,targ
-
-#Monkey-patch nn.DataParallel.reset
-def _data_parallel_reset(self):
- if hasattr(self.module, 'reset'): self.module.reset()
-nn.DataParallel.reset = _data_parallel_reset
-
-def remove_module_load(state_dict):
- """create new OrderedDict that does not contain `module.`"""
- new_state_dict = OrderedDict()
- for k, v in state_dict.items(): new_state_dict[k[7:]] = v
- return new_state_dict
-
-def num_distrib():
- "Return the number of processes in distributed training (if applicable)."
- return int(os.environ.get('WORLD_SIZE', 0))
-
-def rank_distrib():
- "Return the distributed rank of this process (if applicable)."
- return int(os.environ.get('RANK', 0))
-
-def add_metrics(last_metrics:Collection[Rank0Tensor], mets:Union[Rank0Tensor, Collection[Rank0Tensor]]):
- "Return a dictionary for updating `last_metrics` with `mets`."
- last_metrics,mets = listify(last_metrics),listify(mets)
- return {'last_metrics': last_metrics + mets}
-
-def try_save(state:Dict, path:Path=None, file:PathLikeOrBinaryStream=None):
- target = open(path/file, 'wb') if is_pathlike(file) else file
- try: torch.save(state, target)
- except OSError as e:
- raise Exception(f"{e}\n Can't write {path/file}. Pass an absolute writable pathlib obj `fname`.")
-
-def np_func(f):
- "Convert a function taking and returning numpy arrays to one taking and returning tensors"
- def _inner(*args, **kwargs):
- nargs = [to_np(arg) if isinstance(arg,Tensor) else arg for arg in args]
- return tensor(f(*nargs, **kwargs))
- functools.update_wrapper(_inner, f)
- return _inner
-
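-
-if __name__ == '__main__':
-    # Editor's illustrative sketch, not part of the original fastai module: flatten a small
-    # network into its leaf layers, split it into two layer groups, and freeze the first group.
-    net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU(),
-                        nn.Conv2d(8, 4, 3), nn.ReLU())
-    layers = flatten_model(net)
-    groups = split_model_idx(net, [3])
-    requires_grad(groups[0], False)  # freeze Conv2d + BatchNorm2d + ReLU
-    print(len(layers), [requires_grad(g) for g in groups])  # 5 [False, True]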
diff --git a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/UBAR_interact.py b/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/UBAR_interact.py
deleted file mode 100644
index fb47a767d4e2949fee60d4c3e41ea3f559108184..0000000000000000000000000000000000000000
--- a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/UBAR_interact.py
+++ /dev/null
@@ -1,475 +0,0 @@
-import sys
-import torch
-import random
-import string
-
-# import bcolors
-from omegaconf import OmegaConf
-from transformers import GPT2LMHeadModel, GPT2Tokenizer
-
-from src.crazyneuraluser.UBAR_code.config import global_config as cfg
-from src.crazyneuraluser.UBAR_code.reader import MultiWozReader
-from src.crazyneuraluser.UBAR_code.db_ops import MultiWozDB
-
-from typing import List
-
-
-class bcolors:
- HEADER = "\033[95m"
- OKBLUE = "\033[94m"
- OKCYAN = "\033[96m"
- GREEN = "\033[92m"
- YELLOW = "\033[93m"
- RED = "\033[91m"
- ENDC = "\033[0m"
- BOLD = "\033[1m"
- UNDERLINE = "\033[4m"
-
-
-class UbarSystemModel: # may inherit convlab or not, just like andy's
- def __init__(self, name: str, checkpoint_path: str, model_config_path: str):
-
- self.tokenizer = GPT2Tokenizer.from_pretrained("alistairmcleay/UBAR-distilgpt2")
- self.model = GPT2LMHeadModel.from_pretrained("alistairmcleay/UBAR-distilgpt2")
- self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- self.name = name
- self.turn_domain = ["general"] # returns a list of one string that is the domain e.g. 'taxi'
- # (this is because of the way the db_ops.py deals with the domain. It should really be a string.)
-
- self.ubar_status = {"dialogue_terminate": False}
-
- self.print_intermediary_info = False
-
- self.config = OmegaConf.load(model_config_path)
- self.previous_turn = {"user": [], "bspn": [], "aspn": [], "db": []}
-
- # NB: best to use corpus goals to guide interactions - baselines/simulate_agent.py allows that.
-
- # initialize multiwoz reader and db_ops
- self.reader = MultiWozReader(self.tokenizer)
- self.db = MultiWozDB(self.config.dbs_path)
-
- def lexicalize_sys_response(self, sys_response, domain_hits, decoded_belief_state_subseq) -> str:
- lexicalized_sys_response = ""
-
- # Track entities already filled, e.g. if there are 3 restaurants, track which have already been added to a slot
- max_idx_of_added_entities = -1
-
- # Fill slots with values from the DB (lexicalization)
- for token in sys_response.split():
- token = token.strip(" .,;:")
- if token.startswith("["): # It is a slot to be filled
-
- # Note: the hotel domain has specific price data too, but to simplify things
- # we just use the price range (e.g. moderate)
- db_price_key = "price"
- # if domain is restaurant then use "pricerange"
- if self.turn_domain[0] == "restaurant":
- db_price_key = "pricerange"
-
- slots_to_db_keys_map = {
- "[value_price]": db_price_key,
- "[value_pricerange]": db_price_key,
- "[value_food]": "food",
- "[value_area]": "area",
- "[value_type]": "type",
- "[value_phone]": "phone",
- "[value_address]": "address",
- "[value_leave]": "leave",
- "[value_postcode]": "postcode",
- "[value_id]": "id",
- "[value_arrive]": "arrive",
- "[value_stars]": "stars",
- "[value_day]": "day",
- "[value_destination]": "destination",
- "[value_car]": "taxi_types",
- "[value_departure]": "departure",
- "[value_people]": "people",
- "[value_stay]": "stay",
- "[value_department]": "department",
- "[value_time]": "time",
- "[value_name]": "name",
- "[value_reference]": "reference",
- }
- # Hospital domain is a strange outlier data structure
- if self.turn_domain == ["hospital"] and token == "[value_address]":
- token = "1 Addenbrooks Street"
- elif self.turn_domain == ["hospital"] and token == "[value_postcode]":
- token = "CB11QD"
-
- # So does taxi
- elif self.turn_domain == ["taxi"] and token == "[value_phone]" and domain_hits != []:
- token = domain_hits[0]["taxi_phone"]
-
- # Deal with value_name differently because there can be multiple
- elif token == "[value_name]" and domain_hits != []:
- token = domain_hits[max_idx_of_added_entities + 1]["name"]
- max_idx_of_added_entities += 1
-
- # This slot tells the user how many db hits there were matching their constraints
- elif token == "[value_choice]" and domain_hits != []:
- token = len(domain_hits)
-
- # Randomly generate the reference
- elif token == "[value_reference]" and domain_hits != []:
- token = "".join(random.choices(string.ascii_uppercase, k=10))
-
- else:
- # First check can we fill the token from the db results
- db_success = False
- if domain_hits != []:
- for slot, db_key in slots_to_db_keys_map.items():
- if token == slot and db_key in domain_hits[0]:
- token = domain_hits[0][db_key]
- db_success = True
-
- # If we cannot, then try to fill it from the belief state by looking for a match
- # in the belief state and then if there is a match adding the next token.
- # This is not perfect as some values are more than one word, but it's probably good enough.
- if not db_success:
- # The DB doesn't contain a postcode for the police station so fill it here
- if token == "[value_postcode]" and self.turn_domain == ["police"]:
- token = "CB11QD"
- continue
- decoded_belief_states = decoded_belief_state_subseq.split()
- for idx, belief_state_slot in enumerate(decoded_belief_states):
- if token in slots_to_db_keys_map.keys():
- if slots_to_db_keys_map[token] == belief_state_slot:
- curr_slot_resp = ""
- # We don't know the length of the value we need to extract from the belief state
- for belief_state_token in decoded_belief_states[idx + 1 :]:
- if (
- belief_state_token not in slots_to_db_keys_map.values()
- and belief_state_token != ""
- ):
- curr_slot_resp += belief_state_token + " "
- else:
- break
- token = curr_slot_resp[:-1]
- continue
-
- # Otherwise just leave the slot as it is as we have failed to fill it
-
- lexicalized_sys_response += str(token)
- lexicalized_sys_response += " "
-
- return lexicalized_sys_response
-
- def set_turn_domain(self, belief_span_ids_subseq, sys_act_span_ids_subseq=None) -> None:
- """
- IMPORTANT: sys_act_span_ids_subseq is not None when actually querying the DB to
- lexicalise the system response. When it is None, the belief state, not the system act, is used to determine
- the domain. In self.response() the DB is queried twice. The first time is using the Belief state as the system
- act has not yet been generated, and it is only used to find out if there are matches in the DB for the current
- domain + constraints. Then, after the system act is generated, we call the DB to actually get the results to
- lexicalise the system response. It is much more important that the domain is correct for the second call, and
- the system act is much more accurate at determining the domain.
- """
-
- if sys_act_span_ids_subseq is None:
- decoded_belief_state_subseq = self.tokenizer.decode(belief_span_ids_subseq[1:-1])
- decoded_prev_belief_state_subseq = self.tokenizer.decode(self.previous_turn["bspn"][1:-1])
-
- # If it is the first turn and the belief state is empty then set the domain to general
- if self.previous_turn["bspn"] == [] and len(belief_span_ids_subseq) == 2:
- self.turn_domain = ["general"]
- return
-
- # If the belief state doesn't change then keep the same domain
- if belief_span_ids_subseq == self.previous_turn["bspn"]:
- return
-
- # The domain has changed, get the new one (from the right)
- else:
- # remove substring from string
- if decoded_prev_belief_state_subseq in decoded_belief_state_subseq:
-                    decoded_new_tokens = decoded_belief_state_subseq.replace(decoded_prev_belief_state_subseq, "")
- most_recent_domain_in_belief_state = [
- [token.strip("[]") for token in decoded_new_tokens.split() if token.startswith("[")][-1]
- ]
- self.turn_domain = most_recent_domain_in_belief_state
- else:
- # Sometimes the previous belief state is not in the current belief state as
- # the output changes very slightly (say by one word) - in this case just keep the same domain
- # TODO: Could probably handle this better.
- if self.print_intermediary_info:
- print(
- bcolors.YELLOW
- + "!Previous belief state not in current belief state! Details below:"
- + bcolors.ENDC
- )
- print("Previous Belief State: " + decoded_prev_belief_state_subseq)
- print("Current Belief State: " + decoded_belief_state_subseq)
-
- else:
- decoded_sys_act_subseq = self.tokenizer.decode(sys_act_span_ids_subseq[1:-1])
-
- most_recent_domain_in_sys_act = [
- [token.strip("[]") for token in decoded_sys_act_subseq.split() if token.startswith("[")][0]
- ]
- self.turn_domain = most_recent_domain_in_sys_act
-
- def get_domain_hits(self, decoded_belief_state_subseq) -> dict:
- # Get hits from db based on belief state, unless its a general turn (no hits then)
- constraint_dict = self.reader.bspan_to_constraint_dict(decoded_belief_state_subseq)
- query_turn_domain = self.turn_domain[0] # db.queryJsons needs a string not a list (single domain)
- # If the constraint dict doesn't contain any constraints for the current domain then pass an empty dict
- if query_turn_domain in constraint_dict:
- domain_hits = self.db.queryJsons(query_turn_domain, constraint_dict[query_turn_domain])
- else:
- domain_hits = self.db.queryJsons(query_turn_domain, {})
-
- return domain_hits
-
- def print_turn_intermediate_info(self, generated_subseq_ids_map) -> None:
- print(bcolors.OKCYAN + "Turn domain: " + bcolors.ENDC + "[" + str(self.turn_domain[0]) + "]")
-
- belief_state = self.tokenizer.decode(generated_subseq_ids_map["bspn"])
- print(bcolors.OKCYAN + "Belief state: " + bcolors.ENDC + belief_state)
-
- db_output = self.tokenizer.decode(generated_subseq_ids_map["db"])
- print(bcolors.OKCYAN + "DB Output: " + bcolors.ENDC + db_output)
-
- sys_act = self.tokenizer.decode(generated_subseq_ids_map["aspn"])
- print(bcolors.OKCYAN + "System Act: " + bcolors.ENDC + sys_act)
-
- def _init_ubar_status(self) -> dict:
- return {"dialogue_terminate": False}
-
- def init_session(self):
- self.ubar_status = self._init_ubar_status()
- self.previous_turn = {"user": [], "bspn": [], "aspn": [], "db": []}
- self.turn_domain = ["general"]
-
- def is_terminated(self) -> bool:
- """This should tell an external client whether the user model considers they have completed the task."""
- # return False
- return self.ubar_status["dialogue_terminate"]
-
- def _activate_dialogue_terminate(self) -> None:
- """Turn on the ubar status about dialogue termination"""
- self.ubar_status["dialogue_terminate"] = True
-
- def add_torch_input_eval(self, inputs):
- # inputs: context
- inputs["context_tensor"] = torch.tensor([inputs["context"]]).to(self.device)
- return inputs
-
- def prepare_input_for_model(self, user_utterance: str, turn_id: int) -> torch.Tensor:
- # TODO: CONVERT DIALOGUE HISTORY TO TOKEN IDS
-
-        tokenised_user_utterance = self.tokenizer.encode("<sos_u> " + user_utterance + " <eos_u>")
- # In this application turn always only contains ["user"], not ["bspn", "aspn", "db"] etc.
- turn = {"user": tokenised_user_utterance}
-
- first_turn = turn_id == 0
- inputs = self.reader.convert_turn_eval(turn, self.previous_turn, first_turn)
- inputs = self.add_torch_input_eval(inputs)
-
- return inputs
-
- def decode_generated_bspn(self, generated) -> List[int]:
-        eos_b_id = self.tokenizer.encode(["<eos_b>"])[0]
- if eos_b_id in generated:
- eos_b_idx = generated.index(eos_b_id)
- else:
- eos_b_idx = len(generated) - 1
- return generated[: eos_b_idx + 1]
-
- def decode_grenerated_act_resp(self, generated) -> dict:
- """
- decode generated
- return decoded['resp'] ('bspn', 'aspn')
- """
- decoded = {}
-        eos_a_id = self.tokenizer.encode(["<eos_a>"])[0]
-        eos_r_id = self.tokenizer.encode(["<eos_r>"])[0]
-        # eos_b_id = self.tokenizer.encode(["<eos_b>"])[0]
-
-        # eos_r may not exist if gpt2 generated repetitive words.
- if eos_r_id in generated:
- eos_r_idx = generated.index(eos_r_id)
- else:
- eos_r_idx = len(generated) - 1
-
- if cfg.use_true_curr_aspn: # only predict resp
- decoded["resp"] = generated[: eos_r_idx + 1]
- else: # predicted aspn, resp
- eos_a_idx = generated.index(eos_a_id)
- decoded["aspn"] = generated[: eos_a_idx + 1]
- decoded["resp"] = generated[eos_a_idx + 1 : eos_r_idx + 1]
- return decoded
-
- def generate_ids_subseq_map(self, inputs):
-
- context_input_subseq = inputs["context"]
- # decoded_context_input_subseq = self.tokenizer.decode(context_input_subseq)
- # Check if model has put duplicate tags in the context and if so remove one of the duplicates
- # Yes this is kind of hacky, but UBAR seems to learn to duplicate certain tags - I don't know why
- # Also instead of decoding and encoding here tags could be checked with their ids - but time is short...
- # cleaned_decoded_list = []
- # prev_token = ""
- # for token in decoded_context_input_subseq.split():
- # if token.startswith("<") and token.endswith(">"): # It is a tag
- # if token == prev_token: # It is a duplicate tag
- # continue
- # cleaned_decoded_list.append(token)
- # prev_token = token
- # decoded_context_input_subseq = " ".join(cleaned_decoded_list)
- # context_input_subseq = self.tokenizer.encode(decoded_context_input_subseq)
-
- context_input_subeq_tensor = inputs["context_tensor"]
-
- # TODO: FIND OUT BY COMPARING WITH MODEL.VALIDATE() how to calculate context_length
- context_length = len(context_input_subseq)
-
- belief_state_ids = self.model.generate(
- input_ids=context_input_subeq_tensor,
- max_length=context_length + 60,
- temperature=0.7,
- top_p=1,
- num_beams=1,
- pad_token_id=self.tokenizer.eos_token_id,
-            eos_token_id=self.tokenizer.encode(["<eos_b>"])[0],
- )
- gen_belief_state_token_ids = belief_state_ids[0].cpu().numpy().tolist() # type: list[int]
- belief_span_ids_subseq = self.decode_generated_bspn(
- gen_belief_state_token_ids[context_length - 1 :]
- ) # type: list[int]
-
- self.set_turn_domain(belief_span_ids_subseq)
-
- db_result = self.reader.bspan_to_DBpointer(
- self.tokenizer.decode(belief_span_ids_subseq), self.turn_domain
- ) # type: str
- db_ids_subseq = self.tokenizer.convert_tokens_to_ids(
-            self.tokenizer.tokenize("<sos_db> " + db_result + " <eos_db>")
-        ) + self.tokenizer.encode(["<sos_a>"])
-
- # TODO: context_input_subseq is already a tensor but the other two subseqs aren't - why?
- act_response_gen_input_subseq = context_input_subseq + belief_span_ids_subseq + db_ids_subseq
- act_response_gen_input_subseq_tensor = torch.tensor([act_response_gen_input_subseq]).to(self.device)
- context_length = len(act_response_gen_input_subseq)
-
- outputs_db = self.model.generate(
- input_ids=act_response_gen_input_subseq_tensor,
- max_length=context_length + 80,
- temperature=0.7,
- top_p=1,
- num_beams=1,
- pad_token_id=self.tokenizer.eos_token_id,
-            eos_token_id=self.tokenizer.encode(["<eos_r>"])[0],
- )
- generated_act_resp_token_ids = outputs_db[0].cpu().numpy().tolist() # type: list[int]
- generated_act_resp_token_ids = generated_act_resp_token_ids[context_length - 1 :]
-
- try:
- generated_subseq_ids_map = self.decode_grenerated_act_resp(generated_act_resp_token_ids)
- # TODO: IF YOU WANT Option b) then you just read the ['resp'] key and convert to string using huggingface;
- # that would be sys_response; Obviously, this applies to Option a as well
- generated_subseq_ids_map["bspn"] = belief_span_ids_subseq
- # TODO: Option a) STORE THESE MAPPINGS IN SELF.CONTEXT IF YOU WANT TO HAVE
- # {U_1, BS_1, DB_1, A_1, R_1, U_2, BS_2... history}
-
- generated_subseq_ids_map["db"] = db_ids_subseq
- generated_subseq_ids_map["labels"] = context_input_subseq
-
- except ValueError:
- generated_subseq_ids_map = {"resp": [], "bspn": [], "aspn": [], "db": [], "labels": []}
-
- # IMPORTANT: this is how all of the previous state is updated (appended) after each turn
- # Update self.previous_turn to track state to be fed into GPT2
- for k, v in generated_subseq_ids_map.items():
- self.previous_turn[k] = v
-
- if self.print_intermediary_info:
- self.print_turn_intermediate_info(generated_subseq_ids_map)
-
- return generated_subseq_ids_map
-
- def response(self, usr_utterance: str, turn_id: int) -> str:
-
- if usr_utterance == "Goodbye":
- self._activate_dialogue_terminate()
- return "Session Terminated by User"
-
- inputs = self.prepare_input_for_model(usr_utterance, turn_id)
-
- generated_subseq_ids_map = self.generate_ids_subseq_map(inputs)
- belief_span_ids_subseq = generated_subseq_ids_map["bspn"]
-
- sys_response = self.tokenizer.decode(generated_subseq_ids_map["resp"][1:-1])
-
- prev_turn_domain = self.turn_domain
- sys_act_span_ids_subseq = generated_subseq_ids_map["aspn"]
- self.set_turn_domain(belief_span_ids_subseq, sys_act_span_ids_subseq)
-
- if self.turn_domain != ["general"]:
- # If the domain changes when reading the system response, then we need to re-do the generation process
- # for both the belief state and the system action and response. We do this because self.get_domain_hits()
- # will break if the domain is different when querying the DB for the second time here than when it was
- # originally queried above, due to the constraint dict it uses that is generated from the belief state
- # How can the belief state domain and the system act domain be different? Bunch of things, for example:
- # When asking for the police the belief state may be empty (so 'general' domain)
- # but then the system action will have [police].
- if prev_turn_domain != self.turn_domain:
- if self.print_intermediary_info:
- print(
- bcolors.RED
- + "Domain changed from {} to {}".format(prev_turn_domain, self.turn_domain)
-                        + bcolors.ENDC
- )
- generated_subseq_ids_map = self.generate_ids_subseq_map(inputs)
- sys_response = self.tokenizer.decode(generated_subseq_ids_map["resp"][1:-1])
-
- decoded_belief_state_subseq = self.tokenizer.decode(belief_span_ids_subseq)
- domain_hits = self.get_domain_hits(decoded_belief_state_subseq)
- # print(bcolors.UNDERLINE + "Domain hits: \n" + bcolors.ENDC, domain_hits) # for debugging
-
- sys_response = self.lexicalize_sys_response(sys_response, domain_hits, decoded_belief_state_subseq)
-
- return sys_response
-
-
-def interact(checkpoint_path):
- sys_model = UbarSystemModel("UBAR_sys_model", checkpoint_path, "scripts/UBAR_code/interaction/config.yaml")
- # TODO: Fix this hardcoded variable (should be in config)
- sys_model.print_intermediary_info = True
-
- for dial_id in range(1, 11):
- print(f"In dialogue {dial_id}")
-
- # Reset state after each dialog
- sys_model.init_session()
-
- user_utt = input(bcolors.GREEN + "Enter user response here: " + bcolors.ENDC)
-
- for turn_id in range(100):
- try:
- sys_response = sys_model.response(user_utt, turn_id)
- # There are a lot of edge case bugs that are possible that could break the current turn. If so, continue
- # to ensure a large run across the dataset isn't ruined by a single bad turn.
-            except Exception as e:
- print(bcolors.RED + "Exception: {}".format(e) + bcolors.ENDC)
- continue
-
- if sys_model.is_terminated():
- print(bcolors.RED + sys_response + bcolors.ENDC)
- print(bcolors.RED + "---" * 30 + bcolors.ENDC)
- break
-
- print(bcolors.YELLOW + "System: " + bcolors.ENDC + sys_response)
- print("---" * 30)
-
- # next turn materials
- user_utt = input(bcolors.GREEN + "Enter user response here: " + bcolors.ENDC)
-
-
-if __name__ == "__main__":
- if len(sys.argv) == 1:
- print("Wrong argument!")
- print("Usage: python UBAR_interact.py checkpoint_path")
- sys.exit(1)
-
- checkpoint_path = sys.argv[1]
- interact(checkpoint_path)
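
To make the delexicalized-slot filling in `lexicalize_sys_response` above easier to follow, here is a self-contained sketch of the same idea. The response template, DB hit, and slot map below are toy values invented for illustration, not data taken from the module:

```python
import random
import string

# Toy stand-ins for the structures used above: a delexicalized system response,
# the first DB hit for the current domain, and the slot-to-DB-key mapping.
delex_response = "[value_name] is in the [value_area] and serves [value_food] food , reference is [value_reference]"
domain_hit = {"name": "golden house", "area": "centre", "food": "chinese"}
slots_to_db_keys_map = {"[value_name]": "name", "[value_area]": "area", "[value_food]": "food"}

lexicalized_tokens = []
for token in delex_response.split():
    if token == "[value_reference]":
        # References are not stored in the DB, so they are generated randomly (as above).
        token = "".join(random.choices(string.ascii_uppercase, k=10))
    elif token in slots_to_db_keys_map and slots_to_db_keys_map[token] in domain_hit:
        # Fill the slot from the DB hit.
        token = domain_hit[slots_to_db_keys_map[token]]
    lexicalized_tokens.append(str(token))

print(" ".join(lexicalized_tokens))
# e.g. "golden house is in the centre and serves chinese food , reference is QWERTYUIOP"
```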
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_many.c b/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_many.c
deleted file mode 100644
index 76cc0434c62ad01f56ee8583ae0f6dc2752e119e..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_many.c
+++ /dev/null
@@ -1,210 +0,0 @@
-/** @file patest_many.c
- @ingroup test_src
- @brief Start and stop the PortAudio Driver multiple times.
- @author Phil Burk http://www.softsynth.com
-*/
-/*
- * $Id$
- *
- * This program uses the PortAudio Portable Audio Library.
- * For more information see: http://www.portaudio.com
- * Copyright (c) 1999-2000 Ross Bencina and Phil Burk
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <math.h>
-#include "portaudio.h"
-#define NUM_SECONDS (1)
-#define SAMPLE_RATE (44100)
-#ifndef M_PI
-#define M_PI (3.14159265)
-#endif
-#define TABLE_SIZE (200)
-typedef struct
-{
- short sine[TABLE_SIZE];
- int left_phase;
- int right_phase;
- unsigned int sampsToGo;
-}
-paTestData;
-PaError TestOnce( void );
-static int patest1Callback( const void *inputBuffer, void *outputBuffer,
- unsigned long framesPerBuffer,
- const PaStreamCallbackTimeInfo* timeInfo,
- PaStreamCallbackFlags statusFlags,
- void *userData );
-
-/* This routine will be called by the PortAudio engine when audio is needed.
-** It may called at interrupt level on some machines so don't do anything
-** that could mess up the system like calling malloc() or free().
-*/
-static int patest1Callback( const void *inputBuffer, void *outputBuffer,
- unsigned long framesPerBuffer,
- const PaStreamCallbackTimeInfo* timeInfo,
- PaStreamCallbackFlags statusFlags,
- void *userData )
-{
- paTestData *data = (paTestData*)userData;
- short *out = (short*)outputBuffer;
- unsigned int i;
- int finished = 0;
- (void) inputBuffer; /* Prevent "unused variable" warnings. */
-
- if( data->sampsToGo < framesPerBuffer )
- {
- /* final buffer... */
-
-        for( i=0; i<data->sampsToGo; i++ )
- {
- *out++ = data->sine[data->left_phase]; /* left */
- *out++ = data->sine[data->right_phase]; /* right */
- data->left_phase += 1;
- if( data->left_phase >= TABLE_SIZE ) data->left_phase -= TABLE_SIZE;
- data->right_phase += 3; /* higher pitch so we can distinguish left and right. */
- if( data->right_phase >= TABLE_SIZE ) data->right_phase -= TABLE_SIZE;
- }
-        /* zero remainder of final buffer */
-        for( ; i<framesPerBuffer; i++ )
-        {
-            *out++ = 0; /* left */
-            *out++ = 0; /* right */
-        }
-        finished = 1;
-    }
-    else
-    {
-        for( i=0; i<framesPerBuffer; i++ )
-        {
-            *out++ = data->sine[data->left_phase]; /* left */
- *out++ = data->sine[data->right_phase]; /* right */
- data->left_phase += 1;
- if( data->left_phase >= TABLE_SIZE ) data->left_phase -= TABLE_SIZE;
- data->right_phase += 3; /* higher pitch so we can distinguish left and right. */
- if( data->right_phase >= TABLE_SIZE ) data->right_phase -= TABLE_SIZE;
- }
- data->sampsToGo -= framesPerBuffer;
- }
- return finished;
-}
-/*******************************************************************/
-#ifdef MACINTOSH
-int main(void);
-int main(void)
-{
- int i;
- PaError err;
- int numLoops = 10;
- printf("Loop %d times.\n", numLoops );
-    for( i=0; i<numLoops; i++ )
-    {
-        err = TestOnce();
-        if( err < 0 ) return 0;
-    }
-    return 0;
-}
-#else
-int main(int argc, char **argv);
-int main(int argc, char **argv)
-{
-    PaError err;
-    int i, numLoops = 10;
-    if( argc > 1 )
-    {
-        numLoops = atoi(argv[1]);
-    }
-    for( i=0; i<numLoops; i++ )
-    {
-        printf("Loop #%d\n", i );
-        err = TestOnce();
-        if( err < 0 ) break;
-    }
-    return 0;
-}
-#endif
-
-/*******************************************************************/
-PaError TestOnce( void )
-{
-    PaStreamParameters outputParameters;
-    PaStream *stream;
-    PaError err;
-    paTestData data;
-    int i;
-    /* initialise sinusoidal wavetable */
-    for( i=0; i<TABLE_SIZE; i++ )
-    {
-        data.sine[i] = (short) (32767.0 * sin( ((double)i/(double)TABLE_SIZE) * M_PI * 2. ));
-    }
-    data.left_phase = data.right_phase = 0;
-    data.sampsToGo = NUM_SECONDS * SAMPLE_RATE; /* Play for a few seconds. */
-
-    err = Pa_Initialize();
-    if( err != paNoError ) goto error;
-
-    outputParameters.device = Pa_GetDefaultOutputDevice(); /* default output device */
-    if( outputParameters.device == paNoDevice )
-    {
-        fprintf(stderr,"Error: No default output device.\n");
-        goto error;
-    }
-    outputParameters.channelCount = 2;       /* stereo output */
-    outputParameters.sampleFormat = paInt16;
-    outputParameters.suggestedLatency = Pa_GetDeviceInfo( outputParameters.device )->defaultLowOutputLatency;
- outputParameters.hostApiSpecificStreamInfo = NULL;
- err = Pa_OpenStream(
- &stream,
- NULL, /* no input */
- &outputParameters,
- SAMPLE_RATE,
- 1024, /* frames per buffer */
- paClipOff, /* we won't output out of range samples so don't bother clipping them */
- patest1Callback,
- &data );
- if( err != paNoError ) goto error;
-
- err = Pa_StartStream( stream );
- if( err != paNoError ) goto error;
- printf("Waiting for sound to finish.\n");
- Pa_Sleep(1000);
- err = Pa_CloseStream( stream );
- if( err != paNoError ) goto error;
- Pa_Terminate();
- return paNoError;
-error:
- Pa_Terminate();
- fprintf( stderr, "An error occurred while using the portaudio stream\n" );
- fprintf( stderr, "Error number: %d\n", err );
- fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) );
- return err;
-}
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/update_gitrevision.sh b/spaces/amarchheda/ChordDuplicate/portaudio/update_gitrevision.sh
deleted file mode 100644
index 80d4f3b12d70dda17af34b07fe5e0dc99722afaa..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/update_gitrevision.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-#
-# Write the Git commit SHA to an include file.
-# This should be run before compiling code on Linux or Macintosh.
-#
-revision_filename=src/common/pa_gitrevision.h
-
-# Run git first to make sure it is installed before corrupting the
-# include file.
-git rev-parse HEAD
-
-# Update the include file with the current Git revision.
-echo -n "#define PA_GIT_REVISION " > ${revision_filename}
-git rev-parse HEAD >> ${revision_filename}
-
-echo ${revision_filename} now contains
-cat ${revision_filename}
diff --git a/spaces/amsterdamNLP/CLIP-attention-rollout/clip_grounding/datasets/png.py b/spaces/amsterdamNLP/CLIP-attention-rollout/clip_grounding/datasets/png.py
deleted file mode 100644
index ee17deb2effe8c558e373764b5c9c75e3399c155..0000000000000000000000000000000000000000
--- a/spaces/amsterdamNLP/CLIP-attention-rollout/clip_grounding/datasets/png.py
+++ /dev/null
@@ -1,231 +0,0 @@
-"""
-Dataset object for Panoptic Narrative Grounding.
-
-Paper: https://openaccess.thecvf.com/content/ICCV2021/papers/Gonzalez_Panoptic_Narrative_Grounding_ICCV_2021_paper.pdf
-"""
-
-import os
-from os.path import join, isdir, exists
-
-import torch
-from torch.utils.data import Dataset
-import cv2
-from PIL import Image
-from skimage import io
-import numpy as np
-import textwrap
-import matplotlib.pyplot as plt
-from matplotlib import transforms
-from imgaug.augmentables.segmaps import SegmentationMapsOnImage
-import matplotlib.colors as mc
-
-from clip_grounding.utils.io import load_json
-from clip_grounding.datasets.png_utils import show_image_and_caption
-
-
-class PNG(Dataset):
- """Panoptic Narrative Grounding."""
-
- def __init__(self, dataset_root, split) -> None:
- """
- Initializer.
-
- Args:
- dataset_root (str): path to the folder containing PNG dataset
- split (str): MS-COCO split such as train2017/val2017
- """
- super().__init__()
-
- assert isdir(dataset_root)
- self.dataset_root = dataset_root
-
- assert split in ["val2017"], f"Split {split} not supported. "\
- "Currently, only supports split `val2017`."
- self.split = split
-
- self.ann_dir = join(self.dataset_root, "annotations")
- # feat_dir = join(self.dataset_root, "features")
-
- panoptic = load_json(join(self.ann_dir, "panoptic_{:s}.json".format(split)))
- images = panoptic["images"]
- self.images_info = {i["id"]: i for i in images}
- panoptic_anns = panoptic["annotations"]
- self.panoptic_anns = {int(a["image_id"]): a for a in panoptic_anns}
-
- # self.panoptic_pred_path = join(
- # feat_dir, split, "panoptic_seg_predictions"
- # )
- # assert isdir(self.panoptic_pred_path)
-
- panoptic_narratives_path = join(self.dataset_root, "annotations", f"png_coco_{split}.json")
- self.panoptic_narratives = load_json(panoptic_narratives_path)
-
- def __len__(self):
- return len(self.panoptic_narratives)
-
- def get_image_path(self, image_id: str):
- image_path = join(self.dataset_root, "images", self.split, f"{image_id.zfill(12)}.jpg")
- return image_path
-
- def __getitem__(self, idx: int):
- narr = self.panoptic_narratives[idx]
-
- image_id = narr["image_id"]
- image_path = self.get_image_path(image_id)
- assert exists(image_path)
-
- image = Image.open(image_path)
- caption = narr["caption"]
-
- # show_single_image(image, title=caption, titlesize=12)
-
- segments = narr["segments"]
-
- image_id = int(narr["image_id"])
- panoptic_ann = self.panoptic_anns[image_id]
- panoptic_ann = self.panoptic_anns[image_id]
- segment_infos = {}
- for s in panoptic_ann["segments_info"]:
- idi = s["id"]
- segment_infos[idi] = s
-
- image_info = self.images_info[image_id]
- panoptic_segm = io.imread(
- join(
- self.ann_dir,
- "panoptic_segmentation",
- self.split,
- "{:012d}.png".format(image_id),
- )
- )
- panoptic_segm = (
- panoptic_segm[:, :, 0]
- + panoptic_segm[:, :, 1] * 256
- + panoptic_segm[:, :, 2] * 256 ** 2
- )
-
- panoptic_ann = self.panoptic_anns[image_id]
- # panoptic_pred = io.imread(
- # join(self.panoptic_pred_path, "{:012d}.png".format(image_id))
- # )[:, :, 0]
-
-
- # # select a single utterance to visualize
- # segment = segments[7]
- # segment_ids = segment["segment_ids"]
- # segment_mask = np.zeros((image_info["height"], image_info["width"]))
- # for segment_id in segment_ids:
- # segment_id = int(segment_id)
- # segment_mask[panoptic_segm == segment_id] = 1.
-
- utterances = [s["utterance"] for s in segments]
- outputs = []
- for i, segment in enumerate(segments):
-
- # create segmentation mask on image
- segment_ids = segment["segment_ids"]
-
- # if no annotation for this word, skip
- if not len(segment_ids):
- continue
-
- segment_mask = np.zeros((image_info["height"], image_info["width"]))
- for segment_id in segment_ids:
- segment_id = int(segment_id)
- segment_mask[panoptic_segm == segment_id] = 1.
-
- # store the outputs
- text_mask = np.zeros(len(utterances))
- text_mask[i] = 1.
- segment_data = dict(
- image=image,
- text=utterances,
- image_mask=segment_mask,
- text_mask=text_mask,
- full_caption=caption,
- )
- outputs.append(segment_data)
-
- # # visualize segmentation mask with associated text
- # segment_color = "red"
- # segmap = SegmentationMapsOnImage(
- # segment_mask.astype(np.uint8), shape=segment_mask.shape,
- # )
- # image_with_segmap = segmap.draw_on_image(np.asarray(image), colors=[0, COLORS[segment_color]])[0]
- # image_with_segmap = Image.fromarray(image_with_segmap)
-
- # colors = ["black" for _ in range(len(utterances))]
- # colors[i] = segment_color
- # show_image_and_caption(image_with_segmap, utterances, colors)
-
- return outputs
-
-
-def overlay_segmask_on_image(image, image_mask, segment_color="red"):
- segmap = SegmentationMapsOnImage(
- image_mask.astype(np.uint8), shape=image_mask.shape,
- )
- rgb_color = mc.to_rgb(segment_color)
- rgb_color = 255 * np.array(rgb_color)
- image_with_segmap = segmap.draw_on_image(np.asarray(image), colors=[0, rgb_color])[0]
- image_with_segmap = Image.fromarray(image_with_segmap)
- return image_with_segmap
-
-
-def get_text_colors(text, text_mask, segment_color="red"):
- colors = ["black" for _ in range(len(text))]
- colors[text_mask.nonzero()[0][0]] = segment_color
- return colors
-
-
-def overlay_relevance_map_on_image(image, heatmap):
- width, height = image.size
-
- # resize the heatmap to image size
- heatmap = cv2.resize(heatmap, (width, height))
- heatmap = np.uint8(255 * heatmap)
- heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
- heatmap = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB)
-
- # create overlapped super image
- img = np.asarray(image)
- super_img = heatmap * 0.4 + img * 0.6
- super_img = np.uint8(super_img)
- super_img = Image.fromarray(super_img)
-
- return super_img
-
-
-def visualize_item(image, text, image_mask, text_mask, segment_color="red"):
-
- segmap = SegmentationMapsOnImage(
- image_mask.astype(np.uint8), shape=image_mask.shape,
- )
- rgb_color = mc.to_rgb(segment_color)
- rgb_color = 255 * np.array(rgb_color)
- image_with_segmap = segmap.draw_on_image(np.asarray(image), colors=[0, rgb_color])[0]
- image_with_segmap = Image.fromarray(image_with_segmap)
-
- colors = ["black" for _ in range(len(text))]
-
- text_idx = text_mask.argmax()
- colors[text_idx] = segment_color
- show_image_and_caption(image_with_segmap, text, colors)
-
-
-
-if __name__ == "__main__":
- from clip_grounding.utils.paths import REPO_PATH, DATASET_ROOTS
-
- PNG_ROOT = DATASET_ROOTS["PNG"]
- dataset = PNG(dataset_root=PNG_ROOT, split="val2017")
-
- item = dataset[0]
- sub_item = item[1]
- visualize_item(
- image=sub_item["image"],
- text=sub_item["text"],
- image_mask=sub_item["image_mask"],
- text_mask=sub_item["text_mask"],
- segment_color="red",
- )
diff --git a/spaces/antonovmaxim/text-generation-webui-space/extensions/api/streaming_api.py b/spaces/antonovmaxim/text-generation-webui-space/extensions/api/streaming_api.py
deleted file mode 100644
index e50dfa2266594f9edc7fb2b6f8659f275236279f..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/extensions/api/streaming_api.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import asyncio
-import json
-from threading import Thread
-
-from websockets.server import serve
-
-from extensions.api.util import build_parameters, try_start_cloudflared
-from modules import shared
-from modules.text_generation import generate_reply
-
-PATH = '/api/v1/stream'
-
-
-async def _handle_connection(websocket, path):
-
- if path != PATH:
- print(f'Streaming api: unknown path: {path}')
- return
-
- async for message in websocket:
- message = json.loads(message)
-
- prompt = message['prompt']
- generate_params = build_parameters(message)
- stopping_strings = generate_params.pop('stopping_strings')
- generate_params['stream'] = True
-
- generator = generate_reply(
- prompt, generate_params, stopping_strings=stopping_strings, is_chat=False)
-
- # As we stream, only send the new bytes.
- skip_index = 0
- message_num = 0
-
- for a in generator:
- to_send = a[skip_index:]
- await websocket.send(json.dumps({
- 'event': 'text_stream',
- 'message_num': message_num,
- 'text': to_send
- }))
-
- await asyncio.sleep(0)
-
- skip_index += len(to_send)
- message_num += 1
-
- await websocket.send(json.dumps({
- 'event': 'stream_end',
- 'message_num': message_num
- }))
-
-
-async def _run(host: str, port: int):
- async with serve(_handle_connection, host, port, ping_interval=None):
- await asyncio.Future() # run forever
-
-
-def _run_server(port: int, share: bool = False):
- address = '0.0.0.0' if shared.args.listen else '127.0.0.1'
-
- def on_start(public_url: str):
- public_url = public_url.replace('https://', 'wss://')
- print(f'Starting streaming server at public url {public_url}{PATH}')
-
- if share:
- try:
- try_start_cloudflared(port, max_attempts=3, on_start=on_start)
- except Exception as e:
- print(e)
- else:
- print(f'Starting streaming server at ws://{address}:{port}{PATH}')
-
- asyncio.run(_run(host=address, port=port))
-
-
-def start_server(port: int, share: bool = False):
- Thread(target=_run_server, args=[port, share], daemon=True).start()
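
A minimal client sketch for the streaming endpoint above, assuming the `websockets` package is installed and the server listens on 127.0.0.1:5005; the host, port, and `max_new_tokens` value are assumptions, since only `prompt` is read directly by `_handle_connection` and the rest is passed through `build_parameters`:

```python
import asyncio
import json

import websockets  # pip install websockets

STREAM_URI = "ws://127.0.0.1:5005/api/v1/stream"  # host/port are assumed defaults


async def stream_generation(prompt: str) -> None:
    async with websockets.connect(STREAM_URI) as websocket:
        await websocket.send(json.dumps({"prompt": prompt, "max_new_tokens": 200}))

        # The server emits 'text_stream' events carrying only the newly generated
        # text, followed by a single 'stream_end' event.
        async for message in websocket:
            data = json.loads(message)
            if data["event"] == "text_stream":
                print(data["text"], end="", flush=True)
            elif data["event"] == "stream_end":
                break


if __name__ == "__main__":
    asyncio.run(stream_generation("Once upon a time"))
```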
diff --git a/spaces/apsys/hetfit/docs/main.html b/spaces/apsys/hetfit/docs/main.html
deleted file mode 100644
index 21de8388271812186b81e701b752277016e70ae6..0000000000000000000000000000000000000000
--- a/spaces/apsys/hetfit/docs/main.html
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
-
-
- main.py
-
-
-
-
-import streamlit as st
-
-from nets.envs import SCI
-
-
-st.set_page_config(
- page_title="HET_sci",
- menu_items={
- 'About':'https://advpropsys.github.io'
- }
-)
-
-st.title('HETfit_scientific')
-st.markdown("#### Imagine a package which was engineered primarily for data driven plasma physics device design, mainly Hall effect thrusters, yup that's it"
-            "\n### :orange[Don't be scared away though, it has a much simpler interface than anything you ever used for such designs]")
-st.markdown('### Main concepts:')
-st.markdown("- Each observational/design session is called an **environment**, for now it can be either RCI or SCI (Real or scaled interface)"
- "\n In this overview we will only touch SCI, since RCI is using PINNs which are different topic"
- "\n- You specify most of the run parameters on this object init, :orange[**including generation of new samples**] via GAN"
- "\n- You may want to generate new features, do it !"
-            "\n- Want to select the best features for more effective work? Done!"
- "\n- Compile environment with your model of choice, can be ***any*** torch model or sklearn one"
- "\n- Train !"
- "\n- Plot, inference, save, export to jit/onnx, measure performance - **they all are one liners** "
- )
-st.markdown('### tl;dr \n- Create environment'
- '\n```run = SCI(*args,**kwargs)```'
- '\n - Generate features ```run.feature_gen()``` '
- '\n - Select features ```run.feature_importance()```'
- '\n - Compile env ```run.compile()```'
- '\n - Train model in env ```run.train()```'
- '\n - Inference, plot, performance, ex. ```run.plot3d()```'
- '\n #### And yes, it all will work even without any additional arguments from user besides column indexes'
- )
-st.write('Comparison with *arXiv:2206.04440v3*')
-col1,col2=st.columns(2)
-col1.metric('Geometry accuracy on domain',value='83%',delta='15%')
-col2.metric('$d \mapsto h$ prediction',value='98%',delta='14%')
-
-st.header('Example:')
-
-st.markdown('Remember the indexes and column names in this example: $P$ - 1, $d$ - 3, $h$ - 3, $m_a$ - 6, $T$ - 7')
-st.code('run = SCI(*args,**kwargs)')
-
-run=SCI()
-st.code('run.feature_gen()')
-run.feature_gen()
-st.write('New features: (index-0:22 original samples, else is GAN generated)',run.df.iloc[1:,9:].astype(float))
-st.write('Most of the real dataset is from *doi:10.2514/1.B37424*, hence the results mostly agree with it in specifics')
-st.code('run.feature_importance(run.df.iloc[1:,1:7].astype(float),run.df.iloc[1:,7]) # Clear and easy example')
-
-st.write(run.feature_importance(run.df.iloc[1:,1:6].astype(float),run.df.iloc[1:,6]))
-st.markdown(' As we can see, only $h$ and $d$ passed for the $m_a$ model; not only was the linear dependency proven experimentally, but now we obtained it from a data driven source')
-st.code('run.compile(idx=(1,3,7))')
-run.compile(idx=(1,3,7))
-st.code('run.train(epochs=10)')
-run.train(epochs=10)
-st.code('run.plot3d()')
-st.write(run.plot3d())
-st.code('run.performance()')
-st.write(run.performance())
-
-st.write('Try it out yourself! Select a column from 1 to 10')
-number=st.number_input('Here',min_value=1,max_value=10,step=1)
-
-if number:
- st.code(f'run.compile(idx=(1,3,{number}))')
- run.compile(idx=(1,3,number))
- st.code('run.train(epochs=10)')
- run.train(epochs=10)
- st.code('run.plot3d()')
- st.write(run.plot3d())
-
-
-
-st.markdown('In this intro we covered the simplest user flow while using the HETFit package; the resulting data can be used to leverage PINN and analytical models of Hall effect thrusters'
- '\n #### :orange[To cite please contact author on https://github.com/advpropsys]')
-
-
-
-
-
-
-
diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/vits.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/vits.md
deleted file mode 100644
index 0c303f7a957f1a27be9028c1f596368919303ecd..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/vits.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# VITS
-
-VITS (Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
-) is an End-to-End (encoder -> vocoder together) TTS model that takes advantage of SOTA DL techniques like GANs, VAE,
-Normalizing Flows. It does not require external alignment annotations and learns the text-to-audio alignment
-using MAS, as explained in the paper. The model architecture is a combination of GlowTTS encoder and HiFiGAN vocoder.
-It is a feed-forward model with x67.12 real-time factor on a GPU.
-
-🐸 YourTTS is a multi-speaker and multi-lingual TTS model that can perform voice conversion and zero-shot speaker adaptation.
-It can also learn a new language or voice with a ~ 1 minute long audio clip. This is a big open gate for training
-TTS models in low-resources languages. 🐸 YourTTS uses VITS as the backbone architecture coupled with a speaker encoder model.
-
-## Important resources & papers
-- 🐸 YourTTS: https://arxiv.org/abs/2112.02418
-- VITS: https://arxiv.org/pdf/2106.06103.pdf
-- Neural Spline Flows: https://arxiv.org/abs/1906.04032
-- Variational Autoencoder: https://arxiv.org/pdf/1312.6114.pdf
-- Generative Adversarial Networks: https://arxiv.org/abs/1406.2661
-- HiFiGAN: https://arxiv.org/abs/2010.05646
-- Normalizing Flows: https://blog.evjang.com/2018/01/nf1.html
-
-## VitsConfig
-```{eval-rst}
-.. autoclass:: TTS.tts.configs.vits_config.VitsConfig
- :members:
-```
-
-## VitsArgs
-```{eval-rst}
-.. autoclass:: TTS.tts.models.vits.VitsArgs
- :members:
-```
-
-## Vits Model
-```{eval-rst}
-.. autoclass:: TTS.tts.models.vits.Vits
- :members:
-```
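
A short usage sketch to complement the API reference above, assuming the standard 🐸TTS Python API; the pretrained model name used here is an example and would need to be present in the released model list:

```python
from TTS.api import TTS

# Load a pretrained VITS model (model name is an example) and synthesize speech to a file.
tts = TTS(model_name="tts_models/en/ljspeech/vits")
tts.tts_to_file(text="VITS is an end-to-end text-to-speech model.", file_path="vits_sample.wav")
```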
diff --git a/spaces/auto-academic/auto-draft/latex_templates/ICLR2022/conclusion.tex b/spaces/auto-academic/auto-draft/latex_templates/ICLR2022/conclusion.tex
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/avans06/whisper-webui-translate/LICENSE.md b/spaces/avans06/whisper-webui-translate/LICENSE.md
deleted file mode 100644
index f5f4b8b5ecd27c09e4ef16e9662bcb7bb2bfc76f..0000000000000000000000000000000000000000
--- a/spaces/avans06/whisper-webui-translate/LICENSE.md
+++ /dev/null
@@ -1,195 +0,0 @@
-Apache License
-==============
-
-_Version 2.0, January 2004_
-_<http://www.apache.org/licenses/>_
-
-### Terms and Conditions for use, reproduction, and distribution
-
-#### 1. Definitions
-
-“License” shall mean the terms and conditions for use, reproduction, and
-distribution as defined by Sections 1 through 9 of this document.
-
-“Licensor” shall mean the copyright owner or entity authorized by the copyright
-owner that is granting the License.
-
-“Legal Entity” shall mean the union of the acting entity and all other entities
-that control, are controlled by, or are under common control with that entity.
-For the purposes of this definition, “control” means **(i)** the power, direct or
-indirect, to cause the direction or management of such entity, whether by
-contract or otherwise, or **(ii)** ownership of fifty percent (50%) or more of the
-outstanding shares, or **(iii)** beneficial ownership of such entity.
-
-“You” (or “Your”) shall mean an individual or Legal Entity exercising
-permissions granted by this License.
-
-“Source” form shall mean the preferred form for making modifications, including
-but not limited to software source code, documentation source, and configuration
-files.
-
-“Object” form shall mean any form resulting from mechanical transformation or
-translation of a Source form, including but not limited to compiled object code,
-generated documentation, and conversions to other media types.
-
-“Work” shall mean the work of authorship, whether in Source or Object form, made
-available under the License, as indicated by a copyright notice that is included
-in or attached to the work (an example is provided in the Appendix below).
-
-“Derivative Works” shall mean any work, whether in Source or Object form, that
-is based on (or derived from) the Work and for which the editorial revisions,
-annotations, elaborations, or other modifications represent, as a whole, an
-original work of authorship. For the purposes of this License, Derivative Works
-shall not include works that remain separable from, or merely link (or bind by
-name) to the interfaces of, the Work and Derivative Works thereof.
-
-“Contribution” shall mean any work of authorship, including the original version
-of the Work and any modifications or additions to that Work or Derivative Works
-thereof, that is intentionally submitted to Licensor for inclusion in the Work
-by the copyright owner or by an individual or Legal Entity authorized to submit
-on behalf of the copyright owner. For the purposes of this definition,
-“submitted” means any form of electronic, verbal, or written communication sent
-to the Licensor or its representatives, including but not limited to
-communication on electronic mailing lists, source code control systems, and
-issue tracking systems that are managed by, or on behalf of, the Licensor for
-the purpose of discussing and improving the Work, but excluding communication
-that is conspicuously marked or otherwise designated in writing by the copyright
-owner as “Not a Contribution.”
-
-“Contributor” shall mean Licensor and any individual or Legal Entity on behalf
-of whom a Contribution has been received by Licensor and subsequently
-incorporated within the Work.
-
-#### 2. Grant of Copyright License
-
-Subject to the terms and conditions of this License, each Contributor hereby
-grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
-irrevocable copyright license to reproduce, prepare Derivative Works of,
-publicly display, publicly perform, sublicense, and distribute the Work and such
-Derivative Works in Source or Object form.
-
-#### 3. Grant of Patent License
-
-Subject to the terms and conditions of this License, each Contributor hereby
-grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
-irrevocable (except as stated in this section) patent license to make, have
-made, use, offer to sell, sell, import, and otherwise transfer the Work, where
-such license applies only to those patent claims licensable by such Contributor
-that are necessarily infringed by their Contribution(s) alone or by combination
-of their Contribution(s) with the Work to which such Contribution(s) was
-submitted. If You institute patent litigation against any entity (including a
-cross-claim or counterclaim in a lawsuit) alleging that the Work or a
-Contribution incorporated within the Work constitutes direct or contributory
-patent infringement, then any patent licenses granted to You under this License
-for that Work shall terminate as of the date such litigation is filed.
-
-#### 4. Redistribution
-
-You may reproduce and distribute copies of the Work or Derivative Works thereof
-in any medium, with or without modifications, and in Source or Object form,
-provided that You meet the following conditions:
-
-* **(a)** You must give any other recipients of the Work or Derivative Works a copy of
-this License; and
-* **(b)** You must cause any modified files to carry prominent notices stating that You
-changed the files; and
-* **(c)** You must retain, in the Source form of any Derivative Works that You distribute,
-all copyright, patent, trademark, and attribution notices from the Source form
-of the Work, excluding those notices that do not pertain to any part of the
-Derivative Works; and
-* **(d)** If the Work includes a “NOTICE” text file as part of its distribution, then any
-Derivative Works that You distribute must include a readable copy of the
-attribution notices contained within such NOTICE file, excluding those notices
-that do not pertain to any part of the Derivative Works, in at least one of the
-following places: within a NOTICE text file distributed as part of the
-Derivative Works; within the Source form or documentation, if provided along
-with the Derivative Works; or, within a display generated by the Derivative
-Works, if and wherever such third-party notices normally appear. The contents of
-the NOTICE file are for informational purposes only and do not modify the
-License. You may add Your own attribution notices within Derivative Works that
-You distribute, alongside or as an addendum to the NOTICE text from the Work,
-provided that such additional attribution notices cannot be construed as
-modifying the License.
-
-You may add Your own copyright statement to Your modifications and may provide
-additional or different license terms and conditions for use, reproduction, or
-distribution of Your modifications, or for any such Derivative Works as a whole,
-provided Your use, reproduction, and distribution of the Work otherwise complies
-with the conditions stated in this License.
-
-#### 5. Submission of Contributions
-
-Unless You explicitly state otherwise, any Contribution intentionally submitted
-for inclusion in the Work by You to the Licensor shall be under the terms and
-conditions of this License, without any additional terms or conditions.
-Notwithstanding the above, nothing herein shall supersede or modify the terms of
-any separate license agreement you may have executed with Licensor regarding
-such Contributions.
-
-#### 6. Trademarks
-
-This License does not grant permission to use the trade names, trademarks,
-service marks, or product names of the Licensor, except as required for
-reasonable and customary use in describing the origin of the Work and
-reproducing the content of the NOTICE file.
-
-#### 7. Disclaimer of Warranty
-
-Unless required by applicable law or agreed to in writing, Licensor provides the
-Work (and each Contributor provides its Contributions) on an “AS IS” BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
-including, without limitation, any warranties or conditions of TITLE,
-NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
-solely responsible for determining the appropriateness of using or
-redistributing the Work and assume any risks associated with Your exercise of
-permissions under this License.
-
-#### 8. Limitation of Liability
-
-In no event and under no legal theory, whether in tort (including negligence),
-contract, or otherwise, unless required by applicable law (such as deliberate
-and grossly negligent acts) or agreed to in writing, shall any Contributor be
-liable to You for damages, including any direct, indirect, special, incidental,
-or consequential damages of any character arising as a result of this License or
-out of the use or inability to use the Work (including but not limited to
-damages for loss of goodwill, work stoppage, computer failure or malfunction, or
-any and all other commercial damages or losses), even if such Contributor has
-been advised of the possibility of such damages.
-
-#### 9. Accepting Warranty or Additional Liability
-
-While redistributing the Work or Derivative Works thereof, You may choose to
-offer, and charge a fee for, acceptance of support, warranty, indemnity, or
-other liability obligations and/or rights consistent with this License. However,
-in accepting such obligations, You may act only on Your own behalf and on Your
-sole responsibility, not on behalf of any other Contributor, and only if You
-agree to indemnify, defend, and hold each Contributor harmless for any liability
-incurred by, or claims asserted against, such Contributor by reason of your
-accepting any such warranty or additional liability.
-
-_END OF TERMS AND CONDITIONS_
-
-### APPENDIX: How to apply the Apache License to your work
-
-To apply the Apache License to your work, attach the following boilerplate
-notice, with the fields enclosed by brackets `[]` replaced with your own
-identifying information. (Don't include the brackets!) The text should be
-enclosed in the appropriate comment syntax for the file format. We also
-recommend that a file or class name and description of purpose be included on
-the same “printed page” as the copyright notice for easier identification within
-third-party archives.
-
- Copyright [yyyy] [name of copyright owner]
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
diff --git a/spaces/awacke1/1.ChatGPT-HuggingFace-Spaces-NLP-Transformers-Pipeline/app.py b/spaces/awacke1/1.ChatGPT-HuggingFace-Spaces-NLP-Transformers-Pipeline/app.py
deleted file mode 100644
index b6e81f385d4256b4a286f1a67a983d3912d949e0..0000000000000000000000000000000000000000
--- a/spaces/awacke1/1.ChatGPT-HuggingFace-Spaces-NLP-Transformers-Pipeline/app.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import gradio as gr
-import os
-import json
-import requests
-
-#Streaming endpoint
-API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream"
-
-#Testing with my Open AI Key
-OPENAI_API_KEY = os.getenv("ChatGPT") # Key 03-23
-
-def predict(inputs, top_p, temperature, openai_api_key, chat_counter, chatbot=[], history=[]): #repetition_penalty, top_k
-
- payload = {
- "model": "gpt-3.5-turbo",
- "messages": [{"role": "user", "content": f"{inputs}"}],
- "temperature" : 1.0,
- "top_p":1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,
- }
-
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_api_key}"
- }
-
- print(f"chat_counter - {chat_counter}")
- if chat_counter != 0 :
- messages=[]
- for data in chatbot:
- temp1 = {}
- temp1["role"] = "user"
- temp1["content"] = data[0]
- temp2 = {}
- temp2["role"] = "assistant"
- temp2["content"] = data[1]
- messages.append(temp1)
- messages.append(temp2)
- temp3 = {}
- temp3["role"] = "user"
- temp3["content"] = inputs
- messages.append(temp3)
- #messages
- payload = {
- "model": "gpt-3.5-turbo",
- "messages": messages, #[{"role": "user", "content": f"{inputs}"}],
- "temperature" : temperature, #1.0,
- "top_p": top_p, #1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,
- }
-
- chat_counter+=1
-
- history.append(inputs)
- print(f"payload is - {payload}")
- # make a POST request to the API endpoint using the requests.post method, passing in stream=True
- response = requests.post(API_URL, headers=headers, json=payload, stream=True)
- #response = requests.post(API_URL, headers=headers, json=payload, stream=True)
- token_counter = 0
- partial_words = ""
-
- counter=0
- for chunk in response.iter_lines():
- #Skipping first chunk
- if counter == 0:
- counter+=1
- continue
- #counter+=1
- # check whether each line is non-empty
- if chunk.decode() :
- chunk = chunk.decode()
- # decode each line as response data is in bytes
- if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']:
- #if len(json.loads(chunk.decode()[6:])['choices'][0]["delta"]) == 0:
- # break
- partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"]
- if token_counter == 0:
- history.append(" " + partial_words)
- else:
- history[-1] = partial_words
- chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list
- token_counter+=1
- yield chat, history, chat_counter # resembles {chatbot: chat, state: history}
-
-
-def reset_textbox():
- return gr.update(value='')
-
-title = """<h1 align="center">🔥ChatGPT API 🚀Streaming🚀</h1>"""
-description = """Language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form:
-```
-User: <utterance>
-Assistant: <utterance>
-User: <utterance>
-Assistant: <utterance>
-...
-```
-In this app, you can explore the outputs of a gpt-3.5-turbo LLM.
-"""
-
-with gr.Blocks(css = """#col_container {width: 1000px; margin-left: auto; margin-right: auto;}
- #chatbot {height: 520px; overflow: auto;}""") as demo:
- gr.HTML(title)
-    gr.HTML('''Duplicate the Space and run securely with your OpenAI API Key''')
- with gr.Column(elem_id = "col_container"):
- openai_api_key = gr.Textbox(type='password', label="Enter your OpenAI API key here")
- chatbot = gr.Chatbot(elem_id='chatbot') #c
- inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter") #t
- state = gr.State([]) #s
- b1 = gr.Button()
-
- #inputs, top_p, temperature, top_k, repetition_penalty
- with gr.Accordion("Parameters", open=False):
- top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",)
- temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",)
- #top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",)
- #repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", )
- chat_counter = gr.Number(value=0, visible=False, precision=0)
-
- inputs.submit( predict, [inputs, top_p, temperature, openai_api_key, chat_counter, chatbot, state], [chatbot, state, chat_counter],)
- b1.click( predict, [inputs, top_p, temperature, openai_api_key, chat_counter, chatbot, state], [chatbot, state, chat_counter],)
- b1.click(reset_textbox, [], [inputs])
- inputs.submit(reset_textbox, [], [inputs])
-
- #gr.Markdown(description)
- demo.queue().launch(debug=True)
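
For context on the `chunk[6:]` slicing in `predict` above: the streaming chat completions endpoint returns server-sent events whose lines start with the 6-character prefix `data: `. A minimal parsing sketch, with an illustrative (not real) event payload:

```python
import json

# An illustrative server-sent-event line as produced by the streaming endpoint.
chunk = 'data: {"choices": [{"delta": {"content": "Hello"}}]}'

# Strip the 6-character "data: " prefix before parsing, exactly as the app does with chunk[6:].
delta = json.loads(chunk[6:])["choices"][0]["delta"]
print(delta.get("content", ""))  # -> Hello
```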
diff --git a/spaces/awacke1/ChatGPTStreamlit8/app.py b/spaces/awacke1/ChatGPTStreamlit8/app.py
deleted file mode 100644
index 9fcd544b1048b3bbe3efd012716150f93ac3564d..0000000000000000000000000000000000000000
--- a/spaces/awacke1/ChatGPTStreamlit8/app.py
+++ /dev/null
@@ -1,258 +0,0 @@
-import streamlit as st
-import openai
-import os
-import base64
-import glob
-import json
-import mistune
-import pytz
-import math
-import requests
-
-from datetime import datetime
-from openai import ChatCompletion
-from xml.etree import ElementTree as ET
-from bs4 import BeautifulSoup
-from collections import deque
-from audio_recorder_streamlit import audio_recorder
-
-def generate_filename(prompt, file_type):
- central = pytz.timezone('US/Central')
- safe_date_time = datetime.now(central).strftime("%m%d_%I%M")
- safe_prompt = "".join(x for x in prompt if x.isalnum())[:45]
- return f"{safe_date_time}_{safe_prompt}.{file_type}"
-
-def chat_with_model(prompt, document_section):
- model = model_choice
- conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
- conversation.append({'role': 'user', 'content': prompt})
- if len(document_section)>0:
- conversation.append({'role': 'assistant', 'content': document_section})
- response = openai.ChatCompletion.create(model=model, messages=conversation)
- #return response
- return response['choices'][0]['message']['content']
-
-def transcribe_audio(openai_key, file_path, model):
- OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions"
- headers = {
- "Authorization": f"Bearer {openai_key}",
- }
- with open(file_path, 'rb') as f:
- data = {'file': f}
- response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model})
- if response.status_code == 200:
- st.write(response.json())
-
- response2 = chat_with_model(response.json().get('text'), '') # *************************************
- st.write('Responses:')
- #st.write(response)
- st.write(response2)
- return response.json().get('text')
- else:
- st.write(response.json())
- st.error("Error in API call.")
- return None
-
-def save_and_play_audio(audio_recorder):
- audio_bytes = audio_recorder()
- if audio_bytes:
- filename = generate_filename("Recording", "wav")
- with open(filename, 'wb') as f:
- f.write(audio_bytes)
- st.audio(audio_bytes, format="audio/wav")
- return filename
- return None
-
-def create_file(filename, prompt, response):
- if filename.endswith(".txt"):
- with open(filename, 'w') as file:
- file.write(f"{prompt}\n{response}")
- elif filename.endswith(".htm"):
- with open(filename, 'w') as file:
- file.write(f"{prompt} {response}")
- elif filename.endswith(".md"):
- with open(filename, 'w') as file:
- file.write(f"{prompt}\n\n{response}")
-
-def truncate_document(document, length):
- return document[:length]
-def divide_document(document, max_length):
- return [document[i:i+max_length] for i in range(0, len(document), max_length)]
-
-def get_table_download_link(file_path):
- with open(file_path, 'r') as file:
- data = file.read()
- b64 = base64.b64encode(data.encode()).decode()
- file_name = os.path.basename(file_path)
- ext = os.path.splitext(file_name)[1] # get the file extension
- if ext == '.txt':
- mime_type = 'text/plain'
- elif ext == '.py':
- mime_type = 'text/plain'
- elif ext == '.xlsx':
- mime_type = 'text/plain'
- elif ext == '.csv':
- mime_type = 'text/plain'
- elif ext == '.htm':
- mime_type = 'text/html'
- elif ext == '.md':
- mime_type = 'text/markdown'
- else:
- mime_type = 'application/octet-stream' # general binary data type
-    href = f'<a href="data:{mime_type};base64,{b64}" download="{file_name}">{file_name}</a>'
- return href
-
-def CompressXML(xml_text):
- root = ET.fromstring(xml_text)
- for elem in list(root.iter()):
- if isinstance(elem.tag, str) and 'Comment' in elem.tag:
- elem.parent.remove(elem)
- return ET.tostring(root, encoding='unicode', method="xml")
-
-def read_file_content(file,max_length):
- if file.type == "application/json":
- content = json.load(file)
- return str(content)
- elif file.type == "text/html" or file.type == "text/htm":
- content = BeautifulSoup(file, "html.parser")
- return content.text
- elif file.type == "application/xml" or file.type == "text/xml":
- tree = ET.parse(file)
- root = tree.getroot()
- xml = CompressXML(ET.tostring(root, encoding='unicode'))
- return xml
- elif file.type == "text/markdown" or file.type == "text/md":
- md = mistune.create_markdown()
- content = md(file.read().decode())
- return content
- elif file.type == "text/plain":
- return file.getvalue().decode()
- else:
- return ""
-
-
-
-def chat_with_file_contents(prompt, file_content):
- conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
- conversation.append({'role': 'user', 'content': prompt})
- if len(file_content)>0:
- conversation.append({'role': 'assistant', 'content': file_content})
- response = openai.ChatCompletion.create(model=model_choice, messages=conversation)
- return response['choices'][0]['message']['content']
-
-
-# Sidebar and global
-openai.api_key = os.getenv('OPENAI_KEY')
-st.set_page_config(page_title="GPT Streamlit Document Reasoner",layout="wide")
-menu = ["htm", "txt", "xlsx", "csv", "md", "py"] #619
-choice = st.sidebar.selectbox("Output File Type:", menu)
-model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301'))
-
-# Audio, transcribe, GPT:
-filename = save_and_play_audio(audio_recorder)
-if filename is not None:
- transcription = transcribe_audio(openai.api_key, filename, "whisper-1")
- st.write(transcription)
- gptOutput = chat_with_model(transcription, '') # *************************************
- filename = generate_filename(transcription, choice)
- create_file(filename, transcription, gptOutput)
- st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
-
-
-def main():
- user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100)
-
- collength, colupload = st.columns([2,3]) # adjust the ratio as needed
- with collength:
- #max_length = 12000 - optimal for gpt35 turbo. 2x=24000 for gpt4. 8x=96000 for gpt4-32k.
- max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000)
- with colupload:
- uploaded_file = st.file_uploader("Add a file for context:", type=["xml", "json", "xlsx","csv","html", "htm", "md", "txt"])
-
- document_sections = deque()
- document_responses = {}
-
- if uploaded_file is not None:
- file_content = read_file_content(uploaded_file, max_length)
- document_sections.extend(divide_document(file_content, max_length))
-
- if len(document_sections) > 0:
-
- if st.button("👁️ View Upload"):
- st.markdown("**Sections of the uploaded file:**")
- for i, section in enumerate(list(document_sections)):
- st.markdown(f"**Section {i+1}**\n{section}")
-
- st.markdown("**Chat with the model:**")
- for i, section in enumerate(list(document_sections)):
- if i in document_responses:
- st.markdown(f"**Section {i+1}**\n{document_responses[i]}")
- else:
- if st.button(f"Chat about Section {i+1}"):
- st.write('Reasoning with your inputs...')
- response = chat_with_model(user_prompt, section) # *************************************
- st.write('Response:')
- st.write(response)
- document_responses[i] = response
- filename = generate_filename(f"{user_prompt}_section_{i+1}", choice)
- create_file(filename, user_prompt, response)
- st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
-
- if st.button('💬 Chat'):
- st.write('Reasoning with your inputs...')
- response = chat_with_model(user_prompt, ''.join(list(document_sections))) # *************************************
- st.write('Response:')
- st.write(response)
-
- filename = generate_filename(user_prompt, choice)
- create_file(filename, user_prompt, response)
- st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
-
- all_files = glob.glob("*.*")
- all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names
- all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name in descending order
-
- # sidebar of files
- file_contents=''
- next_action=''
- for file in all_files:
- col1, col2, col3, col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed
- with col1:
- if st.button("🌐", key="md_"+file): # md emoji button
- with open(file, 'r') as f:
- file_contents = f.read()
- next_action='md'
- with col2:
- st.markdown(get_table_download_link(file), unsafe_allow_html=True)
- with col3:
- if st.button("📂", key="open_"+file): # open emoji button
- with open(file, 'r') as f:
- file_contents = f.read()
- next_action='open'
- with col4:
- if st.button("🔍", key="read_"+file): # search emoji button
- with open(file, 'r') as f:
- file_contents = f.read()
- next_action='search'
- with col5:
- if st.button("🗑", key="delete_"+file):
- os.remove(file)
- st.experimental_rerun()
-
- if len(file_contents) > 0:
- if next_action=='open':
- file_content_area = st.text_area("File Contents:", file_contents, height=500)
- if next_action=='md':
- st.markdown(file_contents)
- if next_action=='search':
- file_content_area = st.text_area("File Contents:", file_contents, height=500)
- st.write('Reasoning with your inputs...')
- response = chat_with_file_contents(user_prompt, file_contents)
- st.write('Response:')
- st.write(response)
- filename = generate_filename(file_content_area, choice)
- create_file(filename, file_content_area, response)
- st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
diff --git a/spaces/awacke1/Health-Care-AI-and-Datasets/app.py b/spaces/awacke1/Health-Care-AI-and-Datasets/app.py
deleted file mode 100644
index d97e83e5db380ac3145393d7121cdb1776b35e90..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Health-Care-AI-and-Datasets/app.py
+++ /dev/null
@@ -1,339 +0,0 @@
-import streamlit as st
-from graphviz import Digraph
-
-st.markdown("""
-# Top Five ML Libraries:
-1. PyTorch: PyTorch is a popular open-source machine learning library that is widely used for building deep learning models. It is known for its ease of use and flexibility.
-2. TensorFlow: Developed by Google, TensorFlow is an open-source library that is widely used for building and training deep learning models.
-3. Scikit-learn: Scikit-learn is a popular Python library for machine learning. It provides a wide range of algorithms for classification, regression, clustering, and dimensionality reduction, along with tools for model selection and evaluation.
-4. Keras: Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It is designed to be easy to use and supports both convolutional and recurrent neural networks.
-5. MXNet: MXNet is a deep learning framework that is known for its speed and scalability. It supports multiple programming languages and is used by several large companies for machine learning tasks.
-
-# Pytorch demo with knowledge trees
-""")
-
-import matplotlib.pyplot as plt
-from mpl_toolkits.mplot3d import Axes3D
-import torch
-def plot_knowledge_trees(knowledge_trees):
- fig = plt.figure()
- ax = fig.add_subplot(111, projection='3d')
-
- xs = torch.arange(len(knowledge_trees))
- ys = torch.arange(len(knowledge_trees[0]))
- xs, ys = torch.meshgrid(xs, ys)
-
- zs = []
- for i in range(len(knowledge_trees)):
- zs.append([kt[i] for kt in knowledge_trees])
-
- zs = torch.tensor(zs)
- ax.plot_surface(xs.numpy(), ys.numpy(), zs.numpy(), cmap='coolwarm')
-
- ax.set_xlabel('States')
- ax.set_ylabel('Knowledge Trees')
- ax.set_zlabel('Number of Nodes')
-
- return fig
-
-def plot_population(states, populations):
- fig, ax = plt.subplots()
- ax.bar(states, populations)
-
- ax.set_xlabel('States')
- ax.set_ylabel('Population')
-
- return fig
-
-
-def main():
- st.title('State Populations and Knowledge Trees')
-
- # Define the state populations
- states = ['California', 'Texas', 'Florida', 'New York']
- populations = [39538223, 29145505, 21538187, 19849399]
-
- # Define the knowledge trees for each state
- knowledge_trees = [
- [100, 200, 300, 400],
- [150, 250, 350, 450],
- [120, 220, 320, 420],
- [130, 230, 330, 430]
- ]
-
- # Generate the plots
- st.write('## State Populations')
- fig_population = plot_population(states, populations)
- st.pyplot(fig_population)
-
- st.write('## Descending Order Lists of Knowledge Trees')
- fig_knowledge_trees = plot_knowledge_trees(knowledge_trees)
- st.pyplot(fig_knowledge_trees)
-
-if __name__ == '__main__':
- main()
-
-st.markdown("""
-# Health Related Examples:
-1. Health Conditions By State: https://huggingface.co/spaces/awacke1/HealthConditionsTest
-2. Mental Health ICD10 and DSM Codes: https://huggingface.co/spaces/awacke1/Mental-Health-ICD10-to-DSM
-3. Health Care and AI Datasets: https://huggingface.co/spaces/awacke1/Health-Care-AI-and-Datasets
-4. Zero Shot Classifier Facebook: https://huggingface.co/spaces/awacke1/Zero-shot-classification-facebook-bart-large-mnli
-5. Zero Shot Classifier Valhalla: https://huggingface.co/spaces/awacke1/Zero-Shot-Classification-valhalla-distilbart-mnli-12-1
-
-""")
-
-st.markdown("""
-# 👋 Two easy ways to turbo boost your AI learning journey! 💻
-# 🌐 AI Pair Programming
-## Open 2 Browsers to:
-1. __🌐 ChatGPT__ [URL](https://chat.openai.com/chat) or [URL2](https://platform.openai.com/playground) and
-2. __🌐 Huggingface__ [URL](https://huggingface.co/awacke1) in separate browser windows.
-1. 🤖 Use prompts to generate a streamlit program on Huggingface or locally to test it.
-2. 🔧 For advanced work, add Python 3.10 and VSCode locally, and debug as gradio or streamlit apps.
-3. 🚀 Use these two superpower processes to reduce the time it takes you to make a new AI program! ⏱️
-# 🎥 YouTube University Method:
-1. 🏋️♀️ Plan two hours each weekday to exercise your body and brain.
-2. 🎬 Make a playlist of videos you want to learn from on YouTube. Save the links to edit later.
-3. 🚀 Try watching the videos at a faster speed while exercising, and sample the first five minutes of each video.
-4. 📜 Reorder the playlist so the most useful videos are at the front, and take breaks to exercise.
-5. 📝 Practice note-taking in markdown to instantly save what you want to remember. Share your notes with others!
-6. 👥 AI Pair Programming Using Long Answer Language Models with Human Feedback:
-## 🎥 2023 AI/ML Advanced Learning Playlists:
-1. [2023 QA Models and Long Form Question Answering NLP](https://www.youtube.com/playlist?list=PLHgX2IExbFovrkkx8HMTLNgYdjCMNYmX_)
-2. [FHIR Bioinformatics Development Using AI/ML and Python, Streamlit, and Gradio - 2022](https://www.youtube.com/playlist?list=PLHgX2IExbFovoMUC3hYXeFegpk_Y0Lz0Q)
-3. [2023 ChatGPT for Coding Assistant Streamlit, Gradio and Python Apps](https://www.youtube.com/playlist?list=PLHgX2IExbFouOEnppexiKZVdz_k5b0pvI)
-4. [2023 BigScience Bloom - Large Language Model for AI Systems and NLP](https://www.youtube.com/playlist?list=PLHgX2IExbFouqnsIqziThlPCX_miiDq14)
-5. [2023 Streamlit Pro Tips for AI UI UX for Data Science, Engineering, and Mathematics](https://www.youtube.com/playlist?list=PLHgX2IExbFou3cP19hHO9Xb-cN8uwr5RM)
-6. [2023 Fun, New and Interesting AI, Videos, and AI/ML Techniques](https://www.youtube.com/playlist?list=PLHgX2IExbFotoMt32SrT3Xynt5BXTGnEP)
-7. [2023 Best Minds in AGI AI Gamification and Large Language Models](https://www.youtube.com/playlist?list=PLHgX2IExbFotmFeBTpyje1uI22n0GAkXT)
-8. [2023 State of the Art for Vision Image Classification, Text Classification and Regression, Extractive Question Answering and Tabular Classification](https://www.youtube.com/playlist?list=PLHgX2IExbFotPcPu6pauNHOoZTTbnAQ2F)
-9. [2023 AutoML DataRobot and AI Platforms for Building Models, Features, Test, and Transparency](https://www.youtube.com/playlist?list=PLHgX2IExbFovsY2oGbDwdEhPrakkC8i3g)
-""")
-
-
-st.markdown("""
-# 📚 Clinical Terminology and Ontologies [Example 🩺⚕️NLP Clinical Ontology Biomedical NER](https://huggingface.co/spaces/awacke1/Biomed-NLP-AI-Clinical-Terminology)
-## Health Vocabularies, Systems of Coding, and Databases with Bibliographies
-## __Keywords__:
-1. __Clinical Terminology__: 💬 Words that doctors use to talk to each other about patients.
-2. __Ontologies for Medications and Conditions__: 📚 A fancy way of organizing knowledge about medicine and health problems.
-3. __Health Vocabularies__: 📝 A special list of words used in healthcare to talk about health issues.
-4. __Systems of Coding__: 💻 A way of giving things like sicknesses and treatments special codes, so that doctors can remember them easily.
-5. __Databases__: 🗄️ A computer system that stores information about patients, health research, and other healthcare things.
-6. __Bibliographies__: 📖 A list of books or articles that doctors use to learn about new health information.
-1. ## 1️⃣ National Library of Medicine's **RxNorm**:
- - Standardized nomenclature for clinical drugs developed by NLM
- - Provides links between drug names and related information such as ingredients, strengths, and dosages
- - **Data type: controlled vocabulary**
- - Access through **NLM's RxNorm website**: https://www.nlm.nih.gov/research/umls/rxnorm/index.html
-2. ## 2️⃣ Centers for Medicare and Medicaid Services' Healthcare Common Procedure Coding System (HCPCS):
- - Coding system used to identify healthcare **services, procedures, and supplies**
- - Includes **codes for drugs, biologicals, and other items** used in medical care
- - **Data type: coding system**
- - Access through **CMS website**: https://www.cms.gov/Medicare/Coding/MedHCPCSGenInfo
-3. ## 3️⃣ Unified Medical Language System (UMLS):
- - Set of files and software tools developed by NLM for integrating and mapping biomedical vocabularies
- - Includes RxNorm and other drug vocabularies, as well as other terminologies used in medicine
- - **Data type: controlled vocabulary**
- - Access through UMLS Metathesaurus: https://www.nlm.nih.gov/research/umls/index.html
-4. ## 4️⃣ PubMed:
- - Database of **biomedical literature** maintained by the National Center for Biotechnology Information (NCBI)
- - Includes information about **drugs, including drug names, chemical structures, and pharmacological actions**
- - **Data type: bibliographic database**
- - Access through **PubMed website**: https://pubmed.ncbi.nlm.nih.gov/
-5. ## 5️⃣ PubChem:
- - Database of chemical substances maintained by NCBI
- - Includes information about drugs, including **chemical structures, properties, and activities**
- - **Data type: chemical database**
- - Access through **PubChem website**: https://pubchem.ncbi.nlm.nih.gov/
-6. ## 6️⃣ Behavioral Health Code Terminology Sets:
- - Code terminology sets specific to behavioral health
- - Includes **DSM** published by American Psychiatric Association, **ICD** published by World Health Organization, and **CPT** published by American Medical Association
- - **Data type: coding system**
- - Access through respective **organizations' websites**:
- 1. [DSM](https://www.psychiatry.org/psychiatrists/practice/dsm)
- 2. [ICD](https://www.who.int/standards/classifications/classification-of-diseases)
- 3. [CPT](https://www.ama-assn.org/practice-management/cpt/current-procedural-terminology-cpt)
-""")
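-
-# Hypothetical illustration (not part of the original app): the PubMed entry above is
-# listed as a bibliographic database; one way to query it programmatically is NCBI's
-# public E-utilities "esearch" endpoint. The `requests` dependency and the query term
-# are assumptions for this sketch.
-import requests
-
-def search_pubmed(term, max_results=5):
-    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
-    params = {"db": "pubmed", "term": term, "retmax": max_results, "retmode": "json"}
-    resp = requests.get(url, params=params, timeout=30)
-    resp.raise_for_status()
-    # esearch returns a JSON envelope whose "idlist" holds matching PubMed IDs (PMIDs)
-    return resp.json()["esearchresult"]["idlist"]
-
-# Example: st.write(search_pubmed("RxNorm clinical terminology"))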
-
-
-st.markdown("""
-1. # 📚Natural Language Processing🔤 - 🗣️🤖💭💬🌍🔍
- 1. 🤔 **🩺⚕️ Sentiment analysis** - Determine underlying sentiment of text. [Example](https://huggingface.co/spaces/awacke1/Sentiment-analysis-streamlit)
- 2. 📝 **Named Entity Recognition (NER)** - Identify and classify named entities in text. [Example](https://huggingface.co/spaces/awacke1/Named-entity-resolution)
- 3. 🔊 **🩺⚕️Automatic Speech Recognition (ASR)** - Transcribe spoken language into text.
- # Advanced NLP ASR Examples:
- 1. 🩺⚕️ https://huggingface.co/spaces/awacke1/ASR-High-Accuracy-Test
- 2. https://huggingface.co/spaces/awacke1/ASRGenerateStory
- 3. 🩺⚕️ https://huggingface.co/spaces/awacke1/TTS-STT-Blocks
- 4. 🩺⚕️ https://huggingface.co/spaces/awacke1/CloneAnyVoice
- 5. https://huggingface.co/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla
- 4. 🌐 **Machine translation** - Translate text between languages automatically. [Example](https://huggingface.co/spaces/awacke1/Machine-translation)
- 5. 📄 **Text summarization** - Automatically summarize large volumes of text. [Example](https://huggingface.co/spaces/awacke1/Text-summarization)
- 6. ❓ **🩺⚕️ Question answering** - Answer questions posed in natural language. [Example](https://huggingface.co/spaces/awacke1/Question-answering)
- 7. 🤖 **Sentiment-aware chatbots** - Use sentiment analysis to detect user emotions and respond appropriately.
- 8. 📊 **🩺⚕️ Text classification** - Classify text into different categories. [Example](https://huggingface.co/spaces/awacke1/sileod-deberta-v3-base-tasksource-nli)
- 9. 💬 **🩺⚕️ Text generation** - Generate natural language text. [Example](https://huggingface.co/spaces/awacke1/Sentence2Paragraph)
- 10. 🔎 **Topic modeling** - Automatically identify topics in a large corpus of text. [Example](https://huggingface.co/spaces/awacke1/Topic-modeling)
- - Examples
- 1. [NLP Video Summary](https://huggingface.co/spaces/awacke1/Video-Summary)
- 2. [TTS-STT ASR with Multiple Voices](https://huggingface.co/spaces/awacke1/TTS-STT-Blocks)
- 3. [NLP Transcript with Video Player](https://huggingface.co/spaces/awacke1/Streamlit-ASR-Video)
- 4. [NLP Clinical Ontology Biomedical NER](https://huggingface.co/spaces/awacke1/Biomed-NLP-AI-Clinical-Terminology)
- 5. [Document Understanding and NLP](https://huggingface.co/spaces/awacke1/AIDocumentUnderstandingOCR)
- 6. [NLP ASR Wav2Vec2 Multilingual](https://huggingface.co/spaces/awacke1/ASR-High-Accuracy-Test)
- 7. [Live ASR](https://huggingface.co/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla)
- 8. [NLP and Visualization](https://huggingface.co/spaces/awacke1/Visualization-Plotly-Sunbursts-Treemaps-and-WebGL)
-""")
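-
-# Hypothetical illustration (not part of the original app) of two tasks listed above
-# (sentiment analysis and NER) using the Hugging Face `transformers` pipeline API.
-# The extra dependency and the default models the library downloads are assumptions;
-# the linked Spaces use their own models.
-from transformers import pipeline
-
-sentiment = pipeline("sentiment-analysis")
-ner = pipeline("ner", aggregation_strategy="simple")
-
-st.write(sentiment("Streamlit makes building AI demos painless."))
-st.write(ner("The clinic in Minnesota uploaded FHIR records to Hugging Face."))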
-
-
-
-
-
-st.markdown("""
-2. # 🔮Generative AI💭 (🎨Images and 📝Text) - 🎵🧩🔄📊🌌
- 1. 🆕 **🩺⚕️ Generation of new data**: Create new data that resembles existing data. [Example](https://huggingface.co/spaces/awacke1/GenAI-Generate-New-Data-Resembling-Example)
- 2. 🎨 **Creative potential**: Generate music, art, or literature. [Example](https://huggingface.co/spaces/awacke1/Creative-Potential-Music-Art-Lit)
- 3. 📊 **Data synthesis**: Synthesize data from multiple sources to create new datasets. [Example](https://huggingface.co/spaces/awacke1/Data-Synthesizer-Synthesize-From-Multiple-Sources)
- 4. 📈 **🩺⚕️ Data augmentation**: Augment existing datasets to make them larger and more diverse. [Example](https://huggingface.co/spaces/awacke1/Data-Augmentation)
- 5. 🔀 **Domain transfer**: Transfer knowledge learned from one domain to another.
- 6. 🔍 **Unsupervised learning**: Learn patterns without labeled training data.
- 7. 🔄 **Adaptive learning**: Adapt to changes in data over time.
- 8. 🔊 **Noise injection**: Introduce noise to explore a wider range of possibilities.
- 9. 🕶️ **Latent space manipulation**: Control output by manipulating a model's latent space.
- 10. 🖼️ **Realistic output**: Produce output that is difficult to distinguish from human-created data.
- - Examples
- 1. Quantum AI Circuits: https://huggingface.co/spaces/awacke1/AI-Quantum?option=Circuit
- 2. Generate Story and Video: https://huggingface.co/spaces/awacke1/ASRGenerateStoryandVideo
- 3. ASR Generate Story: https://huggingface.co/spaces/awacke1/ASRGenerateStory
- 4. Music Generation: https://huggingface.co/spaces/awacke1/MusicMaker
-""")
-
-
-
-
-st.markdown("""
-3. # 📷Image Recognition🏞️
- 1. 📷 **Object detection**: Detect and identify multiple objects in an image for detailed analysis and classification.
- 2. 🏞️ **Scene recognition**: Recognize and classify entire scenes based on objects, colors, and shapes.
- 3. 😃 **Facial recognition**: Analyze facial features for accurate identification.
- 4. 😊 **Emotion recognition**: Identify emotions on a subject's face, including happiness, sadness, and anger.
- 5. 🔤 **Text recognition**: Identify and translate text in images for analysis.
- 6. 🎨 **Color recognition**: Detect colors and provide information on hue, saturation, and brightness.
- 7. 🔍 **Image segmentation**: Divide an image into multiple regions for individual analysis and classification.
- 8. 🌅 **Image restoration**: Remove noise and blur, restoring images to original clarity and quality.
- 9. 🔖 **Image classification**: Classify images into categories like animals, buildings, or landscapes.
- 10. 🎨 **Style transfer**: Apply the style of one image to another for unique and innovative results.
- - Examples
- 1. 🩺⚕️ Text-to-Image : [Image Classification](https://huggingface.co/spaces/awacke1/Prompt-Refinery-Text-to-Image-Generation)
- 2. Image Captions from 5 SOTA Generators: [URL](https://huggingface.co/spaces/awacke1/ImageCaptionPromptGenerator)
- 3. 🩺⚕️ Image to Multilingual OCR: [URL](https://huggingface.co/spaces/awacke1/Image-to-Multilingual-OCR)
- 4. WRN - Wide Residual Networks: [URL](https://huggingface.co/spaces/awacke1/ResnetPytorchImageRecognition)
- 5. AI Document Understanding: [URL](https://huggingface.co/spaces/awacke1/AIDocumentUnderstandingOCR)
- 6. Elixir Docker Bumblebee: [URL](https://huggingface.co/spaces/awacke1/DockerImageRecognitionToText)
- 7. Speech to Text to Story to Images to Video: [URL](https://huggingface.co/spaces/awacke1/Speeech2Text2Story2Images2Video)
- 8. Image to Line Drawings: [URL](https://huggingface.co/spaces/awacke1/Image-to-Line-Drawings)
- 9. Semantic Image Search: [URL](https://huggingface.co/spaces/awacke1/Image-Semantic-Search)
- 10. Zoom Clip Toon: [URL](https://huggingface.co/spaces/awacke1/Zoom-Clip-Toon-Image-to-Image)
- 11. Image to Reading Labels: [URL](https://huggingface.co/spaces/awacke1/ImageOCRMultilingual)
- 12. A Game For That - Gamification Using Snapshot Images: [URL](https://huggingface.co/spaces/awacke1/AGameForThat)
- 13. AI Visually Plays QBert, Pong, Seaquest and more: [URL](https://huggingface.co/spaces/awacke1/AI-Atari-Live-Streamlit)
- 14. AI Creates Generator Style Mix Art from Encyclopedia: [URL](https://huggingface.co/spaces/awacke1/Art-Generator-and-Style-Mixer)
- 15. BigGAN Image Gen and Search: [URL](https://huggingface.co/spaces/awacke1/AI-BigGAN-Image-Gen)
- 16. Art Style Line Drawings: [URL](https://huggingface.co/spaces/awacke1/ArtStyleFoodsandNutrition)
- 17. 🩺⚕️ Yolo Real Time Image Recognition from Webcam: https://huggingface.co/spaces/awacke1/Webcam-Object-Recognition-Yolo-n-Coco
-""")
-
-st.markdown("""
-4. # 🗣️Speech Recognition💬
- 1. 🔊 **Continuous Speech Recognition**: Transcribe spoken words in real-time without pausing.
- 2. 🗣️ **Speaker Identification**: Identify individual speakers through unique features in their speech.
- 3. 🧠 **Contextual Awareness**: Understand conversation context and interpret word meaning.
- 4. 🌎 **Multilingual Support**: Recognize and transcribe multiple languages for translation.
- 5. 🔇 **Noise Reduction**: Filter out background noise to improve transcription quality.
- 6. 🔒 **Voice Biometrics**: Verify speaker identity and provide secure access to personal data.
- 7. 🎛️ **Command and Control**: Interpret voice commands to automate tasks and interact with software.
- 8. 💬 **Natural Language Processing**: Understand complex human speech patterns.
- 9. 🧠 **Adaptive Learning**: Learn and adapt to improve accuracy over time.
- 10. ☁️ **Cloud-Based Deployment**: Real-time processing of large amounts of data, even on mobile devices.
-""")
-
-st.markdown("""
-5. # Reinforcement Learning
- 1. 🏆 **Reward-driven**: RL uses rewards or punishments to drive its learning process.
- 2. 🧪 **Trial-and-error learning**: RL is a trial-and-error learning method, where an agent tries different actions to find the best action that will maximize the cumulative reward.
- 3. 🤔 **Exploration-exploitation trade-off**: RL agents need to balance exploration and exploitation to find new possibilities while also exploiting successful actions.
- 4. 📈 **Markov Decision Processes**: RL uses MDPs to model decision-making processes.
- 5. 📊 **Policy optimization**: RL uses policy optimization techniques to find the best policy for a given task or learn the optimal policy from scratch.
- 6. 💰 **Value-based methods**: RL uses value-based methods to estimate the value of each state or action.
- 7. 🧠 **Model-based methods**: RL can use model-based methods to predict the outcomes of different actions.
- 8. 🤖 **Deep Reinforcement Learning**: DRL combines RL with deep learning techniques to learn complex decision-making tasks.
- 9. 🔄 **Transfer learning**: RL can use transfer learning techniques to transfer knowledge learned in one task to another task.
- 10. 🤝 **Multi-agent RL**: RL can handle multiple agents that interact with each other.
-""")
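-
-# Hypothetical illustration (not part of the original app) of the reward-driven,
-# value-based ideas above: a single tabular Q-learning update,
-#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
-# The tiny 5-state, 2-action table is an assumption for this sketch.
-import numpy as np
-
-def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
-    td_target = r + gamma * np.max(Q[s_next])   # reward plus best estimated future value
-    Q[s, a] += alpha * (td_target - Q[s, a])    # move Q(s, a) toward the target
-    return Q
-
-Q = np.zeros((5, 2))                            # 5 states, 2 actions
-Q = q_update(Q, s=0, a=1, r=1.0, s_next=3)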
-
-st.markdown("""
-6. 🎲Game Theory🎲 – Traditional AI processes
- 1. 🤝 **Interdependence**: Game Theory considers decision-making among multiple agents, unlike traditional AI processes which focus on a single agent.
- 2. 🎯 **Strategic Behavior**: Game Theory assumes that agents aim to maximize their payoffs based on the actions of other agents. Traditional AI may not consider this strategic element.
- 3. 💰 **Payoffs**: Game Theory calculates payoffs for each agent based on their actions and the actions of other agents, unlike traditional AI which may focus on a single objective.
- 4. ⚖️ **Equilibrium**: Game Theory seeks to identify stable states in the game where no agent has an incentive to deviate from their current strategy. Traditional AI may not seek to find an equilibrium.
- 5. 🎲 **Game Formulation**: Game Theory formulates a game, including rules, players, and possible actions, unlike traditional AI which may not require such formulation.
- 6. 💡 **Solution Concepts**: Game Theory has various solution concepts, such as Nash Equilibrium and Pareto Efficiency, to identify the most desirable outcomes. Traditional AI may not have such concepts.
- 7. 📊 **Information**: Game Theory considers the information available to each agent in the game. Traditional AI may not consider information explicitly.
- 8. ⚔️ **Adversarial**: Game Theory models adversarial scenarios where agents have conflicting goals. Traditional AI may assume cooperation among agents.
- 9. ❓ **Uncertainty**: Game Theory deals with uncertainty and incomplete information in the game. Traditional AI may not consider uncertainty.
- 10. 🌐 **Complexity**: Game Theory deals with complex multi-agent interactions. Traditional AI may focus on single-agent optimization.
- - Examples
- 1. 🩺⚕️ Health Care Game: https://huggingface.co/spaces/awacke1/AI-RPG-Self-Play-RLML-Health-Battler-Game
- 2. 🩺⚕️ Sankey Snacks Math Chart Animator: https://huggingface.co/spaces/awacke1/Sankey-Snacks
- 3. Blackjack 21 : https://huggingface.co/spaces/awacke1/BlackjackSimulatorCardGameAI
- 4. Player Card Monster Battler: https://huggingface.co/spaces/awacke1/Player-Card-Monster-Battler-For-Math-and-AI
- 5. Emojitrition: https://huggingface.co/spaces/awacke1/Emojitrition-Fun-and-Easy-Nutrition
-""")
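-
-# Hypothetical illustration (not part of the original app) of the payoff/equilibrium
-# ideas above: brute-force search for pure-strategy Nash equilibria in a 2x2 game.
-# The Prisoner's Dilemma payoff matrix below is an assumption for this sketch.
-import itertools
-
-payoffs = {  # (row action, col action) -> (row payoff, col payoff); 0 = cooperate, 1 = defect
-    (0, 0): (-1, -1), (0, 1): (-3, 0),
-    (1, 0): (0, -3),  (1, 1): (-2, -2),
-}
-
-def is_nash(a_row, a_col):
-    # Neither player can gain by unilaterally switching their own action
-    row_ok = all(payoffs[(a_row, a_col)][0] >= payoffs[(alt, a_col)][0] for alt in (0, 1))
-    col_ok = all(payoffs[(a_row, a_col)][1] >= payoffs[(a_row, alt)][1] for alt in (0, 1))
-    return row_ok and col_ok
-
-st.write([cell for cell in itertools.product((0, 1), repeat=2) if is_nash(*cell)])  # [(1, 1)]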
-
-
-
-
-st.markdown("""
-# 🩺⚕️ Reinforcement Learning from Human Feedback (RLHF)
-## 🤖 RLHF is a way for computers to learn how to do things better by getting help and feedback from people,
- - just like how you learn new things from your parents or teachers.
-🎮 Let's say the computer wants to learn how to play a video game.
- - It might start by trying different things and seeing what happens.
-👍 If it does something good, like getting a high score, it gets a reward.
-👎 If it does something bad, like losing a life, it gets a punishment.
-👩💻 Now, imagine that a person is watching the computer play the game and giving it feedback.
- - The person might say things like "Good job!" when the computer gets a high score
- - or "Oops, try again!" when it loses a life.
-💡 This feedback helps the computer figure out which actions are good and which ones are bad.
- - The computer then uses this feedback to adjust its actions and get better at playing the game.
-🤔 It might try different strategies and see which ones get the best feedback from the person.
- - Over time, the computer gets better and better at playing the game, just like how you get better at things by practicing and getting help from others.
-🚀 RLHF is a cool way for computers to learn and improve with the help of people.
- - Who knows, maybe one day you can teach a computer to do something amazing!
-# Examples
-## 🩺⚕️ Hospital Visualizations
-🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMinnesota
-🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsNewJersey
-🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-TopLargeHospitalsMentalHealth
-🩺⚕️ https://huggingface.co/spaces/awacke1/VizLib-GraphViz-Folium-MapTopLargeHospitalsinWI
-# Card Game Activity
-https://huggingface.co/spaces/awacke1/CardGameActivity-GraphViz
-https://huggingface.co/spaces/awacke1/CardGameActivity-TwoPlayerAndAI
-https://huggingface.co/spaces/awacke1/CardGameActivity
-https://huggingface.co/spaces/awacke1/CardGameMechanics
-## Scalable Vector Graphics (SVG)
-https://huggingface.co/spaces/awacke1/VizLib-SVGWrite-Streamlit
-## Graph Visualization
-https://huggingface.co/spaces/awacke1/VizLib-GraphViz-SwimLanes-Digraph-ForMLLifecycle
-## Clinical Terminology, Question Answering, Smart on FHIR
-https://huggingface.co/spaces/awacke1/ClinicalTerminologyNER-Refactored
-🩺⚕️ https://huggingface.co/spaces/awacke1/Assessment-By-Organs
-🩺⚕️ https://huggingface.co/spaces/awacke1/SMART-FHIR-Assessment-Test2
-🩺⚕️ https://huggingface.co/spaces/awacke1/FHIRLib-FHIRKit
-""")
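-
-# Hypothetical toy illustration (not part of the original app) of the RLHF idea above:
-# human feedback (+1 for "Good job!", -1 for "Oops, try again!") nudges the preference
-# for each action, and the agent then samples actions in proportion to those preferences.
-# The action names and learning rate are assumptions for this sketch.
-import random
-
-prefs = {"jump": 1.0, "duck": 1.0, "run": 1.0}   # initial action preferences
-
-def pick_action():
-    total = sum(prefs.values())
-    return random.choices(list(prefs), weights=[v / total for v in prefs.values()])[0]
-
-def human_feedback(action, score, lr=0.5):
-    # score: +1 ("Good job!") or -1 ("Oops, try again!"); keep preferences positive
-    prefs[action] = max(0.1, prefs[action] + lr * score)
-
-action = pick_action()
-human_feedback(action, +1)                        # the person liked this move
-st.write(prefs)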
-
-
-
diff --git a/spaces/awacke1/SpeechRecognitionwithWav2Vec2/README.md b/spaces/awacke1/SpeechRecognitionwithWav2Vec2/README.md
deleted file mode 100644
index 86f0f71c3977c8b05816b36d1d25bdbb5ba999d4..0000000000000000000000000000000000000000
--- a/spaces/awacke1/SpeechRecognitionwithWav2Vec2/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: SpeechRecognitionwithWav2Vec2
-emoji: 😻
-colorFrom: indigo
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/Team.Click.Battle.Multiplayer/app.py b/spaces/awacke1/Team.Click.Battle.Multiplayer/app.py
deleted file mode 100644
index 56e922ae56fdc8bd6d35ddee1bbac2150ee6a180..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Team.Click.Battle.Multiplayer/app.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import streamlit as st
-import random
-import time
-import plotly.graph_objects as go
-
-teams = [
- ('Team 1', '😎', 'Cool Squad', 'New York City'),
- ('Team 2', '🚀', 'Rocketeers', 'Los Angeles'),
- ('Team 3', '🤖', 'Robo Gang', 'San Francisco'),
- ('Team 4', '🌟', 'Super Stars', 'Chicago'),
- ('Team 5', '🐉', 'Dragons', 'Houston')
-]
-
-def create_sankey(votes, turn):
- labels = [f"{team[1]} {team[2]}" for team in teams] + ['Upvotes', 'Downvotes']
- source = []
- target = []
- value = []
-
- for i, team in enumerate(teams):
- source += [i, i]
- target += [len(teams), len(teams) + 1]
- value += [votes[team[0]][0], votes[team[0]][1]]
-
- fig = go.Figure(data=[go.Sankey(
- node=dict(pad=15, thickness=20, line=dict(color='black', width=0.5), label=labels),
- link=dict(source=source, target=target, value=value))])
-
- fig.update_layout(title_text=f'Location Simulator by Nickname (Turn {turn})', title_font=dict(size=24, color='blue'))
-
- return fig
-
-def run_scenario(duration=100, click_card_limit=None):
- start_time = time.time()
- votes = {team[0]: [0, 0] for team in teams} # Initialize upvotes and downvotes
- click_cards = 0
- chat = []
- turn = 0
-
- st.header("Sankey Graph")
- fig = create_sankey(votes, turn)
- st.plotly_chart(fig, use_container_width=True)
-
- while time.time() - start_time < duration:
- if click_card_limit is None or click_cards < click_card_limit:
- click_cards += 1
-
- team = random.choice(teams)
- vote_type = random.choice(['upvote', 'downvote'])
-        clicks = 4 if click_cards > 0 else 1  # a click card boosts this turn to 4 clicks
-        if clicks > 1:
-            click_cards -= 1  # spend one click card
-
- if vote_type == 'upvote':
- votes[team[0]][0] += clicks
- else:
- votes[team[0]][1] += clicks
- chat.append((team, vote_type, clicks))
-
- turn += 1
- fig = create_sankey(votes, turn)
- st.plotly_chart(fig, use_container_width=True)
-
- time.sleep(random.uniform(0, 1)) # Random sleep between 0 and 1 seconds
-
- return votes, chat
-
-st.title("Team Upvotes and Downvotes Emoji Game")
-
-duration = st.slider("Duration (seconds)", min_value=0, max_value=100, value=10, step=1)
-click_card_limit = st.slider("Click Card Limit", min_value=0, max_value=100, value=10, step=1)
-
-st.write(f"Running scenario for {duration} seconds with {click_card_limit} click cards...")
-votes, chat = run_scenario(duration, click_card_limit)
-
-st.header("Results")
-for team, vote_counts in votes.items():
- st.write(f"{team}: {vote_counts[0]} upvotes, {vote_counts[1]} downvotes")
-
-st.header("Chat")
-for message in chat:
- team, vote_type, clicks = message
- st.write(f"{team[1]} {team[2]}: {clicks} {vote_type}s")
-
-st.header("Final Results")
-for team, vote_counts in votes.items():
- st.write(f"{team}: {vote_counts[0]} upvotes, {vote_counts[1]} downvotes")
-
diff --git a/spaces/awacke1/VideoFromImage/README.md b/spaces/awacke1/VideoFromImage/README.md
deleted file mode 100644
index 001762555fed2c68546236c9f30e6ac3b1f4dc10..0000000000000000000000000000000000000000
--- a/spaces/awacke1/VideoFromImage/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: MS Image2Video
-emoji: ⚡
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.41.2
-app_file: app.py
-pinned: false
-duplicated_from: fffiloni/MS-Image2Video
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/GCodeLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/GCodeLoader.js
deleted file mode 100644
index af21142dbdbad31b3c344cb51ab2c7ae58576e6c..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/GCodeLoader.js
+++ /dev/null
@@ -1,225 +0,0 @@
-'use strict';
-
-/**
- * THREE.GCodeLoader is used to load gcode files usually used for 3D printing or CNC applications.
- *
- * Gcode files are composed of commands used by machines to create objects.
- *
- * @class THREE.GCodeLoader
- * @param {Manager} manager Loading manager.
- * @author tentone
- * @author joewalnes
- */
-THREE.GCodeLoader = function ( manager ) {
-
- this.manager = ( manager !== undefined ) ? manager : THREE.DefaultLoadingManager;
-
- this.splitLayer = false;
-
-};
-
-THREE.GCodeLoader.prototype.load = function ( url, onLoad, onProgress, onError ) {
-
- var self = this;
-
- var loader = new THREE.FileLoader( self.manager );
- loader.setPath( self.path );
- loader.load( url, function ( text ) {
-
- onLoad( self.parse( text ) );
-
- }, onProgress, onError );
-
-};
-
-THREE.GCodeLoader.prototype.setPath = function ( value ) {
-
- this.path = value;
- return this;
-
-};
-
-THREE.GCodeLoader.prototype.parse = function ( data ) {
-
- var state = { x: 0, y: 0, z: 0, e: 0, f: 0, extruding: false, relative: false };
- var layers = [];
-
- var currentLayer = undefined;
-
- var pathMaterial = new THREE.LineBasicMaterial( { color: 0xFF0000 } );
- pathMaterial.name = 'path';
-
- var extrudingMaterial = new THREE.LineBasicMaterial( { color: 0x00FF00 } );
- extrudingMaterial.name = 'extruded';
-
- function newLayer( line ) {
-
- currentLayer = { vertex: [], pathVertex: [], z: line.z };
- layers.push( currentLayer );
-
- }
-
-	//Create line segment between p1 and p2
- function addSegment( p1, p2 ) {
-
- if ( currentLayer === undefined ) {
-
- newLayer( p1 );
-
- }
-
- if ( line.extruding ) {
-
- currentLayer.vertex.push( p1.x, p1.y, p1.z );
- currentLayer.vertex.push( p2.x, p2.y, p2.z );
-
- } else {
-
- currentLayer.pathVertex.push( p1.x, p1.y, p1.z );
- currentLayer.pathVertex.push( p2.x, p2.y, p2.z );
-
- }
-
- }
-
- function delta( v1, v2 ) {
-
- return state.relative ? v2 : v2 - v1;
-
- }
-
- function absolute( v1, v2 ) {
-
- return state.relative ? v1 + v2 : v2;
-
- }
-
- var lines = data.replace( /;.+/g, '' ).split( '\n' );
-
- for ( var i = 0; i < lines.length; i ++ ) {
-
- var tokens = lines[ i ].split( ' ' );
- var cmd = tokens[ 0 ].toUpperCase();
-
-		//Arguments
- var args = {};
- tokens.splice( 1 ).forEach( function ( token ) {
-
- if ( token[ 0 ] !== undefined ) {
-
- var key = token[ 0 ].toLowerCase();
- var value = parseFloat( token.substring( 1 ) );
- args[ key ] = value;
-
- }
-
- } );
-
- //Process commands
- //G0/G1 – Linear Movement
- if ( cmd === 'G0' || cmd === 'G1' ) {
-
- var line = {
- x: args.x !== undefined ? absolute( state.x, args.x ) : state.x,
- y: args.y !== undefined ? absolute( state.y, args.y ) : state.y,
- z: args.z !== undefined ? absolute( state.z, args.z ) : state.z,
- e: args.e !== undefined ? absolute( state.e, args.e ) : state.e,
- f: args.f !== undefined ? absolute( state.f, args.f ) : state.f,
- };
-
-			//Layer change detection is not made by watching Z alone; it's made by watching when we extrude at a new Z position
- if ( delta( state.e, line.e ) > 0 ) {
-
- line.extruding = delta( state.e, line.e ) > 0;
-
- if ( currentLayer == undefined || line.z != currentLayer.z ) {
-
- newLayer( line );
-
- }
-
- }
-
- addSegment( state, line );
- state = line;
-
- } else if ( cmd === 'G2' || cmd === 'G3' ) {
-
-			//G2/G3 - Arc Movement ( G2 clockwise and G3 counter clockwise )
- //console.warn( 'THREE.GCodeLoader: Arc command not supported' );
-
- } else if ( cmd === 'G90' ) {
-
- //G90: Set to Absolute Positioning
- state.relative = false;
-
- } else if ( cmd === 'G91' ) {
-
- //G91: Set to state.relative Positioning
- state.relative = true;
-
- } else if ( cmd === 'G92' ) {
-
- //G92: Set Position
- var line = state;
- line.x = args.x !== undefined ? args.x : line.x;
- line.y = args.y !== undefined ? args.y : line.y;
- line.z = args.z !== undefined ? args.z : line.z;
- line.e = args.e !== undefined ? args.e : line.e;
- state = line;
-
- } else {
-
- //console.warn( 'THREE.GCodeLoader: Command not supported:' + cmd );
-
- }
-
- }
-
- function addObject( vertex, extruding ) {
-
- var geometry = new THREE.BufferGeometry();
- geometry.addAttribute( 'position', new THREE.Float32BufferAttribute( vertex, 3 ) );
-
- var segments = new THREE.LineSegments( geometry, extruding ? extrudingMaterial : pathMaterial );
- segments.name = 'layer' + i;
- object.add( segments );
-
- }
-
- var object = new THREE.Group();
- object.name = 'gcode';
-
- if ( this.splitLayer ) {
-
- for ( var i = 0; i < layers.length; i ++ ) {
-
- var layer = layers[ i ];
- addObject( layer.vertex, true );
- addObject( layer.pathVertex, false );
-
- }
-
- } else {
-
- var vertex = [], pathVertex = [];
-
- for ( var i = 0; i < layers.length; i ++ ) {
-
- var layer = layers[ i ];
-
- vertex = vertex.concat( layer.vertex );
- pathVertex = pathVertex.concat( layer.pathVertex );
-
- }
-
- addObject( vertex, true );
- addObject( pathVertex, false );
-
- }
-
- object.quaternion.setFromEuler( new THREE.Euler( - Math.PI / 2, 0, 0 ) );
-
- return object;
-
-};
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/RGBELoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/RGBELoader.js
deleted file mode 100644
index 85cdcc1ff819d9fe16ad99f8d15e3a6ad8862b60..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/RGBELoader.js
+++ /dev/null
@@ -1,396 +0,0 @@
-/**
- * @author Nikos M. / https://github.com/foo123/
- */
-
-// https://github.com/mrdoob/three.js/issues/5552
-// http://en.wikipedia.org/wiki/RGBE_image_format
-
-THREE.HDRLoader = THREE.RGBELoader = function ( manager ) {
-
- this.manager = ( manager !== undefined ) ? manager : THREE.DefaultLoadingManager;
- this.type = THREE.UnsignedByteType;
-
-};
-
-// extend THREE.DataTextureLoader
-THREE.RGBELoader.prototype = Object.create( THREE.DataTextureLoader.prototype );
-
-// adapted from http://www.graphics.cornell.edu/~bjw/rgbe.html
-THREE.RGBELoader.prototype._parser = function ( buffer ) {
-
- var
- /* return codes for rgbe routines */
- RGBE_RETURN_SUCCESS = 0,
- RGBE_RETURN_FAILURE = - 1,
-
- /* default error routine. change this to change error handling */
- rgbe_read_error = 1,
- rgbe_write_error = 2,
- rgbe_format_error = 3,
- rgbe_memory_error = 4,
- rgbe_error = function ( rgbe_error_code, msg ) {
-
- switch ( rgbe_error_code ) {
-
- case rgbe_read_error: console.error( "THREE.RGBELoader Read Error: " + ( msg || '' ) );
- break;
- case rgbe_write_error: console.error( "THREE.RGBELoader Write Error: " + ( msg || '' ) );
- break;
- case rgbe_format_error: console.error( "THREE.RGBELoader Bad File Format: " + ( msg || '' ) );
- break;
- default:
- case rgbe_memory_error: console.error( "THREE.RGBELoader: Error: " + ( msg || '' ) );
-
- }
- return RGBE_RETURN_FAILURE;
-
- },
-
- /* offsets to red, green, and blue components in a data (float) pixel */
- RGBE_DATA_RED = 0,
- RGBE_DATA_GREEN = 1,
- RGBE_DATA_BLUE = 2,
-
- /* number of floats per pixel, use 4 since stored in rgba image format */
- RGBE_DATA_SIZE = 4,
-
- /* flags indicating which fields in an rgbe_header_info are valid */
- RGBE_VALID_PROGRAMTYPE = 1,
- RGBE_VALID_FORMAT = 2,
- RGBE_VALID_DIMENSIONS = 4,
-
- NEWLINE = "\n",
-
- fgets = function ( buffer, lineLimit, consume ) {
-
- lineLimit = ! lineLimit ? 1024 : lineLimit;
- var p = buffer.pos,
- i = - 1, len = 0, s = '', chunkSize = 128,
- chunk = String.fromCharCode.apply( null, new Uint16Array( buffer.subarray( p, p + chunkSize ) ) )
- ;
- while ( ( 0 > ( i = chunk.indexOf( NEWLINE ) ) ) && ( len < lineLimit ) && ( p < buffer.byteLength ) ) {
-
- s += chunk; len += chunk.length;
- p += chunkSize;
- chunk += String.fromCharCode.apply( null, new Uint16Array( buffer.subarray( p, p + chunkSize ) ) );
-
- }
-
- if ( - 1 < i ) {
-
- /*for (i=l-1; i>=0; i--) {
- byteCode = m.charCodeAt(i);
- if (byteCode > 0x7f && byteCode <= 0x7ff) byteLen++;
- else if (byteCode > 0x7ff && byteCode <= 0xffff) byteLen += 2;
- if (byteCode >= 0xDC00 && byteCode <= 0xDFFF) i--; //trail surrogate
- }*/
- if ( false !== consume ) buffer.pos += len + i + 1;
- return s + chunk.slice( 0, i );
-
- }
- return false;
-
- },
-
- /* minimal header reading. modify if you want to parse more information */
- RGBE_ReadHeader = function ( buffer ) {
-
- var line, match,
-
- // regexes to parse header info fields
- magic_token_re = /^#\?(\S+)$/,
- gamma_re = /^\s*GAMMA\s*=\s*(\d+(\.\d+)?)\s*$/,
- exposure_re = /^\s*EXPOSURE\s*=\s*(\d+(\.\d+)?)\s*$/,
- format_re = /^\s*FORMAT=(\S+)\s*$/,
- dimensions_re = /^\s*\-Y\s+(\d+)\s+\+X\s+(\d+)\s*$/,
-
- // RGBE format header struct
- header = {
-
- valid: 0, /* indicate which fields are valid */
-
- string: '', /* the actual header string */
-
- comments: '', /* comments found in header */
-
- programtype: 'RGBE', /* listed at beginning of file to identify it after "#?". defaults to "RGBE" */
-
- format: '', /* RGBE format, default 32-bit_rle_rgbe */
-
- gamma: 1.0, /* image has already been gamma corrected with given gamma. defaults to 1.0 (no correction) */
-
- exposure: 1.0, /* a value of 1.0 in an image corresponds to watts/steradian/m^2. defaults to 1.0 */
-
- width: 0, height: 0 /* image dimensions, width/height */
-
- };
-
- if ( buffer.pos >= buffer.byteLength || ! ( line = fgets( buffer ) ) ) {
-
- return rgbe_error( rgbe_read_error, "no header found" );
-
- }
- /* if you want to require the magic token then uncomment the next line */
- if ( ! ( match = line.match( magic_token_re ) ) ) {
-
- return rgbe_error( rgbe_format_error, "bad initial token" );
-
- }
- header.valid |= RGBE_VALID_PROGRAMTYPE;
- header.programtype = match[ 1 ];
- header.string += line + "\n";
-
- while ( true ) {
-
- line = fgets( buffer );
- if ( false === line ) break;
- header.string += line + "\n";
-
- if ( '#' === line.charAt( 0 ) ) {
-
- header.comments += line + "\n";
- continue; // comment line
-
- }
-
- if ( match = line.match( gamma_re ) ) {
-
- header.gamma = parseFloat( match[ 1 ], 10 );
-
- }
- if ( match = line.match( exposure_re ) ) {
-
- header.exposure = parseFloat( match[ 1 ], 10 );
-
- }
- if ( match = line.match( format_re ) ) {
-
- header.valid |= RGBE_VALID_FORMAT;
- header.format = match[ 1 ];//'32-bit_rle_rgbe';
-
- }
- if ( match = line.match( dimensions_re ) ) {
-
- header.valid |= RGBE_VALID_DIMENSIONS;
- header.height = parseInt( match[ 1 ], 10 );
- header.width = parseInt( match[ 2 ], 10 );
-
- }
-
- if ( ( header.valid & RGBE_VALID_FORMAT ) && ( header.valid & RGBE_VALID_DIMENSIONS ) ) break;
-
- }
-
- if ( ! ( header.valid & RGBE_VALID_FORMAT ) ) {
-
- return rgbe_error( rgbe_format_error, "missing format specifier" );
-
- }
- if ( ! ( header.valid & RGBE_VALID_DIMENSIONS ) ) {
-
- return rgbe_error( rgbe_format_error, "missing image size specifier" );
-
- }
-
- return header;
-
- },
-
- RGBE_ReadPixels_RLE = function ( buffer, w, h ) {
-
- var data_rgba, offset, pos, count, byteValue,
- scanline_buffer, ptr, ptr_end, i, l, off, isEncodedRun,
- scanline_width = w, num_scanlines = h, rgbeStart
- ;
-
- if (
- // run length encoding is not allowed so read flat
- ( ( scanline_width < 8 ) || ( scanline_width > 0x7fff ) ) ||
- // this file is not run length encoded
- ( ( 2 !== buffer[ 0 ] ) || ( 2 !== buffer[ 1 ] ) || ( buffer[ 2 ] & 0x80 ) )
- ) {
-
- // return the flat buffer
- return new Uint8Array( buffer );
-
- }
-
- if ( scanline_width !== ( ( buffer[ 2 ] << 8 ) | buffer[ 3 ] ) ) {
-
- return rgbe_error( rgbe_format_error, "wrong scanline width" );
-
- }
-
- data_rgba = new Uint8Array( 4 * w * h );
-
- if ( ! data_rgba || ! data_rgba.length ) {
-
- return rgbe_error( rgbe_memory_error, "unable to allocate buffer space" );
-
- }
-
- offset = 0; pos = 0; ptr_end = 4 * scanline_width;
- rgbeStart = new Uint8Array( 4 );
- scanline_buffer = new Uint8Array( ptr_end );
-
- // read in each successive scanline
- while ( ( num_scanlines > 0 ) && ( pos < buffer.byteLength ) ) {
-
- if ( pos + 4 > buffer.byteLength ) {
-
- return rgbe_error( rgbe_read_error );
-
- }
-
- rgbeStart[ 0 ] = buffer[ pos ++ ];
- rgbeStart[ 1 ] = buffer[ pos ++ ];
- rgbeStart[ 2 ] = buffer[ pos ++ ];
- rgbeStart[ 3 ] = buffer[ pos ++ ];
-
- if ( ( 2 != rgbeStart[ 0 ] ) || ( 2 != rgbeStart[ 1 ] ) || ( ( ( rgbeStart[ 2 ] << 8 ) | rgbeStart[ 3 ] ) != scanline_width ) ) {
-
- return rgbe_error( rgbe_format_error, "bad rgbe scanline format" );
-
- }
-
- // read each of the four channels for the scanline into the buffer
- // first red, then green, then blue, then exponent
- ptr = 0;
- while ( ( ptr < ptr_end ) && ( pos < buffer.byteLength ) ) {
-
- count = buffer[ pos ++ ];
- isEncodedRun = count > 128;
- if ( isEncodedRun ) count -= 128;
-
- if ( ( 0 === count ) || ( ptr + count > ptr_end ) ) {
-
- return rgbe_error( rgbe_format_error, "bad scanline data" );
-
- }
-
- if ( isEncodedRun ) {
-
- // a (encoded) run of the same value
- byteValue = buffer[ pos ++ ];
- for ( i = 0; i < count; i ++ ) {
-
- scanline_buffer[ ptr ++ ] = byteValue;
-
- }
- //ptr += count;
-
- } else {
-
- // a literal-run
- scanline_buffer.set( buffer.subarray( pos, pos + count ), ptr );
- ptr += count; pos += count;
-
- }
-
- }
-
-
- // now convert data from buffer into rgba
- // first red, then green, then blue, then exponent (alpha)
- l = scanline_width; //scanline_buffer.byteLength;
- for ( i = 0; i < l; i ++ ) {
-
- off = 0;
- data_rgba[ offset ] = scanline_buffer[ i + off ];
- off += scanline_width; //1;
- data_rgba[ offset + 1 ] = scanline_buffer[ i + off ];
- off += scanline_width; //1;
- data_rgba[ offset + 2 ] = scanline_buffer[ i + off ];
- off += scanline_width; //1;
- data_rgba[ offset + 3 ] = scanline_buffer[ i + off ];
- offset += 4;
-
- }
-
- num_scanlines --;
-
- }
-
- return data_rgba;
-
- }
- ;
-
- var byteArray = new Uint8Array( buffer ),
- byteLength = byteArray.byteLength;
- byteArray.pos = 0;
- var rgbe_header_info = RGBE_ReadHeader( byteArray );
-
- if ( RGBE_RETURN_FAILURE !== rgbe_header_info ) {
-
- var w = rgbe_header_info.width,
- h = rgbe_header_info.height,
- image_rgba_data = RGBE_ReadPixels_RLE( byteArray.subarray( byteArray.pos ), w, h )
- ;
- if ( RGBE_RETURN_FAILURE !== image_rgba_data ) {
-
- if ( this.type === THREE.UnsignedByteType ) {
-
- var data = image_rgba_data;
- var format = THREE.RGBEFormat; // handled as THREE.RGBAFormat in shaders
- var type = THREE.UnsignedByteType;
-
- } else if ( this.type === THREE.FloatType ) {
-
- var RGBEByteToRGBFloat = function ( sourceArray, sourceOffset, destArray, destOffset ) {
-
- var e = sourceArray[ sourceOffset + 3 ];
- var scale = Math.pow( 2.0, e - 128.0 ) / 255.0;
-
- destArray[ destOffset + 0 ] = sourceArray[ sourceOffset + 0 ] * scale;
- destArray[ destOffset + 1 ] = sourceArray[ sourceOffset + 1 ] * scale;
- destArray[ destOffset + 2 ] = sourceArray[ sourceOffset + 2 ] * scale;
-
- };
-
- var numElements = ( image_rgba_data.length / 4 ) * 3;
- var floatArray = new Float32Array( numElements );
-
- for ( var j = 0; j < numElements; j ++ ) {
-
- RGBEByteToRGBFloat( image_rgba_data, j * 4, floatArray, j * 3 );
-
- }
-
- var data = floatArray;
- var format = THREE.RGBFormat;
- var type = THREE.FloatType;
-
-
- } else {
-
- console.error( 'THREE.RGBELoader: unsupported type: ', this.type );
-
- }
-
- return {
- width: w, height: h,
- data: data,
- header: rgbe_header_info.string,
- gamma: rgbe_header_info.gamma,
- exposure: rgbe_header_info.exposure,
- format: format,
- type: type
- };
-
- }
-
- }
-
- return null;
-
-};
-
-THREE.RGBELoader.prototype.setType = function ( value ) {
-
- this.type = value;
- return this;
-
-};
-
-
diff --git a/spaces/batuhantosun/Guided-Backpropagation/utils.py b/spaces/batuhantosun/Guided-Backpropagation/utils.py
deleted file mode 100644
index ac94e6b780e040c72d79284afccf4b98a2fdf495..0000000000000000000000000000000000000000
--- a/spaces/batuhantosun/Guided-Backpropagation/utils.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import numpy as np
-import cv2
-
-
-def range_norm(img):
- min = img.min()
- max = img.max()
- eps = 1e-6
- return (img-min)/(max-min+eps)
-
-
-def grad2heatmapped(input_image, grad_image, ratio):
- input_image = np.array(input_image)
- grad_image = np.array(grad_image)
-
- # Invert negative pixels
- grad_image[grad_image<100] += 128
-
- # Apply thresholding and blur to obtain heatmap
- th = cv2.threshold(grad_image, 140, 255, cv2.THRESH_BINARY)[1]
- blur = cv2.GaussianBlur(th, (11,11), 11)
- heatmap = cv2.applyColorMap(blur, cv2.COLORMAP_JET)
-
- # Apply edge padding to heatmap to have 256x256 size
- heatmap = np.pad(heatmap, ((16,16),(16,16),(0,0)), 'edge')
-
- # Upsample heatmap to input_image size
- heatmap = cv2.resize(heatmap, (input_image.shape[1], input_image.shape[0]))
-
- # Superimpose heatmap on input_image
- heatmapped = cv2.addWeighted(input_image, 1-ratio, heatmap, ratio, 0)
-
- return heatmapped
diff --git "a/spaces/betterme/mestreamlit/0__\357\243\277Info.py" "b/spaces/betterme/mestreamlit/0__\357\243\277Info.py"
deleted file mode 100644
index 4fa8801bdddfacab96327b6fdcfed850c0466129..0000000000000000000000000000000000000000
--- "a/spaces/betterme/mestreamlit/0__\357\243\277Info.py"
+++ /dev/null
@@ -1,50 +0,0 @@
-import streamlit as st
-import inspect
-import textwrap
-
-
-class Page(object):
-
- def __init__(self, app_title="# App Title",
- app_info="> App Info",
- sidebar_title="## Sidebar Title",
- page_title="Page Title",
- page_icon='🔥',
- menu_items=None,
- show_code=False
- ):
- st.set_page_config(
- page_title=page_title,
- page_icon=page_icon,
- initial_sidebar_state='auto',
- menu_items=menu_items
- )
-
- if app_title: st.markdown(app_title)
- if app_info: st.markdown(app_info)
- if sidebar_title: st.sidebar.markdown(sidebar_title)
-
- if sidebar_title and show_code: self.show_code(self.main)
-
- def main(self):
- raise NotImplementedError('Method not implemented!')
-
- def show_code(self, demo):
- """Showing the code of the demo."""
- _ = st.sidebar.checkbox("Show code", False)
- if _:
- # Showing the code of the demo.
- st.markdown("---")
- st.markdown("## Main Code")
- sourcelines, _ = inspect.getsourcelines(demo)
- st.code(textwrap.dedent("".join(sourcelines[1:])))
- st.markdown("---")
-
-
-class SPage(Page):
-
- def main(self):
- st.markdown(f"{st}=={st.__version__}")
-
-
-SPage().main()
diff --git a/spaces/bhandsab/meta-llama-Llama-2-70b-hf/app.py b/spaces/bhandsab/meta-llama-Llama-2-70b-hf/app.py
deleted file mode 100644
index 4ec735e3ad87d311c5c40e845d963d4474a98f4c..0000000000000000000000000000000000000000
--- a/spaces/bhandsab/meta-llama-Llama-2-70b-hf/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/meta-llama/Llama-2-70b-hf").launch()
\ No newline at end of file
diff --git a/spaces/bigslime/stablediffusion-infinity/js/toolbar.js b/spaces/bigslime/stablediffusion-infinity/js/toolbar.js
deleted file mode 100644
index 6c721bc84d3a41a0761ead58e6034ba4dfd4a6ef..0000000000000000000000000000000000000000
--- a/spaces/bigslime/stablediffusion-infinity/js/toolbar.js
+++ /dev/null
@@ -1,581 +0,0 @@
-// import { w2ui,w2toolbar,w2field,query,w2alert, w2utils,w2confirm} from "https://rawgit.com/vitmalina/w2ui/master/dist/w2ui.es6.min.js"
-// import { w2ui,w2toolbar,w2field,query,w2alert, w2utils,w2confirm} from "https://cdn.jsdelivr.net/gh/vitmalina/w2ui@master/dist/w2ui.es6.min.js"
-
-// https://stackoverflow.com/questions/36280818/how-to-convert-file-to-base64-in-javascript
-function getBase64(file) {
- var reader = new FileReader();
- reader.readAsDataURL(file);
- reader.onload = function () {
- add_image(reader.result);
- // console.log(reader.result);
- };
- reader.onerror = function (error) {
- console.log("Error: ", error);
- };
-}
-
-function getText(file) {
- var reader = new FileReader();
- reader.readAsText(file);
- reader.onload = function () {
- window.postMessage(["load",reader.result],"*")
- // console.log(reader.result);
- };
- reader.onerror = function (error) {
- console.log("Error: ", error);
- };
-}
-
-document.querySelector("#upload_file").addEventListener("change", (event)=>{
- console.log(event);
- let file = document.querySelector("#upload_file").files[0];
- getBase64(file);
-})
-
-document.querySelector("#upload_state").addEventListener("change", (event)=>{
- console.log(event);
- let file = document.querySelector("#upload_state").files[0];
- getText(file);
-})
-
-open_setting = function() {
- if (!w2ui.foo) {
- new w2form({
- name: "foo",
- style: "border: 0px; background-color: transparent;",
- fields: [{
- field: "canvas_width",
- type: "int",
- required: true,
- html: {
- label: "Canvas Width"
- }
- },
- {
- field: "canvas_height",
- type: "int",
- required: true,
- html: {
- label: "Canvas Height"
- }
- },
- ],
- record: {
- canvas_width: 1200,
- canvas_height: 600,
- },
- actions: {
- Save() {
- this.validate();
- let record = this.getCleanRecord();
- window.postMessage(["resize",record.canvas_width,record.canvas_height],"*");
- w2popup.close();
- },
- custom: {
- text: "Cancel",
- style: "text-transform: uppercase",
- onClick(event) {
- w2popup.close();
- }
- }
- }
- });
- }
- w2popup.open({
- title: "Form in a Popup",
- body: "",
- style: "padding: 15px 0px 0px 0px",
- width: 500,
- height: 280,
- showMax: true,
- async onToggle(event) {
- await event.complete
- w2ui.foo.resize();
- }
- })
- .then((event) => {
- w2ui.foo.render("#form")
- });
-}
-
-var button_lst=["clear", "load", "save", "export", "upload", "selection", "canvas", "eraser", "outpaint", "accept", "cancel", "retry", "prev", "current", "next", "eraser_size_btn", "eraser_size", "resize_selection", "scale", "zoom_in", "zoom_out", "help"];
-var upload_button_lst=['clear', 'load', 'save', "upload", 'export', 'outpaint', 'resize_selection', 'help', "setting"];
-var resize_button_lst=['clear', 'load', 'save', "upload", 'export', "selection", "canvas", "eraser", 'outpaint', 'resize_selection',"zoom_in", "zoom_out", 'help', "setting"];
-var outpaint_button_lst=['clear', 'load', 'save', "canvas", "eraser", "upload", 'export', 'resize_selection', "zoom_in", "zoom_out",'help', "setting"];
-var outpaint_result_lst=["accept", "cancel", "retry", "prev", "current", "next"];
-var outpaint_result_func_lst=["accept", "retry", "prev", "current", "next"];
-
-function check_button(id,text="",checked=true,tooltip="")
-{
- return { type: "check", id: id, text: text, icon: checked?"fa-solid fa-square-check":"fa-regular fa-square", checked: checked, tooltip: tooltip };
-}
-
-var toolbar=new w2toolbar({
- box: "#toolbar",
- name: "toolbar",
- tooltip: "top",
- items: [
- { type: "button", id: "clear", text: "Reset", tooltip: "Reset Canvas", icon: "fa-solid fa-rectangle-xmark" },
- { type: "break" },
- { type: "button", id: "load", tooltip: "Load Canvas", icon: "fa-solid fa-file-import" },
- { type: "button", id: "save", tooltip: "Save Canvas", icon: "fa-solid fa-file-export" },
- { type: "button", id: "export", tooltip: "Export Image", icon: "fa-solid fa-floppy-disk" },
- { type: "break" },
- { type: "button", id: "upload", text: "Upload Image", icon: "fa-solid fa-upload" },
- { type: "break" },
- { type: "radio", id: "selection", group: "1", tooltip: "Selection", icon: "fa-solid fa-arrows-up-down-left-right", checked: true },
- { type: "radio", id: "canvas", group: "1", tooltip: "Canvas", icon: "fa-solid fa-image" },
- { type: "radio", id: "eraser", group: "1", tooltip: "Eraser", icon: "fa-solid fa-eraser" },
- { type: "break" },
- { type: "button", id: "outpaint", text: "Outpaint", tooltip: "Run Outpainting", icon: "fa-solid fa-brush" },
- { type: "break" },
- { type: "button", id: "accept", text: "Accept", tooltip: "Accept current result", icon: "fa-solid fa-check", hidden: true, disable:true,},
- { type: "button", id: "cancel", text: "Cancel", tooltip: "Cancel current outpainting/error", icon: "fa-solid fa-ban", hidden: true},
- { type: "button", id: "retry", text: "Retry", tooltip: "Retry", icon: "fa-solid fa-rotate", hidden: true, disable:true,},
- { type: "button", id: "prev", tooltip: "Prev Result", icon: "fa-solid fa-caret-left", hidden: true, disable:true,},
- { type: "html", id: "current", hidden: true, disable:true,
- async onRefresh(event) {
- await event.complete
- let fragment = query.html(`
-
-
-
-
-
diff --git a/spaces/blossom618/text_generator/App.py b/spaces/blossom618/text_generator/App.py
deleted file mode 100644
index 51bf358eea0ad0221716c05225ea3f5307cd0f19..0000000000000000000000000000000000000000
--- a/spaces/blossom618/text_generator/App.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-
-generator = pipeline('text-generation', model='gpt2')
-
-def generate(text):
-    result = generator(text)
-    return result[0]['generated_text']
-
-gr.Interface(fn=generate, inputs=gr.inputs.Textbox(), outputs=gr.outputs.Textbox()).launch()
\ No newline at end of file
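The deleted App.py above relies on the legacy `gr.inputs` / `gr.outputs` namespaces, which newer Gradio releases have removed. A minimal sketch of the same app against the current component API (assuming a recent Gradio and transformers install; this is not part of the deleted file):

```python
import gradio as gr
from transformers import pipeline

# GPT-2 text-generation pipeline, as in the deleted app
generator = pipeline("text-generation", model="gpt2")

def generate(text):
    # the pipeline returns a list of dicts with a "generated_text" key
    return generator(text)[0]["generated_text"]

# recent Gradio versions accept component instances directly
gr.Interface(fn=generate, inputs=gr.Textbox(), outputs=gr.Textbox()).launch()
```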
diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py
deleted file mode 100644
index f7e67bcc89dd0c8e50d770e600b55f179fe19588..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Training of the 4 diffusion models described in
-"From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion"
-(paper link).
-"""
-
-from ._explorers import DiffusionExplorer
-
-
-@DiffusionExplorer
-def explorer(launcher):
- launcher.slurm_(gpus=4, partition='learnfair')
-
- launcher.bind_({'solver': 'diffusion/default',
- 'dset': 'internal/music_10k_32khz'})
-
- with launcher.job_array():
- launcher({'filter.use': True, 'filter.idx_band': 0, "processor.use": False, 'processor.power_std': 0.4})
- launcher({'filter.use': True, 'filter.idx_band': 1, "processor.use": False, 'processor.power_std': 0.4})
- launcher({'filter.use': True, 'filter.idx_band': 2, "processor.use": True, 'processor.power_std': 0.4})
- launcher({'filter.use': True, 'filter.idx_band': 3, "processor.use": True, 'processor.power_std': 0.75})
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_patch.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_patch.py
deleted file mode 100644
index 2da70ae34e31dfe1a2ab4d5625a3e2b096aa5c7f..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_patch.py
+++ /dev/null
@@ -1,189 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import contextlib
-from unittest import mock
-import torch
-
-from detectron2.modeling import poolers
-from detectron2.modeling.proposal_generator import rpn
-from detectron2.modeling.roi_heads import keypoint_head, mask_head
-from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers
-
-from .c10 import (
- Caffe2Compatible,
- Caffe2FastRCNNOutputsInference,
- Caffe2KeypointRCNNInference,
- Caffe2MaskRCNNInference,
- Caffe2ROIPooler,
- Caffe2RPN,
- caffe2_fast_rcnn_outputs_inference,
- caffe2_keypoint_rcnn_inference,
- caffe2_mask_rcnn_inference,
-)
-
-
-class GenericMixin(object):
- pass
-
-
-class Caffe2CompatibleConverter(object):
- """
- A GenericUpdater which implements the `create_from` interface by modifying
- the module object and reassigning its class to `replaceCls`.
- """
-
- def __init__(self, replaceCls):
- self.replaceCls = replaceCls
-
- def create_from(self, module):
- # update module's class to the new class
- assert isinstance(module, torch.nn.Module)
- if issubclass(self.replaceCls, GenericMixin):
- # replaceCls should act as mixin, create a new class on-the-fly
- new_class = type(
- "{}MixedWith{}".format(self.replaceCls.__name__, module.__class__.__name__),
- (self.replaceCls, module.__class__),
- {}, # {"new_method": lambda self: ...},
- )
- module.__class__ = new_class
- else:
- # replaceCls is complete class, this allow arbitrary class swap
- module.__class__ = self.replaceCls
-
- # initialize Caffe2Compatible
- if isinstance(module, Caffe2Compatible):
- module.tensor_mode = False
-
- return module
-
-
-def patch(model, target, updater, *args, **kwargs):
- """
- recursively (post-order) update all modules with the target type and its
- subclasses, make an initialization/composition/inheritance/... via the
- updater.create_from.
- """
- for name, module in model.named_children():
- model._modules[name] = patch(module, target, updater, *args, **kwargs)
- if isinstance(model, target):
- return updater.create_from(model, *args, **kwargs)
- return model
-
-
-def patch_generalized_rcnn(model):
- ccc = Caffe2CompatibleConverter
- model = patch(model, rpn.RPN, ccc(Caffe2RPN))
- model = patch(model, poolers.ROIPooler, ccc(Caffe2ROIPooler))
-
- return model
-
-
-@contextlib.contextmanager
-def mock_fastrcnn_outputs_inference(
- tensor_mode, check=True, box_predictor_type=FastRCNNOutputLayers
-):
- with mock.patch.object(
- box_predictor_type,
- "inference",
- autospec=True,
- side_effect=Caffe2FastRCNNOutputsInference(tensor_mode),
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-@contextlib.contextmanager
-def mock_mask_rcnn_inference(tensor_mode, patched_module, check=True):
- with mock.patch(
- "{}.mask_rcnn_inference".format(patched_module), side_effect=Caffe2MaskRCNNInference()
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-@contextlib.contextmanager
-def mock_keypoint_rcnn_inference(tensor_mode, patched_module, use_heatmap_max_keypoint, check=True):
- with mock.patch(
- "{}.keypoint_rcnn_inference".format(patched_module),
- side_effect=Caffe2KeypointRCNNInference(use_heatmap_max_keypoint),
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-class ROIHeadsPatcher:
- def __init__(self, heads, use_heatmap_max_keypoint):
- self.heads = heads
- self.use_heatmap_max_keypoint = use_heatmap_max_keypoint
- self.previous_patched = {}
-
- @contextlib.contextmanager
- def mock_roi_heads(self, tensor_mode=True):
- """
- Patching several inference functions inside ROIHeads and its subclasses
-
- Args:
- tensor_mode (bool): whether the inputs/outputs are caffe2's tensor
- format or not. Default to True.
- """
- # NOTE: this requires the `keypoint_rcnn_inference` and `mask_rcnn_inference`
- # are called inside the same file as BaseXxxHead due to using mock.patch.
- kpt_heads_mod = keypoint_head.BaseKeypointRCNNHead.__module__
- mask_head_mod = mask_head.BaseMaskRCNNHead.__module__
-
- mock_ctx_managers = [
- mock_fastrcnn_outputs_inference(
- tensor_mode=tensor_mode,
- check=True,
- box_predictor_type=type(self.heads.box_predictor),
- )
- ]
- if getattr(self.heads, "keypoint_on", False):
- mock_ctx_managers += [
- mock_keypoint_rcnn_inference(
- tensor_mode, kpt_heads_mod, self.use_heatmap_max_keypoint
- )
- ]
- if getattr(self.heads, "mask_on", False):
- mock_ctx_managers += [mock_mask_rcnn_inference(tensor_mode, mask_head_mod)]
-
- with contextlib.ExitStack() as stack: # python 3.3+
- for mgr in mock_ctx_managers:
- stack.enter_context(mgr)
- yield
-
- def patch_roi_heads(self, tensor_mode=True):
- self.previous_patched["box_predictor"] = self.heads.box_predictor.inference
- self.previous_patched["keypoint_rcnn"] = keypoint_head.keypoint_rcnn_inference
- self.previous_patched["mask_rcnn"] = mask_head.mask_rcnn_inference
-
- def patched_fastrcnn_outputs_inference(predictions, proposal):
- return caffe2_fast_rcnn_outputs_inference(
- True, self.heads.box_predictor, predictions, proposal
- )
-
- self.heads.box_predictor.inference = patched_fastrcnn_outputs_inference
-
- if getattr(self.heads, "keypoint_on", False):
-
- def patched_keypoint_rcnn_inference(pred_keypoint_logits, pred_instances):
- return caffe2_keypoint_rcnn_inference(
- self.use_heatmap_max_keypoint, pred_keypoint_logits, pred_instances
- )
-
- keypoint_head.keypoint_rcnn_inference = patched_keypoint_rcnn_inference
-
- if getattr(self.heads, "mask_on", False):
-
- def patched_mask_rcnn_inference(pred_mask_logits, pred_instances):
- return caffe2_mask_rcnn_inference(pred_mask_logits, pred_instances)
-
- mask_head.mask_rcnn_inference = patched_mask_rcnn_inference
-
- def unpatch_roi_heads(self):
- self.heads.box_predictor.inference = self.previous_patched["box_predictor"]
- keypoint_head.keypoint_rcnn_inference = self.previous_patched["keypoint_rcnn"]
- mask_head.mask_rcnn_inference = self.previous_patched["mask_rcnn"]
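The helpers in the deleted caffe2_patch.py are normally driven from detectron2's Caffe2 export path. A rough usage sketch, where `model` (a GeneralizedRCNN) and `inputs` are placeholders rather than anything taken from the deleted file:

```python
from detectron2.export.caffe2_patch import ROIHeadsPatcher, patch_generalized_rcnn

model = ...   # a detectron2 GeneralizedRCNN instance (placeholder)
inputs = ...  # batched inputs in the format the model expects (placeholder)

# swap RPN / ROIPooler classes for their Caffe2-compatible counterparts (in place)
model = patch_generalized_rcnn(model)

# temporarily replace box/keypoint/mask inference with the Caffe2-style versions
patcher = ROIHeadsPatcher(model.roi_heads, use_heatmap_max_keypoint=False)
with patcher.mock_roi_heads(tensor_mode=True):
    outputs = model(inputs)  # in the real pipeline this happens under tracing/export
```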
diff --git a/spaces/camenduru-com/riffusion-api/README.md b/spaces/camenduru-com/riffusion-api/README.md
deleted file mode 100644
index bd2838089b8594792d6f3f341230a88d2fdddb30..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/riffusion-api/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: Riffusion App API
-emoji: ⚙
-colorFrom: grey
-colorTo: grey
-sdk: docker
-pinned: false
----
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/os.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/os.py
deleted file mode 100644
index 29bc748fa91a6d3de6ec42842416de6af7134f5c..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/os.py
+++ /dev/null
@@ -1,51 +0,0 @@
-"""Async executor versions of file functions from the os module."""
-import os
-
-from . import ospath as path
-from .ospath import wrap
-
-__all__ = [
- "path",
- "stat",
- "statvfs",
- "rename",
- "renames",
- "replace",
- "remove",
- "unlink",
- "mkdir",
- "makedirs",
- "rmdir",
- "removedirs",
- "link",
- "symlink",
- "readlink",
- "listdir",
- "scandir",
- "access",
- "sendfile",
- "wrap",
-]
-
-
-stat = wrap(os.stat)
-rename = wrap(os.rename)
-renames = wrap(os.renames)
-replace = wrap(os.replace)
-remove = wrap(os.remove)
-unlink = wrap(os.unlink)
-mkdir = wrap(os.mkdir)
-makedirs = wrap(os.makedirs)
-rmdir = wrap(os.rmdir)
-removedirs = wrap(os.removedirs)
-link = wrap(os.link)
-symlink = wrap(os.symlink)
-readlink = wrap(os.readlink)
-listdir = wrap(os.listdir)
-scandir = wrap(os.scandir)
-access = wrap(os.access)
-
-if hasattr(os, "sendfile"):
- sendfile = wrap(os.sendfile)
-if hasattr(os, "statvfs"):
- statvfs = wrap(os.statvfs)
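The wrappers in the deleted aiofiles/os.py simply run the blocking `os` calls in the event loop's default executor. A small usage sketch (the file name is illustrative):

```python
import asyncio
import aiofiles.os

async def main():
    info = await aiofiles.os.stat("example.txt")  # os.stat, run in a thread executor
    print(info.st_size)
    names = await aiofiles.os.listdir(".")        # os.listdir, wrapped the same way
    print(names[:5])

asyncio.run(main())
```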
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/doc/DENSEPOSE_IUV.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/doc/DENSEPOSE_IUV.md
deleted file mode 100644
index de158e0eea0c287507b701376abc9307ce92c0f1..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/doc/DENSEPOSE_IUV.md
+++ /dev/null
@@ -1,627 +0,0 @@
-# Chart-based Dense Pose Estimation for Humans and Animals
-
-## Overview
-
-The goal of chart-based DensePose methods is to establish dense correspondences
-between image pixels and 3D object mesh by splitting the latter into charts and estimating
-for each pixel the corresponding chart index `I` and local chart coordinates `(U, V)`.
-
-
-
-
-
-The charts used for human DensePose estimation are shown in Figure 1.
-The human body is split into 24 parts, each part is parametrized by `U` and `V`
-coordinates, each taking values in `[0, 1]`.
-
-
-
-
-
Figure 1. Partitioning and parametrization of human body surface.
-
-The pipeline uses [Faster R-CNN](https://arxiv.org/abs/1506.01497)
-with [Feature Pyramid Network](https://arxiv.org/abs/1612.03144) meta architecture
-outlined in Figure 2. For each detected object, the model predicts
-its coarse segmentation `S` (2 or 15 channels: foreground / background or
-background + 14 predefined body parts), fine segmentation `I` (25 channels:
-background + 24 predefined body parts) and local chart coordinates `U` and `V`.
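To make these outputs concrete, a per-pixel correspondence can be read off the predicted maps roughly as follows. This is a minimal NumPy sketch with assumed array names and shapes, not code from this project:

```python
import numpy as np

# assumed raw head outputs for one detected person, cropped to its box:
#   i_logits: (25, H, W) fine-segmentation scores (background + 24 parts)
#   u_maps, v_maps: (25, H, W) per-part chart coordinates in [0, 1]
H, W = 64, 64
i_logits = np.random.randn(25, H, W)
u_maps = np.random.rand(25, H, W)
v_maps = np.random.rand(25, H, W)

part_index = i_logits.argmax(axis=0)      # I: 0 = background, 1..24 = body part
rows, cols = np.indices((H, W))
u = u_maps[part_index, rows, cols]        # U of the selected chart at each pixel
v = v_maps[part_index, rows, cols]        # V of the selected chart at each pixel
iuv = np.stack([part_index, u, v])        # (3, H, W); pixels with part_index == 0 are background
```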
-
-
-
-
-
Figure 2. DensePose chart-based architecture based on Faster R-CNN with Feature Pyramid Network (FPN).
-
-### Bootstrapping Chart-Based Models
-
-[Sanakoyeu et al., 2020](https://arxiv.org/pdf/2003.00080.pdf) introduced a pipeline
-to transfer DensePose models trained on humans to proximal animal classes (chimpanzees),
-which is summarized in Figure 3. The training proceeds in two stages:
-
-First, a *master* model is trained on data from source domain (humans with full
-DensePose annotation `S`, `I`, `U` and `V`)
-and supporting domain (animals with segmentation annotation only).
-Only selected animal classes are chosen from the supporting
-domain through *category filters* to guarantee the quality of target domain results.
-The training is done in a *class-agnostic manner*: all selected categories are mapped
-to a single category (human).
-
-Second, a *student* model is trained on data from source and supporting domains,
-as well as data from target domain obtained by applying the master model, selecting
-high-confidence detections and sampling the results.
-
-
-
-
-
Figure 3. Domain adaptation: master model is trained on data from source and
-supporting domains to produce predictions in target domain; student model combines data from source and
-supporting domains, as well as sampled predictions from the master model on target domain to improve
-target domain predictions quality.
-
-Examples of pretrained master and student models are available in the [Model Zoo](#ModelZooBootstrap).
-For more details on the bootstrapping pipeline, please see [Bootstrapping Pipeline](BOOTSTRAPPING_PIPELINE.md).
-
-### Datasets
-
-For more details on datasets used for chart-based model training and validation,
-please refer to the [DensePose Datasets](DENSEPOSE_DATASETS.md) page.
-
-## Model Zoo and Baselines
-
-### Legacy Models
-
-Baselines trained using schedules from [Güler et al, 2018](https://arxiv.org/pdf/1802.00434.pdf)
-
-
-
-### Improved Baselines, Original Fully Convolutional Head
-
-These models use an improved training schedule and Panoptic FPN head from [Kirillov et al, 2019](https://arxiv.org/abs/1901.02446).
-
-
-
-Acronyms:
-
-`WC1M`: with confidence estimation model type 1 for `U` and `V` and mask confidence estimation
-
-`WC2M`: with confidence estimation model type 2 for `U` and `V` and mask confidence estimation
-
-### Bootstrapping Baselines
-
-Master and student models trained using the bootstrapping pipeline with chimpanzee as the target category,
-see [Sanakoyeu et al., 2020](https://arxiv.org/pdf/2003.00080.pdf)
-and [Bootstrapping Pipeline](BOOTSTRAPPING_PIPELINE.md) for details.
-Evaluation is performed on [DensePose Chimps](DENSEPOSE_DATASETS.md#densepose-chimps) dataset.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Kvisoft FlipBook Maker Enterprise v4.0 The Ultimate Page Turning Software for PDF Word Excel and More.md b/spaces/cihyFjudo/fairness-paper-search/Kvisoft FlipBook Maker Enterprise v4.0 The Ultimate Page Turning Software for PDF Word Excel and More.md
deleted file mode 100644
index 71101c800b567ab75cda4774d6a752038b804fba..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Kvisoft FlipBook Maker Enterprise v4.0 The Ultimate Page Turning Software for PDF Word Excel and More.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Tamilnadu Dr Mgr Medical University Digital Library The Future of Digital Scholarship and Innovation in Medicine.md b/spaces/cihyFjudo/fairness-paper-search/Tamilnadu Dr Mgr Medical University Digital Library The Future of Digital Scholarship and Innovation in Medicine.md
deleted file mode 100644
index 1ce860ef68ad6649f4ce42bafa6a1971250bdb4a..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Tamilnadu Dr Mgr Medical University Digital Library The Future of Digital Scholarship and Innovation in Medicine.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Tamilnadu Dr Mgr Medical University Digital Library
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/converters.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/converters.py
deleted file mode 100644
index 4cada106b01c564faf17969d24038f80abd5de6f..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/converters.py
+++ /dev/null
@@ -1,144 +0,0 @@
-# SPDX-License-Identifier: MIT
-
-"""
-Commonly useful converters.
-"""
-
-
-import typing
-
-from ._compat import _AnnotationExtractor
-from ._make import NOTHING, Factory, pipe
-
-
-__all__ = [
- "default_if_none",
- "optional",
- "pipe",
- "to_bool",
-]
-
-
-def optional(converter):
- """
- A converter that allows an attribute to be optional. An optional attribute
- is one which can be set to ``None``.
-
- Type annotations will be inferred from the wrapped converter's, if it
- has any.
-
- :param callable converter: the converter that is used for non-``None``
- values.
-
- .. versionadded:: 17.1.0
- """
-
- def optional_converter(val):
- if val is None:
- return None
- return converter(val)
-
- xtr = _AnnotationExtractor(converter)
-
- t = xtr.get_first_param_type()
- if t:
- optional_converter.__annotations__["val"] = typing.Optional[t]
-
- rt = xtr.get_return_type()
- if rt:
- optional_converter.__annotations__["return"] = typing.Optional[rt]
-
- return optional_converter
-
-
-def default_if_none(default=NOTHING, factory=None):
- """
- A converter that allows to replace ``None`` values by *default* or the
- result of *factory*.
-
- :param default: Value to be used if ``None`` is passed. Passing an instance
- of `attrs.Factory` is supported, however the ``takes_self`` option
- is *not*.
- :param callable factory: A callable that takes no parameters whose result
- is used if ``None`` is passed.
-
- :raises TypeError: If **neither** *default* or *factory* is passed.
- :raises TypeError: If **both** *default* and *factory* are passed.
- :raises ValueError: If an instance of `attrs.Factory` is passed with
- ``takes_self=True``.
-
- .. versionadded:: 18.2.0
- """
- if default is NOTHING and factory is None:
- raise TypeError("Must pass either `default` or `factory`.")
-
- if default is not NOTHING and factory is not None:
- raise TypeError(
- "Must pass either `default` or `factory` but not both."
- )
-
- if factory is not None:
- default = Factory(factory)
-
- if isinstance(default, Factory):
- if default.takes_self:
- raise ValueError(
- "`takes_self` is not supported by default_if_none."
- )
-
- def default_if_none_converter(val):
- if val is not None:
- return val
-
- return default.factory()
-
- else:
-
- def default_if_none_converter(val):
- if val is not None:
- return val
-
- return default
-
- return default_if_none_converter
-
-
-def to_bool(val):
- """
- Convert "boolean" strings (e.g., from env. vars.) to real booleans.
-
- Values mapping to :code:`True`:
-
- - :code:`True`
- - :code:`"true"` / :code:`"t"`
- - :code:`"yes"` / :code:`"y"`
- - :code:`"on"`
- - :code:`"1"`
- - :code:`1`
-
- Values mapping to :code:`False`:
-
- - :code:`False`
- - :code:`"false"` / :code:`"f"`
- - :code:`"no"` / :code:`"n"`
- - :code:`"off"`
- - :code:`"0"`
- - :code:`0`
-
- :raises ValueError: for any other value.
-
- .. versionadded:: 21.3.0
- """
- if isinstance(val, str):
- val = val.lower()
- truthy = {True, "true", "t", "yes", "y", "on", "1", 1}
- falsy = {False, "false", "f", "no", "n", "off", "0", 0}
- try:
- if val in truthy:
- return True
- if val in falsy:
- return False
- except TypeError:
- # Raised when "val" is not hashable (e.g., lists)
- pass
- raise ValueError(f"Cannot convert value to bool: {val}")
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_av1.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_av1.c
deleted file mode 100644
index 228f72ba18e112fa2fe9b8cd7813366be96b02ea..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_av1.c
+++ /dev/null
@@ -1,508 +0,0 @@
-/*
- * DXVA2 AV1 HW acceleration.
- *
- * copyright (c) 2020 Hendrik Leppkes
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "config_components.h"
-
-#include "libavutil/avassert.h"
-#include "libavutil/pixdesc.h"
-
-#include "dxva2_internal.h"
-#include "av1dec.h"
-
-#define MAX_TILES 256
-
-struct AV1DXVAContext {
- FFDXVASharedContext shared;
-
- unsigned int bitstream_allocated;
- uint8_t *bitstream_cache;
-};
-
-struct av1_dxva2_picture_context {
- DXVA_PicParams_AV1 pp;
- unsigned tile_count;
- DXVA_Tile_AV1 tiles[MAX_TILES];
- uint8_t *bitstream;
- unsigned bitstream_size;
-};
-
-static int get_bit_depth_from_seq(const AV1RawSequenceHeader *seq)
-{
- if (seq->seq_profile == 2 && seq->color_config.high_bitdepth)
- return seq->color_config.twelve_bit ? 12 : 10;
- else if (seq->seq_profile <= 2 && seq->color_config.high_bitdepth)
- return 10;
- else
- return 8;
-}
-
-static int fill_picture_parameters(const AVCodecContext *avctx, AVDXVAContext *ctx, const AV1DecContext *h,
- DXVA_PicParams_AV1 *pp)
-{
- int i,j, uses_lr;
- const AV1RawSequenceHeader *seq = h->raw_seq;
- const AV1RawFrameHeader *frame_header = h->raw_frame_header;
- const AV1RawFilmGrainParams *film_grain = &h->cur_frame.film_grain;
-
- unsigned char remap_lr_type[4] = { AV1_RESTORE_NONE, AV1_RESTORE_SWITCHABLE, AV1_RESTORE_WIENER, AV1_RESTORE_SGRPROJ };
- int apply_grain = !(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN) && film_grain->apply_grain;
-
- memset(pp, 0, sizeof(*pp));
-
- pp->width = avctx->width;
- pp->height = avctx->height;
-
- pp->max_width = seq->max_frame_width_minus_1 + 1;
- pp->max_height = seq->max_frame_height_minus_1 + 1;
-
- pp->CurrPicTextureIndex = ff_dxva2_get_surface_index(avctx, ctx, h->cur_frame.f);
- pp->superres_denom = frame_header->use_superres ? frame_header->coded_denom + AV1_SUPERRES_DENOM_MIN : AV1_SUPERRES_NUM;
- pp->bitdepth = get_bit_depth_from_seq(seq);
- pp->seq_profile = seq->seq_profile;
-
- /* Tiling info */
- pp->tiles.cols = frame_header->tile_cols;
- pp->tiles.rows = frame_header->tile_rows;
- pp->tiles.context_update_id = frame_header->context_update_tile_id;
-
- for (i = 0; i < pp->tiles.cols; i++)
- pp->tiles.widths[i] = frame_header->width_in_sbs_minus_1[i] + 1;
-
- for (i = 0; i < pp->tiles.rows; i++)
- pp->tiles.heights[i] = frame_header->height_in_sbs_minus_1[i] + 1;
-
- /* Coding tools */
- pp->coding.use_128x128_superblock = seq->use_128x128_superblock;
- pp->coding.intra_edge_filter = seq->enable_intra_edge_filter;
- pp->coding.interintra_compound = seq->enable_interintra_compound;
- pp->coding.masked_compound = seq->enable_masked_compound;
- pp->coding.warped_motion = frame_header->allow_warped_motion;
- pp->coding.dual_filter = seq->enable_dual_filter;
- pp->coding.jnt_comp = seq->enable_jnt_comp;
- pp->coding.screen_content_tools = frame_header->allow_screen_content_tools;
- pp->coding.integer_mv = frame_header->force_integer_mv || !(frame_header->frame_type & 1);
- pp->coding.cdef = seq->enable_cdef;
- pp->coding.restoration = seq->enable_restoration;
- pp->coding.film_grain = seq->film_grain_params_present && !(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN);
- pp->coding.intrabc = frame_header->allow_intrabc;
- pp->coding.high_precision_mv = frame_header->allow_high_precision_mv;
- pp->coding.switchable_motion_mode = frame_header->is_motion_mode_switchable;
- pp->coding.filter_intra = seq->enable_filter_intra;
- pp->coding.disable_frame_end_update_cdf = frame_header->disable_frame_end_update_cdf;
- pp->coding.disable_cdf_update = frame_header->disable_cdf_update;
- pp->coding.reference_mode = frame_header->reference_select;
- pp->coding.skip_mode = frame_header->skip_mode_present;
- pp->coding.reduced_tx_set = frame_header->reduced_tx_set;
- pp->coding.superres = frame_header->use_superres;
- pp->coding.tx_mode = frame_header->tx_mode;
- pp->coding.use_ref_frame_mvs = frame_header->use_ref_frame_mvs;
- pp->coding.enable_ref_frame_mvs = seq->enable_ref_frame_mvs;
- pp->coding.reference_frame_update = 1; // 0 for show_existing_frame with key frames, but those are not passed to the hwaccel
-
- /* Format & Picture Info flags */
- pp->format.frame_type = frame_header->frame_type;
- pp->format.show_frame = frame_header->show_frame;
- pp->format.showable_frame = frame_header->showable_frame;
- pp->format.subsampling_x = seq->color_config.subsampling_x;
- pp->format.subsampling_y = seq->color_config.subsampling_y;
- pp->format.mono_chrome = seq->color_config.mono_chrome;
-
- /* References */
- pp->primary_ref_frame = frame_header->primary_ref_frame;
- pp->order_hint = frame_header->order_hint;
- pp->order_hint_bits = seq->enable_order_hint ? seq->order_hint_bits_minus_1 + 1 : 0;
-
- memset(pp->RefFrameMapTextureIndex, 0xFF, sizeof(pp->RefFrameMapTextureIndex));
- for (i = 0; i < AV1_REFS_PER_FRAME; i++) {
- int8_t ref_idx = frame_header->ref_frame_idx[i];
- AVFrame *ref_frame = h->ref[ref_idx].f;
-
- pp->frame_refs[i].width = ref_frame->width;
- pp->frame_refs[i].height = ref_frame->height;
- pp->frame_refs[i].Index = ref_frame->buf[0] ? ref_idx : 0xFF;
-
- /* Global Motion */
- pp->frame_refs[i].wminvalid = h->cur_frame.gm_invalid[AV1_REF_FRAME_LAST + i];
- pp->frame_refs[i].wmtype = h->cur_frame.gm_type[AV1_REF_FRAME_LAST + i];
- for (j = 0; j < 6; ++j) {
- pp->frame_refs[i].wmmat[j] = h->cur_frame.gm_params[AV1_REF_FRAME_LAST + i][j];
- }
- }
- for (i = 0; i < AV1_NUM_REF_FRAMES; i++) {
- AVFrame *ref_frame = h->ref[i].f;
- if (ref_frame->buf[0])
- pp->RefFrameMapTextureIndex[i] = ff_dxva2_get_surface_index(avctx, ctx, ref_frame);
- }
-
- /* Loop filter parameters */
- pp->loop_filter.filter_level[0] = frame_header->loop_filter_level[0];
- pp->loop_filter.filter_level[1] = frame_header->loop_filter_level[1];
- pp->loop_filter.filter_level_u = frame_header->loop_filter_level[2];
- pp->loop_filter.filter_level_v = frame_header->loop_filter_level[3];
- pp->loop_filter.sharpness_level = frame_header->loop_filter_sharpness;
- pp->loop_filter.mode_ref_delta_enabled = frame_header->loop_filter_delta_enabled;
- pp->loop_filter.mode_ref_delta_update = frame_header->loop_filter_delta_update;
- pp->loop_filter.delta_lf_multi = frame_header->delta_lf_multi;
- pp->loop_filter.delta_lf_present = frame_header->delta_lf_present;
- pp->loop_filter.delta_lf_res = frame_header->delta_lf_res;
-
- for (i = 0; i < AV1_TOTAL_REFS_PER_FRAME; i++) {
- pp->loop_filter.ref_deltas[i] = frame_header->loop_filter_ref_deltas[i];
- }
-
- pp->loop_filter.mode_deltas[0] = frame_header->loop_filter_mode_deltas[0];
- pp->loop_filter.mode_deltas[1] = frame_header->loop_filter_mode_deltas[1];
- pp->loop_filter.frame_restoration_type[0] = remap_lr_type[frame_header->lr_type[0]];
- pp->loop_filter.frame_restoration_type[1] = remap_lr_type[frame_header->lr_type[1]];
- pp->loop_filter.frame_restoration_type[2] = remap_lr_type[frame_header->lr_type[2]];
- uses_lr = frame_header->lr_type[0] || frame_header->lr_type[1] || frame_header->lr_type[2];
- pp->loop_filter.log2_restoration_unit_size[0] = uses_lr ? (6 + frame_header->lr_unit_shift) : 8;
- pp->loop_filter.log2_restoration_unit_size[1] = uses_lr ? (6 + frame_header->lr_unit_shift - frame_header->lr_uv_shift) : 8;
- pp->loop_filter.log2_restoration_unit_size[2] = uses_lr ? (6 + frame_header->lr_unit_shift - frame_header->lr_uv_shift) : 8;
-
- /* Quantization */
- pp->quantization.delta_q_present = frame_header->delta_q_present;
- pp->quantization.delta_q_res = frame_header->delta_q_res;
- pp->quantization.base_qindex = frame_header->base_q_idx;
- pp->quantization.y_dc_delta_q = frame_header->delta_q_y_dc;
- pp->quantization.u_dc_delta_q = frame_header->delta_q_u_dc;
- pp->quantization.v_dc_delta_q = frame_header->delta_q_v_dc;
- pp->quantization.u_ac_delta_q = frame_header->delta_q_u_ac;
- pp->quantization.v_ac_delta_q = frame_header->delta_q_v_ac;
- pp->quantization.qm_y = frame_header->using_qmatrix ? frame_header->qm_y : 0xFF;
- pp->quantization.qm_u = frame_header->using_qmatrix ? frame_header->qm_u : 0xFF;
- pp->quantization.qm_v = frame_header->using_qmatrix ? frame_header->qm_v : 0xFF;
-
- /* Cdef parameters */
- pp->cdef.damping = frame_header->cdef_damping_minus_3;
- pp->cdef.bits = frame_header->cdef_bits;
- for (i = 0; i < 8; i++) {
- pp->cdef.y_strengths[i].primary = frame_header->cdef_y_pri_strength[i];
- pp->cdef.y_strengths[i].secondary = frame_header->cdef_y_sec_strength[i];
- pp->cdef.uv_strengths[i].primary = frame_header->cdef_uv_pri_strength[i];
- pp->cdef.uv_strengths[i].secondary = frame_header->cdef_uv_sec_strength[i];
- }
-
- /* Misc flags */
- pp->interp_filter = frame_header->interpolation_filter;
-
- /* Segmentation */
- pp->segmentation.enabled = frame_header->segmentation_enabled;
- pp->segmentation.update_map = frame_header->segmentation_update_map;
- pp->segmentation.update_data = frame_header->segmentation_update_data;
- pp->segmentation.temporal_update = frame_header->segmentation_temporal_update;
- for (i = 0; i < AV1_MAX_SEGMENTS; i++) {
- for (j = 0; j < AV1_SEG_LVL_MAX; j++) {
- pp->segmentation.feature_mask[i].mask |= frame_header->feature_enabled[i][j] << j;
- pp->segmentation.feature_data[i][j] = frame_header->feature_value[i][j];
- }
- }
-
- /* Film grain */
- if (apply_grain) {
- pp->film_grain.apply_grain = 1;
- pp->film_grain.scaling_shift_minus8 = film_grain->grain_scaling_minus_8;
- pp->film_grain.chroma_scaling_from_luma = film_grain->chroma_scaling_from_luma;
- pp->film_grain.ar_coeff_lag = film_grain->ar_coeff_lag;
- pp->film_grain.ar_coeff_shift_minus6 = film_grain->ar_coeff_shift_minus_6;
- pp->film_grain.grain_scale_shift = film_grain->grain_scale_shift;
- pp->film_grain.overlap_flag = film_grain->overlap_flag;
- pp->film_grain.clip_to_restricted_range = film_grain->clip_to_restricted_range;
- pp->film_grain.matrix_coeff_is_identity = (seq->color_config.matrix_coefficients == AVCOL_SPC_RGB);
-
- pp->film_grain.grain_seed = film_grain->grain_seed;
- pp->film_grain.num_y_points = film_grain->num_y_points;
- for (i = 0; i < film_grain->num_y_points; i++) {
- pp->film_grain.scaling_points_y[i][0] = film_grain->point_y_value[i];
- pp->film_grain.scaling_points_y[i][1] = film_grain->point_y_scaling[i];
- }
- pp->film_grain.num_cb_points = film_grain->num_cb_points;
- for (i = 0; i < film_grain->num_cb_points; i++) {
- pp->film_grain.scaling_points_cb[i][0] = film_grain->point_cb_value[i];
- pp->film_grain.scaling_points_cb[i][1] = film_grain->point_cb_scaling[i];
- }
- pp->film_grain.num_cr_points = film_grain->num_cr_points;
- for (i = 0; i < film_grain->num_cr_points; i++) {
- pp->film_grain.scaling_points_cr[i][0] = film_grain->point_cr_value[i];
- pp->film_grain.scaling_points_cr[i][1] = film_grain->point_cr_scaling[i];
- }
- for (i = 0; i < 24; i++) {
- pp->film_grain.ar_coeffs_y[i] = film_grain->ar_coeffs_y_plus_128[i];
- }
- for (i = 0; i < 25; i++) {
- pp->film_grain.ar_coeffs_cb[i] = film_grain->ar_coeffs_cb_plus_128[i];
- pp->film_grain.ar_coeffs_cr[i] = film_grain->ar_coeffs_cr_plus_128[i];
- }
- pp->film_grain.cb_mult = film_grain->cb_mult;
- pp->film_grain.cb_luma_mult = film_grain->cb_luma_mult;
- pp->film_grain.cr_mult = film_grain->cr_mult;
- pp->film_grain.cr_luma_mult = film_grain->cr_luma_mult;
- pp->film_grain.cb_offset = film_grain->cb_offset;
- pp->film_grain.cr_offset = film_grain->cr_offset;
- }
-
- // XXX: Setting the StatusReportFeedbackNumber breaks decoding on some drivers (tested on NVIDIA 457.09)
- // Status Reporting is not used by FFmpeg, hence not providing a number does not cause any issues
- //pp->StatusReportFeedbackNumber = 1 + DXVA_CONTEXT_REPORT_ID(avctx, ctx)++;
- return 0;
-}
-
-static int dxva2_av1_start_frame(AVCodecContext *avctx,
- av_unused const uint8_t *buffer,
- av_unused uint32_t size)
-{
- const AV1DecContext *h = avctx->priv_data;
- AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
- struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
-
- if (!DXVA_CONTEXT_VALID(avctx, ctx))
- return -1;
- av_assert0(ctx_pic);
-
- /* Fill up DXVA_PicParams_AV1 */
- if (fill_picture_parameters(avctx, ctx, h, &ctx_pic->pp) < 0)
- return -1;
-
- ctx_pic->bitstream_size = 0;
- ctx_pic->bitstream = NULL;
- return 0;
-}
-
-static int dxva2_av1_decode_slice(AVCodecContext *avctx,
- const uint8_t *buffer,
- uint32_t size)
-{
- const AV1DecContext *h = avctx->priv_data;
- const AV1RawFrameHeader *frame_header = h->raw_frame_header;
- struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
- struct AV1DXVAContext *ctx = avctx->internal->hwaccel_priv_data;
- void *tmp;
-
- ctx_pic->tile_count = frame_header->tile_cols * frame_header->tile_rows;
-
- /* too many tiles, exceeding all defined levels in the AV1 spec */
- if (ctx_pic->tile_count > MAX_TILES)
- return AVERROR(ENOSYS);
-
- /* Shortcut if all tiles are in the same buffer */
- if (ctx_pic->tile_count == h->tg_end - h->tg_start + 1) {
- ctx_pic->bitstream = (uint8_t *)buffer;
- ctx_pic->bitstream_size = size;
-
- for (uint32_t tile_num = 0; tile_num < ctx_pic->tile_count; tile_num++) {
- ctx_pic->tiles[tile_num].DataOffset = h->tile_group_info[tile_num].tile_offset;
- ctx_pic->tiles[tile_num].DataSize = h->tile_group_info[tile_num].tile_size;
- ctx_pic->tiles[tile_num].row = h->tile_group_info[tile_num].tile_row;
- ctx_pic->tiles[tile_num].column = h->tile_group_info[tile_num].tile_column;
- ctx_pic->tiles[tile_num].anchor_frame = 0xFF;
- }
-
- return 0;
- }
-
- /* allocate an internal buffer */
- tmp = av_fast_realloc(ctx->bitstream_cache, &ctx->bitstream_allocated,
- ctx_pic->bitstream_size + size);
- if (!tmp) {
- return AVERROR(ENOMEM);
- }
- ctx_pic->bitstream = ctx->bitstream_cache = tmp;
-
- memcpy(ctx_pic->bitstream + ctx_pic->bitstream_size, buffer, size);
-
- for (uint32_t tile_num = h->tg_start; tile_num <= h->tg_end; tile_num++) {
- ctx_pic->tiles[tile_num].DataOffset = ctx_pic->bitstream_size + h->tile_group_info[tile_num].tile_offset;
- ctx_pic->tiles[tile_num].DataSize = h->tile_group_info[tile_num].tile_size;
- ctx_pic->tiles[tile_num].row = h->tile_group_info[tile_num].tile_row;
- ctx_pic->tiles[tile_num].column = h->tile_group_info[tile_num].tile_column;
- ctx_pic->tiles[tile_num].anchor_frame = 0xFF;
- }
-
- ctx_pic->bitstream_size += size;
-
- return 0;
-}
-
-static int commit_bitstream_and_slice_buffer(AVCodecContext *avctx,
- DECODER_BUFFER_DESC *bs,
- DECODER_BUFFER_DESC *sc)
-{
- const AV1DecContext *h = avctx->priv_data;
- AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
- struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
- void *dxva_data_ptr;
- uint8_t *dxva_data;
- unsigned dxva_size;
- unsigned padding;
- unsigned type;
-
-#if CONFIG_D3D11VA
- if (ff_dxva2_is_d3d11(avctx)) {
- type = D3D11_VIDEO_DECODER_BUFFER_BITSTREAM;
- if (FAILED(ID3D11VideoContext_GetDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context,
- D3D11VA_CONTEXT(ctx)->decoder,
- type,
- &dxva_size, &dxva_data_ptr)))
- return -1;
- }
-#endif
-#if CONFIG_DXVA2
- if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) {
- type = DXVA2_BitStreamDateBufferType;
- if (FAILED(IDirectXVideoDecoder_GetBuffer(DXVA2_CONTEXT(ctx)->decoder,
- type,
- &dxva_data_ptr, &dxva_size)))
- return -1;
- }
-#endif
-
- dxva_data = dxva_data_ptr;
-
- if (ctx_pic->bitstream_size > dxva_size) {
- av_log(avctx, AV_LOG_ERROR, "Bitstream size exceeds hardware buffer");
- return -1;
- }
-
- memcpy(dxva_data, ctx_pic->bitstream, ctx_pic->bitstream_size);
-
- padding = FFMIN(128 - ((ctx_pic->bitstream_size) & 127), dxva_size - ctx_pic->bitstream_size);
- if (padding > 0) {
- memset(dxva_data + ctx_pic->bitstream_size, 0, padding);
- ctx_pic->bitstream_size += padding;
- }
-
-#if CONFIG_D3D11VA
- if (ff_dxva2_is_d3d11(avctx))
- if (FAILED(ID3D11VideoContext_ReleaseDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context, D3D11VA_CONTEXT(ctx)->decoder, type)))
- return -1;
-#endif
-#if CONFIG_DXVA2
- if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD)
- if (FAILED(IDirectXVideoDecoder_ReleaseBuffer(DXVA2_CONTEXT(ctx)->decoder, type)))
- return -1;
-#endif
-
-#if CONFIG_D3D11VA
- if (ff_dxva2_is_d3d11(avctx)) {
- D3D11_VIDEO_DECODER_BUFFER_DESC *dsc11 = bs;
- memset(dsc11, 0, sizeof(*dsc11));
- dsc11->BufferType = type;
- dsc11->DataSize = ctx_pic->bitstream_size;
- dsc11->NumMBsInBuffer = 0;
-
- type = D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL;
- }
-#endif
-#if CONFIG_DXVA2
- if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) {
- DXVA2_DecodeBufferDesc *dsc2 = bs;
- memset(dsc2, 0, sizeof(*dsc2));
- dsc2->CompressedBufferType = type;
- dsc2->DataSize = ctx_pic->bitstream_size;
- dsc2->NumMBsInBuffer = 0;
-
- type = DXVA2_SliceControlBufferType;
- }
-#endif
-
- return ff_dxva2_commit_buffer(avctx, ctx, sc, type,
- ctx_pic->tiles, sizeof(*ctx_pic->tiles) * ctx_pic->tile_count, 0);
-}
-
-static int dxva2_av1_end_frame(AVCodecContext *avctx)
-{
- const AV1DecContext *h = avctx->priv_data;
- struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
- int ret;
-
- if (ctx_pic->bitstream_size <= 0)
- return -1;
-
- ret = ff_dxva2_common_end_frame(avctx, h->cur_frame.f,
- &ctx_pic->pp, sizeof(ctx_pic->pp),
- NULL, 0,
- commit_bitstream_and_slice_buffer);
-
- return ret;
-}
-
-static int dxva2_av1_uninit(AVCodecContext *avctx)
-{
- struct AV1DXVAContext *ctx = avctx->internal->hwaccel_priv_data;
-
- av_freep(&ctx->bitstream_cache);
- ctx->bitstream_allocated = 0;
-
- return ff_dxva2_decode_uninit(avctx);
-}
-
-#if CONFIG_AV1_DXVA2_HWACCEL
-const AVHWAccel ff_av1_dxva2_hwaccel = {
- .name = "av1_dxva2",
- .type = AVMEDIA_TYPE_VIDEO,
- .id = AV_CODEC_ID_AV1,
- .pix_fmt = AV_PIX_FMT_DXVA2_VLD,
- .init = ff_dxva2_decode_init,
- .uninit = dxva2_av1_uninit,
- .start_frame = dxva2_av1_start_frame,
- .decode_slice = dxva2_av1_decode_slice,
- .end_frame = dxva2_av1_end_frame,
- .frame_params = ff_dxva2_common_frame_params,
- .frame_priv_data_size = sizeof(struct av1_dxva2_picture_context),
- .priv_data_size = sizeof(struct AV1DXVAContext),
-};
-#endif
-
-#if CONFIG_AV1_D3D11VA_HWACCEL
-const AVHWAccel ff_av1_d3d11va_hwaccel = {
- .name = "av1_d3d11va",
- .type = AVMEDIA_TYPE_VIDEO,
- .id = AV_CODEC_ID_AV1,
- .pix_fmt = AV_PIX_FMT_D3D11VA_VLD,
- .init = ff_dxva2_decode_init,
- .uninit = dxva2_av1_uninit,
- .start_frame = dxva2_av1_start_frame,
- .decode_slice = dxva2_av1_decode_slice,
- .end_frame = dxva2_av1_end_frame,
- .frame_params = ff_dxva2_common_frame_params,
- .frame_priv_data_size = sizeof(struct av1_dxva2_picture_context),
- .priv_data_size = sizeof(struct AV1DXVAContext),
-};
-#endif
-
-#if CONFIG_AV1_D3D11VA2_HWACCEL
-const AVHWAccel ff_av1_d3d11va2_hwaccel = {
- .name = "av1_d3d11va2",
- .type = AVMEDIA_TYPE_VIDEO,
- .id = AV_CODEC_ID_AV1,
- .pix_fmt = AV_PIX_FMT_D3D11,
- .init = ff_dxva2_decode_init,
- .uninit = dxva2_av1_uninit,
- .start_frame = dxva2_av1_start_frame,
- .decode_slice = dxva2_av1_decode_slice,
- .end_frame = dxva2_av1_end_frame,
- .frame_params = ff_dxva2_common_frame_params,
- .frame_priv_data_size = sizeof(struct av1_dxva2_picture_context),
- .priv_data_size = sizeof(struct AV1DXVAContext),
-};
-#endif
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ftr.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ftr.c
deleted file mode 100644
index 74a2c10b5c89ae6a9f4b15902ad7c747e6badbf2..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ftr.c
+++ /dev/null
@@ -1,208 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "adts_header.h"
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "get_bits.h"
-#include "decode.h"
-
-typedef struct FTRContext {
- AVCodecContext *aac_avctx[64]; // wrapper context for AAC
- int nb_context;
- AVPacket *packet;
- AVFrame *frame;
-} FTRContext;
-
-static av_cold int ftr_init(AVCodecContext *avctx)
-{
- FTRContext *s = avctx->priv_data;
- const AVCodec *codec;
- int ret;
-
- if (avctx->ch_layout.nb_channels > 64 ||
- avctx->ch_layout.nb_channels <= 0)
- return AVERROR(EINVAL);
-
- s->packet = av_packet_alloc();
- if (!s->packet)
- return AVERROR(ENOMEM);
-
- s->frame = av_frame_alloc();
- if (!s->frame)
- return AVERROR(ENOMEM);
-
- s->nb_context = avctx->ch_layout.nb_channels;
-
- codec = avcodec_find_decoder(AV_CODEC_ID_AAC);
- if (!codec)
- return AVERROR_BUG;
-
- for (int i = 0; i < s->nb_context; i++) {
- s->aac_avctx[i] = avcodec_alloc_context3(codec);
- if (!s->aac_avctx[i])
- return AVERROR(ENOMEM);
- ret = avcodec_open2(s->aac_avctx[i], codec, NULL);
- if (ret < 0)
- return ret;
- }
-
- avctx->sample_fmt = s->aac_avctx[0]->sample_fmt;
- if (!av_sample_fmt_is_planar(avctx->sample_fmt))
- return AVERROR(EINVAL);
-
- return 0;
-}
-
-static int ftr_decode_frame(AVCodecContext *avctx, AVFrame *frame,
- int *got_frame, AVPacket *avpkt)
-{
- FTRContext *s = avctx->priv_data;
- GetBitContext gb;
- int ret, ch_offset = 0;
-
- ret = init_get_bits8(&gb, avpkt->data, avpkt->size);
- if (ret < 0)
- return ret;
-
- frame->nb_samples = 0;
-
- for (int i = 0; i < s->nb_context; i++) {
- AVCodecContext *codec_avctx = s->aac_avctx[i];
- GetBitContext gb2 = gb;
- AACADTSHeaderInfo hdr_info;
- int size;
-
- if (get_bits_left(&gb) < 64)
- return AVERROR_INVALIDDATA;
-
- memset(&hdr_info, 0, sizeof(hdr_info));
-
- size = ff_adts_header_parse(&gb2, &hdr_info);
- if (size <= 0 || size * 8 > get_bits_left(&gb))
- return AVERROR_INVALIDDATA;
-
- if (size > s->packet->size) {
- ret = av_grow_packet(s->packet, size - s->packet->size);
- if (ret < 0)
- return ret;
- }
-
- ret = av_packet_make_writable(s->packet);
- if (ret < 0)
- return ret;
-
- memcpy(s->packet->data, avpkt->data + (get_bits_count(&gb) >> 3), size);
- s->packet->size = size;
-
- if (size > 12) {
- uint8_t *buf = s->packet->data;
-
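- /* When this flag bit is set, several bytes near the start of the frame are stored
-  * inverted/swapped; undo that so the wrapped AAC decoder sees the plain data. */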
- if (buf[3] & 0x20) {
- int tmp = buf[8];
- buf[ 9] = ~buf[9];
- buf[11] = ~buf[11];
- buf[12] = ~buf[12];
- buf[ 8] = ~buf[10];
- buf[10] = ~tmp;
- }
- }
-
- ret = avcodec_send_packet(codec_avctx, s->packet);
- if (ret < 0) {
- av_log(avctx, AV_LOG_ERROR, "Error submitting a packet for decoding\n");
- return ret;
- }
-
- ret = avcodec_receive_frame(codec_avctx, s->frame);
- if (ret < 0)
- return ret;
-
- if (!avctx->sample_rate) {
- avctx->sample_rate = codec_avctx->sample_rate;
- } else {
- if (avctx->sample_rate != codec_avctx->sample_rate)
- return AVERROR_INVALIDDATA;
- }
-
- if (!frame->nb_samples) {
- frame->nb_samples = s->frame->nb_samples;
- if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
- return ret;
- } else {
- if (frame->nb_samples != s->frame->nb_samples)
- return AVERROR_INVALIDDATA;
- }
-
- skip_bits_long(&gb, size * 8);
-
- if (ch_offset + s->frame->ch_layout.nb_channels > avctx->ch_layout.nb_channels)
- return AVERROR_INVALIDDATA;
-
- if (avctx->sample_fmt != codec_avctx->sample_fmt)
- return AVERROR_INVALIDDATA;
-
- for (int ch = 0; ch < s->frame->ch_layout.nb_channels; ch++)
- memcpy(frame->extended_data[ch_offset + ch],
- s->frame->extended_data[ch],
- av_get_bytes_per_sample(codec_avctx->sample_fmt) * s->frame->nb_samples);
-
- ch_offset += s->frame->ch_layout.nb_channels;
-
- if (ch_offset >= avctx->ch_layout.nb_channels)
- break;
- }
-
- *got_frame = 1;
-
- return get_bits_count(&gb) >> 3;
-}
-
-static void ftr_flush(AVCodecContext *avctx)
-{
- FTRContext *s = avctx->priv_data;
-
- for (int i = 0; i < s->nb_context; i++)
- avcodec_flush_buffers(s->aac_avctx[i]);
-}
-
-static av_cold int ftr_close(AVCodecContext *avctx)
-{
- FTRContext *s = avctx->priv_data;
-
- for (int i = 0; i < s->nb_context; i++)
- avcodec_free_context(&s->aac_avctx[i]);
- av_packet_free(&s->packet);
- av_frame_free(&s->frame);
-
- return 0;
-}
-
-const FFCodec ff_ftr_decoder = {
- .p.name = "ftr",
- .p.long_name = NULL_IF_CONFIG_SMALL("FTR Voice"),
- .p.type = AVMEDIA_TYPE_AUDIO,
- .p.id = AV_CODEC_ID_FTR,
- .init = ftr_init,
- FF_CODEC_DECODE_CB(ftr_decode_frame),
- .close = ftr_close,
- .flush = ftr_flush,
- .priv_data_size = sizeof(FTRContext),
- .p.capabilities = AV_CODEC_CAP_SUBFRAMES | AV_CODEC_CAP_DR1,
- .caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hpel_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hpel_template.c
deleted file mode 100644
index fccfe7610fe581c1b7b5f5d9d6e90705988fecad..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hpel_template.c
+++ /dev/null
@@ -1,106 +0,0 @@
-/*
- * Copyright (c) 2000, 2001 Fabrice Bellard
- * Copyright (c) 2002-2004 Michael Niedermayer
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "libavutil/intreadwrite.h"
-
-#include "pixels.h"
-
-#include "bit_depth_template.c"
-
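-/* DEF_HPEL expands to "put" and "avg" half-pel helpers for 2-, 4-, 8- and 16-pixel
- * wide rows: each computes the rounded average of two source blocks and either
- * stores it (put) or averages it into the existing destination (avg). */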
-#define DEF_HPEL(OPNAME, OP) \
-static inline void FUNC(OPNAME ## _pixels8_l2)(uint8_t *dst, \
- const uint8_t *src1, \
- const uint8_t *src2, \
- int dst_stride, \
- int src_stride1, \
- int src_stride2, \
- int h) \
-{ \
- int i; \
- for (i = 0; i < h; i++) { \
- pixel4 a, b; \
- a = AV_RN4P(&src1[i * src_stride1]); \
- b = AV_RN4P(&src2[i * src_stride2]); \
- OP(*((pixel4 *) &dst[i * dst_stride]), rnd_avg_pixel4(a, b)); \
- a = AV_RN4P(&src1[i * src_stride1 + 4 * sizeof(pixel)]); \
- b = AV_RN4P(&src2[i * src_stride2 + 4 * sizeof(pixel)]); \
- OP(*((pixel4 *) &dst[i * dst_stride + 4 * sizeof(pixel)]), \
- rnd_avg_pixel4(a, b)); \
- } \
-} \
- \
-static inline void FUNC(OPNAME ## _pixels4_l2)(uint8_t *dst, \
- const uint8_t *src1, \
- const uint8_t *src2, \
- int dst_stride, \
- int src_stride1, \
- int src_stride2, \
- int h) \
-{ \
- int i; \
- for (i = 0; i < h; i++) { \
- pixel4 a, b; \
- a = AV_RN4P(&src1[i * src_stride1]); \
- b = AV_RN4P(&src2[i * src_stride2]); \
- OP(*((pixel4 *) &dst[i * dst_stride]), rnd_avg_pixel4(a, b)); \
- } \
-} \
- \
-static inline void FUNC(OPNAME ## _pixels2_l2)(uint8_t *dst, \
- const uint8_t *src1, \
- const uint8_t *src2, \
- int dst_stride, \
- int src_stride1, \
- int src_stride2, \
- int h) \
-{ \
- int i; \
- for (i = 0; i < h; i++) { \
- pixel4 a, b; \
- a = AV_RN2P(&src1[i * src_stride1]); \
- b = AV_RN2P(&src2[i * src_stride2]); \
- OP(*((pixel2 *) &dst[i * dst_stride]), rnd_avg_pixel4(a, b)); \
- } \
-} \
- \
-static inline void FUNC(OPNAME ## _pixels16_l2)(uint8_t *dst, \
- const uint8_t *src1, \
- const uint8_t *src2, \
- int dst_stride, \
- int src_stride1, \
- int src_stride2, \
- int h) \
-{ \
- FUNC(OPNAME ## _pixels8_l2)(dst, src1, src2, dst_stride, \
- src_stride1, src_stride2, h); \
- FUNC(OPNAME ## _pixels8_l2)(dst + 8 * sizeof(pixel), \
- src1 + 8 * sizeof(pixel), \
- src2 + 8 * sizeof(pixel), \
- dst_stride, src_stride1, \
- src_stride2, h); \
-} \
-
-#define op_avg(a, b) a = rnd_avg_pixel4(a, b)
-#define op_put(a, b) a = b
-DEF_HPEL(avg, op_avg)
-DEF_HPEL(put, op_put)
-#undef op_avg
-#undef op_put
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Car Game APK How to Download and Play the Most Popular Racing Games.md b/spaces/congsaPfin/Manga-OCR/logs/Car Game APK How to Download and Play the Most Popular Racing Games.md
deleted file mode 100644
index 327f4043dc33835427604f29e7c8f411f9459d16..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Car Game APK How to Download and Play the Most Popular Racing Games.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
How to Download and Install Car Games APK on Android
-
If you are a fan of car games, you might want to try some of the best car games for Android that are not available on the Google Play Store. These games can be downloaded and installed using APK files, which are the installation packages for Android apps. In this article, we will show you what APK files are, how to find and download them, and how to install them on your Android device.
APK stands for Android Package Kit, and it is the file format used by Android to distribute and install apps. APK files contain all the necessary components for an app to run on your device, such as the code, resources, assets, certificates, and manifest.
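Because an APK is just a ZIP archive with a fixed internal layout, you can peek inside one with standard tools. A quick sketch (the file name is an example):

```python
import zipfile

with zipfile.ZipFile("example.apk") as apk:
    for name in apk.namelist():
        # typically AndroidManifest.xml, classes.dex, resources.arsc, res/..., META-INF/...
        print(name)
```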
-
The benefits of using APK files
-
There are several reasons why you might want to use APK files instead of downloading apps from the Google Play Store. Some of them are:
-
-
You can access apps that are not available in your region or country.
-
You can get early access to beta versions or updates of apps before they are officially released.
-
You can install apps that have been removed from the Google Play Store due to various reasons.
-
You can customize your device with apps that offer more features or options than the official ones.
-
-
The risks of using APK files
-
However, using APK files also comes with some risks that you should be aware of. Some of them are:
-
-
You might download malicious or fake apps that can harm your device or steal your data.
-
You might violate the terms and conditions of the app developers or the Google Play Store by installing unauthorized or modified apps.
-
You might encounter compatibility or performance issues with some apps that are not optimized for your device or Android version.
-
You might lose some features or functionality that are only available in the official versions of the apps.
-
-
Therefore, you should always be careful when downloading and installing APK files from unknown sources. Only download APK files from reputable and trusted websites, and scan them with a reliable antivirus app before installing them. Also, make sure you have enough storage space and battery life on your device before installing any APK file.
-
How to Find and Download Car Games APK Files
-
The best sources for car games APK files
-
There are many websites that offer car games APK files for download, but not all of them are safe and reliable. Some of them might contain viruses, malware, or fake apps that can damage your device or compromise your privacy. To avoid these risks, you should only download car games APK files from reputable sources that monitor and verify the files they host. One of the most popular and trusted sources for car games APK files is APK Mirror, which hosts tons of popular Android apps that can be installed individually or as updates. You can also find other sites that host car games APK files by searching on Google, but make sure you check their reviews and ratings before downloading anything.
-
-
The best car games for Android in 2023
-
If you are looking for some great car games to play on your Android device, you have plenty of options to choose from. Whether you prefer simulations, racing, puzzles, or arcade-style games, there is something for everyone in the car games genre. Here are some of the best car games for Android in 2023 that you can download as APK files:
-
-
Name
Description
Download Link
-
Asphalt 9: Legends
The latest installment of the Asphalt series, featuring stunning graphics, realistic physics, and over 50 licensed cars from top manufacturers. You can race against other players online or offline, customize your cars, and join a club to compete for rewards.
A realistic racing simulation game that offers over 250 cars from 33 brands, 19 real tracks, and a variety of game modes. You can compete with friends and rivals in cross-platform multiplayer, join a team, and participate in special events.
A drifting game that lets you experience the thrill of sliding sideways on different tracks. You can customize your cars, tune your engine, and challenge other players in online or offline modes. You can also create your own club and join tournaments.
A drag racing game that features over 200 licensed cars from top brands, stunning graphics, and realistic physics. You can upgrade your cars, compete with other players in live races, join a crew, and explore a 3D city.
A racing game that lets you build your dream car from scratch, using over 1000 customization options. You can race on various tracks, evade the cops, and take down rivals. You can also join events and win exclusive rewards.
A fun and chaotic game that pits you against your friends or AI opponents in gladiator-style car battles. You can choose from over 100 vehicles, ranging from monster trucks to UFOs, and smash your enemies' heads with various weapons and obstacles.
A physics-based driving game that challenges you to climb hills and overcome obstacles with your vehicle. You can unlock and upgrade over 20 vehicles, customize your driver, and compete with other players in online or offline modes.
A realistic parking simulator that offers over 100 cars, 75 levels, and a huge open world. You can park your car in different scenarios, interact with other players, chat with them, and even exchange cars.
Once you have downloaded the car games APK files that you want to play, you need to install them on your Android device. There are different ways to do this, depending on your device settings and preferences. Here are some of the most common methods:
-
How to enable unknown sources on Android
-
Before you can install any APK file on your device, you need to enable the option to allow installation from unknown sources. This option is disabled by default for security reasons, but you can easily turn it on by following these steps:
-
-
Go to your device's Settings app and tap on Security or Privacy.
-
Find the option that says Unknown sources or Install unknown apps and toggle it on.
-
You might see a warning message that tells you about the risks of installing apps from unknown sources. Tap on OK or Allow to confirm.
-
You can now install any APK file on your device without any restrictions.
-
-
How to use a file manager or a browser to install APK files
-
If you have downloaded the car games APK files using a file manager app or a browser app on your device, you can use the same app to install them. Here is how:
-
-
Open the file manager or browser app that you used to download the APK files.
-
Navigate to the folder where you saved the APK files. Usually, this is the Downloads folder.
-
Tap on the APK file that you want to install. You might see a pop-up window that asks you if you want to install this app. Tap on Install and wait for the installation to complete, then tap on Open to launch the game.
-
How can I uninstall car games APK files?
-
If you want to uninstall a car game that you have installed using an APK file, you can do so by following these steps:
-
-
Go to your device's Settings app and tap on Apps or Applications.
-
Find the car game that you want to uninstall and tap on it.
-
Tap on Uninstall or Delete and confirm your choice.
-
You might see a message that tells you that the app has been uninstalled successfully. Tap on OK or Done to exit.
-
-
How can I backup car games APK files?
-
If you want to back up a car game that you have installed using an APK file, you can do so by using a file manager app or an APK extractor app (an adb-based alternative is sketched after these steps). Here is how:
-
-
Download and install a file manager app or an APK extractor app from the Google Play Store. Some of the most popular ones are ES File Explorer, Solid Explorer, and APK Extractor.
-
Open the app and grant it the necessary permissions to access your device's storage.
-
Navigate to the folder where the car game APK file is located. Usually, this is the /data/app folder; note that on most non-rooted devices this folder is not directly readable, which is why dedicated APK extractor apps list installed apps instead of browsing it.
-
Long-press on the car game APK file and select Copy or Share.
-
Paste or send the car game APK file to another location, such as your SD card, your computer, or your cloud storage.
-
-
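As mentioned above, an alternative that works without root is to copy the installed APK to a computer with adb. The sketch below is only an example under a few assumptions: adb is installed, USB debugging is enabled on the phone, and the package name shown here is a placeholder you would replace with the game's real identifier (visible in its Google Play URL after "id=").

```python
import subprocess

# Placeholder package name: replace with the installed game's package id.
PACKAGE = "com.example.cargame"

def backup_apk(package: str, dest: str = ".") -> None:
    """Locate the installed APK with 'pm path' and pull it to the computer."""
    out = subprocess.run(
        ["adb", "shell", "pm", "path", package],
        capture_output=True, text=True, check=True,
    ).stdout
    # 'pm path' prints one or more lines like 'package:/data/app/.../base.apk'
    for line in out.splitlines():
        if line.startswith("package:"):
            apk_path = line[len("package:"):].strip()
            subprocess.run(["adb", "pull", apk_path, dest], check=True)
            print(f"Pulled {apk_path} to {dest}")

if __name__ == "__main__":
    backup_apk(PACKAGE)
```

This relies only on the standard pm path and adb pull commands, so it does not need a rooted device.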
I hope this article has helped you learn how to download and install car games APK files on Android. If you have any questions or feedback, please leave a comment below. Happy gaming!
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Supreme Duelist Stickman Unlocked Mod Apk for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Download Supreme Duelist Stickman Unlocked Mod Apk for Android.md
deleted file mode 100644
index e3a8b8318e41f2394622a83c2aa23c1a150d1c54..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Supreme Duelist Stickman Unlocked Mod Apk for Android.md
+++ /dev/null
@@ -1,137 +0,0 @@
-
-
Supreme Duelist Stickman Mod APK Uptodown: A Fun and Exciting Stickman Game
-
If you are looking for a fun and exciting stickman game that you can play with your friends or online players, then you should try Supreme Duelist Stickman. This is a multiplayer stickman game that offers different modes and weapons to choose from. You can control your stickman and compete in various battles where you have to use your skills and strategy to defeat your opponents. You can also customize your stickman with different skins and outfits. However, if you want to unlock all the characters, weapons, and skins in the game, you will need to spend real money or watch ads. That is why you might want to download Supreme Duelist Stickman Mod APK Uptodown, a modified version of the original APK file that offers extra features and benefits for free. In this article, we will tell you what a mod APK is, how to download and install it, and what its features and advantages are.
-
What is Supreme Duelist Stickman?
-
A multiplayer stickman game with different modes and weapons
-
Supreme Duelist Stickman is a game developed by Neron's Brother, a studio that specializes in creating stickman games. The game has over 100 million downloads on Google Play Store and has a rating of 4.4 out of 5 stars. The game is compatible with Android devices running version 4.1 or higher.
-
The game allows you to play as a stickman and compete in various modes such as single player, two players, survival mode, online mode, tournament mode, etc. You can also choose from different weapons such as guns, swords, axes, hammers, etc. Each weapon has its own advantages and disadvantages, so you have to choose wisely depending on your opponent and situation. You can also use special skills such as teleportation, flying, etc. to gain an edge over your enemies.
-
A game of skill and strategy where you have to defeat your opponents
-
The game is not just about swinging your weapon randomly. You have to use your skill and strategy to defeat your opponents. You have to aim carefully, dodge their attacks, use the environment to your advantage, etc. You also have to manage your energy bar, which depletes as you use your skills or get hit by your enemies. If your energy bar runs out, you will lose the match.
-
The game also has a physics-based system that makes the gameplay more realistic and fun. You can see your stickman react to every hit, bounce, fly, or fall. You can also interact with the objects in the background such as boxes, barrels, ropes, etc. to create more chaos and fun.
-
A game with simple graphics but smooth animations and sound effects
-
The game has simple graphics that resemble stick figures and doodles. However, this does not affect the quality of the game. The game has smooth animations and sound effects that make the gameplay more enjoyable and immersive. You can hear the sound of your weapon hitting your opponent, the sound of your opponent screaming or grunting, the sound of the objects breaking or exploding, etc. You can also see the blood splatter and the ragdoll effects of your stickman and your opponent.
-
What is a mod APK and why do you need it?
-
A modified version of the original APK file that offers extra features and benefits
-
A mod APK is a modified version of the original APK file that is created by third-party developers or hackers. A mod APK can offer extra features and benefits that are not available in the original APK file. For example, a mod APK can remove ads, unlock premium features, add unlimited resources, etc.
-
A mod APK can also bypass the restrictions and limitations imposed by the original APK file. For example, a mod APK can allow you to play a game that is not compatible with your device, or a game that is not available in your region, or a game that requires an internet connection to play.
-
A way to unlock all the characters, weapons, and skins in the game for free
-
One of the reasons why you might want to download Supreme Duelist Stickman Mod APK Uptodown is because it can unlock all the characters, weapons, and skins in the game for free. Normally, you would have to spend real money or watch ads to unlock these items in the game. However, with the mod APK file, you can access all these items without spending a dime or wasting your time.
-
You can choose from different characters such as ninja, pirate, robot, zombie, etc. You can also choose from different weapons such as guns, swords, axes, hammers, etc. You can also customize your stickman with different skins and outfits such as hats, masks, glasses, etc. You can mix and match these items to create your own unique stickman.
-
supreme duelist stickman mod apk download uptodown
-supreme duelist stickman mod apk unlimited money uptodown
-supreme duelist stickman mod apk latest version uptodown
-supreme duelist stickman mod apk android 1 uptodown
-supreme duelist stickman mod apk no ads uptodown
-supreme duelist stickman mod apk all characters unlocked uptodown
-supreme duelist stickman mod apk free shopping uptodown
-supreme duelist stickman mod apk god mode uptodown
-supreme duelist stickman mod apk offline uptodown
-supreme duelist stickman mod apk hack uptodown
-supreme duelist stickman mod apk revdl uptodown
-supreme duelist stickman mod apk rexdl uptodown
-supreme duelist stickman mod apk 2023 uptodown
-supreme duelist stickman mod apk 2.1.8 uptodown
-supreme duelist stickman mod apk 2.1.9 uptodown
-supreme duelist stickman mod apk 2.2.0 uptodown
-supreme duelist stickman mod apk 2.2.1 uptodown
-supreme duelist stickman mod apk 2.2.2 uptodown
-supreme duelist stickman mod apk 2.2.3 uptodown
-supreme duelist stickman mod apk 2.2.4 uptodown
-supreme duelist stickman mod apk 2.2.5 uptodown
-supreme duelist stickman mod apk 2.2.6 uptodown
-supreme duelist stickman mod apk 2.2.7 uptodown
-supreme duelist stickman mod apk 2.2.8 uptodown
-supreme duelist stickman mod apk 3.0.0 uptodown
-supreme duelist stickman mod apk 3.0.1 uptodown
-supreme duelist stickman mod apk 3.0.2 uptodown
-supreme duelist stickman mod apk 3.0.3 uptodown
-supreme duelist stickman mod apk 3.0.4 uptodown
-supreme duelist stickman mod apk 3.0.5 uptodown
-supreme duelist stickman mod apk 3.0.6 uptodown
-supreme duelist stickman mod apk 3.0.7 uptodown
-supreme duelist stickman mod apk 3.0.8 uptodown
-supreme duelist stickman mod apk 3.0.9 uptodown
-supreme duelist stickman mod apk 3.1.0 uptodown
-supreme duelist stickman mod apk 3.1.1 uptodown
-supreme duelist stickman mod apk 3.1.2 uptodown
-supreme duelist stickman mod apk 3.1.3 uptodown
-supreme duelist stickman mod apk 3.1.4 uptodown
-supreme duelist stickman mod apk 3.1.5 uptodown
-supreme duelist stickman mod apk 3.1.6 uptodown
-supreme duelist stickman mod apk 3.1.7 uptodown
-supreme duelist stickman mod apk 3.1.8 uptodown
-supreme duelist stickman mod apk 3.1.9 uptodown
-supreme duelist stickman mod apk 3.2.0 uptodown
-supreme duelist stickman mod apk 3.2.1 uptodown
-supreme duelist stickman mod apk 3.2.2 uptodown
-supreme duelist stickman mod apk 3.2.3 uptodown
-supreme duelist stickman mod apk 3.2.4 uptodown
-
A way to enjoy the game without ads or in-app purchases
-
Another reason why you might want to download Supreme Duelist Stickman Mod APK Uptodown is because it can remove ads and in-app purchases from the game. Normally, you would have to watch ads or buy coins or gems to play the game. However, with the mod APK file, you can enjoy the game without any interruptions or distractions.
-
You can play the game without seeing any annoying ads pop up on your screen. You can also play the game without having to buy any coins or gems to refill your energy bar or unlock new items. You can play the game as much as you want without any limitations or restrictions.
How to download and install Supreme Duelist Stickman Mod APK Uptodown?
-
The steps to download the mod APK file from Uptodown website
-
If you want to download Supreme Duelist Stickman Mod APK Uptodown, you will need to follow these steps:
-
Open the Uptodown website in your browser.
-
Search for Supreme Duelist Stickman in the search bar and click on the game icon.
-
Scroll down and click on the green Download button.
-
Choose the latest version of the mod APK file and click on Download again.
-
Wait for the download to finish and save the file in your device.
-
-
The steps to install the mod APK file on your Android device
-
After you have downloaded the mod APK file, you will need to install it on your Android device. To do that, you will need to follow these steps:
-
-
Locate the mod APK file in your device's file manager and tap on it.
-
If you see a warning message that says "For your security, your phone is not allowed to install unknown apps from this source", tap on Settings.
-
Enable the option that says "Allow from this source" or "Install unknown apps".
-
Go back to the mod APK file and tap on it again.
-
Tap on Install and wait for the installation to complete.
-
Tap on Open and enjoy the game.
-
-
The steps to enable unknown sources and permissions on your device
-
If you have not enabled unknown sources and permissions on your device before, you will need to do that before installing the mod APK file. To do that, you will need to follow these steps:
-
-
Go to your device's Settings and tap on Security or Privacy.
-
Find the option that says "Unknown sources" or "Install unknown apps" and enable it.
-
Go to your device's Settings and tap on Apps or Applications.
-
Find the game app and tap on it.
-
Tap on Permissions and enable all the permissions that are required by the game (a quick way to review them from a computer is sketched after this list).
-
-
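If you want to double-check which permissions the installed game actually ends up with, you can inspect the package from a computer with adb, as referenced in the list above. This is only a rough sketch under a few assumptions: adb is set up, USB debugging is enabled, and the package name below is a guess that you would replace with the game's real identifier from its store listing.

```python
import subprocess

# Placeholder: replace with the game's real package id.
PACKAGE = "com.example.supremeduelist"

def list_permission_lines(package: str) -> None:
    """Dump the package info via 'dumpsys package' and print permission-related lines."""
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "package", package],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "permission" in line.lower():
            print(line.strip())

if __name__ == "__main__":
    list_permission_lines(PACKAGE)
```

The output includes both the permissions the app requests and the ones that have been granted, so you can spot anything that looks out of place for a simple game.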
What are the features and advantages of Supreme Duelist Stickman Mod APK Uptodown?
-
The features of the mod APK file such as unlimited coins, gems, and energy
-
The mod APK file of Supreme Duelist Stickman offers some amazing features that are not available in the original APK file. Some of these features are:
-
-
Unlimited coins: You can get unlimited coins in the game, which you can use to buy new weapons, skins, and outfits for your stickman. You can also use coins to upgrade your weapons and skills.
-
Unlimited gems: You can get unlimited gems in the game, which you can use to unlock new characters, modes, and maps in the game. You can also use gems to revive yourself if you lose a match.
-
Unlimited energy: You can get unlimited energy in the game, which means you can play as long as you want without running out of energy. You can also use energy to activate special skills such as teleportation, flying, etc.
-
-
The advantages of the mod APK file such as no root required, no virus or malware, and easy to use
-
The mod APK file of Supreme Duelist Stickman also offers some advantages that make it better than the original APK file. Some of these advantages are:
-
-
No root required: You do not need to root your device to install or use the mod APK file. This means you do not have to risk damaging your device or voiding its warranty.
-
No virus or malware: The mod APK file is safe and secure to download and install. It does not contain any virus or malware that can harm your device or steal your data.
-
Easy to use: The mod APK file is easy to use and does not require any complicated steps or settings. You just have to download, install, and play the game with all the features and benefits.
-
-
The comparison of the mod APK file with the original APK file in terms of performance and quality
-
The mod APK file of Supreme Duelist Stickman does not compromise on the performance and quality of the game. It offers the same gameplay experience as the original APK file with some extra features and benefits. The mod APK file does not affect the graphics, animations, sound effects, or physics of the game. It also does not cause any lag, crash, or error in the game. It also does not require any internet connection to play the game. The mod APK file is compatible with most Android devices and runs smoothly and fast.
-
Conclusion
-
A summary of the main points of the article
-
In conclusion, Supreme Duelist Stickman is a fun and exciting stickman game that you can play with your friends or online players. You can choose from different modes, weapons, and characters to compete in various battles. You can also customize your stickman with different skins and outfits. However, if you want to unlock all the items in the game, you will need to download Supreme Duelist Stickman Mod APK Uptodown, a modified version of the original APK file that offers extra features and benefits for free.
-
A recommendation to try Supreme Duelist Stickman Mod APK Uptodown for a fun and exciting stickman game experience
-
We recommend you to try Supreme Duelist Stickman Mod APK Uptodown for a fun and exciting stickman game experience. You can enjoy the game without ads or in-app purchases. You can also access all the characters, weapons, and skins in the game for free. You can also get unlimited coins, gems, and energy in the game. You can also play the game without any internet connection or root requirement. You can also download and install the mod APK file easily and safely from Uptodown website.
-
A call to action to download the mod APK file from Uptodown website
-
If you are interested in playing Supreme Duelist Stickman Mod APK Uptodown, you can download the mod APK file from Uptodown website by clicking here. You can also follow the steps we have provided above to install the mod APK file on your Android device. You can then enjoy the game with all the features and benefits that the mod APK file offers. So what are you waiting for? Download Supreme Duelist Stickman Mod APK Uptodown now and have fun!
-
FAQs
-
What is Supreme Duelist Stickman?
-
Supreme Duelist Stickman is a multiplayer stickman game that offers different modes and weapons to choose from. You can control your stickman and compete in various battles where you have to use your skills and strategy to defeat your opponents.
-
What is a mod APK?
-
A mod APK is a modified version of the original APK file that offers extra features and benefits that are not available in the original APK file.
-
How to download Supreme Duelist Stickman Mod APK Uptodown?
-
You can download Supreme Duelist Stickman Mod APK Uptodown from Uptodown website by clicking here. You can then follow the steps we have provided above to install the mod APK file on your Android device.
-
What are the features of Supreme Duelist Stickman Mod APK Uptodown?
-
Some of the features of Supreme Duelist Stickman Mod APK Uptodown are unlimited coins, gems, and energy, no ads or in-app purchases, all characters, weapons, and skins unlocked, no internet connection or root required, etc.
-
Is Supreme Duelist Stickman Mod APK Uptodown safe and secure?
-
Yes, Supreme Duelist Stickman Mod APK Uptodown is safe and secure to download and install. It does not contain any virus or malware that can harm your device or steal your data.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/GTA 5 94 GB Download How to Optimize Your PC for the Best Performance.md b/spaces/congsaPfin/Manga-OCR/logs/GTA 5 94 GB Download How to Optimize Your PC for the Best Performance.md
deleted file mode 100644
index 9180439e32320b3e4a703445546d31f4baa9133d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/GTA 5 94 GB Download How to Optimize Your PC for the Best Performance.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
GTA 5 94 GB Download: Everything You Need to Know
-
If you are a fan of video games, you have probably heard of GTA 5, one of the most popular and successful games of all time. But do you know how big the download size of GTA 5 is and how to get it on your device? In this article, we will answer all your questions about GTA 5 94 GB download and more.
-
What is GTA 5 and why is it so popular?
-
GTA 5 is an open-world action-adventure game by Rockstar Games
-
GTA 5, or Grand Theft Auto V, is the fifth main installment in the Grand Theft Auto series, which started in 1997. The game was released in 2013 for PlayStation 3 and Xbox 360, and later for PlayStation 4, Xbox One, and PC. The game is set in the fictional state of San Andreas, which is based on Southern California, and follows the lives of three protagonists: Michael, Franklin, and Trevor. The game allows the player to switch between the three characters at any time and explore the vast open world, which includes urban areas, countryside, mountains, deserts, and oceans.
GTA 5 has a rich story mode, a vast online multiplayer mode, and stunning graphics
-
GTA 5 has a lot to offer to its players. The game has a story mode that consists of more than 60 missions that involve heists, shootouts, chases, stealth, and more. The game also has a lot of side activities, such as racing, golfing, tennis, hunting, yoga, parachuting, etc. The game also has an online multiplayer mode called GTA Online, which allows up to 30 players to cooperate or compete in various modes, such as deathmatches, races, missions, heists, etc. The game also has stunning graphics that showcase the beauty and diversity of San Andreas. The game features realistic weather effects, dynamic lighting and shadows, high-resolution textures, and detailed animations.
-
Why is GTA 5 94 GB download size and how does it vary across platforms?
-
GTA 5 file size depends on the version, platform, and installation method of the game
-
The file size of GTA 5 is not fixed. It depends on various factors, such as the version of the game, the platform on which you wish to install it, or whether you are installing it from a disk or downloading it from the internet. The file size of GTA 5 also changes over time due to updates and patches that add new content or fix bugs.
-
GTA 5 file size ranges from 72 GB to more than 94 GB depending on the platform
-
The file size of GTA 5 varies across different platforms. Here is a table that shows the approximate file size of GTA 5 depending on the platform:
-
-
| Platform | File Size |
| --- | --- |
| PC | 94 GB (download) or 72 GB (disk) |
| PlayStation 4 | 76 GB (download) or 50 GB (disk) |
| PlayStation 5 | 80 GB (download) or 50 GB (disk) |
| Xbox One | 76 GB (download) or 50 GB (disk) |
| Xbox Series X/S | 80 GB (download) or 50 GB (disk) |
-
As you can see, the file size of GTA 5 is the largest for PC, especially if you download it from the internet. This is because the PC version of GTA 5 has higher resolution textures, more detailed models, and better graphics settings than the console versions. The file size of GTA 5 is also larger for the next-generation consoles, such as PlayStation 5 and Xbox Series X/S, than the previous-generation consoles, such as PlayStation 4 and Xbox One. This is because the next-generation consoles have improved performance and features, such as faster loading times, ray tracing, and 4K resolution.
-
How to download GTA 5 and what are the requirements?
-
GTA 5 can be downloaded from various sources depending on the platform
-
If you want to download GTA 5 on your device, you have different options depending on the platform you are using. Here are some of the sources from which you can download GTA 5:
-
GTA 5 download sources for PC
-
-
The official Rockstar Games Launcher: This is the official source from which you can download GTA 5 on your PC. You need to create a Rockstar Games Social Club account and purchase the game from the launcher. You can also access other Rockstar Games titles and online services from this launcher.
-
Steam: This is another popular source from which you can download GTA 5 on your PC. You need to have a Steam account and purchase the game from the Steam store. You can also access other Steam features and games from this platform.
-
Epic Games Store: This is another source from which you can download GTA 5 on your PC. You need to have an Epic Games account and purchase the game from the Epic Games store. You can also access other Epic Games titles and features from this platform.
-
Other online retailers: You can also download GTA 5 from other online retailers, such as Amazon, Green Man Gaming, Humble Bundle, etc. You need to purchase a digital code or a physical disk from these retailers and redeem it on your PC.
-
-
GTA 5 download sources for PlayStation
-
-
The official PlayStation Store: This is the official source from which you can download GTA 5 on your PlayStation console. You need to have a PlayStation Network account and purchase the game from the PlayStation Store. You can also access other PlayStation games and services from this platform.
-
Other online retailers: You can also download GTA 5 from other online retailers, such as Amazon, Best Buy, GameStop, etc. You need to purchase a digital code or a physical disk from these retailers and redeem it on your PlayStation console.
-
-
GTA 5 download sources for Xbox
-
-
The official Microsoft Store: This is the official source from which you can download GTA 5 on your Xbox console. You need to have a Microsoft account and purchase the game from the Microsoft Store. You can also access other Xbox games and services from this platform.
-
Other online retailers: You can also download GTA 5 from other online retailers, such as Amazon, Best Buy, GameStop, etc. You need to purchase a digital code or a physical disk from these retailers and redeem it on your Xbox console.
-
-
GTA 5 requires a lot of disk space, RAM, and processing power to run smoothly
-
GTA 5 is not a light game. It requires a lot of disk space, RAM, and processing power to run smoothly on your device. Here are some of the minimum and recommended requirements for GTA 5 depending on the platform:
GTA 5 minimum and recommended requirements for PlayStation
-
-
|  | Minimum Requirements | Recommended Requirements |
| --- | --- | --- |
| Platform | PlayStation 3 | PlayStation 5 |
| CPU | Cell Broadband Engine @ 3.2GHz | AMD Zen 2-based CPU @ 3.5GHz |
| RAM | 256 MB + 256 MB VRAM | 16 GB GDDR6 |
| GPU | NVIDIA RSX @ 550MHz | AMD RDNA 2-based GPU @ 2.23GHz |
| Disk Space | 50 GB | 80 GB |
-
-
GTA 5 minimum and recommended requirements for Xbox
-
-
|  | Minimum Requirements | Recommended Requirements |
| --- | --- | --- |
| Platform | Xbox 360 | Xbox Series X/S |
| CPU | Xenon @ 3.2GHz | AMD Zen 2-based CPU @ 3.6GHz / 3.4GHz |
| RAM | 512 MB + VRAM | 16 GB GDDR6 + VRAM |
| GPU | Xenos @ 500MHz | AMD RDNA 2-based GPU @ 1.825GHz / 1.565GHz |
| Disk Space | 50 GB | 80 GB |
-
-
As you can see, GTA 5 requires a lot of disk space, RAM, and processing power to run smoothly on your device. You should make sure that your device meets the minimum or recommended requirements before downloading GTA 5.
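For the PC download in particular, a quick sanity check before you start is to confirm that the target drive has room for the roughly 94 GB file. Here is a minimal Python sketch; the drive path and the size threshold are assumptions you can adjust to your own setup.

```python
import shutil

# Assumed values: adjust the install drive and required size for your setup.
INSTALL_DRIVE = "C:\\"   # or "/" on Linux, a mounted volume path on macOS
REQUIRED_GB = 94          # approximate GTA 5 download size on PC

def enough_space(path: str, required_gb: float) -> bool:
    """Return True if the drive holding 'path' has at least required_gb free."""
    free_gb = shutil.disk_usage(path).free / (1024 ** 3)
    print(f"Free space on {path}: {free_gb:.1f} GB (need ~{required_gb} GB)")
    return free_gb >= required_gb

if __name__ == "__main__":
    if not enough_space(INSTALL_DRIVE, REQUIRED_GB):
        print("Not enough free space: clear some room before downloading.")
```

Keep in mind that updates and patches grow the installation over time, so leaving some extra headroom beyond the listed size is a good idea.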
-
Conclusion and FAQs
-
GTA 5 is one of the most popular and successful games of all time. It is an open-world action-adventure game that offers a rich story mode, a vast online multiplayer mode, and stunning graphics. However, GTA 5 also has a large file size that ranges from 72 GB to more than 94 GB depending on the platform. You should make sure that your device has enough disk space, RAM, and processing power to run GTA 5 smoothly. You can download GTA 5 from various sources depending on the platform you are using.
-
gta 5 94 gb download epic games
-gta 5 94 gb download pc
-gta 5 94 gb download ps4
-gta 5 94 gb download xbox one
-gta 5 94 gb download free
-gta 5 94 gb download size
-gta 5 94 gb download time
-gta 5 94 gb download link
-gta 5 94 gb download torrent
-gta 5 94 gb download highly compressed
-gta 5 94 gb download from rockstar games
-gta 5 94 gb download steam
-gta 5 94 gb download microsoft store
-gta 5 94 gb download ps5
-gta 5 94 gb download xbox series x
-gta 5 94 gb download full version
-gta 5 94 gb download crack
-gta 5 94 gb download without internet
-gta 5 94 gb download offline
-gta 5 94 gb download slow
-gta 5 94 gb download speed
-gta 5 94 gb download error
-gta 5 94 gb download fix
-gta 5 94 gb download update
-gta 5 94 gb download latest version
-gta 5 94 gb download requirements
-gta 5 94 gb download mods
-gta 5 94 gb download cheats
-gta 5 94 gb download gameplay
-gta 5 94 gb download review
-gta 5 94 gb download tips and tricks
-gta 5 94 gb download guide
-gta 5 94 gb download walkthrough
-gta v file size for all platforms [newest update]
-how to install the epic games launcher for the free GTA V offer?
-how to reduce the GTA V file size on PC?
-how to increase the GTA V download speed on PC?
-how to resume the GTA V download on PC?
-how to transfer the GTA V files from one PC to another?
-how to verify the GTA V files on PC?
-how to install the GTA V updates on PC?
-how to uninstall the GTA V files on PC?
-how to play GTA V online on PC?
-how to fix the GTA V launcher error on PC?
-how to optimize the GTA V settings on PC?
-how to run GTA V in windowed mode on PC?
-how to use a controller for GTA V on PC?
-how to change the language of GTA V on PC?
-how to take screenshots in GTA V on PC?
-
If you have any questions about GTA 5 or GTA 5 download size, you can check out these FAQs:
-
-
Q: How long does it take to download GTA 5? A: The download time of GTA 5 depends on your internet speed, the file size of the game, and the source from which you are downloading it. Generally, it can take anywhere from a few hours to a few days to download GTA 5 (a quick way to estimate it for your own connection is sketched after these FAQs).
-
Q: Can I play GTA Online without downloading GTA 5? A: No, you cannot play GTA Online without downloading GTA 5. GTA Online is a part of GTA 5 and requires the base game to run.
-
Q: Can I reduce the file size of GTA 5? A: There is no official way to reduce the file size of GTA 5. However, some unofficial methods may involve deleting some files or folders from the game directory or using some compression tools. However, these methods are not recommended as they may cause errors or glitches in the game. You should always backup your game files before trying any unofficial methods.
-
Q: Can I play GTA 5 on my mobile device? A: No, you cannot play GTA 5 on your mobile device. GTA 5 is only available for PC, PlayStation, and Xbox platforms. There are some unofficial apps or websites that claim to offer GTA 5 for mobile devices, but they are either fake or malicious. You should avoid them at all costs.
-
Q: Can I transfer my GTA 5 progress from one platform to another? A: Yes, you can transfer your GTA 5 progress from one platform to another, but only for GTA Online. You need to have a Rockstar Games Social Club account and link it to your platform of choice. You can then transfer your GTA Online character and progress from one platform to another. However, you can only do this once per account.
-
-
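To make the download-time FAQ above concrete, you can estimate the time from the file size and your connection speed. The 94 GB figure comes from this article; the 50 Mbps connection speed is an assumption you would replace with your own.

```python
# Rough estimate: time = size in megabits / speed in megabits per second.
FILE_SIZE_GB = 94     # approximate GTA 5 download size on PC
SPEED_MBPS = 50       # assumed connection speed in megabits per second

size_megabits = FILE_SIZE_GB * 1024 * 8   # GB -> megabits (binary GB)
seconds = size_megabits / SPEED_MBPS
print(f"~{seconds / 3600:.1f} hours at {SPEED_MBPS} Mbps")
# At 50 Mbps this works out to roughly 4-5 hours, ignoring overhead and throttling.
```

Real downloads are usually a bit slower than this because of server limits, protocol overhead, and other traffic on your network.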
I hope this article has helped you understand everything you need to know about GTA 5 94 GB download. If you have any other questions or feedback, feel free to leave a comment below. Happy gaming!
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/TikTok APK The Best Way to Download and Use the New Version Without VPN.md b/spaces/congsaPfin/Manga-OCR/logs/TikTok APK The Best Way to Download and Use the New Version Without VPN.md
deleted file mode 100644
index 83f523c08a718f2f7ee262af8cf0a00b76be4921..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/TikTok APK The Best Way to Download and Use the New Version Without VPN.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
TikTok APK Download New Version 2022 Without VPN
-
TikTok is one of the most popular social media platforms in the world, with over 1 billion active users. It allows you to create and share short videos with music, filters, effects, stickers, and more. But what if you want to download TikTok APK new version 2022 without VPN? In this article, we will show you how to do that easily and safely.
What is TikTok?
-
TikTok is a social network for creating and sharing short videos
-
TikTok is an app that lets you record, edit, and share videos that are up to 60 seconds long. You can choose from millions of songs, sounds, and clips from the app's library, or use your own audio. You can also add filters, effects, stickers, text, emojis, and more to make your videos unique and expressive.
-
TikTok has millions of users and content creators worldwide
-
TikTok is not only a platform for watching videos, but also a community for connecting with people who share your interests, passions, and talents. You can follow your favorite creators, discover new ones, comment, like, share, and chat with them. You can also join challenges, trends, hashtags, and events to showcase your creativity and have fun.
-
TikTok offers various features and options to make videos fun and engaging
-
TikTok is constantly updating its app with new features and options to make your video experience better. Some of the features include:
-
tiktok app download latest version 2022 free no vpn
-how to download tiktok new update 2022 without vpn android
-tiktok apk 2022 latest version free download no vpn required
-download tiktok for android 2022 new version without using vpn
-tiktok 2022 update apk download free without vpn for android
-tiktok latest version 2022 apk free download no vpn needed
-how to get tiktok new version 2022 without vpn on android
-download tiktok apk 2022 latest update without vpn free
-tiktok for android 2022 new version free download no vpn
-tiktok apk download without vpn 2022 latest version free
-how to install tiktok new version 2022 without vpn on android
-tiktok 2022 latest update apk free download without vpn
-tiktok new version 2022 apk download free no vpn for android
-how to download tiktok latest version 2022 without vpn on android
-tiktok apk free download 2022 new version without vpn android
-download tiktok new version 2022 apk without vpn for free
-tiktok latest update 2022 apk download free no vpn required
-how to update tiktok to new version 2022 without vpn on android
-tiktok apk 2022 new version free download without vpn android
-download tiktok latest version 2022 apk without vpn for free
-tiktok new update 2022 apk free download no vpn needed
-how to download tiktok for android 2022 new version without vpn
-tiktok apk without vpn 2022 latest version free download
-download tiktok for android new version 2022 without using vpn
-tiktok latest version apk download 2022 without vpn free
-how to get tiktok for android new version 2022 without vpn
-tiktok apk download new update 2022 without vpn for free
-download tiktok latest update 2022 apk without vpn free
-tiktok new version apk free download 2022 without vpn android
-how to install tiktok for android new version 2022 without vpn
-tiktok apk free download without vpn 2022 latest version
-download tiktok new version apk 2022 without using vpn for free
-tiktok latest update apk free download 2022 no vpn required
-how to update tiktok for android to new version 2022 without vpn
-tiktok apk download free no vpn 2022 latest version android
-download tiktok latest version for android 2022 without using vpn
-tiktok new version apk download no vpn required 2022 free
-how to get the latest version of tiktok on android without vpn in 2022
-tiktok apk download for free without using vpn in 2022 latest version
-download the newest version of tiktok for android in 2022 without a vpn
-tiktok new update in 2022 apk free download for android no need for a vpn
-how to install the latest update of tiktok on android in 2022 without a vpn
-tiktok apk for android in 2022 newest version free download no need of a vpn
-download the latest update of tiktok for android in 2022 no use of a vpn
-tiktok newest version in 2022 apk free download for android no need of a vpn
-
-
Duet: You can collaborate with another user by recording a video alongside their existing one.
-
Stitch: You can clip and integrate scenes from another user's video into your own.
-
Live: You can broadcast live to your followers and interact with them in real time.
-
Reels: You can create short videos with multiple clips, transitions, effects, and music.
-
Stories: You can share ephemeral videos that disappear after 24 hours.
-
-
Why do you need to download TikTok APK without VPN?
-
TikTok is banned or restricted in some countries due to security or political reasons
-
Unfortunately, not everyone can access TikTok freely. Some countries have banned or restricted the app due to concerns over data privacy, national security, or political censorship. For example, India, Pakistan, Bangladesh, Indonesia, Turkey, Egypt, and some other countries have blocked or limited TikTok's availability in their regions.
-
VPNs can slow down your internet connection and affect your video quality
-
One way to bypass these restrictions is to use a VPN (virtual private network) service that masks your IP address and location. However, VPNs can also have some drawbacks. For one thing, they can slow down your internet speed and bandwidth, which can affect your video streaming and uploading quality. You may experience buffering, lagging, freezing, or pixelation while watching or uploading videos.
VPNs can also expose your personal data and online activity to third parties
-
Another downside of using VPNs is that they can also compromise your privacy and security online. Some VPNs may collect and sell your personal data, such as your browsing history, location, device information, and more, to advertisers or other third parties. Some VPNs may also have weak encryption or leak your IP address, which can make you vulnerable to hackers, malware, or government surveillance.
-
How to download TikTok APK new version 2022 without VPN?
-
Find a reliable and safe source for downloading the APK file
-
The first step to download TikTok APK new version 2022 without VPN is to find a trustworthy and secure source for the APK file. APK stands for Android Package Kit, and it is a file format that contains the app's code, resources, and metadata. You can download APK files from various websites, but you need to be careful about the quality and safety of the file. Some APK files may be corrupted, outdated, or infected with viruses or malware.
-
To avoid these risks, you should only download APK files from reputable and verified sources, such as APKMirror, APKPure, or Uptodown. These websites scan and test the APK files before uploading them, and they also provide detailed information about the app's version, size, developer, permissions, and more. You can also read user reviews and ratings to check the feedback and experience of other users.
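You can also confirm locally what a downloaded APK declares about itself (its package name and version) before installing it. The sketch below is only an example under stated assumptions: it requires the aapt tool from the Android SDK build-tools to be on your PATH, and the file name is a placeholder.

```python
import re
import subprocess

# Placeholder: path to the APK file you downloaded.
APK_PATH = "tiktok.apk"

def show_badging(apk_path: str) -> None:
    """Print the package name and version declared inside the APK."""
    out = subprocess.run(
        ["aapt", "dump", "badging", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"package: name='([^']+)'.*versionName='([^']+)'", out)
    if match:
        print(f"Package: {match.group(1)}  Version: {match.group(2)}")
    else:
        print("Could not parse badging output; inspect it manually.")

if __name__ == "__main__":
    show_badging(APK_PATH)
```

If the declared package name or version does not match what the download page claims, treat the file with suspicion and look for another source.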
-
Enable unknown sources on your Android device settings
-
The next step to download TikTok APK new version 2022 without VPN is to enable unknown sources on your Android device settings. This is because Android devices normally do not allow installing apps from sources other than the Google Play Store. To enable unknown sources, you need to follow these steps:
-
-
Go to your device's Settings and tap on Security or Privacy.
-
Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
-
A warning message will pop up, telling you that installing apps from unknown sources can harm your device. Tap on OK or Allow to confirm.
-
-
Now you are ready to install the APK file on your device.
-
Install the APK file and launch the app
-
The final step to download TikTok APK new version 2022 without VPN is to install the APK file and launch the app. To do this, you need to follow these steps:
-
-
Locate the downloaded APK file on your device's storage or file manager. It may be in the Downloads folder or in a specific folder created by the website you downloaded it from.
-
Tap on the APK file and a prompt will appear, asking you if you want to install the app. Tap on Install and wait for the installation process to finish.
-
Once the app is installed, you can tap on Open to launch it or find it on your app drawer or home screen.
-
-
Congratulations! You have successfully downloaded TikTok APK new version 2022 without VPN.
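As a side note, if you have a computer handy, you can also sideload the same APK over adb instead of opening it on the phone. This is only an alternative sketch: it assumes adb (from the Android SDK platform-tools) is installed, USB debugging is enabled on the device, and the file path is a placeholder.

```python
import subprocess

# Placeholder: path to the downloaded APK on your computer.
APK_PATH = "tiktok.apk"

def sideload(apk_path: str) -> None:
    """Install (or update, thanks to -r) the APK on the connected device."""
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    sideload(APK_PATH)
```

The -r flag reinstalls over an existing copy, which is also a convenient way to apply an update you downloaded manually.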
What are the benefits of downloading TikTok APK new version 2022 without VPN?
-
You can access all the features and content of TikTok without any restrictions or limitations
-
One of the main benefits of downloading TikTok APK new version 2022 without VPN is that you can access all the features and content of TikTok without any restrictions or limitations. You can watch, create, and share videos from any country, region, or network. You can also explore and join the global community of TikTok users and creators. You can enjoy the latest trends, challenges, hashtags, and events on TikTok. You can also discover and follow your favorite celebrities, influencers, artists, and brands on TikTok.
-
You can enjoy faster and smoother video streaming and uploading
-
Another benefit of downloading TikTok APK new version 2022 without VPN is that you can enjoy faster and smoother video streaming and uploading. You can watch videos without any buffering, lagging, freezing, or pixelation. You can also upload your videos without any delays, errors, or failures. You can also save your data and battery by using less bandwidth and power. You can have a better video experience on TikTok with high-quality resolution, sound, and speed.
-
You can protect your privacy and security online
-
A third benefit of downloading TikTok APK new version 2022 without VPN is that you can protect your privacy and security online. You can avoid exposing your personal data and online activity to third parties, such as advertisers, hackers, or government agencies. You can also avoid being tracked, monitored, or censored by your ISP, network administrator, or authorities. You can also prevent malware or viruses from infecting your device or stealing your information. You can have a safer and more private online experience on TikTok.
-
Conclusion
-
TikTok is a fun and exciting social media platform that allows you to create and share short videos with music, filters, effects, stickers, and more. However, if you want to download TikTok APK new version 2022 without VPN, you need to follow some steps and precautions. You need to find a reliable and safe source for the APK file, enable unknown sources on your device settings, install the APK file and launch the app. By doing this, you can enjoy the benefits of accessing all the features and content of TikTok without any restrictions or limitations, having faster and smoother video streaming and uploading, and protecting your privacy and security online.
-
We hope this article has helped you learn how to download TikTok APK new version 2022 without VPN. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
FAQs
-
What is an APK file?
-
An APK file is a file format that contains the app's code, resources, and metadata. It is used to install apps on Android devices.
-
What is a VPN service?
-
A VPN service is a service that masks your IP address and location by routing your internet traffic through a secure and encrypted server in another country.
-
Why is TikTok banned or restricted in some countries?
-
TikTok is banned or restricted in some countries due to concerns over data privacy, national security, or political censorship.
-
How can I update my TikTok app after downloading the APK file?
-
You can update your TikTok app by downloading the latest APK file from the same source you downloaded it from before and installing it over the existing app.
-
Is it legal to download TikTok APK without VPN?
-
It depends on the laws and regulations of your country or region. Some countries may prohibit or penalize the use of unlicensed or unauthorized apps or services. You should check the legal status of TikTok in your area before downloading the APK file.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Traffic Racer 3D The Ultimate Car Racing Simulation Game.md b/spaces/congsaPfin/Manga-OCR/logs/Traffic Racer 3D The Ultimate Car Racing Simulation Game.md
deleted file mode 100644
index b847a37d27c5b502192e55498af1326bc2780cb0..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Traffic Racer 3D The Ultimate Car Racing Simulation Game.md
+++ /dev/null
@@ -1,175 +0,0 @@
-
-
Traffic Racer 3D Game Download: A Guide for Racing Fans
-
If you love racing games, you should definitely check out Traffic Racer 3D, a milestone in the genre of endless arcade racing. In this game, you can drive your car through highway traffic, earn cash, upgrade your car and buy new ones. You can also try to be one of the fastest drivers in the global leaderboards. Endless racing is now redefined!
In this article, we will show you how to download Traffic Racer 3D on different devices, how to play it and enjoy the thrill of racing, how to compare your performance with other players, how to customize your car and make it stand out, and how to enjoy the stunning graphics and sound effects of this game. We will also answer some frequently asked questions about Traffic Racer 3D. Let's get started!
-
How to Download Traffic Racer 3D on Different Devices
-
Traffic Racer 3D is available on various platforms, such as Android, iOS, Windows, and Chrome. Here are the steps to download it on each device:
-
Android
-
If you have an Android device, you can download Traffic Racer 3D from the Google Play Store. Here's how:
-
Traffic Racer 3D simulation game for Android
-How to play Traffic Racer 3D on PC with emulator
-Traffic Racer 3D APK free download latest version
-Best cars and upgrades in Traffic Racer 3D
-Traffic Racer 3D tips and tricks to score high
-Traffic Racer 3D vs Traffic Racer: which one is better?
-Traffic Racer 3D online leaderboards and achievements
-Traffic Racer 3D review: a fun and addictive racing game
-Traffic Racer 3D mod APK unlimited money and gems
-Traffic Racer 3D cheats and hacks for Android and PC
-Traffic Racer 3D gameplay video and screenshots
-Traffic Racer 3D alternatives: other racing games to try
-Traffic Racer 3D support and feedback: how to contact the developer
-Traffic Racer 3D data safety and privacy policy
-Traffic Racer 3D update: what's new in the latest version?
-Traffic Racer 3D for iOS: is it available on iPhone and iPad?
-Traffic Racer 3D features: stunning 3D graphics and realistic car handling
-Traffic Racer 3D modes: endless, two-way, time trial, police chase and free ride
-Traffic Racer 3D environments: suburb, desert, snowy, rainy and city night
-Traffic Racer 3D download size and system requirements
-How to install Traffic Racer 3D on Android devices
-How to uninstall Traffic Racer 3D from PC or Android
-How to backup and restore Traffic Racer 3D data on Android or PC
-How to fix Traffic Racer 3D not working or crashing issues
-How to change language and settings in Traffic Racer 3D
-How to connect Traffic Racer 3D to Facebook or Google Play Games
-How to earn cash and coins in Traffic Racer 3D fast and easy
-How to unlock all cars and wheels in Traffic Racer 3D
-How to customize your car color and paint in Traffic Racer 3D
-How to overtake cars closely and get bonus scores in Traffic Racer 3D
-How to drive in opposite direction and get extra cash in Traffic Racer 3D
-How to avoid traffic accidents and collisions in Traffic Racer 3D
-How to use tilt or touch controls in Traffic Racer 3D
-How to use gas button and brake button in Traffic Racer 3D
-How to mute or adjust sound effects and music in Traffic Racer 3D
-How to pause or resume the game in Traffic Racer 3D
-How to restart or quit the game in Traffic Racer 3D
-How to view your stats and records in Traffic Racer 3D
-How to access the shop and buy new cars or upgrades in Traffic Racer 3D
-How to watch ads or make in-app purchases in Traffic Racer 3D
-
-
Open the Google Play Store app on your device.
-
Search for "Traffic Racer" or "Traffic Racer 3D" in the search bar.
-
Tap on the game icon that says Traffic Racer 3D by SK Games.
-
Tap on the Install button and wait for the download to complete.
-
Tap on the Open button to launch the game and enjoy!
-
-
You can also download Traffic Racer 3D from the official website of SK Games. Just follow the link and click on the Download button.
-
iOS
-
If you have an iOS device, you can download Traffic Racer 3D from the App Store. Here's how:
-
-
Open the App Store app on your device.
-
Search for "Traffic Racer" or "Traffic Racer 3D" in the search bar.
-
Tap on the game icon that says Traffic Racer 3D by SK Games.
-
Tap on the Get button and enter your Apple ID and password if prompted.
-
Wait for the download to complete and tap on the game icon to launch it and enjoy!
-
-
Windows
-
If you have a Windows device, you can download Traffic Racer 3D from the Microsoft Store. Here's how:
-
-
Open the Microsoft Store app on your device.
-
Search for "Traffic Racer" or "Traffic Racer 3D" in the search bar.
-
Click on the game icon that says Traffic Racer 3D by SK Games.
-
Click on the Get button and sign in with your Microsoft account if prompted.
-
Wait for the download to complete and click on the game icon to launch it and enjoy!
-
-
Chrome
-
If you have a Chrome browser, you can download Traffic Racer 3D from the Chrome Web Store. Here's how:
-
-
Open the Chrome Web Store link in your browser.
-
Search for "Traffic Racer" or "Traffic Racer 3D" in the search bar.
-
Click on the game icon that says Traffic Racer 3D.
-
Click on the Add to Chrome button and confirm if prompted.
-
The game will be added to your Chrome apps and you can launch it from there and enjoy!
-
-
How to Play Traffic Racer 3D and Enjoy the Thrill of Racing
-
Traffic Racer 3D is a simple but addictive game that will keep you entertained for hours. The goal is to drive as fast as you can through traffic without crashing. The faster you drive, the more points you get. You can also earn extra points by overtaking other cars closely or driving in the opposite direction. Here are some features and tips to help you play Traffic Racer 3D and have fun:
-
Game Modes
-
Traffic Racer 3D offers five different game modes to suit your preferences and skills. They are:
-
-
Endless: This is the classic mode where you drive as long as you can without crashing. The traffic gets denser and faster as you progress. You can choose from four different environments: Suburb, Desert, Snowy, and City Night.
-
Two-Way: This is similar to Endless mode, but with a twist. You drive in a two-way road, which means you have to avoid not only the traffic in your lane, but also the oncoming traffic in the opposite lane. This mode is more challenging and rewarding, as you get double points for driving in the wrong lane.
-
Time Trial: This is a mode where you have a limited time to drive as far as you can. You can extend your time by passing checkpoints along the way. You can choose from three different time limits: 60 seconds, 90 seconds, and 120 seconds.
-
Police Chase: This is a mode where you have to escape from the police cars that are chasing you. You have to drive fast and smart, as the police cars will try to ram you or block your way. You can also use nitro boosters to speed up and evade them. You can choose from three different difficulty levels: Easy, Medium, and Hard.
-
Free Ride: This is a mode where you can drive freely without any rules or objectives. You can explore the different environments and enjoy the scenery. You can also change the traffic density and speed to your liking.
-
-
Controls
-
Traffic Racer 3D has simple and intuitive controls that make it easy to play. You can choose from two different control options: Tilt or Touch. Here's how they work:
-
-
Tilt: In this option, you tilt your device left or right to steer your car. You touch the gas button on the right side of the screen to accelerate, and the brake button on the left side of the screen to slow down.
-
Touch: In this option, you touch the left or right arrows on the bottom of the screen to steer your car. You touch the gas button on the right side of the screen to accelerate, and the brake button on the left side of the screen to slow down.
-
-
You can also change the sensitivity of the tilt or touch controls in the settings menu. You can also enable or disable the auto-acceleration feature, which makes your car accelerate automatically without touching the gas button.
-
Tips and Tricks
-
Traffic Racer 3D is a fun and addictive game, but it can also be challenging and frustrating at times. Here are some tips and tricks to help you score more points, avoid crashes, and unlock new cars and upgrades:
-
-
Overtake other cars closely: When you pass another car within 100 meters, you get bonus points. The closer you are, the more points you get (a toy illustration of this kind of proximity bonus is sketched after these tips). However, be careful not to hit them or you will lose points and damage your car.
-
Drive in the opposite direction: When you drive in the wrong lane in Two-Way mode, you get double points for every overtake. However, be prepared to face more traffic and a higher risk of crashing.
-
Use nitro boosters: When you see a blue bottle on the road, try to collect it. It will give you a nitro booster that will make your car go faster for a few seconds. You can also buy nitro boosters with cash in the garage.
-
Upgrade your car: When you earn enough cash, you can upgrade your car's speed, acceleration, handling, and braking in the garage. This will make your car perform better and easier to control.
-
Buy new cars: When you have enough cash, you can also buy new cars with different features and styles in the garage. Some cars are faster, some are more agile, some are more durable, and some are more stylish. Find the car that suits your taste and budget.
-
-
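To make the close-overtake tip above concrete, here is a purely illustrative toy model of a proximity bonus. This is an invented sketch of the idea that closer passes are worth more, not Traffic Racer 3D's actual scoring formula.

```python
def overtake_bonus(distance_m: float, max_distance_m: float = 100.0,
                   max_bonus: int = 100) -> int:
    """Toy proximity bonus: 0 beyond max_distance_m, growing as the pass gets closer.

    Invented illustration only; not the game's real formula.
    """
    if distance_m >= max_distance_m or distance_m < 0:
        return 0
    closeness = 1.0 - distance_m / max_distance_m   # 0.0 (far) .. 1.0 (touching)
    return round(max_bonus * closeness)

# Example: a pass at 10 m is worth far more than one at 90 m.
print(overtake_bonus(10.0))   # 90
print(overtake_bonus(90.0))   # 10
```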
How to Compare Your Performance with Other Players in Traffic Racer 3D
-
Traffic Racer 3D is not only a game for yourself, but also a game for competing with other players around the world. You can compare your performance with other players in two ways: Leaderboards and Achievements. Here's how they work:
-
Leaderboards
-
Leaderboards are where you can see your rank among other players based on your score in each game mode. You can access them by tapping on the Leaderboards button on the main menu. You can also see your friends' ranks if they are connected to Google Play Games or Game Center.
-
You can filter the leaderboards by All Time, This Week, or Today. You can also see your best score and rank in each game mode on the top of the screen.
-
Achievements
-
Achievements are where you can see your progress and rewards for completing various tasks in Traffic Racer 3D. You can access them by tapping on the Achievements button on the main menu. You can also see your friends' achievements if they are connected to Google Play Games or Game Center.
-
You can see a list of achievements with their names, descriptions, icons, and status (locked or unlocked). Some achievements are easy to unlock, such as driving 10 km or buying a new car. Some achievements are hard to unlock, such as driving 1000 km or reaching 400 km/h.
-
When you unlock an achievement, you get a notification and a reward of cash or nitro boosters. You can also share your achievements with your friends on social media.
-
How to Customize Your Car and Make It Stand Out in Traffic Racer 3D
-
Traffic Racer 3D is not only a game for racing, but also a game for expressing your personality and style. You can customize your car and make it stand out in traffic in two ways: Car Selection and Car Customization. Here's how they work:
-
Car Selection
-
Car Selection is where you can choose from 40+ different cars with different features and styles. You can access it by tapping on the Garage button on the main menu. You can see a list of cars with their names, prices, and stats (speed, acceleration, handling, and braking).
-
You can buy new cars with cash that you earn from playing the game. Some cars are more expensive than others, but they also have better performance and appearance. You can also unlock some cars by completing certain achievements.
-
You can switch between the cars that you own by tapping on them. You can also see a preview of how they look in 3D by tapping on the View button.
-
Car Customization
-
Car Customization is where you can change the color, wheels, and paint of your car. You can access it by tapping on the Customize button on the Garage menu. You can see a 3D view of your car and three options to customize it: Color, Wheels, and Paint.
-
You can change the color of your car by tapping on the Color option and choosing from a palette of colors. You can also use a slider to adjust the brightness and saturation of the color.
-
You can change the wheels of your car by tapping on the Wheels option and choosing from a variety of wheels with different designs and sizes. You can also use a slider to adjust the size of the wheels.
-
You can change the paint of your car by tapping on the Paint option and choosing from a collection of paint patterns with different shapes and colors. You can also use a slider to adjust the scale and rotation of the paint pattern.
-
You can save your customization by tapping on the Save button. You can also reset your customization by tapping on the Reset button.
-
How to Enjoy the Stunning Graphics and Sound Effects of Traffic Racer 3D
-
Traffic Racer 3D is not just a game to play, but an experience to take in. You can enjoy the stunning graphics and sound effects of this game in two ways: Environments and Sound Effects. Here's how they work:
-
Environments
-
Environments are where you can explore 5 detailed environments with different weather and time conditions. You can access them by tapping on the Select Environment button on the main menu. You can see a list of environments with their names and icons.
-
You can choose from four different environments: Suburb, Desert, Snowy, and City Night. Each environment has its own characteristics, such as traffic density, road layout, scenery, lighting, and weather effects. For example, in Suburb, you will see houses, trees, bridges, and sunny skies. In Desert, you will see sand dunes, cacti, rocks, and dusty winds. In Snowy, you will see snowflakes, icebergs, penguins, and auroras. In City Night, you will see skyscrapers, neon lights, billboards, and raindrops.
-
You can also unlock a fifth environment: Rainy Day. This environment is similar to City Night, but with more rain and thunder effects. You can unlock it by reaching 100 km/h in Endless mode.
-
Sound Effects
-
Sound Effects are where you can listen to the realistic engine sounds and background music of Traffic Racer 3D. You can access them by tapping on the Settings button on the main menu. You can see two options to adjust them: SFX Volume and BGM Volume.
-
You can adjust the SFX Volume by using a slider to increase or decrease the sound effects of your car's engine, brakes, horns, crashes, nitro boosters, etc. You can also mute or unmute them by tapping on the speaker icon.
-
You can adjust the BGM Volume by using a slider to increase or decrease the background music of Traffic Racer 3D. The music is composed of various genres, such as rock, pop, techno, etc. You can also mute or unmute them by tapping on the speaker icon.
-
Conclusion: Why Traffic Racer 3D is One of the Best Racing Games on the Market
-
Traffic Racer 3D is a game that will satisfy your need for speed and adrenaline. It has many features and benefits that make it one of the best racing games on the market. Here are some of them:
-
-
It is easy to download and play on different platforms, such as Android, iOS, Windows, and Chrome.
-
It has simple and intuitive controls that make it fun and addictive.
-
It has five different game modes that suit your preferences and skills.
-
It has 40+ different cars that you can buy and customize.
-
It has 5 detailed environments that you can explore and enjoy.
-
It has realistic engine sounds and background music that enhance your experience.
-
It has leaderboards and achievements that let you compare your performance with other players.
-
It is free to play, but you can also remove ads and buy more cash and nitro boosters with real money.
-
-
If you are a racing fan, you should not miss Traffic Racer 3D. It is a game that will keep you entertained for hours and make you feel like a real racer. Download it now and enjoy the thrill of racing!
-
FAQs: Frequently Asked Questions about Traffic Racer 3D
-
Here are some of the most common questions that people ask about Traffic Racer 3D. If you have any other questions, feel free to contact us at support@skgames.com.
-
Q1: Is Traffic Racer 3D free to play?
-
A1: Yes, Traffic Racer 3D is free to play. You can download it from the Google Play Store, App Store, Microsoft Store, or Chrome Web Store without paying anything. However, the game contains ads that may interrupt your gameplay. You can remove them by paying a small fee in the game.
-
Q2: How can I remove ads from Traffic Racer 3D?
-
A2: You can remove ads from Traffic Racer 3D by tapping on the No Ads button on the main menu. You will be redirected to a payment page where you can choose your preferred payment method and confirm your purchase. Once you do that, you will not see any ads in the game anymore.
-
Q3: How can I contact the developer of Traffic Racer 3D?
-
A3: You can contact the developer of Traffic Racer 3D by sending an email to support@skgames.com. You can also visit their website at www.skgames.com or follow them on social media at Facebook, Twitter, Instagram, or YouTube. They will be happy to hear from you and answer your questions or feedback.
-
Q4: What are the minimum system requirements for Traffic Racer 3D?
-
A4: The minimum system requirements for Traffic Racer 3D are as follows:
-
-
Android: Android 4.1 or higher, 1 GB of RAM, 100 MB of free storage space.
-
iOS: iOS 9.0 or higher, iPhone 5 or higher, iPad 4 or higher, iPod touch 6th generation or higher, 100 MB of free storage space.
-
Windows: Windows 10 version 10240.0 or higher, x86 or x64 architecture, DirectX version 10 or higher, 2 GB of RAM, 100 MB of free storage space.
-
Chrome: Chrome version 49 or higher, any operating system that supports Chrome browser, any processor type, any amount of RAM, any amount of free storage space.
-
-
Q5: Can I play Traffic Racer 3D offline?
-
A5: Yes, you can play Traffic Racer 3D offline. You don't need an internet connection to play the game. However, some features may not work properly when you are offline, such as leaderboards, achievements, challenges, rewards, and in-app purchases. To enjoy these features fully, you need to connect to the internet.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/YT Music MOD APK Download The Best Way to Listen to Music Online.md b/spaces/congsaPfin/Manga-OCR/logs/YT Music MOD APK Download The Best Way to Listen to Music Online.md
deleted file mode 100644
index d27057818723d1094a65a92332b06bced0f91e35..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/YT Music MOD APK Download The Best Way to Listen to Music Online.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
YT Music Mod APK Download: What You Need to Know
-
If you are looking for a way to enjoy unlimited music and podcasts on your Android device, you might have heard of YT Music Mod APK. This is a modified version of the official YT Music app that offers some extra features and benefits. But what exactly is YT Music Mod APK, how do you download and install it, and what are the risks and alternatives? In this article, we will answer these questions and more.
-
What is YT Music?
-
YT Music is a music streaming service that lets you listen to millions of songs, albums, playlists, live performances, remixes, covers, and more. It also lets you watch music videos and access podcasts from various genres and topics. You can use YT Music on your browser or download the app for your Android or iOS device.
-
YT Music features and benefits
-
Some of the features and benefits of using YT Music are:
-
-
You can easily switch between audio and video mode with a single tap.
-
You can discover new music and podcasts based on your preferences, mood, location, time, and trends.
-
You can search for songs by lyrics, artist name, album title, or genre.
-
You can create and share your own playlists or collaborate with others.
-
You can upload up to 100,000 of your own songs to your library.
-
You can listen to music offline by downloading it to your device.
-
-
YT Music Premium vs YT Music Free
-
YT Music offers two versions: a free version that is supported by ads, and a premium version that costs $9.99 per month. The premium version gives you some exclusive benefits, such as:
-
-
No ads interrupting your music or podcasts.
-
Background play that lets you listen to music or podcasts while using other apps or when your screen is off.
-
Audio-only mode that saves data and battery by playing only the audio of music videos.
-
Smart downloads that automatically download up to 500 songs based on your listening history.
-
-
What is YT Music Mod APK and how to get it?
-
YT Music Mod APK is a modified version of the official YT Music app that bypasses some of the limitations and restrictions of the original app. It is not available on the Google Play Store or the official YT Music website, but you can find it on various third-party websites that offer APK downloads.
-
YT Music Mod APK features and advantages
-
Some of the features and advantages of using YT Music Mod APK are:
-
-
You can enjoy all the benefits of YT Music Premium without paying any subscription fee.
-
You can access all the content available on YT Music without any regional or country restrictions.
-
You can customize the app's interface, theme, layout, and settings according to your preferences.
-
You can remove unwanted permissions, trackers, analytics, and ads from the app.
-
-
YT Music Mod APK installation guide
-
To install YT Music Mod APK on your Android device, you need to follow these steps:
-
-
-
Download the latest version of YT Music Mod APK from a reliable source. You can use this link as an example.
-
Enable unknown sources on your device by going to Settings > Security > Unknown Sources.
-
Locate the downloaded file on your device and tap on it to start the installation process.
-
Follow the instructions on the screen and grant the necessary permissions to the app.
-
Once the installation is complete, launch the app and sign in with your Google account or create a new one.
-
-
Congratulations, you have successfully installed YT Music Mod APK on your device. You can now enjoy unlimited music and podcasts without any ads or interruptions.
-
What are the risks and alternatives of YT Music Mod APK?
-
While YT Music Mod APK may seem like a tempting option to enjoy YT Music for free, it is not without its risks and drawbacks. Here are some of the things you should be aware of before using YT Music Mod APK:
-
Risks of using YT Music Mod APK
-
Some of the risks of using YT Music Mod APK are:
-
-
You may violate the terms and conditions of YT Music and Google, which could result in your account being suspended or banned.
-
You may expose your device and data to malware, viruses, spyware, or other harmful software that could compromise your security and privacy.
-
You may experience bugs, errors, crashes, or performance issues with the app, as it is not officially supported or updated by YT Music or Google.
-
You may miss out on some of the latest features, updates, and improvements that are available on the official YT Music app.
-
-
Alternatives to YT Music Mod APK
-
If you are looking for a safer and more reliable way to enjoy music and podcasts on your Android device, you may want to consider some of the alternatives to YT Music Mod APK. Here are some of the best ones:
-
-
| Name | Description | Price |
| --- | --- | --- |
| Spotify | A popular music streaming service that offers over 70 million songs, podcasts, playlists, and more. You can also create and share your own music and podcasts with Spotify Studio. | Free with ads or $9.99 per month for Spotify Premium. |
| SoundCloud | A platform that lets you discover and stream millions of songs, podcasts, and audio content from independent artists and creators. You can also upload and share your own sounds with the community. | Free with ads or $9.99 per month for SoundCloud Go+. |
| Pandora | A personalized music streaming service that creates custom radio stations based on your favorite artists, songs, genres, and moods. You can also access podcasts, comedy, news, and more. | Free with ads or $4.99 per month for Pandora Plus or $9.99 per month for Pandora Premium. |
| Deezer | A music streaming service that offers over 73 million songs, podcasts, playlists, and more. You can also enjoy live radio stations, lyrics, and recommendations from editors and experts. | Free with ads or $9.99 per month for Deezer Premium or $14.99 per month for Deezer Family. |
| Apple Music | A music streaming service that offers over 75 million songs, podcasts, playlists, and more. You can also access exclusive content, live radio stations, and music videos. | $9.99 per month for Apple Music Individual or $14.99 per month for Apple Music Family. |
-
-
Conclusion
-
YT Music Mod APK is a modified version of the official YT Music app that offers some extra features and benefits. However, it also comes with some risks and drawbacks that you should be aware of before using it. If you want to enjoy music and podcasts on your Android device without any hassle or worry, you may want to consider some of the alternatives to YT Music Mod APK that we have listed above.
-
FAQs
-
Here are some of the frequently asked questions about YT Music Mod APK:
-
-
Is YT Music Mod APK legal?
-
No, YT Music Mod APK is not legal, as it violates the terms and conditions of YT Music and Google. It also infringes on the intellectual property rights of the artists and creators whose content is available on YT Music.
-
Is YT Music Mod APK safe?
-
No, YT Music Mod APK is not safe, as it may expose your device and data to malware, viruses, spyware, or other harmful software that could compromise your security and privacy. It may also cause bugs, errors, crashes, or performance issues with the app.
-
Is YT Music Mod APK updated?
-
No, YT Music Mod APK is not updated by YT Music or Google. It depends on the third-party developers who create and distribute it. Therefore, it may not have the latest features, updates, and improvements that are available on the official YT Music app.
-
How to uninstall YT Music Mod APK?
-
To uninstall YT Music Mod APK from your device, you need to follow these steps:
-
-
Go to Settings > Apps > YT Music Mod APK.
-
Tap on Uninstall and confirm your choice.
-
Clear the cache and data of your device by going to Settings > Storage > Cached Data.
-
Restart your device and check if the app is completely removed.
-
-
How to contact YT Music Mod APK developers?
-
There is no official way to contact the developers of YT Music Mod APK, as they are not affiliated with YT Music or Google. However, you may try to find their contact information on the website where you downloaded the app or on their social media accounts if they have any.
-
How to report a problem with YT Music Mod APK?
-
There is no official way to report a problem with YT Music Mod APK, as it is not supported or updated by YT Music or Google. However, you may try to leave a comment or feedback on the website where you downloaded the app or on their social media accounts if they have any.
-
-
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/test_time_aug.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/test_time_aug.py
deleted file mode 100644
index fb781d928ed71aceb1abcaef44d3889c00d2261e..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/test_time_aug.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import warnings
-
-import annotator.mmpkg.mmcv as mmcv
-
-from ..builder import PIPELINES
-from .compose import Compose
-
-
-@PIPELINES.register_module()
-class MultiScaleFlipAug(object):
- """Test-time augmentation with multiple scales and flipping.
-
- An example configuration is as follows:
-
- .. code-block::
-
- img_scale=(2048, 1024),
- img_ratios=[0.5, 1.0],
- flip=True,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ]
-
- After MultiScaleFlipAug with the above configuration, the results are wrapped
- into lists of the same length as follows:
-
- .. code-block::
-
- dict(
- img=[...],
- img_shape=[...],
- scale=[(1024, 512), (1024, 512), (2048, 1024), (2048, 1024)]
- flip=[False, True, False, True]
- ...
- )
-
- Args:
- transforms (list[dict]): Transforms to apply in each augmentation.
- img_scale (None | tuple | list[tuple]): Images scales for resizing.
- img_ratios (float | list[float]): Image ratios for resizing
- flip (bool): Whether apply flip augmentation. Default: False.
- flip_direction (str | list[str]): Flip augmentation directions,
- options are "horizontal" and "vertical". If flip_direction is list,
- multiple flip augmentations will be applied.
- It has no effect when flip == False. Default: "horizontal".
- """
-
- def __init__(self,
- transforms,
- img_scale,
- img_ratios=None,
- flip=False,
- flip_direction='horizontal'):
- self.transforms = Compose(transforms)
- if img_ratios is not None:
- img_ratios = img_ratios if isinstance(img_ratios,
- list) else [img_ratios]
- assert mmcv.is_list_of(img_ratios, float)
- if img_scale is None:
- # mode 1: given img_scale=None and a range of image ratio
- self.img_scale = None
- assert mmcv.is_list_of(img_ratios, float)
- elif isinstance(img_scale, tuple) and mmcv.is_list_of(
- img_ratios, float):
- assert len(img_scale) == 2
- # mode 2: given a scale and a range of image ratio
- self.img_scale = [(int(img_scale[0] * ratio),
- int(img_scale[1] * ratio))
- for ratio in img_ratios]
- else:
- # mode 3: given multiple scales
- self.img_scale = img_scale if isinstance(img_scale,
- list) else [img_scale]
- assert mmcv.is_list_of(self.img_scale, tuple) or self.img_scale is None
- self.flip = flip
- self.img_ratios = img_ratios
- self.flip_direction = flip_direction if isinstance(
- flip_direction, list) else [flip_direction]
- assert mmcv.is_list_of(self.flip_direction, str)
- if not self.flip and self.flip_direction != ['horizontal']:
- warnings.warn(
- 'flip_direction has no effect when flip is set to False')
- if (self.flip
- and not any([t['type'] == 'RandomFlip' for t in transforms])):
- warnings.warn(
- 'flip has no effect when RandomFlip is not in transforms')
-
- def __call__(self, results):
- """Call function to apply test time augment transforms on results.
-
- Args:
- results (dict): Result dict contains the data to transform.
-
- Returns:
- dict[str: list]: The augmented data, where each value is wrapped
- into a list.
- """
-
- aug_data = []
- if self.img_scale is None and mmcv.is_list_of(self.img_ratios, float):
- h, w = results['img'].shape[:2]
- img_scale = [(int(w * ratio), int(h * ratio))
- for ratio in self.img_ratios]
- else:
- img_scale = self.img_scale
- flip_aug = [False, True] if self.flip else [False]
- for scale in img_scale:
- for flip in flip_aug:
- for direction in self.flip_direction:
- _results = results.copy()
- _results['scale'] = scale
- _results['flip'] = flip
- _results['flip_direction'] = direction
- data = self.transforms(_results)
- aug_data.append(data)
- # list of dict to dict of list
- aug_data_dict = {key: [] for key in aug_data[0]}
- for data in aug_data:
- for key, val in data.items():
- aug_data_dict[key].append(val)
- return aug_data_dict
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(transforms={self.transforms}, '
- repr_str += f'img_scale={self.img_scale}, flip={self.flip}, '
- repr_str += f'flip_direction={self.flip_direction})'
- return repr_str
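For reference, the scale/flip enumeration that MultiScaleFlipAug.__call__ performs above can be reproduced with a small standalone sketch. This is plain Python with no mmcv or pipeline registry; the helper name enumerate_tta_combinations is ours, not part of mmseg.

def enumerate_tta_combinations(img_scale, img_ratios, flip=False,
                               flip_direction=("horizontal",)):
    # Mirror "mode 2" above: one base (w, h) scale rescaled by each ratio.
    scales = [(int(img_scale[0] * r), int(img_scale[1] * r)) for r in img_ratios]
    flip_aug = [False, True] if flip else [False]
    combos = []
    for scale in scales:
        for do_flip in flip_aug:
            for direction in flip_direction:
                combos.append(dict(scale=scale, flip=do_flip,
                                   flip_direction=direction))
    return combos

# Matches the docstring example: two ratios x {no flip, flip} -> 4 augmented views.
for combo in enumerate_tta_combinations((2048, 1024), [0.5, 1.0], flip=True):
    print(combo)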
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/necks/fpn.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/necks/fpn.py
deleted file mode 100644
index a53b2a69500f8c2edb835abc3ff0ccc2173d1fb1..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/necks/fpn.py
+++ /dev/null
@@ -1,212 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule, xavier_init
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class FPN(nn.Module):
- """Feature Pyramid Network.
-
- This is an implementation of - Feature Pyramid Networks for Object
- Detection (https://arxiv.org/abs/1612.03144)
-
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale)
- num_outs (int): Number of output scales.
- start_level (int): Index of the start input backbone level used to
- build the feature pyramid. Default: 0.
- end_level (int): Index of the end input backbone level (exclusive) to
- build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool | str): If bool, it decides whether to add conv
- layers on top of the original feature maps. Default to False.
- If True, its actual mode is specified by `extra_convs_on_inputs`.
- If str, it specifies the source feature map of the extra convs.
- Only the following options are allowed
-
- - 'on_input': Last feat map of neck inputs (i.e. backbone feature).
- - 'on_lateral': Last feature map after lateral convs.
- - 'on_output': The last output feature map after fpn convs.
- extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs
- on the original feature from the backbone. If True,
- it is equivalent to `add_extra_convs='on_input'`. If False, it is
- equivalent to set `add_extra_convs='on_output'`. Default to True.
- relu_before_extra_convs (bool): Whether to apply relu before the extra
- conv. Default: False.
- no_norm_on_lateral (bool): Whether to apply norm on lateral.
- Default: False.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Config dict for normalization layer. Default: None.
- act_cfg (str): Config dict for activation layer in ConvModule.
- Default: None.
- upsample_cfg (dict): Config dict for interpolate layer.
- Default: `dict(mode='nearest')`
-
- Example:
- >>> import torch
- >>> in_channels = [2, 3, 5, 7]
- >>> scales = [340, 170, 84, 43]
- >>> inputs = [torch.rand(1, c, s, s)
- ... for c, s in zip(in_channels, scales)]
- >>> self = FPN(in_channels, 11, len(in_channels)).eval()
- >>> outputs = self.forward(inputs)
- >>> for i in range(len(outputs)):
- ... print(f'outputs[{i}].shape = {outputs[i].shape}')
- outputs[0].shape = torch.Size([1, 11, 340, 340])
- outputs[1].shape = torch.Size([1, 11, 170, 170])
- outputs[2].shape = torch.Size([1, 11, 84, 84])
- outputs[3].shape = torch.Size([1, 11, 43, 43])
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_outs,
- start_level=0,
- end_level=-1,
- add_extra_convs=False,
- extra_convs_on_inputs=False,
- relu_before_extra_convs=False,
- no_norm_on_lateral=False,
- conv_cfg=None,
- norm_cfg=None,
- act_cfg=None,
- upsample_cfg=dict(mode='nearest')):
- super(FPN, self).__init__()
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.num_ins = len(in_channels)
- self.num_outs = num_outs
- self.relu_before_extra_convs = relu_before_extra_convs
- self.no_norm_on_lateral = no_norm_on_lateral
- self.fp16_enabled = False
- self.upsample_cfg = upsample_cfg.copy()
-
- if end_level == -1:
- self.backbone_end_level = self.num_ins
- assert num_outs >= self.num_ins - start_level
- else:
- # if end_level < inputs, no extra level is allowed
- self.backbone_end_level = end_level
- assert end_level <= len(in_channels)
- assert num_outs == end_level - start_level
- self.start_level = start_level
- self.end_level = end_level
- self.add_extra_convs = add_extra_convs
- assert isinstance(add_extra_convs, (str, bool))
- if isinstance(add_extra_convs, str):
- # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output'
- assert add_extra_convs in ('on_input', 'on_lateral', 'on_output')
- elif add_extra_convs: # True
- if extra_convs_on_inputs:
- # For compatibility with previous release
- # TODO: deprecate `extra_convs_on_inputs`
- self.add_extra_convs = 'on_input'
- else:
- self.add_extra_convs = 'on_output'
-
- self.lateral_convs = nn.ModuleList()
- self.fpn_convs = nn.ModuleList()
-
- for i in range(self.start_level, self.backbone_end_level):
- l_conv = ConvModule(
- in_channels[i],
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg if not self.no_norm_on_lateral else None,
- act_cfg=act_cfg,
- inplace=False)
- fpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
-
- self.lateral_convs.append(l_conv)
- self.fpn_convs.append(fpn_conv)
-
- # add extra conv layers (e.g., RetinaNet)
- extra_levels = num_outs - self.backbone_end_level + self.start_level
- if self.add_extra_convs and extra_levels >= 1:
- for i in range(extra_levels):
- if i == 0 and self.add_extra_convs == 'on_input':
- in_channels = self.in_channels[self.backbone_end_level - 1]
- else:
- in_channels = out_channels
- extra_fpn_conv = ConvModule(
- in_channels,
- out_channels,
- 3,
- stride=2,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- self.fpn_convs.append(extra_fpn_conv)
-
- # default init_weights for conv(msra) and norm in ConvModule
- def init_weights(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- xavier_init(m, distribution='uniform')
-
- def forward(self, inputs):
- assert len(inputs) == len(self.in_channels)
-
- # build laterals
- laterals = [
- lateral_conv(inputs[i + self.start_level])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
-
- # build top-down path
- used_backbone_levels = len(laterals)
- for i in range(used_backbone_levels - 1, 0, -1):
- # In some cases, fixing `scale factor` (e.g. 2) is preferred, but
- # it cannot co-exist with `size` in `F.interpolate`.
- if 'scale_factor' in self.upsample_cfg:
- laterals[i - 1] += F.interpolate(laterals[i],
- **self.upsample_cfg)
- else:
- prev_shape = laterals[i - 1].shape[2:]
- laterals[i - 1] += F.interpolate(
- laterals[i], size=prev_shape, **self.upsample_cfg)
-
- # build outputs
- # part 1: from original levels
- outs = [
- self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels)
- ]
- # part 2: add extra levels
- if self.num_outs > len(outs):
- # use max pool to get more levels on top of outputs
- # (e.g., Faster R-CNN, Mask R-CNN)
- if not self.add_extra_convs:
- for i in range(self.num_outs - used_backbone_levels):
- outs.append(F.max_pool2d(outs[-1], 1, stride=2))
- # add conv layers on top of original feature maps (RetinaNet)
- else:
- if self.add_extra_convs == 'on_input':
- extra_source = inputs[self.backbone_end_level - 1]
- elif self.add_extra_convs == 'on_lateral':
- extra_source = laterals[-1]
- elif self.add_extra_convs == 'on_output':
- extra_source = outs[-1]
- else:
- raise NotImplementedError
- outs.append(self.fpn_convs[used_backbone_levels](extra_source))
- for i in range(used_backbone_levels + 1, self.num_outs):
- if self.relu_before_extra_convs:
- outs.append(self.fpn_convs[i](F.relu(outs[-1])))
- else:
- outs.append(self.fpn_convs[i](outs[-1]))
- return tuple(outs)
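As a quick illustration of the lateral-plus-top-down merge implemented in FPN.forward above, here is a minimal, self-contained sketch. It uses plain nn.Conv2d in place of mmcv's ConvModule and omits the extra output levels; TinyFPN is our name for the sketch, not part of mmseg.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # 1x1 lateral convs project every backbone level to a common width.
        self.lateral_convs = nn.ModuleList(
            [nn.Conv2d(c, out_channels, 1) for c in in_channels])
        # 3x3 convs smooth each merged map.
        self.fpn_convs = nn.ModuleList(
            [nn.Conv2d(out_channels, out_channels, 3, padding=1) for _ in in_channels])

    def forward(self, inputs):
        laterals = [conv(x) for conv, x in zip(self.lateral_convs, inputs)]
        # Top-down pathway: upsample the coarser map and add it to the finer one.
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[2:], mode='nearest')
        return tuple(conv(lat) for conv, lat in zip(self.fpn_convs, laterals))

in_channels, scales = [2, 3, 5, 7], [64, 32, 16, 8]
inputs = [torch.rand(1, c, s, s) for c, s in zip(in_channels, scales)]
for out in TinyFPN(in_channels, 11)(inputs):
    print(tuple(out.shape))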
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/beit.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/beit.py
deleted file mode 100644
index 7a24e02cd2b979844bf638b46ac60949ee9ce691..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/beit.py
+++ /dev/null
@@ -1,196 +0,0 @@
-import timm
-import torch
-import types
-
-import numpy as np
-import torch.nn.functional as F
-
-from .utils import forward_adapted_unflatten, make_backbone_default
-from timm.models.beit import gen_relative_position_index
-from torch.utils.checkpoint import checkpoint
-from typing import Optional
-
-
-def forward_beit(pretrained, x):
- return forward_adapted_unflatten(pretrained, x, "forward_features")
-
-
-def patch_embed_forward(self, x):
- """
- Modification of timm.models.layers.patch_embed.py: PatchEmbed.forward to support arbitrary window sizes.
- """
- x = self.proj(x)
- if self.flatten:
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- return x
-
-
-def _get_rel_pos_bias(self, window_size):
- """
- Modification of timm.models.beit.py: Attention._get_rel_pos_bias to support arbitrary window sizes.
- """
- old_height = 2 * self.window_size[0] - 1
- old_width = 2 * self.window_size[1] - 1
-
- new_height = 2 * window_size[0] - 1
- new_width = 2 * window_size[1] - 1
-
- old_relative_position_bias_table = self.relative_position_bias_table
-
- old_num_relative_distance = self.num_relative_distance
- new_num_relative_distance = new_height * new_width + 3
-
- old_sub_table = old_relative_position_bias_table[:old_num_relative_distance - 3]
-
- old_sub_table = old_sub_table.reshape(1, old_width, old_height, -1).permute(0, 3, 1, 2)
- new_sub_table = F.interpolate(old_sub_table, size=(new_height, new_width), mode="bilinear")
- new_sub_table = new_sub_table.permute(0, 2, 3, 1).reshape(new_num_relative_distance - 3, -1)
-
- new_relative_position_bias_table = torch.cat(
- [new_sub_table, old_relative_position_bias_table[old_num_relative_distance - 3:]])
-
- key = str(window_size[1]) + "," + str(window_size[0])
- if key not in self.relative_position_indices.keys():
- self.relative_position_indices[key] = gen_relative_position_index(window_size)
-
- relative_position_bias = new_relative_position_bias_table[
- self.relative_position_indices[key].view(-1)].view(
- window_size[0] * window_size[1] + 1,
- window_size[0] * window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- return relative_position_bias.unsqueeze(0)
-
-
-def attention_forward(self, x, resolution, shared_rel_pos_bias: Optional[torch.Tensor] = None):
- """
- Modification of timm.models.beit.py: Attention.forward to support arbitrary window sizes.
- """
- B, N, C = x.shape
-
- qkv_bias = torch.cat((self.q_bias, self.k_bias, self.v_bias)) if self.q_bias is not None else None
- qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias)
- qkv = qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
- q, k, v = qkv.unbind(0) # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- if self.relative_position_bias_table is not None:
- window_size = tuple(np.array(resolution) // 16)
- attn = attn + self._get_rel_pos_bias(window_size)
- if shared_rel_pos_bias is not None:
- attn = attn + shared_rel_pos_bias
-
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, -1)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-def block_forward(self, x, resolution, shared_rel_pos_bias: Optional[torch.Tensor] = None):
- """
- Modification of timm.models.beit.py: Block.forward to support arbitrary window sizes.
- """
- if self.gamma_1 is None:
- x = x + self.drop_path(self.attn(self.norm1(x), resolution, shared_rel_pos_bias=shared_rel_pos_bias))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- else:
- x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), resolution,
- shared_rel_pos_bias=shared_rel_pos_bias))
- x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x)))
- return x
-
-
-def beit_forward_features(self, x):
- """
- Modification of timm.models.beit.py: Beit.forward_features to support arbitrary window sizes.
- """
- resolution = x.shape[2:]
-
- x = self.patch_embed(x)
- x = torch.cat((self.cls_token.expand(x.shape[0], -1, -1), x), dim=1)
- if self.pos_embed is not None:
- x = x + self.pos_embed
- x = self.pos_drop(x)
-
- rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None
- for blk in self.blocks:
- if self.grad_checkpointing and not torch.jit.is_scripting():
- x = checkpoint(blk, x, resolution, shared_rel_pos_bias=rel_pos_bias)
- else:
- x = blk(x, resolution, shared_rel_pos_bias=rel_pos_bias)
- x = self.norm(x)
- return x
-
-
-def _make_beit_backbone(
- model,
- features=[96, 192, 384, 768],
- size=[384, 384],
- hooks=[0, 4, 8, 11],
- vit_features=768,
- use_readout="ignore",
- start_index=1,
- start_index_readout=1,
-):
- backbone = make_backbone_default(model, features, size, hooks, vit_features, use_readout, start_index,
- start_index_readout)
-
- backbone.model.patch_embed.forward = types.MethodType(patch_embed_forward, backbone.model.patch_embed)
- backbone.model.forward_features = types.MethodType(beit_forward_features, backbone.model)
-
- for block in backbone.model.blocks:
- attn = block.attn
- attn._get_rel_pos_bias = types.MethodType(_get_rel_pos_bias, attn)
- attn.forward = types.MethodType(attention_forward, attn)
- attn.relative_position_indices = {}
-
- block.forward = types.MethodType(block_forward, block)
-
- return backbone
-
-
-def _make_pretrained_beitl16_512(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("beit_large_patch16_512", pretrained=pretrained)
-
- hooks = [5, 11, 17, 23] if hooks is None else hooks
-
- features = [256, 512, 1024, 1024]
-
- return _make_beit_backbone(
- model,
- features=features,
- size=[512, 512],
- hooks=hooks,
- vit_features=1024,
- use_readout=use_readout,
- )
-
-
-def _make_pretrained_beitl16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("beit_large_patch16_384", pretrained=pretrained)
-
- hooks = [5, 11, 17, 23] if hooks is None else hooks
- return _make_beit_backbone(
- model,
- features=[256, 512, 1024, 1024],
- hooks=hooks,
- vit_features=1024,
- use_readout=use_readout,
- )
-
-
-def _make_pretrained_beitb16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("beit_base_patch16_384", pretrained=pretrained)
-
- hooks = [2, 5, 8, 11] if hooks is None else hooks
- return _make_beit_backbone(
- model,
- features=[96, 192, 384, 768],
- hooks=hooks,
- use_readout=use_readout,
- )
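The core trick in _get_rel_pos_bias above is resizing the learned relative-position bias table so a BEiT trained at one window size can run at another resolution. Below is a stripped-down sketch of just that resizing step; the function name, the dummy table, and the example window sizes are ours, for illustration only.

import torch
import torch.nn.functional as F

def resize_rel_pos_table(table, old_window, new_window, num_extra_tokens=3):
    # The first (2H-1)*(2W-1) rows index relative offsets between patches; the
    # last rows (cls-token interactions) are kept as-is, mirroring the code above.
    old_h, old_w = 2 * old_window[0] - 1, 2 * old_window[1] - 1
    new_h, new_w = 2 * new_window[0] - 1, 2 * new_window[1] - 1
    grid, extra = table[:-num_extra_tokens], table[-num_extra_tokens:]
    grid = grid.reshape(1, old_w, old_h, -1).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=(new_h, new_w), mode="bilinear")
    grid = grid.permute(0, 2, 3, 1).reshape(new_h * new_w, -1)
    return torch.cat([grid, extra])

num_heads = 16
table = torch.randn((2 * 24 - 1) * (2 * 24 - 1) + 3, num_heads)  # trained window: 384 / 16 = 24
resized = resize_rel_pos_table(table, old_window=(24, 24), new_window=(32, 32))
print(resized.shape)  # torch.Size([3972, 16]) for a 512x512 input (512 / 16 = 32)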
diff --git a/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/app.py b/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/app.py
deleted file mode 100644
index a461703287a9bda9c93cfdfbb94d4c3cf90aaba9..0000000000000000000000000000000000000000
--- a/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/meta-llama/Llama-2-70b-chat-hf").launch()
\ No newline at end of file
diff --git a/spaces/danterivers/music-generation-samples/tests/data/test_audio_utils.py b/spaces/danterivers/music-generation-samples/tests/data/test_audio_utils.py
deleted file mode 100644
index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000
--- a/spaces/danterivers/music-generation-samples/tests/data/test_audio_utils.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import julius
-import torch
-import pytest
-
-from audiocraft.data.audio_utils import (
- _clip_wav,
- convert_audio_channels,
- convert_audio,
- normalize_audio
-)
-from ..common_utils import get_batch_white_noise
-
-
-class TestConvertAudioChannels:
-
- def test_convert_audio_channels_downmix(self):
- b, c, t = 2, 3, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=2)
- assert list(mixed.shape) == [b, 2, t]
-
- def test_convert_audio_channels_nochange(self):
- b, c, t = 2, 3, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=c)
- assert list(mixed.shape) == list(audio.shape)
-
- def test_convert_audio_channels_upmix(self):
- b, c, t = 2, 1, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=3)
- assert list(mixed.shape) == [b, 3, t]
-
- def test_convert_audio_channels_upmix_error(self):
- b, c, t = 2, 2, 100
- audio = get_batch_white_noise(b, c, t)
- with pytest.raises(ValueError):
- convert_audio_channels(audio, channels=3)
-
-
-class TestConvertAudio:
-
- def test_convert_audio_channels_downmix(self):
- b, c, dur = 2, 3, 4.
- sr = 128
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2)
- assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]]
-
- def test_convert_audio_channels_upmix(self):
- b, c, dur = 2, 1, 4.
- sr = 128
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3)
- assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]]
-
- def test_convert_audio_upsample(self):
- b, c, dur = 2, 1, 4.
- sr = 2
- new_sr = 3
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c)
- out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr)
- assert torch.allclose(out, out_j)
-
- def test_convert_audio_resample(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- new_sr = 2
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c)
- out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr)
- assert torch.allclose(out, out_j)
-
-
-class TestNormalizeAudio:
-
- def test_clip_wav(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- _clip_wav(audio)
- assert audio.abs().max() <= 1
-
- def test_normalize_audio_clip(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='clip')
- assert norm_audio.abs().max() <= 1
-
- def test_normalize_audio_rms(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='rms')
- assert norm_audio.abs().max() <= 1
-
- def test_normalize_audio_peak(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='peak')
- assert norm_audio.abs().max() <= 1
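The normalization tests above all assert one property: whatever strategy is used, the output peak must not exceed 1. Here is a minimal sketch of a 'clip'-style limiter that satisfies that invariant; it is an illustration only, not audiocraft's actual normalize_audio implementation.

import torch

def clip_normalize(wav: torch.Tensor) -> torch.Tensor:
    # Hard-limit samples to [-1, 1]; guarantees wav.abs().max() <= 1,
    # the invariant checked by TestNormalizeAudio above.
    return wav.clamp(-1.0, 1.0)

wav = 10.0 * torch.randn(2, 1, 12)  # deliberately out-of-range "audio"
assert clip_normalize(wav).abs().max() <= 1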
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GifImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GifImagePlugin.py
deleted file mode 100644
index cf2993e38920bdebf79c6342875c2898e174ef6b..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GifImagePlugin.py
+++ /dev/null
@@ -1,1064 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# GIF file handling
-#
-# History:
-# 1995-09-01 fl Created
-# 1996-12-14 fl Added interlace support
-# 1996-12-30 fl Added animation support
-# 1997-01-05 fl Added write support, fixed local colour map bug
-# 1997-02-23 fl Make sure to load raster data in getdata()
-# 1997-07-05 fl Support external decoder (0.4)
-# 1998-07-09 fl Handle all modes when saving (0.5)
-# 1998-07-15 fl Renamed offset attribute to avoid name clash
-# 2001-04-16 fl Added rewind support (seek to frame 0) (0.6)
-# 2001-04-17 fl Added palette optimization (0.7)
-# 2002-06-06 fl Added transparency support for save (0.8)
-# 2004-02-24 fl Disable interlacing for small images
-#
-# Copyright (c) 1997-2004 by Secret Labs AB
-# Copyright (c) 1995-2004 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import itertools
-import math
-import os
-import subprocess
-from enum import IntEnum
-
-from . import Image, ImageChops, ImageFile, ImagePalette, ImageSequence
-from ._binary import i16le as i16
-from ._binary import o8
-from ._binary import o16le as o16
-
-
-class LoadingStrategy(IntEnum):
- """.. versionadded:: 9.1.0"""
-
- RGB_AFTER_FIRST = 0
- RGB_AFTER_DIFFERENT_PALETTE_ONLY = 1
- RGB_ALWAYS = 2
-
-
-#: .. versionadded:: 9.1.0
-LOADING_STRATEGY = LoadingStrategy.RGB_AFTER_FIRST
-
-# --------------------------------------------------------------------
-# Identify/read GIF files
-
-
-def _accept(prefix):
- return prefix[:6] in [b"GIF87a", b"GIF89a"]
-
-
-##
-# Image plugin for GIF images. This plugin supports both GIF87 and
-# GIF89 images.
-
-
-class GifImageFile(ImageFile.ImageFile):
- format = "GIF"
- format_description = "Compuserve GIF"
- _close_exclusive_fp_after_loading = False
-
- global_palette = None
-
- def data(self):
- s = self.fp.read(1)
- if s and s[0]:
- return self.fp.read(s[0])
- return None
-
- def _is_palette_needed(self, p):
- for i in range(0, len(p), 3):
- if not (i // 3 == p[i] == p[i + 1] == p[i + 2]):
- return True
- return False
-
- def _open(self):
- # Screen
- s = self.fp.read(13)
- if not _accept(s):
- msg = "not a GIF file"
- raise SyntaxError(msg)
-
- self.info["version"] = s[:6]
- self._size = i16(s, 6), i16(s, 8)
- self.tile = []
- flags = s[10]
- bits = (flags & 7) + 1
-
- if flags & 128:
- # get global palette
- self.info["background"] = s[11]
- # check if palette contains colour indices
- p = self.fp.read(3 << bits)
- if self._is_palette_needed(p):
- p = ImagePalette.raw("RGB", p)
- self.global_palette = self.palette = p
-
- self._fp = self.fp # FIXME: hack
- self.__rewind = self.fp.tell()
- self._n_frames = None
- self._is_animated = None
- self._seek(0) # get ready to read first frame
-
- @property
- def n_frames(self):
- if self._n_frames is None:
- current = self.tell()
- try:
- while True:
- self._seek(self.tell() + 1, False)
- except EOFError:
- self._n_frames = self.tell() + 1
- self.seek(current)
- return self._n_frames
-
- @property
- def is_animated(self):
- if self._is_animated is None:
- if self._n_frames is not None:
- self._is_animated = self._n_frames != 1
- else:
- current = self.tell()
- if current:
- self._is_animated = True
- else:
- try:
- self._seek(1, False)
- self._is_animated = True
- except EOFError:
- self._is_animated = False
-
- self.seek(current)
- return self._is_animated
-
- def seek(self, frame):
- if not self._seek_check(frame):
- return
- if frame < self.__frame:
- self.im = None
- self._seek(0)
-
- last_frame = self.__frame
- for f in range(self.__frame + 1, frame + 1):
- try:
- self._seek(f)
- except EOFError as e:
- self.seek(last_frame)
- msg = "no more images in GIF file"
- raise EOFError(msg) from e
-
- def _seek(self, frame, update_image=True):
- if frame == 0:
- # rewind
- self.__offset = 0
- self.dispose = None
- self.__frame = -1
- self._fp.seek(self.__rewind)
- self.disposal_method = 0
- if "comment" in self.info:
- del self.info["comment"]
- else:
- # ensure that the previous frame was loaded
- if self.tile and update_image:
- self.load()
-
- if frame != self.__frame + 1:
- msg = f"cannot seek to frame {frame}"
- raise ValueError(msg)
-
- self.fp = self._fp
- if self.__offset:
- # backup to last frame
- self.fp.seek(self.__offset)
- while self.data():
- pass
- self.__offset = 0
-
- s = self.fp.read(1)
- if not s or s == b";":
- raise EOFError
-
- palette = None
-
- info = {}
- frame_transparency = None
- interlace = None
- frame_dispose_extent = None
- while True:
- if not s:
- s = self.fp.read(1)
- if not s or s == b";":
- break
-
- elif s == b"!":
- #
- # extensions
- #
- s = self.fp.read(1)
- block = self.data()
- if s[0] == 249:
- #
- # graphic control extension
- #
- flags = block[0]
- if flags & 1:
- frame_transparency = block[3]
- info["duration"] = i16(block, 1) * 10
-
- # disposal method - find the value of bits 4 - 6
- dispose_bits = 0b00011100 & flags
- dispose_bits = dispose_bits >> 2
- if dispose_bits:
- # only set the dispose if it is not
- # unspecified. I'm not sure if this is
- # correct, but it seems to prevent the last
- # frame from looking odd for some animations
- self.disposal_method = dispose_bits
- elif s[0] == 254:
- #
- # comment extension
- #
- comment = b""
-
- # Read this comment block
- while block:
- comment += block
- block = self.data()
-
- if "comment" in info:
- # If multiple comment blocks in frame, separate with \n
- info["comment"] += b"\n" + comment
- else:
- info["comment"] = comment
- s = None
- continue
- elif s[0] == 255 and frame == 0:
- #
- # application extension
- #
- info["extension"] = block, self.fp.tell()
- if block[:11] == b"NETSCAPE2.0":
- block = self.data()
- if len(block) >= 3 and block[0] == 1:
- self.info["loop"] = i16(block, 1)
- while self.data():
- pass
-
- elif s == b",":
- #
- # local image
- #
- s = self.fp.read(9)
-
- # extent
- x0, y0 = i16(s, 0), i16(s, 2)
- x1, y1 = x0 + i16(s, 4), y0 + i16(s, 6)
- if (x1 > self.size[0] or y1 > self.size[1]) and update_image:
- self._size = max(x1, self.size[0]), max(y1, self.size[1])
- Image._decompression_bomb_check(self._size)
- frame_dispose_extent = x0, y0, x1, y1
- flags = s[8]
-
- interlace = (flags & 64) != 0
-
- if flags & 128:
- bits = (flags & 7) + 1
- p = self.fp.read(3 << bits)
- if self._is_palette_needed(p):
- palette = ImagePalette.raw("RGB", p)
- else:
- palette = False
-
- # image data
- bits = self.fp.read(1)[0]
- self.__offset = self.fp.tell()
- break
-
- else:
- pass
- # raise OSError, "illegal GIF tag `%x`" % s[0]
- s = None
-
- if interlace is None:
- # self._fp = None
- raise EOFError
-
- self.__frame = frame
- if not update_image:
- return
-
- self.tile = []
-
- if self.dispose:
- self.im.paste(self.dispose, self.dispose_extent)
-
- self._frame_palette = palette if palette is not None else self.global_palette
- self._frame_transparency = frame_transparency
- if frame == 0:
- if self._frame_palette:
- if LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS:
- self.mode = "RGBA" if frame_transparency is not None else "RGB"
- else:
- self.mode = "P"
- else:
- self.mode = "L"
-
- if not palette and self.global_palette:
- from copy import copy
-
- palette = copy(self.global_palette)
- self.palette = palette
- else:
- if self.mode == "P":
- if (
- LOADING_STRATEGY != LoadingStrategy.RGB_AFTER_DIFFERENT_PALETTE_ONLY
- or palette
- ):
- self.pyaccess = None
- if "transparency" in self.info:
- self.im.putpalettealpha(self.info["transparency"], 0)
- self.im = self.im.convert("RGBA", Image.Dither.FLOYDSTEINBERG)
- self.mode = "RGBA"
- del self.info["transparency"]
- else:
- self.mode = "RGB"
- self.im = self.im.convert("RGB", Image.Dither.FLOYDSTEINBERG)
-
- def _rgb(color):
- if self._frame_palette:
- color = tuple(self._frame_palette.palette[color * 3 : color * 3 + 3])
- else:
- color = (color, color, color)
- return color
-
- self.dispose_extent = frame_dispose_extent
- try:
- if self.disposal_method < 2:
- # do not dispose or none specified
- self.dispose = None
- elif self.disposal_method == 2:
- # replace with background colour
-
- # only dispose the extent in this frame
- x0, y0, x1, y1 = self.dispose_extent
- dispose_size = (x1 - x0, y1 - y0)
-
- Image._decompression_bomb_check(dispose_size)
-
- # by convention, attempt to use transparency first
- dispose_mode = "P"
- color = self.info.get("transparency", frame_transparency)
- if color is not None:
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGBA"
- color = _rgb(color) + (0,)
- else:
- color = self.info.get("background", 0)
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGB"
- color = _rgb(color)
- self.dispose = Image.core.fill(dispose_mode, dispose_size, color)
- else:
- # replace with previous contents
- if self.im is not None:
- # only dispose the extent in this frame
- self.dispose = self._crop(self.im, self.dispose_extent)
- elif frame_transparency is not None:
- x0, y0, x1, y1 = self.dispose_extent
- dispose_size = (x1 - x0, y1 - y0)
-
- Image._decompression_bomb_check(dispose_size)
- dispose_mode = "P"
- color = frame_transparency
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGBA"
- color = _rgb(frame_transparency) + (0,)
- self.dispose = Image.core.fill(dispose_mode, dispose_size, color)
- except AttributeError:
- pass
-
- if interlace is not None:
- transparency = -1
- if frame_transparency is not None:
- if frame == 0:
- if LOADING_STRATEGY != LoadingStrategy.RGB_ALWAYS:
- self.info["transparency"] = frame_transparency
- elif self.mode not in ("RGB", "RGBA"):
- transparency = frame_transparency
- self.tile = [
- (
- "gif",
- (x0, y0, x1, y1),
- self.__offset,
- (bits, interlace, transparency),
- )
- ]
-
- if info.get("comment"):
- self.info["comment"] = info["comment"]
- for k in ["duration", "extension"]:
- if k in info:
- self.info[k] = info[k]
- elif k in self.info:
- del self.info[k]
-
- def load_prepare(self):
- temp_mode = "P" if self._frame_palette else "L"
- self._prev_im = None
- if self.__frame == 0:
- if self._frame_transparency is not None:
- self.im = Image.core.fill(
- temp_mode, self.size, self._frame_transparency
- )
- elif self.mode in ("RGB", "RGBA"):
- self._prev_im = self.im
- if self._frame_palette:
- self.im = Image.core.fill("P", self.size, self._frame_transparency or 0)
- self.im.putpalette(*self._frame_palette.getdata())
- else:
- self.im = None
- self.mode = temp_mode
- self._frame_palette = None
-
- super().load_prepare()
-
- def load_end(self):
- if self.__frame == 0:
- if self.mode == "P" and LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS:
- if self._frame_transparency is not None:
- self.im.putpalettealpha(self._frame_transparency, 0)
- self.mode = "RGBA"
- else:
- self.mode = "RGB"
- self.im = self.im.convert(self.mode, Image.Dither.FLOYDSTEINBERG)
- return
- if not self._prev_im:
- return
- if self._frame_transparency is not None:
- self.im.putpalettealpha(self._frame_transparency, 0)
- frame_im = self.im.convert("RGBA")
- else:
- frame_im = self.im.convert("RGB")
- frame_im = self._crop(frame_im, self.dispose_extent)
-
- self.im = self._prev_im
- self.mode = self.im.mode
- if frame_im.mode == "RGBA":
- self.im.paste(frame_im, self.dispose_extent, frame_im)
- else:
- self.im.paste(frame_im, self.dispose_extent)
-
- def tell(self):
- return self.__frame
-
-
-# --------------------------------------------------------------------
-# Write GIF files
-
-
-RAWMODE = {"1": "L", "L": "L", "P": "P"}
-
-
-def _normalize_mode(im):
- """
- Takes an image (or frame), returns an image in a mode that is appropriate
- for saving in a Gif.
-
- It may return the original image, or it may return an image converted to
- palette or 'L' mode.
-
- :param im: Image object
- :returns: Image object
- """
- if im.mode in RAWMODE:
- im.load()
- return im
- if Image.getmodebase(im.mode) == "RGB":
- im = im.convert("P", palette=Image.Palette.ADAPTIVE)
- if im.palette.mode == "RGBA":
- for rgba in im.palette.colors:
- if rgba[3] == 0:
- im.info["transparency"] = im.palette.colors[rgba]
- break
- return im
- return im.convert("L")
-
-
-def _normalize_palette(im, palette, info):
- """
- Normalizes the palette for image.
- - Sets the palette to the incoming palette, if provided.
- - Ensures that there's a palette for L mode images
- - Optimizes the palette if necessary/desired.
-
- :param im: Image object
- :param palette: bytes object containing the source palette, or ....
- :param info: encoderinfo
- :returns: Image object
- """
- source_palette = None
- if palette:
- # a bytes palette
- if isinstance(palette, (bytes, bytearray, list)):
- source_palette = bytearray(palette[:768])
- if isinstance(palette, ImagePalette.ImagePalette):
- source_palette = bytearray(palette.palette)
-
- if im.mode == "P":
- if not source_palette:
- source_palette = im.im.getpalette("RGB")[:768]
- else: # L-mode
- if not source_palette:
- source_palette = bytearray(i // 3 for i in range(768))
- im.palette = ImagePalette.ImagePalette("RGB", palette=source_palette)
-
- if palette:
- used_palette_colors = []
- for i in range(0, len(source_palette), 3):
- source_color = tuple(source_palette[i : i + 3])
- index = im.palette.colors.get(source_color)
- if index in used_palette_colors:
- index = None
- used_palette_colors.append(index)
- for i, index in enumerate(used_palette_colors):
- if index is None:
- for j in range(len(used_palette_colors)):
- if j not in used_palette_colors:
- used_palette_colors[i] = j
- break
- im = im.remap_palette(used_palette_colors)
- else:
- used_palette_colors = _get_optimize(im, info)
- if used_palette_colors is not None:
- return im.remap_palette(used_palette_colors, source_palette)
-
- im.palette.palette = source_palette
- return im
-
-
-def _write_single_frame(im, fp, palette):
- im_out = _normalize_mode(im)
- for k, v in im_out.info.items():
- im.encoderinfo.setdefault(k, v)
- im_out = _normalize_palette(im_out, palette, im.encoderinfo)
-
- for s in _get_global_header(im_out, im.encoderinfo):
- fp.write(s)
-
- # local image header
- flags = 0
- if get_interlace(im):
- flags = flags | 64
- _write_local_header(fp, im, (0, 0), flags)
-
- im_out.encoderconfig = (8, get_interlace(im))
- ImageFile._save(im_out, fp, [("gif", (0, 0) + im.size, 0, RAWMODE[im_out.mode])])
-
- fp.write(b"\0") # end of image data
-
-
-def _getbbox(base_im, im_frame):
- if _get_palette_bytes(im_frame) == _get_palette_bytes(base_im):
- delta = ImageChops.subtract_modulo(im_frame, base_im)
- else:
- delta = ImageChops.subtract_modulo(
- im_frame.convert("RGBA"), base_im.convert("RGBA")
- )
- return delta.getbbox(alpha_only=False)
-
-
-def _write_multiple_frames(im, fp, palette):
- duration = im.encoderinfo.get("duration")
- disposal = im.encoderinfo.get("disposal", im.info.get("disposal"))
-
- im_frames = []
- frame_count = 0
- background_im = None
- for imSequence in itertools.chain([im], im.encoderinfo.get("append_images", [])):
- for im_frame in ImageSequence.Iterator(imSequence):
- # a copy is required here since seek can still mutate the image
- im_frame = _normalize_mode(im_frame.copy())
- if frame_count == 0:
- for k, v in im_frame.info.items():
- if k == "transparency":
- continue
- im.encoderinfo.setdefault(k, v)
-
- encoderinfo = im.encoderinfo.copy()
- im_frame = _normalize_palette(im_frame, palette, encoderinfo)
- if "transparency" in im_frame.info:
- encoderinfo.setdefault("transparency", im_frame.info["transparency"])
- if isinstance(duration, (list, tuple)):
- encoderinfo["duration"] = duration[frame_count]
- elif duration is None and "duration" in im_frame.info:
- encoderinfo["duration"] = im_frame.info["duration"]
- if isinstance(disposal, (list, tuple)):
- encoderinfo["disposal"] = disposal[frame_count]
- frame_count += 1
-
- if im_frames:
- # delta frame
- previous = im_frames[-1]
- bbox = _getbbox(previous["im"], im_frame)
- if not bbox:
- # This frame is identical to the previous frame
- if encoderinfo.get("duration"):
- previous["encoderinfo"]["duration"] += encoderinfo["duration"]
- continue
- if encoderinfo.get("disposal") == 2:
- if background_im is None:
- color = im.encoderinfo.get(
- "transparency", im.info.get("transparency", (0, 0, 0))
- )
- background = _get_background(im_frame, color)
- background_im = Image.new("P", im_frame.size, background)
- background_im.putpalette(im_frames[0]["im"].palette)
- bbox = _getbbox(background_im, im_frame)
- else:
- bbox = None
- im_frames.append({"im": im_frame, "bbox": bbox, "encoderinfo": encoderinfo})
-
- if len(im_frames) > 1:
- for frame_data in im_frames:
- im_frame = frame_data["im"]
- if not frame_data["bbox"]:
- # global header
- for s in _get_global_header(im_frame, frame_data["encoderinfo"]):
- fp.write(s)
- offset = (0, 0)
- else:
- # compress difference
- if not palette:
- frame_data["encoderinfo"]["include_color_table"] = True
-
- im_frame = im_frame.crop(frame_data["bbox"])
- offset = frame_data["bbox"][:2]
- _write_frame_data(fp, im_frame, offset, frame_data["encoderinfo"])
- return True
- elif "duration" in im.encoderinfo and isinstance(
- im.encoderinfo["duration"], (list, tuple)
- ):
- # Since multiple frames will not be written, add together the frame durations
- im.encoderinfo["duration"] = sum(im.encoderinfo["duration"])
-
-
-def _save_all(im, fp, filename):
- _save(im, fp, filename, save_all=True)
-
-
-def _save(im, fp, filename, save_all=False):
- # header
- if "palette" in im.encoderinfo or "palette" in im.info:
- palette = im.encoderinfo.get("palette", im.info.get("palette"))
- else:
- palette = None
- im.encoderinfo["optimize"] = im.encoderinfo.get("optimize", True)
-
- if not save_all or not _write_multiple_frames(im, fp, palette):
- _write_single_frame(im, fp, palette)
-
- fp.write(b";") # end of file
-
- if hasattr(fp, "flush"):
- fp.flush()
-
-
-def get_interlace(im):
- interlace = im.encoderinfo.get("interlace", 1)
-
- # workaround for @PIL153
- if min(im.size) < 16:
- interlace = 0
-
- return interlace
-
-
-def _write_local_header(fp, im, offset, flags):
- transparent_color_exists = False
- try:
- if "transparency" in im.encoderinfo:
- transparency = im.encoderinfo["transparency"]
- else:
- transparency = im.info["transparency"]
- transparency = int(transparency)
- except (KeyError, ValueError):
- pass
- else:
- # optimize the block away if transparent color is not used
- transparent_color_exists = True
-
- used_palette_colors = _get_optimize(im, im.encoderinfo)
- if used_palette_colors is not None:
- # adjust the transparency index after optimize
- try:
- transparency = used_palette_colors.index(transparency)
- except ValueError:
- transparent_color_exists = False
-
- if "duration" in im.encoderinfo:
- duration = int(im.encoderinfo["duration"] / 10)
- else:
- duration = 0
-
- disposal = int(im.encoderinfo.get("disposal", 0))
-
- if transparent_color_exists or duration != 0 or disposal:
- packed_flag = 1 if transparent_color_exists else 0
- packed_flag |= disposal << 2
- if not transparent_color_exists:
- transparency = 0
-
- fp.write(
- b"!"
- + o8(249) # extension intro
- + o8(4) # length
- + o8(packed_flag) # packed fields
- + o16(duration) # duration
- + o8(transparency) # transparency index
- + o8(0)
- )
-
- include_color_table = im.encoderinfo.get("include_color_table")
- if include_color_table:
- palette_bytes = _get_palette_bytes(im)
- color_table_size = _get_color_table_size(palette_bytes)
- if color_table_size:
- flags = flags | 128 # local color table flag
- flags = flags | color_table_size
-
- fp.write(
- b","
- + o16(offset[0]) # offset
- + o16(offset[1])
- + o16(im.size[0]) # size
- + o16(im.size[1])
- + o8(flags) # flags
- )
- if include_color_table and color_table_size:
- fp.write(_get_header_palette(palette_bytes))
- fp.write(o8(8)) # bits
-
-
-def _save_netpbm(im, fp, filename):
- # Unused by default.
- # To use, uncomment the register_save call at the end of the file.
- #
- # If you need real GIF compression and/or RGB quantization, you
- # can use the external NETPBM/PBMPLUS utilities. See comments
- # below for information on how to enable this.
- tempfile = im._dump()
-
- try:
- with open(filename, "wb") as f:
- if im.mode != "RGB":
- subprocess.check_call(
- ["ppmtogif", tempfile], stdout=f, stderr=subprocess.DEVNULL
- )
- else:
- # Pipe ppmquant output into ppmtogif
- # "ppmquant 256 %s | ppmtogif > %s" % (tempfile, filename)
- quant_cmd = ["ppmquant", "256", tempfile]
- togif_cmd = ["ppmtogif"]
- quant_proc = subprocess.Popen(
- quant_cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL
- )
- togif_proc = subprocess.Popen(
- togif_cmd,
- stdin=quant_proc.stdout,
- stdout=f,
- stderr=subprocess.DEVNULL,
- )
-
- # Allow ppmquant to receive SIGPIPE if ppmtogif exits
- quant_proc.stdout.close()
-
- retcode = quant_proc.wait()
- if retcode:
- raise subprocess.CalledProcessError(retcode, quant_cmd)
-
- retcode = togif_proc.wait()
- if retcode:
- raise subprocess.CalledProcessError(retcode, togif_cmd)
- finally:
- try:
- os.unlink(tempfile)
- except OSError:
- pass
-
-
-# Force optimization so that we can test performance against
-# cases where it took lots of memory and time previously.
-_FORCE_OPTIMIZE = False
-
-
-def _get_optimize(im, info):
- """
- Palette optimization is a potentially expensive operation.
-
- This function determines if the palette should be optimized using
- some heuristics, then returns the list of palette entries in use.
-
- :param im: Image object
- :param info: encoderinfo
- :returns: list of indexes of palette entries in use, or None
- """
- if im.mode in ("P", "L") and info and info.get("optimize", 0):
- # Potentially expensive operation.
-
- # The palette saves 3 bytes per color not used, but palette
- # lengths are restricted to 3*(2**N) bytes. Max saving would
- # be 768 -> 6 bytes if we went all the way down to 2 colors.
- # * If we're over 128 colors, we can't save any space.
- # * If there aren't any holes, it's not worth collapsing.
- # * If we have a 'large' image, the palette is in the noise.
-
- # create the new palette if not every color is used
- optimise = _FORCE_OPTIMIZE or im.mode == "L"
- if optimise or im.width * im.height < 512 * 512:
- # check which colors are used
- used_palette_colors = []
- for i, count in enumerate(im.histogram()):
- if count:
- used_palette_colors.append(i)
-
- if optimise or max(used_palette_colors) >= len(used_palette_colors):
- return used_palette_colors
-
- num_palette_colors = len(im.palette.palette) // Image.getmodebands(
- im.palette.mode
- )
- current_palette_size = 1 << (num_palette_colors - 1).bit_length()
- if (
- # check that the palette would become smaller when saved
- len(used_palette_colors) <= current_palette_size // 2
- # check that the palette is not already the smallest possible size
- and current_palette_size > 2
- ):
- return used_palette_colors
-
-
-def _get_color_table_size(palette_bytes):
- # calculate the palette size for the header
- if not palette_bytes:
- return 0
- elif len(palette_bytes) < 9:
- return 1
- else:
- return math.ceil(math.log(len(palette_bytes) // 3, 2)) - 1
-
-
-def _get_header_palette(palette_bytes):
- """
- Returns the palette, null padded to the next power of 2 (*3) bytes
- suitable for direct inclusion in the GIF header
-
- :param palette_bytes: Unpadded palette bytes, in RGBRGB form
- :returns: Null padded palette
- """
- color_table_size = _get_color_table_size(palette_bytes)
-
- # add the missing amount of bytes
- # the palette has to be 2<<n in length
- actual_target_size_diff = (2 << color_table_size) - len(palette_bytes) // 3
- if actual_target_size_diff > 0:
- palette_bytes += o8(0) * 3 * actual_target_size_diff
- return palette_bytes
-
-
-def _get_palette_bytes(im):
- """
- Gets the palette for inclusion in the gif header
-
- :param im: Image object
- :returns: Bytes, len<=768 suitable for inclusion in gif header
- """
- return im.palette.palette if im.palette else b""
-
-
-def _get_background(im, info_background):
- background = 0
- if info_background:
- if isinstance(info_background, tuple):
- # WebPImagePlugin stores an RGBA value in info["background"]
- # So it must be converted to the same format as GifImagePlugin's
- # info["background"] - a global color table index
- try:
- background = im.palette.getcolor(info_background, im)
- except ValueError as e:
- if str(e) not in (
- # If all 256 colors are in use,
- # then there is no need for the background color
- "cannot allocate more than 256 colors",
- # Ignore non-opaque WebP background
- "cannot add non-opaque RGBA color to RGB palette",
- ):
- raise
- else:
- background = info_background
- return background
-
-
-def _get_global_header(im, info):
- """Return a list of strings representing a GIF header"""
-
- # Header Block
- # https://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp
-
- version = b"87a"
- if im.info.get("version") == b"89a" or (
- info
- and (
- "transparency" in info
- or "loop" in info
- or info.get("duration")
- or info.get("comment")
- )
- ):
- version = b"89a"
-
- background = _get_background(im, info.get("background"))
-
- palette_bytes = _get_palette_bytes(im)
- color_table_size = _get_color_table_size(palette_bytes)
-
- header = [
- b"GIF" # signature
- + version # version
- + o16(im.size[0]) # canvas width
- + o16(im.size[1]), # canvas height
- # Logical Screen Descriptor
- # size of global color table + global color table flag
- o8(color_table_size + 128), # packed fields
- # background + reserved/aspect
- o8(background) + o8(0),
- # Global Color Table
- _get_header_palette(palette_bytes),
- ]
- if "loop" in info:
- header.append(
- b"!"
- + o8(255) # extension intro
- + o8(11)
- + b"NETSCAPE2.0"
- + o8(3)
- + o8(1)
- + o16(info["loop"]) # number of loops
- + o8(0)
- )
- if info.get("comment"):
- comment_block = b"!" + o8(254) # extension intro
-
- comment = info["comment"]
- if isinstance(comment, str):
- comment = comment.encode()
- for i in range(0, len(comment), 255):
- subblock = comment[i : i + 255]
- comment_block += o8(len(subblock)) + subblock
-
- comment_block += o8(0)
- header.append(comment_block)
- return header
-
-
-def _write_frame_data(fp, im_frame, offset, params):
- try:
- im_frame.encoderinfo = params
-
- # local image header
- _write_local_header(fp, im_frame, offset, 0)
-
- ImageFile._save(
- im_frame, fp, [("gif", (0, 0) + im_frame.size, 0, RAWMODE[im_frame.mode])]
- )
-
- fp.write(b"\0") # end of image data
- finally:
- del im_frame.encoderinfo
-
-
-# --------------------------------------------------------------------
-# Legacy GIF utilities
-
-
-def getheader(im, palette=None, info=None):
- """
- Legacy Method to get Gif data from image.
-
- Warning:: May modify image data.
-
- :param im: Image object
- :param palette: bytes object containing the source palette, or ....
- :param info: encoderinfo
- :returns: tuple of(list of header items, optimized palette)
-
- """
- used_palette_colors = _get_optimize(im, info)
-
- if info is None:
- info = {}
-
- if "background" not in info and "background" in im.info:
- info["background"] = im.info["background"]
-
- im_mod = _normalize_palette(im, palette, info)
- im.palette = im_mod.palette
- im.im = im_mod.im
- header = _get_global_header(im, info)
-
- return header, used_palette_colors
-
-
-def getdata(im, offset=(0, 0), **params):
- """
- Legacy Method
-
- Return a list of strings representing this image.
- The first string is a local image header, the rest contains
- encoded image data.
-
- To specify duration, add the time in milliseconds,
- e.g. ``getdata(im_frame, duration=1000)``
-
- :param im: Image object
- :param offset: Tuple of (x, y) pixels. Defaults to (0, 0)
- :param \\**params: e.g. duration or other encoder info parameters
- :returns: List of bytes containing GIF encoded frame data
-
- """
-
- class Collector:
- data = []
-
- def write(self, data):
- self.data.append(data)
-
- im.load() # make sure raster data is available
-
- fp = Collector()
-
- _write_frame_data(fp, im, offset, params)
-
- return fp.data
-
-
-# --------------------------------------------------------------------
-# Registry
-
-Image.register_open(GifImageFile.format, GifImageFile, _accept)
-Image.register_save(GifImageFile.format, _save)
-Image.register_save_all(GifImageFile.format, _save_all)
-Image.register_extension(GifImageFile.format, ".gif")
-Image.register_mime(GifImageFile.format, "image/gif")
-
-#
-# Uncomment the following line if you wish to use NETPBM/PBMPLUS
-# instead of the built-in "uncompressed" GIF encoder
-
-# Image.register_save(GifImageFile.format, _save_netpbm)
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/fixedTools.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/fixedTools.py
deleted file mode 100644
index 330042871c521231f2a396add543dd425783722b..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/fixedTools.py
+++ /dev/null
@@ -1,253 +0,0 @@
-"""
-The OpenType specification
-defines two fixed-point data types:
-
-``Fixed``
- A 32-bit signed fixed-point number with a 16 bit twos-complement
- magnitude component and 16 fractional bits.
-``F2DOT14``
- A 16-bit signed fixed-point number with a 2 bit twos-complement
- magnitude component and 14 fractional bits.
-
-To support reading and writing data with these data types, this module provides
-functions for converting between fixed-point, float and string representations.
-
-.. data:: MAX_F2DOT14
-
- The maximum value that can still fit in an F2Dot14. (1.99993896484375)
-"""
-
-from .roundTools import otRound, nearestMultipleShortestRepr
-import logging
-
-log = logging.getLogger(__name__)
-
-__all__ = [
- "MAX_F2DOT14",
- "fixedToFloat",
- "floatToFixed",
- "floatToFixedToFloat",
- "floatToFixedToStr",
- "fixedToStr",
- "strToFixed",
- "strToFixedToFloat",
- "ensureVersionIsLong",
- "versionToFixed",
-]
-
-
-MAX_F2DOT14 = 0x7FFF / (1 << 14)
-
-
-def fixedToFloat(value, precisionBits):
- """Converts a fixed-point number to a float given the number of
- precision bits.
-
- Args:
- value (int): Number in fixed-point format.
- precisionBits (int): Number of precision bits.
-
- Returns:
- Floating point value.
-
- Examples::
-
- >>> import math
- >>> f = fixedToFloat(-10139, precisionBits=14)
- >>> math.isclose(f, -0.61883544921875)
- True
- """
- return value / (1 << precisionBits)
-
-
-def floatToFixed(value, precisionBits):
- """Converts a float to a fixed-point number given the number of
- precision bits.
-
- Args:
- value (float): Floating point value.
- precisionBits (int): Number of precision bits.
-
- Returns:
- int: Fixed-point representation.
-
- Examples::
-
- >>> floatToFixed(-0.61883544921875, precisionBits=14)
- -10139
- >>> floatToFixed(-0.61884, precisionBits=14)
- -10139
- """
- return otRound(value * (1 << precisionBits))
-
-
-def floatToFixedToFloat(value, precisionBits):
- """Converts a float to a fixed-point number and back again.
-
- By converting the float to fixed, rounding it, and converting it back
- to float again, this returns a floating point values which is exactly
- representable in fixed-point format.
-
- Note: this **is** equivalent to ``fixedToFloat(floatToFixed(value))``.
-
- Args:
- value (float): The input floating point value.
- precisionBits (int): Number of precision bits.
-
- Returns:
- float: The transformed and rounded value.
-
- Examples::
- >>> import math
- >>> f1 = -0.61884
- >>> f2 = floatToFixedToFloat(-0.61884, precisionBits=14)
- >>> f1 != f2
- True
- >>> math.isclose(f2, -0.61883544921875)
- True
- """
- scale = 1 << precisionBits
- return otRound(value * scale) / scale
-
-
-def fixedToStr(value, precisionBits):
- """Converts a fixed-point number to a string representing a decimal float.
-
- This chooses the float that has the shortest decimal representation (the least
- number of fractional decimal digits).
-
- For example, to convert a fixed-point number in a 2.14 format, use
- ``precisionBits=14``::
-
- >>> fixedToStr(-10139, precisionBits=14)
- '-0.61884'
-
- This is pretty slow compared to the simple division used in ``fixedToFloat``.
- Use sporadically when you need to serialize or print the fixed-point number in
- a human-readable form.
- It uses nearestMultipleShortestRepr under the hood.
-
- Args:
- value (int): The fixed-point value to convert.
- precisionBits (int): Number of precision bits, *up to a maximum of 16*.
-
- Returns:
- str: A string representation of the value.
- """
- scale = 1 << precisionBits
- return nearestMultipleShortestRepr(value / scale, factor=1.0 / scale)
-
-
-def strToFixed(string, precisionBits):
- """Converts a string representing a decimal float to a fixed-point number.
-
- Args:
- string (str): A string representing a decimal float.
- precisionBits (int): Number of precision bits, *up to a maximum of 16*.
-
- Returns:
- int: Fixed-point representation.
-
- Examples::
-
- >>> ## to convert a float string to a 2.14 fixed-point number:
- >>> strToFixed('-0.61884', precisionBits=14)
- -10139
- """
- value = float(string)
- return otRound(value * (1 << precisionBits))
-
-
-def strToFixedToFloat(string, precisionBits):
- """Convert a string to a decimal float with fixed-point rounding.
-
- This first converts string to a float, then turns it into a fixed-point
- number with ``precisionBits`` fractional binary digits, then back to a
- float again.
-
- This is simply a shorthand for fixedToFloat(floatToFixed(float(s))).
-
- Args:
- string (str): A string representing a decimal float.
- precisionBits (int): Number of precision bits.
-
- Returns:
- float: The transformed and rounded value.
-
- Examples::
-
- >>> import math
- >>> s = '-0.61884'
- >>> bits = 14
- >>> f = strToFixedToFloat(s, precisionBits=bits)
- >>> math.isclose(f, -0.61883544921875)
- True
- >>> f == fixedToFloat(floatToFixed(float(s), precisionBits=bits), precisionBits=bits)
- True
- """
- value = float(string)
- scale = 1 << precisionBits
- return otRound(value * scale) / scale
-
-
-def floatToFixedToStr(value, precisionBits):
- """Convert float to string with fixed-point rounding.
-
- This uses the shortest decimal representation (ie. the least
- number of fractional decimal digits) to represent the equivalent
- fixed-point number with ``precisionBits`` fractional binary digits.
- It uses nearestMultipleShortestRepr under the hood.
-
- >>> floatToFixedToStr(-0.61883544921875, precisionBits=14)
- '-0.61884'
-
- Args:
- value (float): The float value to convert.
- precisionBits (int): Number of precision bits, *up to a maximum of 16*.
-
- Returns:
- str: A string representation of the value.
-
- """
- scale = 1 << precisionBits
- return nearestMultipleShortestRepr(value, factor=1.0 / scale)
-
-
-def ensureVersionIsLong(value):
- """Ensure a table version is an unsigned long.
-
- OpenType table version numbers are expressed as a single unsigned long
- comprising of an unsigned short major version and unsigned short minor
- version. This function detects if the value to be used as a version number
- looks too small (i.e. is less than ``0x10000``), and converts it to
- fixed-point using :func:`floatToFixed` if so.
-
- Args:
- value (Number): a candidate table version number.
-
- Returns:
- int: A table version number, possibly corrected to fixed-point.
- """
- if value < 0x10000:
- newValue = floatToFixed(value, 16)
- log.warning(
- "Table version value is a float: %.4f; " "fix to use hex instead: 0x%08x",
- value,
- newValue,
- )
- value = newValue
- return value
-
-
-def versionToFixed(value):
- """Ensure a table version number is fixed-point.
-
- Args:
- value (str): a candidate table version number.
-
- Returns:
- int: A table version number, possibly corrected to fixed-point.
- """
- value = int(value, 0) if value.startswith("0") else float(value)
- value = ensureVersionIsLong(value)
- return value
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__1.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__1.py
deleted file mode 100644
index 57163d726c1a5e850eabe8ec72a44c9ec514b715..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__1.py
+++ /dev/null
@@ -1,164 +0,0 @@
-""" TSI{0,1,2,3,5} are private tables used by Microsoft Visual TrueType (VTT)
-tool to store its hinting source data.
-
-TSI1 contains the text of the glyph programs in the form of low-level assembly
-code, as well as the 'extra' programs 'fpgm', 'ppgm' (i.e. 'prep'), and 'cvt'.
-"""
-from . import DefaultTable
-from fontTools.misc.loggingTools import LogMixin
-from fontTools.misc.textTools import strjoin, tobytes, tostr
-
-
-class table_T_S_I__1(LogMixin, DefaultTable.DefaultTable):
-
- extras = {0xFFFA: "ppgm", 0xFFFB: "cvt", 0xFFFC: "reserved", 0xFFFD: "fpgm"}
-
- indextable = "TSI0"
-
- def decompile(self, data, ttFont):
- totalLength = len(data)
- indextable = ttFont[self.indextable]
- for indices, isExtra in zip(
- (indextable.indices, indextable.extra_indices), (False, True)
- ):
- programs = {}
- for i, (glyphID, textLength, textOffset) in enumerate(indices):
- if isExtra:
- name = self.extras[glyphID]
- else:
- name = ttFont.getGlyphName(glyphID)
- if textOffset > totalLength:
- self.log.warning("textOffset > totalLength; %r skipped" % name)
- continue
- if textLength < 0x8000:
- # If the length stored in the record is less than 32768, then use
- # that as the length of the record.
- pass
- elif textLength == 0x8000:
- # If the length is 32768, compute the actual length as follows:
- isLast = i == (len(indices) - 1)
- if isLast:
- if isExtra:
- # For the last "extra" record (the very last record of the
- # table), the length is the difference between the total
- # length of the TSI1 table and the textOffset of the final
- # record.
- nextTextOffset = totalLength
- else:
- # For the last "normal" record (the last record just prior
- # to the record containing the "magic number"), the length
- # is the difference between the textOffset of the record
- # following the "magic number" (0xFFFE) record (i.e. the
- # first "extra" record), and the textOffset of the last
- # "normal" record.
- nextTextOffset = indextable.extra_indices[0][2]
- else:
- # For all other records with a length of 0x8000, the length is
- # the difference between the textOffset of the record in
- # question and the textOffset of the next record.
- nextTextOffset = indices[i + 1][2]
- assert nextTextOffset >= textOffset, "entries not sorted by offset"
- if nextTextOffset > totalLength:
- self.log.warning(
- "nextTextOffset > totalLength; %r truncated" % name
- )
- nextTextOffset = totalLength
- textLength = nextTextOffset - textOffset
- else:
- from fontTools import ttLib
-
- raise ttLib.TTLibError(
- "%r textLength (%d) must not be > 32768" % (name, textLength)
- )
- text = data[textOffset : textOffset + textLength]
- assert len(text) == textLength
- text = tostr(text, encoding="utf-8")
- if text:
- programs[name] = text
- if isExtra:
- self.extraPrograms = programs
- else:
- self.glyphPrograms = programs
-
- def compile(self, ttFont):
- if not hasattr(self, "glyphPrograms"):
- self.glyphPrograms = {}
- self.extraPrograms = {}
- data = b""
- indextable = ttFont[self.indextable]
- glyphNames = ttFont.getGlyphOrder()
-
- indices = []
- for i in range(len(glyphNames)):
- if len(data) % 2:
- data = (
- data + b"\015"
- ) # align on 2-byte boundaries, fill with return chars. Yum.
- name = glyphNames[i]
- if name in self.glyphPrograms:
- text = tobytes(self.glyphPrograms[name], encoding="utf-8")
- else:
- text = b""
- textLength = len(text)
- if textLength >= 0x8000:
- textLength = 0x8000
- indices.append((i, textLength, len(data)))
- data = data + text
-
- extra_indices = []
- codes = sorted(self.extras.items())
- for i in range(len(codes)):
- if len(data) % 2:
- data = (
- data + b"\015"
- ) # align on 2-byte boundaries, fill with return chars.
- code, name = codes[i]
- if name in self.extraPrograms:
- text = tobytes(self.extraPrograms[name], encoding="utf-8")
- else:
- text = b""
- textLength = len(text)
- if textLength >= 0x8000:
- textLength = 0x8000
- extra_indices.append((code, textLength, len(data)))
- data = data + text
- indextable.set(indices, extra_indices)
- return data
-
- def toXML(self, writer, ttFont):
- names = sorted(self.glyphPrograms.keys())
- writer.newline()
- for name in names:
- text = self.glyphPrograms[name]
- if not text:
- continue
- writer.begintag("glyphProgram", name=name)
- writer.newline()
- writer.write_noindent(text.replace("\r", "\n"))
- writer.newline()
- writer.endtag("glyphProgram")
- writer.newline()
- writer.newline()
- extra_names = sorted(self.extraPrograms.keys())
- for name in extra_names:
- text = self.extraPrograms[name]
- if not text:
- continue
- writer.begintag("extraProgram", name=name)
- writer.newline()
- writer.write_noindent(text.replace("\r", "\n"))
- writer.newline()
- writer.endtag("extraProgram")
- writer.newline()
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if not hasattr(self, "glyphPrograms"):
- self.glyphPrograms = {}
- self.extraPrograms = {}
- lines = strjoin(content).replace("\r", "\n").split("\n")
- text = "\r".join(lines[1:-1])
- if name == "glyphProgram":
- self.glyphPrograms[attrs["name"]] = text
- elif name == "extraProgram":
- self.extraPrograms[attrs["name"]] = text
diff --git a/spaces/devoworm-group/Lineage_Population/functions.py b/spaces/devoworm-group/Lineage_Population/functions.py
deleted file mode 100644
index 17a3ef4f525a5152343e293a8b4cc69df370dd7f..0000000000000000000000000000000000000000
--- a/spaces/devoworm-group/Lineage_Population/functions.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import streamlit as st
-import numpy as np
-import pandas as pd
-from PIL import Image
-from pathlib import Path
-import joblib
-
-import numpy as np
-import cv2
-import onnxruntime as ort
-import imutils
-# import matplotlib.pyplot as plt
-import pandas as pd
-import plotly.express as px
-
-
-def scale_model_outputs(scaler_path, data):
- scaler= joblib.load(scaler_path)
- scaled=scaler.inverse_transform(data)
- return(scaled)
-
-
-def onnx_predict_lineage_population(input_image):
- ort_session = ort.InferenceSession('onnx_models/lineage_population_model.onnx')
- img = Image.fromarray(np.uint8(input_image))
- resized = img.resize((256, 256), Image.NEAREST)
-
- transposed=np.transpose(resized, (2, 1, 0))
- img_unsqueeze = expand_dims(transposed)
-
- onnx_outputs = ort_session.run(None, {'input': img_unsqueeze.astype('float32')})
- return(onnx_outputs[0])
-
-
-
-def expand_dims(arr):
- norm=(arr-np.min(arr))/(np.max(arr)-np.min(arr)) #normalize
- ret = np.expand_dims(norm, axis=0)
- return(ret)
-
-
-
-def lineage_population_model():
- selected_box2 = st.sidebar.selectbox(
- 'Choose Example Input',
- (['Example_1.png'])
- )
-
- st.title('Predict Cell Lineage Populations')
- instructions = """
- Predict the population of cells in C. elegans embryo using fluorescence microscopy data. \n
- Either upload your own image or select from the sidebar to get a preconfigured image.
- The image you select or upload will be fed through the Deep Neural Network in real-time
- and the output will be displayed to the screen.
- """
- st.text(instructions)
- file = st.file_uploader('Upload an image or choose an example')
- example_image = Image.open('./images/lineage_population_examples/'+selected_box2).convert("RGB")
-
- col1, col2= st.columns(2)
-
- if file:
- input = Image.open(file).convert("RGB")
- fig1 = px.imshow(input, binary_string=True, labels=dict(x="Input Image"))
- fig1.update(layout_coloraxis_showscale=False)
- fig1.update_layout(margin=dict(l=0, r=0, b=0, t=0))
- col1.plotly_chart(fig1, use_container_width=True)
- else:
- input = example_image
- fig1 = px.imshow(input, binary_string=True, labels=dict(x="Input Image"))
- fig1.update(layout_coloraxis_showscale=False)
- fig1.update_layout(margin=dict(l=0, r=0, b=0, t=0))
- col1.plotly_chart(fig1, use_container_width=True)
-
- pressed = st.button('Run')
- if pressed:
- st.empty()
- output = onnx_predict_lineage_population(np.array(input))
- scaled_output = scale_model_outputs(scaler_path="./scaler.gz", data=output)
-
- for i in range(len(scaled_output[0])):
- scaled_output[0][i]=int(round(scaled_output[0][i]))
-
- df = pd.DataFrame({"Lineage":["A", "E", "M", "P", "C", "D", "Z"] , "Population": scaled_output[0]})
- col2.table(df)
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/AutoCAD 2011 Keygen WORK Xforce.md b/spaces/diacanFperku/AutoGPT/AutoCAD 2011 Keygen WORK Xforce.md
deleted file mode 100644
index 8c229402ca5544cacdd2f2366fa76482db9f891a..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/AutoCAD 2011 Keygen WORK Xforce.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
-page
-
-Autocad 2011 keygen
-
-A autocad 2011 keygen copier is usually organized by using a cassette of components, which is accompanied by a system that allows you to purchase the components of the job or finish a job which has been prepared by an expert. A cassette is far more cost effective than a copier and allows for both the purchase of the components of a job and assembling a job at a much faster rate. While a cassette of components may be less expensive, a cassette is a lot more complex than a standard copy machine. In order to utilize a cassette, you need to handle components and replace the cassette that is particular for a job, which can be extremely hard and time consuming. Copiers are much faster than a cassette and typically much easier to use. Copiers are typically much faster than a cassette for many reasons. The most obvious is because a cassette only holds a finite amount of components, while a copier may hold hundreds or thousands of components. Copiers can usually be switched on and produce a copy from a slide or from a job that is stored on a CD or DVD. Typically, cassette copiers must be connected to a CD or DVD in order to make copies. You can not easily change which components are being copied on a cassette copier. A cassette contains only a very specific set of components. You will not be able to change the other components that are on the cassette. On the other hand, a copier can copy many different types of components, allowing you to change the components being copied without the need to remove the cassette.
-
-A cassette is also much larger than a copier. Copiers are typically smaller than a cassette and can be placed on a desk or on top of a desk. A cassette is typically placed on the floor and you have to lift the cassette off of the floor in order to open the cassette. The more parts you need to copy, the larger your cassette will be. The cassette can also be taller and longer than your copier. This allows you to place many different types of parts on the cassette. A cassette can also be a bit more expensive than a copier. You need a cassette in order to copy components and a cassette is much more expensive than the standard copy machine.
-
-Copier Price Evaluations
-
-A basic copier will likely be cheaper than a cassette copier. However, when you purchase a cassette copier you will not have the benefits that a cassette copier provides. 4fefd39f24
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/BaDBoy V4.2 [Cheats 4 Counter-Strike 1.6] Pc Game _BEST_.md b/spaces/diacanFperku/AutoGPT/BaDBoy V4.2 [Cheats 4 Counter-Strike 1.6] Pc Game _BEST_.md
deleted file mode 100644
index e545a0b55ddfa1c6a22eba9493bedcff9cb3ec45..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/BaDBoy V4.2 [Cheats 4 Counter-Strike 1.6] Pc Game _BEST_.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-z>sys file> get dell-mcpci.sysv
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Download Mdsolids 4.0 Full Crack Idm.md b/spaces/diacanFperku/AutoGPT/Download Mdsolids 4.0 Full Crack Idm.md
deleted file mode 100644
index 5e45b00bf1eecc561df9f2f4080457d361a9aac6..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Download Mdsolids 4.0 Full Crack Idm.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
How to Download MDSolids 4.0 Full Crack IDM
-
MDSolids is a software for topics taught in the Mechanics of Materials course, such as beams, trusses, Mohr's circle transformations, section properties, torsion, and more[^1^]. It is designed to assist engineering students and professionals in solving a wide variety of engineering problems[^3^].
Download MDSolids 4.1.0 (Microsoft MSI Installer Version) and save it on your computer.
-
Run the installer and follow the instructions to install MDSolids on your computer.
-
Open MDSolids and go to the Help menu. Click on Register MDSolids and enter your name and the registration code D93C8ADC. This code will remove the 30-day limitation and give you free access to the full version of MDSolids[^2^].
-
Enjoy using MDSolids for your engineering projects.
-
-
Note: IDM stands for Internet Download Manager, a tool that can speed up and manage your downloads. You can download IDM from https://www.internetdownloadmanager.com/ and use it to download MDSolids faster.
Benefits of Using MDSolids
-
MDSolids has many benefits for engineering students and professionals who want to learn and apply the concepts of Mechanics of Materials. Some of the benefits are:
-
-
-
MDSolids is easy to use and has a user-friendly interface. You can access different modules and routines from the main menu or the toolbar. You can also customize the appearance and settings of MDSolids according to your preferences.
-
MDSolids covers a wide range of topics and problems in Mechanics of Materials, such as stress, strain, deformation, equilibrium, compatibility, loading, failure criteria, and more. You can use MDSolids to solve problems involving axial members, torsional members, beams, columns, pressure vessels, trusses, frames, and more.
-
MDSolids provides detailed explanations and solutions for each problem. You can see the formulas, diagrams, graphs, tables, and calculations that are used to solve the problem. You can also check your answers and compare them with the solutions provided by MDSolids.
-
MDSolids allows you to save, print, and export your data and results. You can save your data from an individual module or an entire session. You can print your data and results in a formatted report. You can also export your data and results to Excel spreadsheets or bitmap images.
-
MDSolids is compatible with Windows 10, Windows 8, Windows 7, and all prior versions of Windows. You can download and install MDSolids on any computer that runs on Windows operating system.
-
-
MDSolids is a powerful and versatile software that can help you learn and master the concepts of Mechanics of Materials. Whether you are a student or a professional, you will find MDSolids useful and beneficial for your engineering education and career.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Generic Text Only Driver Download Windows 7.md b/spaces/diacanFperku/AutoGPT/Generic Text Only Driver Download Windows 7.md
deleted file mode 100644
index d98a2ae8ed2354d03211e2c70b769fb45121edaa..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Generic Text Only Driver Download Windows 7.md
+++ /dev/null
@@ -1,52 +0,0 @@
-
-
How to Download and Install Generic Text Only Driver for Windows 7
-
If you need to print plain text documents without any formatting, you might want to use the Generic Text Only driver. This driver is a built-in option in Windows that allows you to send raw text commands to your printer. It can be useful for printing receipts, labels, tickets, or other simple text documents.
-
In this article, we will show you how to download and install the Generic Text Only driver for Windows 7. We will also provide some tips on how to use it effectively.
The Generic Text Only driver is included in Windows 7, but you might need to update it to the latest version. To do this, you can use a reliable driver update tool like DriverGuide. DriverGuide is a free service that scans your computer and finds the best drivers for your devices. It also lets you download and install drivers with one click.
-
To download the Generic Text Only driver with DriverGuide, follow these steps:
Click on the green Download button next to the driver that matches your Windows 7 system.
-
Save the file to your computer and run it.
-
Follow the instructions on the screen to complete the installation.
-
-
Step 2: Install the Generic Text Only Driver
-
After downloading the Generic Text Only driver, you need to install it on your computer. To do this, follow these steps:
-
-
Go to Start > Devices and Printers.
-
Click on Add a printer.
-
Select Add a local printer.
-
Choose Use an existing port and select FILE: (Print to File) from the drop-down menu.
-
Click on Next.
-
Select Generic from the Manufacturer list and Generic / Text Only from the Printers list.
-
Click on Next.
-
Give your printer a name and click on Next.
-
Click on Finish to complete the installation.
-
-
Step 3: Use the Generic Text Only Driver
-
Now that you have installed the Generic Text Only driver, you can use it to print plain text documents. To do this, follow these steps:
-
-
Open the document that you want to print in a text editor like Notepad.
-
Go to File > Print.
-
Select Generic / Text Only as your printer and click on Print.
-
A dialog box will appear asking you to enter a file name and location for your output file. Choose a name and location that you can easily find later.
-
Click on Save to create your output file.
-
Open your output file with a text editor or a hex editor. You should see raw text commands that are sent to your printer.
-
If you want to print your output file, you need to copy it to your printer port. For example, if your printer is connected to LPT1, you can use this command in a command prompt: copy /b output.txt LPT1
-
-
-
Tips for Using the Generic Text Only Driver
-
-
The Generic Text Only driver can be useful for printing simple text documents, but it has some limitations. Here are some tips for using it effectively:
-
-
-
-
The Generic Text Only driver does not support any formatting or graphics. If your document contains any of these elements, they will be ignored or printed as garbage characters.
-
-
The Generic Text Only driver does not support any printer-specific features or settings. If you need to adjust the paper size, orientation, margins, font, or other options, you need to use escape sequences or control codes in your document. You can find more information about these codes in your printer manual or online.
-
-
The Generic Text Only driver does not support Unicode characters or non-ASCII characters. If your document contains any of these characters, they will be printed as question marks or other symbols. You need to use an encoding that d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Lenovo Windows 7 Pci Serial Port Driver.md b/spaces/diacanFperku/AutoGPT/Lenovo Windows 7 Pci Serial Port Driver.md
deleted file mode 100644
index 09db8596572fb31c79ef852bea11d9e05e9ce8e3..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Lenovo Windows 7 Pci Serial Port Driver.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-August 28, 2013 — I use Excel (there are other good spreadsheets!) to fly 1500m in 2 minutes, 15,000m in 20 minutes? I'm not quite sure what I should be doing.
-I have found several solutions but I am stuck.
-Some people told me to try to climb 2000m to get the maximum height.
-But I don't know how to do it!
-I understand that I am using a table and that there are a few things to calculate in order to get the maximum height.
-But I can't figure out how.
-I found this solution:
-I'm wondering if this is a useful solution for me?
-I don't want to just rely on this decision. 8a78ff9644
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/ECA VRT DVD 2012.rar.md b/spaces/falterWliame/Face_Mask_Detection/ECA VRT DVD 2012.rar.md
deleted file mode 100644
index a8bdcaeb8a8ff2fdfa60a9c8e171b3900cd2154d..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/ECA VRT DVD 2012.rar.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-.0K fps 459KB 54.9MB 5,487.0KB 0% 4fefd39f24
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Fraud Saiyaan Movie Download Dual Au).md b/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Fraud Saiyaan Movie Download Dual Au).md
deleted file mode 100644
index a9563232a972ec41df76ebefce44da138690d1d4..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Fraud Saiyaan Movie Download Dual Au).md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
download fantastic beasts 3 english audio as source file.flv in 480p, 720p & 1080p. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movies based on drama, thriller. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs.
-
HD Online Player (Fraud Saiyaan Movie Download Dual Au)
download gravity english audio as source file.flv in 480p, 720p & 1080p. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movies based on drama. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs.
-
this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movie based on drama. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs. english subtitles added with english
-
download brilliant: the operative code-uhd 720p web-dl.download brilliant: the operative code movie uhd web-dl original.this is an english movie & available in 720p,480p & 1080p. this is one of the best movie based on action. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs. english subtitles added with english
-
-
download civilization: beyond earth trailer 2022 english audio (720p 1080p ) online. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movies based on comedy. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs.
-
download age of glory movie 2022 english audio in 720p,480p &1080p. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movies based on fantasy. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Blast Your Way Through Enemies with Tank Hero Mod APK (Unlimited Money and Gold).md b/spaces/fatiXbelha/sd/Blast Your Way Through Enemies with Tank Hero Mod APK (Unlimited Money and Gold).md
deleted file mode 100644
index 9750e7df225b957df4e90feed8722c064077b356..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Blast Your Way Through Enemies with Tank Hero Mod APK (Unlimited Money and Gold).md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
- | Table 2: Article with HTML formatting | |---------------------------------------| |
Tank Hero Mod APK (Unlimited Money and Gold)
| |
Do you love playing tank games on your Android device? Do you want to have unlimited money and gold to upgrade your tanks and weapons? Do you want to enjoy various game modes, stunning graphics, and sound effects? If you answered yes to any of these questions, then you should download Tank Hero Mod APK right now!
| |
Introduction
| |
Tank Hero Mod APK is a modified version of the original Tank Hero game, which is a fast-paced 3D tank action game. In this game, you can control your own tank and shoot your enemies with different weapons. You can also customize your tank with various skins and decals. You can play in different game modes, such as campaign, survival, and multiplayer. You can also challenge yourself with different levels of difficulty and achievements.
However, the original Tank Hero game has some limitations that may affect your gaming experience. For example, you need to earn money and gold by completing missions and defeating enemies. You need to use them to buy and upgrade your tanks and weapons. You may also encounter ads and in-app purchases that may interrupt your gameplay.
| |
That's why you need Tank Hero Mod APK, which is a hacked version of the original game that gives you unlimited money and gold. With this mod, you can buy and upgrade any tank and weapon you want without worrying about the cost. You can also enjoy the game without any ads or in-app purchases. You can also access all the features and content of the game without any restrictions.
| |
To download and install Tank Hero Mod APK, you need to follow these simple steps:
| |
-
Click on the download button below to get the Tank Hero Mod APK file.
-
Go to your device settings and enable unknown sources to allow installation of apps from unknown sources.
-
Locate the downloaded file in your file manager and tap on it to install it.
-
Launch the game and enjoy unlimited money and gold.
- | |
Features of Tank Hero Mod APK
| |
Tank Hero Mod APK has many amazing features that make it one of the best tank games for Android. Here are some of them:
| |
Unlimited Money and Gold
| |
The most obvious feature of Tank Hero Mod APK is that it gives you unlimited money and gold. Money and gold are the main currencies in the game that you need to buy and upgrade your tanks and weapons. Normally, you have to earn them by completing missions and defeating enemies. However, with Tank Hero Mod APK, you can get unlimited money and gold as soon as you start the game. You can use them to buy any tank or weapon you want without worrying about the cost. You can also upgrade them to the maximum level without any hassle.
| |
This feature gives you a lot of advantages in the game. For example, you can have more powerful tanks and weapons that can destroy your enemies faster and easier. You can also have more variety and fun in choosing your tanks and weapons according to your preference. You can also save time and effort in grinding for money and gold.
| |
Various Tanks and Weapons
| |
Tank Hero Mod APK has a lot of tanks and weapons for you to choose from. There are over 50 tanks and over 100 weapons in the game, each with different stats, abilities, and effects. You can find tanks and weapons of different types, such as light, medium, heavy, artillery, rocket, laser, plasma, etc. You can also customize your tanks with various skins and decals to make them look more cool and unique.
| |
The differences between tanks and weapons are not only cosmetic but also functional. For example, some tanks have more speed, armor, or firepower than others. Some weapons have more range, accuracy, or damage than others. Some tanks and weapons also have special features, such as stealth, shield, or EMP. You have to consider these factors when choosing your tank and weapon for your play style.
-
tank hero mod apk download free unlimited money and gold
-tank hero hack mod apk unlimited money and gold latest version
-tank hero mod apk unlimited money and gold android 1
-tank hero mod apk unlimited money and gold rexdl
-tank hero mod apk unlimited money and gold no root
-tank hero mod apk unlimited money and gold offline
-tank hero mod apk unlimited money and gold 2023
-tank hero mod apk unlimited money and gold revdl
-tank hero mod apk unlimited money and gold happymod
-tank hero mod apk unlimited money and gold apkpure
-tank hero mod apk unlimited money and gold for pc
-tank hero mod apk unlimited money and gold ios
-tank hero mod apk unlimited money and gold online
-tank hero mod apk unlimited money and gold obb
-tank hero mod apk unlimited money and gold mediafıre
-tank hero mod apk unlimited money and gold mega
-tank hero mod apk unlimited money and gold uptodown
-tank hero mod apk unlimited money and gold 1.8.0
-tank hero mod apk unlimited money and gold 1.7.9
-tank hero mod apk unlimited money and gold 1.7.8
-tank hero mod apk unlimited money and gold 1.7.7
-tank hero mod apk unlimited money and gold 1.7.6
-tank hero mod apk unlimited money and gold 1.7.5
-tank hero mod apk unlimited money and gold 1.7.4
-tank hero mod apk unlimited money and gold 1.7.3
-tank hero mod apk unlimited money and gold 1.7.2
-tank hero mod apk unlimited money and gold 1.7.1
-tank hero mod apk unlimited money and gold 1.7.0
-tank hero mod apk unlimited money and gold 1.6.9
-tank hero mod apk unlimited money and gold 1.6.8
-tank hero mod apk unlimited money and gold 1.6.7
-tank hero mod apk unlimited money and gold 1.6.6
-tank hero mod apk unlimited money and gold 1.6.5
-tank hero mod apk unlimited money and gold 1.6.4
-tank hero mod apk unlimited money and gold 1.6.3
-tank hero mod apk unlimited money and gold 1.6.2
-tank hero mod apk unlimited money and gold 1.6.1
-tank hero mod apk unlimited money and gold 1.6.0
-how to install tank hero mod apk unlimited money and gold
-how to play tank hero mod apk unlimited money and gold
-how to get tank hero mod apk unlimited money and gold
-how to update tank hero mod apk unlimited money and gold
-how to uninstall tank hero mod apk unlimited money and gold
-how to hack tank hero with mod apk unlimited money and gold
-how to download tank hero with mod apk unlimited money and gold
| | Table 2: Article with HTML formatting | |---------------------------------------| |
Tank Hero Mod APK (Unlimited Money and Gold)
| |
Do you love playing tank games on your Android device? Do you want to have unlimited money and gold to upgrade your tanks and weapons? Do you want to enjoy various game modes, stunning graphics, and sound effects? If you answered yes to any of these questions, then you should download Tank Hero Mod APK right now!
| |
Introduction
| |
Tank Hero Mod APK is a modified version of the original Tank Hero game, which is a fast-paced 3D tank action game. In this game, you can control your own tank and shoot your enemies with different weapons. You can also customize your tank with various skins and decals. You can play in different game modes, such as campaign, survival, and multiplayer. You can also challenge yourself with different levels of difficulty and achievements.
| |
However, the original Tank Hero game has some limitations that may affect your gaming experience. For example, you need to earn money and gold by completing missions and defeating enemies. You need to use them to buy and upgrade your tanks and weapons. You may also encounter ads and in-app purchases that may interrupt your gameplay.
| |
That's why you need Tank Hero Mod APK, which is a hacked version of the original game that gives you unlimited money and gold. With this mod, you can buy and upgrade any tank and weapon you want without worrying about the cost. You can also enjoy the game without any ads or in-app purchases. You can also access all the features and content of the game without any restrictions.
| |
To download and install Tank Hero Mod APK, you need to follow these simple steps:
| |
-
Click on the download button below to get the Tank Hero Mod APK file.
-
Go to your device settings and enable unknown sources to allow installation of apps from unknown sources.
-
Locate the downloaded file in your file manager and tap on it to install it.
-
Launch the game and enjoy unlimited money and gold.
- | |
Features of Tank Hero Mod APK
| |
Tank Hero Mod APK has many amazing features that make it one of the best tank games for Android. Here are some of them:
| |
Unlimited Money and Gold
| |
The most obvious feature of Tank Hero Mod APK is that it gives you unlimited money and gold. Money and gold are the main currencies in the game that you need to buy and upgrade your tanks and weapons. Normally, you have to earn them by completing missions and defeating enemies. However, with Tank Hero Mod APK, you can get unlimited money and gold as soon as you start the game. You can use them to buy any tank or weapon you want without worrying about the cost. You can also upgrade them to the maximum level without any hassle.
| |
This feature gives you a lot of advantages in the game. For example, you can have more powerful tanks and weapons that can destroy your enemies faster and easier. You can also have more variety and fun in choosing your tanks and weapons according to your preference. You can also save time and effort in grinding for money and gold.
| |
Various Tanks and Weapons
| |
Tank Hero Mod APK has a lot of tanks and weapons for you to choose from. There are over 50 tanks and over 100 weapons in the game, each with different stats, abilities, and effects. You can find tanks and weapons of different types, such as light, medium, heavy, artillery, rocket, laser, plasma, etc. You can also customize your tanks with various skins and decals to make them look more cool and unique.
| |
The differences between tanks and weapons are not only cosmetic but also functional. For example, some tanks have more speed, armor, or firepower than others. Some weapons have more range, accuracy, or damage than others. Some tanks and weapons also have special features, such as stealth, shield, or EMP. You have to consider these factors when choosing your tank and weapon for your play style.
| |
To choose the best tank and weapon for your play style, you have to experiment with different combinations and see what works best for you. You can also check the stats and descriptions of each tank and weapon in the shop or inventory menu. You can also read reviews and tips from other players online or watch videos of gameplay demonstrations.
| |
Multiple Game Modes
Tank Hero Mod APK has multiple game modes for you to enjoy: campaign, survival, and multiplayer. Each mode has its own rules, objectives, and challenges, and you can also choose the difficulty level and the number of enemies in each mode.
-
The campaign mode is the main mode of the game, where you have to complete various missions and stages. You have to fight against different types of enemies, such as tanks, helicopters, turrets, etc. You have to destroy them all and reach the end of each stage. You can also collect stars and medals by completing the missions with high scores and achievements. You can use the stars and medals to unlock new tanks and weapons.
-
The survival mode is the endless mode of the game, where you have to survive as long as possible against waves of enemies. You have to shoot and dodge the incoming enemies and avoid getting hit by their bullets and missiles. You can also collect power-ups and bonuses that can help you survive longer. You can also compete with other players on the global leaderboard and see how long you can last.
-
The multiplayer mode is the online mode of the game, where you can play with or against other players from around the world. You can join or create rooms in the multiplayer mode and choose the game mode, map, and settings. You can also chat with other players using the in-game chat feature. You can play in different modes, such as team deathmatch, capture the flag, king of the hill, etc. You can also cooperate or compete with other players and show your skills and strategies.
-
Stunning Graphics and Sound Effects
-
Tank Hero Mod APK has stunning graphics and sound effects that make the game more realistic and immersive. The 3D graphics are well designed and detailed, the dynamic lighting and shadows create a convincing atmosphere, and the smooth animations and transitions keep the gameplay fluid and responsive.
-
The sound effects are clear and crisp, the background music is catchy and fits each game mode and situation, and the voice-overs are expressive and humorous. The sound settings also let you adjust the volume and quality of the sound effects, music, and voice-overs.
-
To enjoy the stunning graphics and sound effects of Tank Hero Mod APK, you need to have a device that meets the minimum requirements of the game. You also need to have a stable internet connection for the online mode. You can also adjust the graphics and sound settings in the game menu to suit your preference and device performance.
-
Conclusion
-
Tank Hero Mod APK is a great tank game for Android that offers unlimited money and gold, various tanks and weapons, multiple game modes, stunning graphics and sound effects, and more. It is a fun and addictive game that will keep you entertained for hours. It is also easy to download and install on your device.
-
If you are looking for a tank game that will challenge your skills and strategies, then you should download Tank Hero Mod APK now. It is one of the best tank games for Android that you will ever play. It is a modded version of the original Tank Hero game that gives you more features and content than ever before.
-
So what are you waiting for? Download Tank Hero Mod APK now and start enjoying everything it has to offer!
-
FAQs
-
-
Q1: Is Tank Hero Mod APK safe to download and use?
-
A1: Yes, Tank Hero Mod APK is safe to download and use. It does not contain any viruses or malware. However, you should always download it from a trusted source and scan it with an antivirus before installing it.
-
Q2: Do I need to root my device to use Tank Hero Mod APK?
-
A2: No, you do not need to root your device to use Tank Hero Mod APK. It works on both rooted and non-rooted devices. However, you may need to enable unknown sources in your device settings to install it.
-
Q3: Can I play Tank Hero Mod APK online with other players?
-
A3: Yes, you can play Tank Hero Mod APK online with other players. You can join or create rooms in the multiplayer mode and compete with other players from around the world. You can also chat with them using the in-game chat feature.
-
Q4: How can I update Tank Hero Mod APK to the latest version?
-
A4: You can update Tank Hero Mod APK to the latest version by downloading it again from the same source where you downloaded it before. You do not need to uninstall the previous version before installing the new one. Your progress and data will be saved automatically.
-
Q5: What if I have any problems or questions about Tank Hero Mod APK?
-
A5: If you have any problems or questions about Tank Hero Mod APK, you can contact the developer through their email or social media accounts. You can also visit their official website or blog for more information and support.
-
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Enjoy Vintage Photos and Videos with Old Roll Mod APK.md b/spaces/fatiXbelha/sd/Enjoy Vintage Photos and Videos with Old Roll Mod APK.md
deleted file mode 100644
index 7484b01d7f6a3062eacbd25a2f7586d02a9b0804..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Enjoy Vintage Photos and Videos with Old Roll Mod APK.md
+++ /dev/null
@@ -1,122 +0,0 @@
-
-
Download APK Mod Old Roll: How to Get Vintage Photos and Videos on Your Android Device
-
Do you love the nostalgic look of old photos and videos? Do you want to capture your memories in a retro style? If yes, then you might be interested in downloading APK Mod Old Roll, a photography app that lets you take and edit photos and videos in a vintage, classic style. In this article, we will tell you what APK Mod Old Roll is, what features it offers, what are the benefits of using it, and how to install it on your Android device. We will also give you some tips on how to find the best APK mod sites to download this app and other modded apps for free.
What is APK Mod Old Roll?
-
APK Mod Old Roll is a modified version of the original Old Roll app, which is a photography app that allows you to take and edit photos and videos in a vintage, classic style. The app has various filters, effects, stickers, frames, and fonts that you can apply to your photos and videos to make them look like they were taken decades ago. You can also adjust the brightness, contrast, saturation, exposure, and other settings to enhance your photos and videos.
-
Features of APK Mod Old Roll
-
The modded version of Old Roll has some extra features that are not available in the original version. These include:
-
-
Unlocked all cameras, including Polaroid, Kodak, Fuji, Agfa, and more
-
Premium version unlocked, which gives you access to more filters, effects, stickers, frames, and fonts
-
No ads or watermarks
-
No root required
-
-
Benefits of using APK Mod Old Roll
-
By using APK Mod Old Roll, you can enjoy the following benefits:
-
-
You can save money by getting the premium features for free
-
You can have more creative options by using different cameras, filters, effects, stickers, frames, and fonts
-
You can have more fun by creating unique and artistic photos and videos
-
You can share your photos and videos with your friends and family on social media platforms
-
-
What is APK Mod and How Does It Work?
-
Before we tell you how to install APK Mod Old Roll on your Android device, let us explain what an APK mod is and how it works. An APK mod is a version of an original Android app that has been altered by a third party to provide new or improved features that are not present in the original. An Android app is packaged into a file with the .apk extension, which contains all of the app's components and can be installed directly on an Android device.
-
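Since an .apk file is really just a ZIP archive with a fixed internal layout, you can inspect one without installing it. The short sketch below uses Python's standard zipfile module to list an APK's entries and check for the manifest; "app.apk" is a placeholder path used only for illustration, not a file referenced in this article.

```python
import zipfile

# An .apk is a ZIP archive: its entries hold the app's compiled code
# (classes.dex), its resources, and the binary AndroidManifest.xml.
with zipfile.ZipFile("app.apk") as apk:
    for name in apk.namelist()[:10]:   # print the first few entries
        print(name)
    print("has manifest:", "AndroidManifest.xml" in apk.namelist())
```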
-
Advantages and Disadvantages of APK Mod
-
Using APK mod has some advantages and disadvantages that you should be aware of. Here are some of them:
-
-
-
| Advantages | Disadvantages |
| --- | --- |
| You can get premium features for free | You may violate the intellectual property rights of the original developers |
| You can access new or improved features that are not available in the original version | You may expose your device to malware or virus infections |
| You can customize the app according to your preferences | You may encounter compatibility or stability issues with your device or other apps |
| You can bypass the restrictions or limitations imposed by the original developers or the app store | You may lose the support or updates from the original developers or the app store |
-
-
-
Risks and Precautions of APK Mod
-
Using APK mod also involves some risks and precautions that you should take into account. Here are some of them:
-
-
Make sure you download the APK mod file from a trusted and reliable source, such as the ones we will recommend later in this article. Avoid downloading from unknown or suspicious websites that may contain malware or viruses.
-
Check the permissions and reviews of the APK mod file before installing it on your device. Make sure you are comfortable with granting the app access to your device's data and functions. Read the feedback and ratings of other users who have used the app to see if they have encountered any problems or issues.
-
Backup your device's data and settings before installing the APK mod file. In case something goes wrong or you want to uninstall the app, you can restore your device to its previous state.
-
Use a VPN or a proxy server to hide your IP address and location when downloading or using the APK mod file. This can help you avoid being tracked or detected by the original developers or the app store, who may ban your account or take legal action against you for violating their terms and conditions.
-
-
How to Install APK Mod Old Roll on Your Android Device
-
Now that you know what APK mod is and how it works, let us show you how to install APK Mod Old Roll on your Android device. The process is simple and easy, and it only takes a few minutes. Just follow these steps:
-
Step 1: Download the APK file from a trusted source
-
The first step is to download the APK file of APK Mod Old Roll from a trusted and reliable source. You can use one of the best APK mod sites that we will recommend later in this article, or you can search for it on Google or other search engines. Make sure you download the latest version of the app, which is 1.0.9 as of June 2023.
-
Step 2: Enable unknown sources on your device settings
-
The second step is to enable unknown sources on your device settings. This will allow you to install apps that are not from the official app store, such as APK mod files. To do this, go to your device's settings, then security, then unknown sources, and toggle it on. You may see a warning message that says installing apps from unknown sources may harm your device, but don't worry, as long as you download from a trusted source, you should be fine.
-
Step 3: Locate and install the APK file
-
The third step is to locate and install the APK file that you have downloaded on your device. You can use a file manager app to find the file in your downloads folder, or you can tap on the notification that says download complete. Once you find the file, tap on it and follow the instructions on the screen to install it on your device. You may see a pop-up message that says this type of file can harm your device, but just ignore it and tap on install anyway.
-
Step 4: Launch the app and enjoy
-
The fourth and final step is to launch the app and enjoy its features. You can find the app icon on your home screen or in your app drawer. Tap on it and grant it any permissions it may ask for. Then, you can start taking and editing photos and videos in a vintage, classic style with APK Mod Old Roll.
-
Best APK Mod Sites to Download APK Mod Old Roll and Other Apps
-
If you are looking for the best APK mod sites to download APK Mod Old Roll and other apps for free, here are some of our recommendations:
-
APKPure
-
APKPure is one of the most popular and trusted APK mod sites that offers a wide range of apps and games for Android devices. You can find both original and modded versions of apps and games, as well as exclusive apps that are not available on the official app store. You can also update your apps with one click, download region-locked apps, and request new apps or mods.
-
HappyMod
-
HappyMod is another popular and trusted APK mod site that specializes in providing modded versions of apps and games for Android devices. You can find thousands of mods for different categories, such as action, adventure, arcade, casual, simulation, sports, and more. You can also download multiple mods for the same app or game, and choose the one that suits your needs. You can also rate and review the mods, and request new mods or updates.
-
ReXdl
-
ReXdl is another popular and trusted APK mod site that offers a huge collection of apps and games for Android devices. You can find both original and modded versions of apps and games, as well as premium apps that are normally paid on the official app store. You can also download apps and games that are modded with unlimited money, coins, gems, lives, and other resources. You can also browse by categories, genres, or tags, and download fast and secure.
-
Apkmody
-
Apkmody is another popular and trusted APK mod site that provides high-quality apps and games for Android devices. You can find both original and modded versions of apps and games, as well as exclusive apps that are not available on the official app store. You can also download apps and games that are modded with unlimited money, coins, gems, lives, and other resources. You can also search by keywords, categories, or popularity, and download fast and safe.
-
Conclusion
-
In conclusion, APK Mod Old Roll is a photography app that lets you take and edit photos and videos in a vintage, classic style. It offers various filters, effects, stickers, frames, and fonts that make your photos and videos look like they were taken decades ago, and you can adjust brightness, contrast, saturation, exposure, and other settings to enhance them. The modded version adds extra features that are not available in the original, such as all cameras unlocked, the premium version unlocked, no ads or watermarks, and no root required. To install it on your Android device, download the APK file from a trusted source, enable unknown sources in your device settings, locate and install the file, then launch the app and enjoy. You can also use one of the APK mod sites recommended in this article to download APK Mod Old Roll and other apps for free.
-
FAQs
-
Here are some frequently asked questions about APK Mod Old Roll:
-
Q: Is APK Mod Old Roll safe to use?
-
A: APK Mod Old Roll is safe to use as long as you download it from a trusted source and follow the precautions we have mentioned in this article. However, there is always a risk of malware or virus infections when using APK mod files, so you should always scan the file before installing it on your device.
-
Q: Is APK Mod Old Roll legal to use?
-
A: APK Mod Old Roll is not legal to use as it violates the intellectual property rights of the original developers of Old Roll. By using APK Mod Old Roll, you may also breach the terms and conditions of the official app store or the original developers. Therefore, you should use APK Mod Old Roll at your own risk and responsibility.
-
Q: How can I update APK Mod Old Roll?
-
A: To update APK Mod Old Roll, you need to download the latest version of the APK file from a trusted source and install it on your device. You may also check the best APK mod sites that we have recommended in this article to see if they have updated versions of APK Mod Old Roll.
-
Q: How can I uninstall APK Mod Old Roll?
-
A: To uninstall APK Mod Old Roll, you need to go to your device's settings, then apps, then APK Mod Old Roll, and tap on uninstall. You may also use a file manager app to find and delete the APK file from your device. You may also want to backup your photos and videos before uninstalling the app, as they may be deleted along with the app.
-
Q: Can I use APK Mod Old Roll on other devices?
-
A: APK Mod Old Roll is designed for Android devices, so you may not be able to use it on other devices, such as iOS, Windows, or Mac. However, you may try using an Android emulator or a virtual machine to run APK Mod Old Roll on your PC or laptop.
Gacha Club: How to Download and Enjoy the Anime Game
-
Do you like anime, dress-up, and battle games? Then you will love Gacha Club, the latest casual RPG strategy game from Lunime, the company that has released a wide variety of gacha games since 2015. Gacha Club is the sequel to Gacha Life, one of Lunime's most popular games with more than 10 million downloads. In this article, we will show you how to download and install Gacha Club on your Android device, as well as some of the features and tips the game offers.
How to Download Gacha Club
-
Gacha Club is a free game that you can easily download from the Google Play Store. Just follow these steps:
-
-
Open the Google Play Store on your Android device and search for "Gacha Club".
-
Select the game from the list of results and tap the "Install" button.
-
Wait for the game to finish downloading and installing.
-
Open the game from your app drawer or from the icon on your home screen.
-
Enjoy creating your own anime characters and playing with them in different modes.
-
-
Note: The game may run slowly or lag on older devices or on devices with 4K screens. There may also be some bugs or glitches if you do not have enough storage space on your phone. If you experience any of these problems, try restarting the game or freeing up space on your device.
-
Features of Gacha Club
-
Gacha Club has plenty of content and features to keep you entertained for hours. Here are some of the main features the game offers:
-
-
Character Customization and Studio Mode
-
In Gacha Club, you can create your own anime characters and dress them up in your favorite outfits. You can customize up to 10 main characters and 90 extra characters with hundreds of options for clothing, hairstyles, accessories, weapons, and more. You can also change the colors of almost every item, choose from 600 different poses, and adjust the hair, eyes, and objects to fit your characters.
-
After designing your characters, you can enter Studio mode and create any scene you can imagine. You can add up to 10 characters anywhere on the screen, along with your favorite pets and objects. You can choose from a wide variety of backgrounds and foregrounds, have your characters talk to each other with custom text boxes, add a narrator to create story scenes, save and load up to 15 scenes, and use face presets to quickly change expressions.
-
Gacha and Battle Mode
-
If you want more action, you can gacha over 180 units to use in battle. You can also gacha 150 pets to boost your stats, collect super rare Corrupt and DJ characters, and use materials to power up your units.
-
Mini-Games and Rewards
-
Another way to have fun and earn resources in Gacha Club is to play its mini-games. There are four mini-games available: Lemo & Yumi Dance, Mascot Whack!, Memory Match, and Usagi vs. Neko. Each one has its own mechanics and difficulty, but all of them reward you with gold and other prizes if you get a good score. Here are some tips for each mini-game:
-
-
Lemo & Yumi Dance: This is a rhythm game where you have to tap the arrows that match the ones appearing on the screen. The more in sync you are with the music, the more points you earn. Try not to miss any arrows and keep a streak going to increase your multiplier.
-
Mascot Whack!: This is a reflex game where you have to whack the mascots that pop out of the holes. Each mascot is worth a different number of points, so try to hit the most valuable ones. Avoid hitting the bombs, as they will cost you points and time.
-
Memory Match: This is a memory game where you have to match the cards that show the same character. You have a limited time to find all the pairs, so be quick and attentive. Try to remember the positions of the cards and use the hint button if you get stuck.
-
Usagi vs. Neko: This is a fighting game where you control either Usagi or Neko and try to defeat the other. You can move, jump, attack, and block with the on-screen buttons. Try to dodge or block your opponent's attacks and strike when they are vulnerable. When your energy bar is full, use the special button to unleash a powerful attack.
-
-
Conclusion
-
Gacha Club is a very complete and fun game that lets you create your own anime characters and play with them in different modes. You can fully customize them, create amazing scenes, gacha units and pets, fight monsters and bosses, and play a variety of mini-games. It is also a free, offline game, which makes it ideal for passing the time without worrying about money or an internet connection.
-
If you like anime, dress-up, and battle games, do not hesitate to download Gacha Club on your Android device and enjoy everything it has to offer. You can also join the Gacha Club community on Facebook or on its official website to share your creations, meet other players, and keep up with the latest news about the game.
-
What are you waiting for? Jump into Gacha Club and start your adventure today!
-
Frequently Asked Questions
-
Below, we answer some of the most common questions about Gacha Club:
-
-
Is Gacha Club free? Yes, Gacha Club is a completely free game that you can download and install without paying anything. It also has no in-app purchases or intrusive ads.
-
Is Gacha Club safe for kids? Yes, Gacha Club is suitable for all audiences and does not contain explicit violence or inappropriate content. However, since it has a live chat feature, parents are advised to supervise who their children interact with online.
-
Is Gacha Club available on other platforms? Yes, Gacha Club is also available for Windows and iOS in their respective app stores. However, the full version of the game is currently only available for Android.
-
How can I import and export my characters in Gacha Club? You can import and export your characters using QR codes or numeric codes. To import a character, scan the QR code or enter the numeric code of the character you want in the import/export menu. To export a character, generate the QR code or numeric code of the character you want to share in the same menu. You can then give the code to other players or save it for later.
-
How can I get more gems and bytes in Gacha Club? You can earn more gems and bytes, the game's currencies, in several ways, including completing missions, playing mini-games, gacha-ing units and pets, selling items you do not need, and watching promotional videos.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio.py
deleted file mode 100644
index 1829d7db4ef832ad65598b471caa7d256a06d012..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio.py
+++ /dev/null
@@ -1,213 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Audio IO methods are defined in this module (info, read, write).
-We rely on the av library for faster reads when possible, otherwise on torchaudio.
-"""
-
-from dataclasses import dataclass
-from pathlib import Path
-import logging
-import typing as tp
-
-import numpy as np
-import soundfile
-import torch
-from torch.nn import functional as F
-import torchaudio as ta
-
-import av
-
-from .audio_utils import f32_pcm, i16_pcm, normalize_audio
-
-
-_av_initialized = False
-
-
-def _init_av():
- global _av_initialized
- if _av_initialized:
- return
- logger = logging.getLogger('libav.mp3')
- logger.setLevel(logging.ERROR)
- _av_initialized = True
-
-
-@dataclass(frozen=True)
-class AudioFileInfo:
- sample_rate: int
- duration: float
- channels: int
-
-
-def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sample_rate = stream.codec_context.sample_rate
- duration = float(stream.duration * stream.time_base)
- channels = stream.channels
- return AudioFileInfo(sample_rate, duration, channels)
-
-
-def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- info = soundfile.info(filepath)
- return AudioFileInfo(info.samplerate, info.duration, info.channels)
-
-
-def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- # torchaudio no longer returns useful duration information for some formats like mp3s.
- filepath = Path(filepath)
- if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info
- # ffmpeg has some weird issue with flac.
- return _soundfile_info(filepath)
- else:
- return _av_info(filepath)
-
-
-def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]:
- """FFMPEG-based audio file reading using PyAV bindings.
- Soundfile cannot read mp3 and av_read is more efficient than torchaudio.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate
- """
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sr = stream.codec_context.sample_rate
- num_frames = int(sr * duration) if duration >= 0 else -1
- frame_offset = int(sr * seek_time)
- # we need a small negative offset otherwise we get some edge artifact
- # from the mp3 decoder.
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream)
- frames = []
- length = 0
- for frame in af.decode(streams=stream.index):
- current_offset = int(frame.rate * frame.pts * frame.time_base)
- strip = max(0, frame_offset - current_offset)
- buf = torch.from_numpy(frame.to_ndarray())
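- # Packed/interleaved formats decode to a single row of interleaved samples, while
- # planar formats already come out as (channels, samples); reshape the former so the
- # buffer is always (channels, samples) before trimming and concatenation.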
- if buf.shape[0] != stream.channels:
- buf = buf.view(-1, stream.channels).t()
- buf = buf[:, strip:]
- frames.append(buf)
- length += buf.shape[1]
- if num_frames > 0 and length >= num_frames:
- break
- assert frames
- # If the above assert fails, it is likely because we seeked past the end of file point,
- # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp.
- # This will need proper debugging, in due time.
- wav = torch.cat(frames, dim=1)
- assert wav.shape[0] == stream.channels
- if num_frames > 0:
- wav = wav[:, :num_frames]
- return f32_pcm(wav), sr
-
-
-def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0.,
- duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]:
- """Read audio by picking the most appropriate backend tool based on the audio format.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- pad (bool): Pad output audio if not reaching expected duration.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate.
- """
- fp = Path(filepath)
- if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg
- # There is some bug with ffmpeg and reading flac
- info = _soundfile_info(filepath)
- frames = -1 if duration <= 0 else int(duration * info.sample_rate)
- frame_offset = int(seek_time * info.sample_rate)
- wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32)
- assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}"
- wav = torch.from_numpy(wav).t().contiguous()
- if len(wav.shape) == 1:
- wav = torch.unsqueeze(wav, 0)
- elif (
- fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats()
- and duration <= 0 and seek_time == 0
- ):
- # Torchaudio is faster if we load an entire file at once.
- wav, sr = ta.load(fp)
- else:
- wav, sr = _av_read(filepath, seek_time, duration)
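- # Optionally zero-pad the end so callers always get int(duration * sr) samples.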
- if pad and duration > 0:
- expected_frames = int(duration * sr)
- wav = F.pad(wav, (0, expected_frames - wav.shape[-1]))
- return wav, sr
-
-
-def audio_write(stem_name: tp.Union[str, Path],
- wav: torch.Tensor, sample_rate: int,
- format: str = 'wav', mp3_rate: int = 320, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- log_clipping: bool = True, make_parent_dir: bool = True,
- add_suffix: bool = True) -> Path:
- """Convenience function for saving audio to disk. Returns the filename the audio was written to.
-
- Args:
- stem_name (str or Path): Filename without extension which will be added automatically.
- format (str): Either "wav" or "mp3".
- mp3_rate (int): kbps when using mp3s.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
- log_clipping (bool): If True, basic logging on stderr when clipping still
- occurs despite strategy (only for 'rms').
- make_parent_dir (bool): Make parent directory if it doesn't exist.
- Returns:
- Path: Path of the saved audio.
- """
- assert wav.dtype.is_floating_point, "wav is not floating point"
- if wav.dim() == 1:
- wav = wav[None]
- elif wav.dim() > 2:
- raise ValueError("Input wav should be at most 2 dimension.")
- assert wav.isfinite().all()
- wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db,
- rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping,
- sample_rate=sample_rate, stem_name=str(stem_name))
- kwargs: dict = {}
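- # Per-format keyword arguments forwarded to ta.save below (mp3 bitrate or 16-bit PCM encoding).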
- if format == 'mp3':
- suffix = '.mp3'
- kwargs.update({"compression": mp3_rate})
- elif format == 'wav':
- wav = i16_pcm(wav)
- suffix = '.wav'
- kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16})
- else:
- raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.")
- if not add_suffix:
- suffix = ''
- path = Path(str(stem_name) + suffix)
- if make_parent_dir:
- path.parent.mkdir(exist_ok=True, parents=True)
- try:
- ta.save(path, wav, sample_rate, **kwargs)
- except Exception:
- if path.exists():
- # we do not want to leave half written files around.
- path.unlink()
- raise
- return path
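-
-
-# A minimal usage sketch of the helpers above (not part of the original module).
-# It reads a 5-second clip starting at the 10-second mark and re-encodes it as a
-# 16-bit wav; "input.mp3" and "clip" are placeholder names for illustration only.
-if __name__ == "__main__":
- info = audio_info("input.mp3")
- print(info.sample_rate, info.duration, info.channels)
- wav, sr = audio_read("input.mp3", seek_time=10.0, duration=5.0, pad=True)
- out_path = audio_write("clip", wav, sr, format="wav", strategy="peak")
- print("saved to", out_path)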
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/https.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/https.d.ts
deleted file mode 100644
index bda367d74c634f58d3e3898029bbc64bdbc61c0a..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/https.d.ts
+++ /dev/null
@@ -1,542 +0,0 @@
-/**
- * HTTPS is the HTTP protocol over TLS/SSL. In Node.js this is implemented as a
- * separate module.
- * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/https.js)
- */
-declare module 'https' {
- import { Duplex } from 'node:stream';
- import * as tls from 'node:tls';
- import * as http from 'node:http';
- import { URL } from 'node:url';
- type ServerOptions<
- Request extends typeof http.IncomingMessage = typeof http.IncomingMessage,
- Response extends typeof http.ServerResponse = typeof http.ServerResponse,
- > = tls.SecureContextOptions & tls.TlsOptions & http.ServerOptions<Request, Response>;
- type RequestOptions = http.RequestOptions &
- tls.SecureContextOptions & {
- checkServerIdentity?: typeof tls.checkServerIdentity | undefined;
- rejectUnauthorized?: boolean | undefined; // Defaults to true
- servername?: string | undefined; // SNI TLS Extension
- };
- interface AgentOptions extends http.AgentOptions, tls.ConnectionOptions {
- rejectUnauthorized?: boolean | undefined;
- maxCachedSessions?: number | undefined;
- }
- /**
- * An `Agent` object for HTTPS similar to `http.Agent`. See {@link request} for more information.
- * @since v0.4.5
- */
- class Agent extends http.Agent {
- constructor(options?: AgentOptions);
- options: AgentOptions;
- }
- interface Server<
- Request extends typeof http.IncomingMessage = typeof http.IncomingMessage,
- Response extends typeof http.ServerResponse = typeof http.ServerResponse,
- > extends http.Server<Request, Response> {}
- /**
- * See `http.Server` for more information.
- * @since v0.3.4
- */
- class Server<
- Request extends typeof http.IncomingMessage = typeof http.IncomingMessage,
- Response extends typeof http.ServerResponse = typeof http.ServerResponse,
- > extends tls.Server {
- constructor(requestListener?: http.RequestListener<Request, Response>);
- constructor(
- options: ServerOptions<Request, Response>,
- requestListener?: http.RequestListener<Request, Response>,
- );
- /**
- * Closes all connections connected to this server.
- * @since v18.2.0
- */
- closeAllConnections(): void;
- /**
- * Closes all connections connected to this server which are not sending a request or waiting for a response.
- * @since v18.2.0
- */
- closeIdleConnections(): void;
- addListener(event: string, listener: (...args: any[]) => void): this;
- addListener(event: 'keylog', listener: (line: Buffer, tlsSocket: tls.TLSSocket) => void): this;
- addListener(
- event: 'newSession',
- listener: (sessionId: Buffer, sessionData: Buffer, callback: (err: Error, resp: Buffer) => void) => void,
- ): this;
- addListener(
- event: 'OCSPRequest',
- listener: (
- certificate: Buffer,
- issuer: Buffer,
- callback: (err: Error | null, resp: Buffer) => void,
- ) => void,
- ): this;
- addListener(
- event: 'resumeSession',
- listener: (sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void) => void,
- ): this;
- addListener(event: 'secureConnection', listener: (tlsSocket: tls.TLSSocket) => void): this;
- addListener(event: 'tlsClientError', listener: (err: Error, tlsSocket: tls.TLSSocket) => void): this;
- addListener(event: 'close', listener: () => void): this;
- addListener(event: 'connection', listener: (socket: Duplex) => void): this;
- addListener(event: 'error', listener: (err: Error) => void): this;
- addListener(event: 'listening', listener: () => void): this;
- addListener(event: 'checkContinue', listener: http.RequestListener): this;
- addListener(event: 'checkExpectation', listener: http.RequestListener): this;
- addListener(event: 'clientError', listener: (err: Error, socket: Duplex) => void): this;
- addListener(
- event: 'connect',
- listener: (req: InstanceType, socket: Duplex, head: Buffer) => void,
- ): this;
- addListener(event: 'request', listener: http.RequestListener): this;
- addListener(
- event: 'upgrade',
- listener: (req: InstanceType, socket: Duplex, head: Buffer) => void,
- ): this;
- emit(event: string, ...args: any[]): boolean;
- emit(event: 'keylog', line: Buffer, tlsSocket: tls.TLSSocket): boolean;
- emit(
- event: 'newSession',
- sessionId: Buffer,
- sessionData: Buffer,
- callback: (err: Error, resp: Buffer) => void,
- ): boolean;
- emit(
- event: 'OCSPRequest',
- certificate: Buffer,
- issuer: Buffer,
- callback: (err: Error | null, resp: Buffer) => void,
- ): boolean;
- emit(event: 'resumeSession', sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void): boolean;
- emit(event: 'secureConnection', tlsSocket: tls.TLSSocket): boolean;
- emit(event: 'tlsClientError', err: Error, tlsSocket: tls.TLSSocket): boolean;
- emit(event: 'close'): boolean;
- emit(event: 'connection', socket: Duplex): boolean;
- emit(event: 'error', err: Error): boolean;
- emit(event: 'listening'): boolean;
- emit(
- event: 'checkContinue',
- req: InstanceType,
- res: InstanceType & { req: InstanceType },
- ): boolean;
- emit(
- event: 'checkExpectation',
- req: InstanceType,
- res: InstanceType & { req: InstanceType },
- ): boolean;
- emit(event: 'clientError', err: Error, socket: Duplex): boolean;
- emit(event: 'connect', req: InstanceType, socket: Duplex, head: Buffer): boolean;
- emit(
- event: 'request',
- req: InstanceType,
- res: InstanceType & { req: InstanceType },
- ): boolean;
- emit(event: 'upgrade', req: InstanceType, socket: Duplex, head: Buffer): boolean;
- on(event: string, listener: (...args: any[]) => void): this;
- on(event: 'keylog', listener: (line: Buffer, tlsSocket: tls.TLSSocket) => void): this;
- on(
- event: 'newSession',
- listener: (sessionId: Buffer, sessionData: Buffer, callback: (err: Error, resp: Buffer) => void) => void,
- ): this;
- on(
- event: 'OCSPRequest',
- listener: (
- certificate: Buffer,
- issuer: Buffer,
- callback: (err: Error | null, resp: Buffer) => void,
- ) => void,
- ): this;
- on(
- event: 'resumeSession',
- listener: (sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void) => void,
- ): this;
- on(event: 'secureConnection', listener: (tlsSocket: tls.TLSSocket) => void): this;
- on(event: 'tlsClientError', listener: (err: Error, tlsSocket: tls.TLSSocket) => void): this;
- on(event: 'close', listener: () => void): this;
- on(event: 'connection', listener: (socket: Duplex) => void): this;
- on(event: 'error', listener: (err: Error) => void): this;
- on(event: 'listening', listener: () => void): this;
- on(event: 'checkContinue', listener: http.RequestListener): this;
- on(event: 'checkExpectation', listener: http.RequestListener): this;
- on(event: 'clientError', listener: (err: Error, socket: Duplex) => void): this;
- on(event: 'connect', listener: (req: InstanceType, socket: Duplex, head: Buffer) => void): this;
- on(event: 'request', listener: http.RequestListener): this;
- on(event: 'upgrade', listener: (req: InstanceType, socket: Duplex, head: Buffer) => void): this;
- once(event: string, listener: (...args: any[]) => void): this;
- once(event: 'keylog', listener: (line: Buffer, tlsSocket: tls.TLSSocket) => void): this;
- once(
- event: 'newSession',
- listener: (sessionId: Buffer, sessionData: Buffer, callback: (err: Error, resp: Buffer) => void) => void,
- ): this;
- once(
- event: 'OCSPRequest',
- listener: (
- certificate: Buffer,
- issuer: Buffer,
- callback: (err: Error | null, resp: Buffer) => void,
- ) => void,
- ): this;
- once(
- event: 'resumeSession',
- listener: (sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void) => void,
- ): this;
- once(event: 'secureConnection', listener: (tlsSocket: tls.TLSSocket) => void): this;
- once(event: 'tlsClientError', listener: (err: Error, tlsSocket: tls.TLSSocket) => void): this;
- once(event: 'close', listener: () => void): this;
- once(event: 'connection', listener: (socket: Duplex) => void): this;
- once(event: 'error', listener: (err: Error) => void): this;
- once(event: 'listening', listener: () => void): this;
- once(event: 'checkContinue', listener: http.RequestListener): this;
- once(event: 'checkExpectation', listener: http.RequestListener): this;
- once(event: 'clientError', listener: (err: Error, socket: Duplex) => void): this;
- once(event: 'connect', listener: (req: InstanceType, socket: Duplex, head: Buffer) => void): this;
- once(event: 'request', listener: http.RequestListener): this;
- once(event: 'upgrade', listener: (req: InstanceType, socket: Duplex, head: Buffer) => void): this;
- prependListener(event: string, listener: (...args: any[]) => void): this;
- prependListener(event: 'keylog', listener: (line: Buffer, tlsSocket: tls.TLSSocket) => void): this;
- prependListener(
- event: 'newSession',
- listener: (sessionId: Buffer, sessionData: Buffer, callback: (err: Error, resp: Buffer) => void) => void,
- ): this;
- prependListener(
- event: 'OCSPRequest',
- listener: (
- certificate: Buffer,
- issuer: Buffer,
- callback: (err: Error | null, resp: Buffer) => void,
- ) => void,
- ): this;
- prependListener(
- event: 'resumeSession',
- listener: (sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void) => void,
- ): this;
- prependListener(event: 'secureConnection', listener: (tlsSocket: tls.TLSSocket) => void): this;
- prependListener(event: 'tlsClientError', listener: (err: Error, tlsSocket: tls.TLSSocket) => void): this;
- prependListener(event: 'close', listener: () => void): this;
- prependListener(event: 'connection', listener: (socket: Duplex) => void): this;
- prependListener(event: 'error', listener: (err: Error) => void): this;
- prependListener(event: 'listening', listener: () => void): this;
- prependListener(event: 'checkContinue', listener: http.RequestListener): this;
- prependListener(event: 'checkExpectation', listener: http.RequestListener): this;
- prependListener(event: 'clientError', listener: (err: Error, socket: Duplex) => void): this;
- prependListener(
- event: 'connect',
- listener: (req: InstanceType, socket: Duplex, head: Buffer) => void,
- ): this;
- prependListener(event: 'request', listener: http.RequestListener): this;
- prependListener(
- event: 'upgrade',
- listener: (req: InstanceType, socket: Duplex, head: Buffer) => void,
- ): this;
- prependOnceListener(event: string, listener: (...args: any[]) => void): this;
- prependOnceListener(event: 'keylog', listener: (line: Buffer, tlsSocket: tls.TLSSocket) => void): this;
- prependOnceListener(
- event: 'newSession',
- listener: (sessionId: Buffer, sessionData: Buffer, callback: (err: Error, resp: Buffer) => void) => void,
- ): this;
- prependOnceListener(
- event: 'OCSPRequest',
- listener: (
- certificate: Buffer,
- issuer: Buffer,
- callback: (err: Error | null, resp: Buffer) => void,
- ) => void,
- ): this;
- prependOnceListener(
- event: 'resumeSession',
- listener: (sessionId: Buffer, callback: (err: Error, sessionData: Buffer) => void) => void,
- ): this;
- prependOnceListener(event: 'secureConnection', listener: (tlsSocket: tls.TLSSocket) => void): this;
- prependOnceListener(event: 'tlsClientError', listener: (err: Error, tlsSocket: tls.TLSSocket) => void): this;
- prependOnceListener(event: 'close', listener: () => void): this;
- prependOnceListener(event: 'connection', listener: (socket: Duplex) => void): this;
- prependOnceListener(event: 'error', listener: (err: Error) => void): this;
- prependOnceListener(event: 'listening', listener: () => void): this;
- prependOnceListener(event: 'checkContinue', listener: http.RequestListener): this;
- prependOnceListener(event: 'checkExpectation', listener: http.RequestListener): this;
- prependOnceListener(event: 'clientError', listener: (err: Error, socket: Duplex) => void): this;
- prependOnceListener(
- event: 'connect',
- listener: (req: InstanceType, socket: Duplex, head: Buffer) => void,
- ): this;
- prependOnceListener(event: 'request', listener: http.RequestListener): this;
- prependOnceListener(
- event: 'upgrade',
- listener: (req: InstanceType, socket: Duplex, head: Buffer) => void,
- ): this;
- }
- /**
- * ```js
- * // curl -k https://localhost:8000/
- * const https = require('https');
- * const fs = require('fs');
- *
- * const options = {
- * key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
- * cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem')
- * };
- *
- * https.createServer(options, (req, res) => {
- * res.writeHead(200);
- * res.end('hello world\n');
- * }).listen(8000);
- * ```
- *
- * Or
- *
- * ```js
- * const https = require('https');
- * const fs = require('fs');
- *
- * const options = {
- * pfx: fs.readFileSync('test/fixtures/test_cert.pfx'),
- * passphrase: 'sample'
- * };
- *
- * https.createServer(options, (req, res) => {
- * res.writeHead(200);
- * res.end('hello world\n');
- * }).listen(8000);
- * ```
- * @since v0.3.4
- * @param options Accepts `options` from `createServer`, `createSecureContext` and `createServer`.
- * @param requestListener A listener to be added to the `'request'` event.
- */
- function createServer<
- Request extends typeof http.IncomingMessage = typeof http.IncomingMessage,
- Response extends typeof http.ServerResponse = typeof http.ServerResponse,
- >(requestListener?: http.RequestListener<Request, Response>): Server<Request, Response>;
- function createServer<
- Request extends typeof http.IncomingMessage = typeof http.IncomingMessage,
- Response extends typeof http.ServerResponse = typeof http.ServerResponse,
- >(
- options: ServerOptions<Request, Response>,
- requestListener?: http.RequestListener<Request, Response>,
- ): Server<Request, Response>;
- /**
- * Makes a request to a secure web server.
- *
- * The following additional `options` from `tls.connect()` are also accepted:`ca`, `cert`, `ciphers`, `clientCertEngine`, `crl`, `dhparam`, `ecdhCurve`,`honorCipherOrder`, `key`, `passphrase`,
- * `pfx`, `rejectUnauthorized`,`secureOptions`, `secureProtocol`, `servername`, `sessionIdContext`,`highWaterMark`.
- *
- * `options` can be an object, a string, or a `URL` object. If `options` is a
- * string, it is automatically parsed with `new URL()`. If it is a `URL` object, it will be automatically converted to an ordinary `options` object.
- *
- * `https.request()` returns an instance of the `http.ClientRequest` class. The `ClientRequest` instance is a writable stream. If one needs to
- * upload a file with a POST request, then write to the `ClientRequest` object.
- *
- * ```js
- * const https = require('https');
- *
- * const options = {
- * hostname: 'encrypted.google.com',
- * port: 443,
- * path: '/',
- * method: 'GET'
- * };
- *
- * const req = https.request(options, (res) => {
- * console.log('statusCode:', res.statusCode);
- * console.log('headers:', res.headers);
- *
- * res.on('data', (d) => {
- * process.stdout.write(d);
- * });
- * });
- *
- * req.on('error', (e) => {
- * console.error(e);
- * });
- * req.end();
- * ```
- *
- * Example using options from `tls.connect()`:
- *
- * ```js
- * const options = {
- * hostname: 'encrypted.google.com',
- * port: 443,
- * path: '/',
- * method: 'GET',
- * key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
- * cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem')
- * };
- * options.agent = new https.Agent(options);
- *
- * const req = https.request(options, (res) => {
- * // ...
- * });
- * ```
- *
- * Alternatively, opt out of connection pooling by not using an `Agent`.
- *
- * ```js
- * const options = {
- * hostname: 'encrypted.google.com',
- * port: 443,
- * path: '/',
- * method: 'GET',
- * key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
- * cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem'),
- * agent: false
- * };
- *
- * const req = https.request(options, (res) => {
- * // ...
- * });
- * ```
- *
- * Example using a `URL` as `options`:
- *
- * ```js
- * const options = new URL('https://abc:xyz@example.com');
- *
- * const req = https.request(options, (res) => {
- * // ...
- * });
- * ```
- *
- * Example pinning on certificate fingerprint, or the public key (similar to `pin-sha256`):
- *
- * ```js
- * const tls = require('tls');
- * const https = require('https');
- * const crypto = require('crypto');
- *
- * function sha256(s) {
- * return crypto.createHash('sha256').update(s).digest('base64');
- * }
- * const options = {
- * hostname: 'github.com',
- * port: 443,
- * path: '/',
- * method: 'GET',
- * checkServerIdentity: function(host, cert) {
- * // Make sure the certificate is issued to the host we are connected to
- * const err = tls.checkServerIdentity(host, cert);
- * if (err) {
- * return err;
- * }
- *
- *     // Pin the public key, similar to HPKP pin-sha256 pinning
- * const pubkey256 = 'pL1+qb9HTMRZJmuC/bB/ZI9d302BYrrqiVuRyW+DGrU=';
- * if (sha256(cert.pubkey) !== pubkey256) {
- * const msg = 'Certificate verification error: ' +
- * `The public key of '${cert.subject.CN}' ` +
- * 'does not match our pinned fingerprint';
- * return new Error(msg);
- * }
- *
- * // Pin the exact certificate, rather than the pub key
- * const cert256 = '25:FE:39:32:D9:63:8C:8A:FC:A1:9A:29:87:' +
- * 'D8:3E:4C:1D:98:DB:71:E4:1A:48:03:98:EA:22:6A:BD:8B:93:16';
- * if (cert.fingerprint256 !== cert256) {
- * const msg = 'Certificate verification error: ' +
- * `The certificate of '${cert.subject.CN}' ` +
- * 'does not match our pinned fingerprint';
- * return new Error(msg);
- * }
- *
- * // This loop is informational only.
- * // Print the certificate and public key fingerprints of all certs in the
- *     // chain. It's common to pin the public key of the issuer on the public
- * // internet, while pinning the public key of the service in sensitive
- * // environments.
- *     let lastprint256;
- *     do {
- * console.log('Subject Common Name:', cert.subject.CN);
- * console.log(' Certificate SHA256 fingerprint:', cert.fingerprint256);
- *
- *     console.log(' Public key pin-sha256:', sha256(cert.pubkey));
- *
- * lastprint256 = cert.fingerprint256;
- * cert = cert.issuerCertificate;
- * } while (cert.fingerprint256 !== lastprint256);
- *
- * },
- * };
- *
- * options.agent = new https.Agent(options);
- * const req = https.request(options, (res) => {
- * console.log('All OK. Server matched our pinned cert or public key');
- * console.log('statusCode:', res.statusCode);
- * // Print the HPKP values
- * console.log('headers:', res.headers['public-key-pins']);
- *
- * res.on('data', (d) => {});
- * });
- *
- * req.on('error', (e) => {
- * console.error(e.message);
- * });
- * req.end();
- * ```
- *
- * Outputs for example:
- *
- * ```text
- * Subject Common Name: github.com
- * Certificate SHA256 fingerprint: 25:FE:39:32:D9:63:8C:8A:FC:A1:9A:29:87:D8:3E:4C:1D:98:DB:71:E4:1A:48:03:98:EA:22:6A:BD:8B:93:16
- * Public key pin-sha256: pL1+qb9HTMRZJmuC/bB/ZI9d302BYrrqiVuRyW+DGrU=
- * Subject Common Name: DigiCert SHA2 Extended Validation Server CA
- * Certificate SHA256 fingerprint: 40:3E:06:2A:26:53:05:91:13:28:5B:AF:80:A0:D4:AE:42:2C:84:8C:9F:78:FA:D0:1F:C9:4B:C5:B8:7F:EF:1A
- * Public key pin-sha256: RRM1dGqnDFsCJXBTHky16vi1obOlCgFFn/yOhI/y+ho=
- * Subject Common Name: DigiCert High Assurance EV Root CA
- * Certificate SHA256 fingerprint: 74:31:E5:F4:C3:C1:CE:46:90:77:4F:0B:61:E0:54:40:88:3B:A9:A0:1E:D0:0B:A6:AB:D7:80:6E:D3:B1:18:CF
- * Public key pin-sha256: WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18=
- * All OK. Server matched our pinned cert or public key
- * statusCode: 200
- * headers: max-age=0; pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18="; pin-sha256="RRM1dGqnDFsCJXBTHky16vi1obOlCgFFn/yOhI/y+ho=";
- * pin-sha256="k2v657xBsOVe1PQRwOsHsw3bsGT2VzIqz5K+59sNQws="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="IQBnNBEiFuhj+8x6X8XLgh01V9Ic5/V3IRQLNFFc7v4=";
- * pin-sha256="iie1VXtL7HzAMF+/PVPR9xzT80kQxdZeJ+zduCB3uj0="; pin-sha256="LvRiGEjRqfzurezaWuj8Wie2gyHMrW5Q06LspMnox7A="; includeSubDomains
- * ```
- * @since v0.3.6
- * @param options Accepts all `options` from `http.request()`, with some differences in default values.
- */
- function request(
- options: RequestOptions | string | URL,
- callback?: (res: http.IncomingMessage) => void,
- ): http.ClientRequest;
- function request(
- url: string | URL,
- options: RequestOptions,
- callback?: (res: http.IncomingMessage) => void,
- ): http.ClientRequest;
- /**
- * Like `http.get()` but for HTTPS.
- *
- * `options` can be an object, a string, or a `URL` object. If `options` is a
- * string, it is automatically parsed with `new URL()`. If it is a `URL` object, it will be automatically converted to an ordinary `options` object.
- *
- * ```js
- * const https = require('https');
- *
- * https.get('https://encrypted.google.com/', (res) => {
- * console.log('statusCode:', res.statusCode);
- * console.log('headers:', res.headers);
- *
- * res.on('data', (d) => {
- * process.stdout.write(d);
- * });
- *
- * }).on('error', (e) => {
- * console.error(e);
- * });
- * ```
- * @since v0.3.6
- * @param options Accepts the same `options` as {@link request}, with the `method` always set to `GET`.
- */
- function get(
- options: RequestOptions | string | URL,
- callback?: (res: http.IncomingMessage) => void,
- ): http.ClientRequest;
- function get(
- url: string | URL,
- options: RequestOptions,
- callback?: (res: http.IncomingMessage) => void,
- ): http.ClientRequest;
- let globalAgent: Agent;
-}
-declare module 'node:https' {
- export * from 'https';
-}
diff --git a/spaces/fffiloni/lama-video-watermark-remover/bin/filter_sharded_dataset.py b/spaces/fffiloni/lama-video-watermark-remover/bin/filter_sharded_dataset.py
deleted file mode 100644
index b3c2b490e88bb3b55c6bb717e08f97f7a396d5fa..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/lama-video-watermark-remover/bin/filter_sharded_dataset.py
+++ /dev/null
@@ -1,69 +0,0 @@
-#!/usr/bin/env python3
-
-
-import math
-import os
-import random
-
-import braceexpand
-import webdataset as wds
-
-DEFAULT_CATS_FILE = os.path.join(os.path.dirname(__file__), '..', 'configs', 'places2-categories_157.txt')
-
-def is_good_key(key, cats):
- return any(c in key for c in cats)
-
-
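-# Filter a set of sharded webdataset files: keep only samples whose keys match the allowed categories and re-shard the kept samples across the output writers.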
-def main(args):
- if args.categories == 'nofilter':
- good_categories = None
- else:
- with open(args.categories, 'r') as f:
- good_categories = set(line.strip().split(' ')[0] for line in f if line.strip())
-
- all_input_files = list(braceexpand.braceexpand(args.infile))
- chunk_size = int(math.ceil(len(all_input_files) / args.n_read_streams))
-
- input_iterators = [iter(wds.Dataset(all_input_files[start : start + chunk_size]).shuffle(args.shuffle_buffer))
- for start in range(0, len(all_input_files), chunk_size)]
- output_datasets = [wds.ShardWriter(args.outpattern.format(i)) for i in range(args.n_write_streams)]
-
- good_readers = list(range(len(input_iterators)))
- step_i = 0
- good_samples = 0
- bad_samples = 0
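-    # Repeatedly draw a sample from a random live reader; drop off-category samples and write the rest to a randomly chosen output shard.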
- while len(good_readers) > 0:
- if step_i % args.print_freq == 0:
- print(f'Iterations done {step_i}; readers alive {good_readers}; good samples {good_samples}; bad samples {bad_samples}')
-
- step_i += 1
-
- ri = random.choice(good_readers)
- try:
- sample = next(input_iterators[ri])
- except StopIteration:
- good_readers = list(set(good_readers) - {ri})
- continue
-
- if good_categories is not None and not is_good_key(sample['__key__'], good_categories):
- bad_samples += 1
- continue
-
- wi = random.randint(0, args.n_write_streams - 1)
- output_datasets[wi].write(sample)
- good_samples += 1
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('--categories', type=str, default=DEFAULT_CATS_FILE)
- aparser.add_argument('--shuffle-buffer', type=int, default=10000)
- aparser.add_argument('--n-read-streams', type=int, default=10)
- aparser.add_argument('--n-write-streams', type=int, default=10)
- aparser.add_argument('--print-freq', type=int, default=1000)
- aparser.add_argument('infile', type=str)
- aparser.add_argument('outpattern', type=str)
-
- main(aparser.parse_args())
diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/bodies_enum.js b/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/bodies_enum.js
deleted file mode 100644
index fa09136f02c609c6ae58bf9b07521a8689f2c8ac..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/Interactive_DeepRL_Demo/js/bodies/bodies_enum.js
+++ /dev/null
@@ -1,7 +0,0 @@
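-// Body types (morphologies) selectable for agents in the interactive demo.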
-let BodyTypesEnum = {
- WALKER: 0,
- SWIMMER: 1,
- CLIMBER: 2,
- AMPHIBIAN: 3,
-};
-
diff --git a/spaces/frncscp/bullerengue/musika/layers.py b/spaces/frncscp/bullerengue/musika/layers.py
deleted file mode 100644
index 1edd67a8d7d8c58be23211a4bc61a30e6cc703a4..0000000000000000000000000000000000000000
--- a/spaces/frncscp/bullerengue/musika/layers.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import tensorflow as tf
-import tensorflow.python.keras.backend as K
-from tensorflow.python.eager import context
-from tensorflow.python.ops import (
- gen_math_ops,
- math_ops,
- sparse_ops,
- standard_ops,
-)
-
-
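-# Normalize a vector to unit L2 norm; eps guards against division by zero.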
-def l2normalize(v, eps=1e-12):
- return v / (tf.norm(v) + eps)
-
-
-class ConvSN2D(tf.keras.layers.Conv2D):
- def __init__(self, filters, kernel_size, power_iterations=1, datatype=tf.float32, **kwargs):
- super(ConvSN2D, self).__init__(filters, kernel_size, **kwargs)
- self.power_iterations = power_iterations
- self.datatype = datatype
-
- def build(self, input_shape):
- super(ConvSN2D, self).build(input_shape)
-
- if self.data_format == "channels_first":
- channel_axis = 1
- else:
- channel_axis = -1
-
- self.u = self.add_weight(
- self.name + "_u",
- shape=tuple([1, self.kernel.shape.as_list()[-1]]),
- initializer=tf.initializers.RandomNormal(0, 1),
- trainable=False,
- dtype=self.dtype,
- )
-
- def compute_spectral_norm(self, W, new_u, W_shape):
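-        # Power iteration: refine estimates of the leading singular vectors of W, then divide the kernel by the estimated spectral norm (sigma).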
- for _ in range(self.power_iterations):
-
- new_v = l2normalize(tf.matmul(new_u, tf.transpose(W)))
- new_u = l2normalize(tf.matmul(new_v, W))
-
- sigma = tf.matmul(tf.matmul(new_v, W), tf.transpose(new_u))
- W_bar = W / sigma
-
- with tf.control_dependencies([self.u.assign(new_u)]):
- W_bar = tf.reshape(W_bar, W_shape)
-
- return W_bar
-
- def call(self, inputs):
- W_shape = self.kernel.shape.as_list()
- W_reshaped = tf.reshape(self.kernel, (-1, W_shape[-1]))
- new_kernel = self.compute_spectral_norm(W_reshaped, self.u, W_shape)
- outputs = self._convolution_op(inputs, new_kernel)
-
- if self.use_bias:
- if self.data_format == "channels_first":
- outputs = tf.nn.bias_add(outputs, self.bias, data_format="NCHW")
- else:
- outputs = tf.nn.bias_add(outputs, self.bias, data_format="NHWC")
- if self.activation is not None:
- return self.activation(outputs)
-
- return outputs
-
-
-class DenseSN(tf.keras.layers.Dense):
- def __init__(self, datatype=tf.float32, **kwargs):
- super(DenseSN, self).__init__(**kwargs)
- self.datatype = datatype
-
- def build(self, input_shape):
- super(DenseSN, self).build(input_shape)
-
- self.u = self.add_weight(
- self.name + "_u",
- shape=tuple([1, self.kernel.shape.as_list()[-1]]),
- initializer=tf.initializers.RandomNormal(0, 1),
- trainable=False,
- dtype=self.dtype,
- )
-
- def compute_spectral_norm(self, W, new_u, W_shape):
- new_v = l2normalize(tf.matmul(new_u, tf.transpose(W)))
- new_u = l2normalize(tf.matmul(new_v, W))
- sigma = tf.matmul(tf.matmul(new_v, W), tf.transpose(new_u))
- W_bar = W / sigma
- with tf.control_dependencies([self.u.assign(new_u)]):
- W_bar = tf.reshape(W_bar, W_shape)
- return W_bar
-
- def call(self, inputs):
- W_shape = self.kernel.shape.as_list()
- W_reshaped = tf.reshape(self.kernel, (-1, W_shape[-1]))
- new_kernel = self.compute_spectral_norm(W_reshaped, self.u, W_shape)
- rank = len(inputs.shape)
- if rank > 2:
- outputs = standard_ops.tensordot(inputs, new_kernel, [[rank - 1], [0]])
- if not context.executing_eagerly():
- shape = inputs.shape.as_list()
- output_shape = shape[:-1] + [self.units]
- outputs.set_shape(output_shape)
- else:
- inputs = math_ops.cast(inputs, self._compute_dtype)
- if K.is_sparse(inputs):
- outputs = sparse_ops.sparse_tensor_dense_matmul(inputs, new_kernel)
- else:
- outputs = gen_math_ops.mat_mul(inputs, new_kernel)
- if self.use_bias:
- outputs = tf.nn.bias_add(outputs, self.bias)
- if self.activation is not None:
- return self.activation(outputs)
- return outputs
-
-
-class AddNoise(tf.keras.layers.Layer):
- def __init__(self, datatype=tf.float32, **kwargs):
- super(AddNoise, self).__init__(**kwargs)
- self.datatype = datatype
-
- def build(self, input_shape):
- self.b = self.add_weight(
- shape=[
- 1,
- ],
- initializer=tf.keras.initializers.zeros(),
- trainable=True,
- name="noise_weight",
- dtype=tf.float32,
- )
-
- def call(self, inputs):
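-        # Add Gaussian noise scaled by the learned weight b (StyleGAN-style noise injection).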
- rand = tf.random.normal(
- [tf.shape(inputs)[0], inputs.shape[1], inputs.shape[2], 1],
- mean=0.0,
- stddev=1.0,
- dtype=self.datatype,
- )
- output = inputs + self.b * rand
- return output
-
-
-class PosEnc(tf.keras.layers.Layer):
- def __init__(self, datatype=tf.float32, **kwargs):
- super(PosEnc, self).__init__(**kwargs)
- self.datatype = datatype
-
- def call(self, inputs):
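-        # Append a channel of normalized positional indices (position along axis -3) to the input.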
- pos = tf.repeat(
- tf.reshape(tf.range(inputs.shape[-3], dtype=tf.int32), [1, -1, 1, 1]),
- inputs.shape[-2],
- -2,
- )
- pos = tf.cast(tf.repeat(pos, tf.shape(inputs)[0], 0), self.dtype) / tf.cast(inputs.shape[-3], self.datatype)
- return tf.concat([inputs, pos], -1) # [bs,1,hop,2]
-
-
-def flatten_hw(x, data_format="channels_last"):
- if data_format == "channels_last":
- x = tf.transpose(x, perm=[0, 3, 1, 2]) # Convert to `channels_first`
-
- old_shape = tf.shape(x)
diff --git a/spaces/gauravgarg/youtube_transcript/app.py b/spaces/gauravgarg/youtube_transcript/app.py
deleted file mode 100644
index 0ae26e2c5b4f1e4bc7ef52fcbf6a27fee5e06069..0000000000000000000000000000000000000000
--- a/spaces/gauravgarg/youtube_transcript/app.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import streamlit as st
-from youtube_transcript_api import YouTubeTranscriptApi
-import urllib.parse as urlparse
-from pytube import extract
-
-transcript = ""
-
-
-def read_file(_id):
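-    # Fetch the transcript for the given YouTube video id and render it as markdown.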
- tx = YouTubeTranscriptApi.get_transcript(_id)
- my_list = []
- for i in tx:
- curr_str = i["text"].replace("\xa0", "").replace("\n", " ")
- my_list.append(curr_str)
- transcript = " ".join(str(x) for x in my_list)
- st.markdown(transcript)
-
-st.title('Easily transcribe YouTube videos')
-
-link = st.text_input('Enter your YouTube video link', 'https://www.youtube.com/watch?v=WzBt4VJzfUI')
-
-if link is not None:
- print(link)
- _id=extract.video_id(link)
- st.video(link)
-
-    st.button('check_status', on_click=read_file, args=(_id,))
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/geekyrakshit/enhance-me/enhance_me/zero_dce/__init__.py b/spaces/geekyrakshit/enhance-me/enhance_me/zero_dce/__init__.py
deleted file mode 100644
index c11296e6da27a4d470d3589212310dad5d272840..0000000000000000000000000000000000000000
--- a/spaces/geekyrakshit/enhance-me/enhance_me/zero_dce/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .zero_dce import ZeroDCE
diff --git a/spaces/ghuron/artist/model.py b/spaces/ghuron/artist/model.py
deleted file mode 100644
index 1416b326c7654b016a13fb530cc6e7258989f068..0000000000000000000000000000000000000000
--- a/spaces/ghuron/artist/model.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from sentence_transformers import SentenceTransformer, util
-import torch
-import numpy as np
-
-from sql import get_article, get_index_articles
-
-
-device = torch.device("cpu")
-
-model = SentenceTransformer('all-mpnet-base-v2').to(device)
-embeddings = torch.load('embeddings/embeddings.pt', map_location=device)
-
-
-def query(sentence, initial_year, final_year, n=5):
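-    # Return the top-n (article, cosine-similarity score) pairs among articles published between initial_year and final_year.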
- # embedding the query
- query_embedding = model.encode(sentence, convert_to_tensor=True)
-
- # mask index
- mask_indices = np.hstack(get_index_articles(initial_year, final_year))
-
- # Use mask to select only relevant embeddings
- masked_embeddings = embeddings[mask_indices]
-
- # Calculate cosine similarity with only the masked embeddings
- cos_scores = util.pytorch_cos_sim(query_embedding, masked_embeddings)[0]
- top_results = torch.topk(cos_scores, k=n)
-
- results = []
- for score, relative_idx in zip(top_results[0].cpu().numpy(), top_results[1].cpu().numpy()):
- # Map the relative index back to the original embeddings' index
- original_idx = mask_indices[relative_idx]
-
- article = get_article(original_idx)
- results.append((article, score))
-
- return results
diff --git a/spaces/gligen/demo/gligen/ldm/modules/ema.py b/spaces/gligen/demo/gligen/ldm/modules/ema.py
deleted file mode 100644
index c8c75af43565f6e140287644aaaefa97dd6e67c5..0000000000000000000000000000000000000000
--- a/spaces/gligen/demo/gligen/ldm/modules/ema.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import torch
-from torch import nn
-
-
-class LitEma(nn.Module):
- def __init__(self, model, decay=0.9999, use_num_upates=True):
- super().__init__()
- if decay < 0.0 or decay > 1.0:
- raise ValueError('Decay must be between 0 and 1')
-
- self.m_name2s_name = {}
- self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))
- self.register_buffer('num_updates', torch.tensor(0,dtype=torch.int) if use_num_upates
- else torch.tensor(-1,dtype=torch.int))
-
- for name, p in model.named_parameters():
- if p.requires_grad:
- #remove as '.'-character is not allowed in buffers
- s_name = name.replace('.','')
- self.m_name2s_name.update({name:s_name})
- self.register_buffer(s_name,p.clone().detach().data)
-
- self.collected_params = []
-
- def forward(self,model):
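-        # Move each shadow (EMA) buffer a fraction (1 - decay) of the way toward the corresponding model parameter.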
- decay = self.decay
-
- if self.num_updates >= 0:
- self.num_updates += 1
- decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates))
-
- one_minus_decay = 1.0 - decay
-
- with torch.no_grad():
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
-
- for key in m_param:
- if m_param[key].requires_grad:
- sname = self.m_name2s_name[key]
- shadow_params[sname] = shadow_params[sname].type_as(m_param[key])
- shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))
- else:
- assert not key in self.m_name2s_name
-
- def copy_to(self, model):
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
- for key in m_param:
- if m_param[key].requires_grad:
- m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
- else:
- assert not key in self.m_name2s_name
-
- def store(self, parameters):
- """
- Save the current parameters for restoring later.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- temporarily stored.
- """
- self.collected_params = [param.clone() for param in parameters]
-
- def restore(self, parameters):
- """
- Restore the parameters stored with the `store` method.
- Useful to validate the model with EMA parameters without affecting the
- original optimization process. Store the parameters before the
- `copy_to` method. After validation (or model saving), use this to
- restore the former parameters.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- updated with the stored parameters.
- """
- for c_param, param in zip(self.collected_params, parameters):
- param.data.copy_(c_param.data)
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Chicken Little (2005) [HDrip (AC3)] [Espanol] [Animacion Aventura] Descarga la pelicula que hizo reir a millones.md b/spaces/gotiQspiryo/whisper-ui/examples/Chicken Little (2005) [HDrip (AC3)] [Espanol] [Animacion Aventura] Descarga la pelicula que hizo reir a millones.md
deleted file mode 100644
index c129ca656902694395c980226bf6baa4a9851be6..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Chicken Little (2005) [HDrip (AC3)] [Espanol] [Animacion Aventura] Descarga la pelicula que hizo reir a millones.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Chicken Little (2005) [HDrip (AC3)] [Espanol] [Animacion | Aventura]
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/gradio/HuBERT/fairseq/criterions/composite_loss.py b/spaces/gradio/HuBERT/fairseq/criterions/composite_loss.py
deleted file mode 100644
index 98e835fa6e4c0bcad062df9c519701bf795c98be..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/criterions/composite_loss.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq import utils
-from fairseq.criterions import LegacyFairseqCriterion, register_criterion
-from torch import nn
-
-
-@register_criterion("composite_loss")
-class CompositeLoss(LegacyFairseqCriterion):
- """This is a composite loss that, given a list of model outputs and a list of targets,
- computes an average of losses for each output-target pair"""
-
- def __init__(self, args, task):
- super().__init__(args, task)
- self.underlying_criterion = args.underlying_criterion
-
- @staticmethod
- def add_args(parser):
- """Add criterion-specific arguments to the parser."""
- # fmt: off
- parser.add_argument('--underlying-criterion', type=str, metavar='VAL', required=True,
- help='underlying criterion to use for the composite loss')
- # fmt: on
-
- @staticmethod
- def build_underlying_criterion(args, task):
- saved_criterion = args.criterion
- args.criterion = args.underlying_criterion
- assert saved_criterion != args.underlying_criterion
- underlying_criterion = task.build_criterion(args)
- args.criterion = saved_criterion
- return underlying_criterion
-
- @classmethod
- def build_criterion(cls, args, task):
- underlying_criterion = CompositeLoss.build_underlying_criterion(args, task)
-
- class FakeModel(nn.Module):
- def __init__(self, model, net_out, target):
- super().__init__()
- self.model = model
- self.net_out = net_out
- self.target = target
-
- def forward(self, **unused):
- return self.net_out
-
- def get_normalized_probs(self, net_output, log_probs, sample=None):
- return self.model.get_normalized_probs(
- net_output, log_probs, sample=sample
- )
-
- def get_targets(self, *unused):
- return self.target
-
- @property
- def decoder(self):
- return self.model.decoder
-
- class _CompositeLoss(LegacyFairseqCriterion):
- def __init__(self, args, task, underlying_criterion):
- super().__init__(args, task)
- self.underlying_criterion = underlying_criterion
-
- def forward(self, model, sample, reduce=True):
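-                # Run the model once, then apply the underlying criterion to each (output, target) pair and average the losses.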
- net_outputs = model(**sample["net_input"])
- targets = sample["target"]
-
- bsz = targets[0].size(0)
- loss = net_outputs[0][0].new(1 if reduce else bsz).float().zero_()
-
- sample_size = 0
- logging_output = {}
- for o, t in zip(net_outputs[0], targets):
- m = FakeModel(model, (o, net_outputs[1]), t)
- sample["target"] = t
- l, ss, logging_output = self.underlying_criterion(m, sample, reduce)
- loss += l
- sample_size += ss
-
- loss.div_(len(targets))
- sample_size /= len(targets)
-
- logging_output["loss"] = utils.item(loss.data) if reduce else loss.data
- return loss, sample_size, logging_output
-
- @staticmethod
- def aggregate_logging_outputs(logging_outputs):
- return underlying_criterion.__class__.aggregate_logging_outputs(
- logging_outputs
- )
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- underlying_criterion.__class__.reduce_metrics(logging_outputs)
-
- return _CompositeLoss(args, task, underlying_criterion)
diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/style_mixing.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/style_mixing.py
deleted file mode 100644
index 022912df133bd977364786f90d6ae635292dc135..0000000000000000000000000000000000000000
--- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/style_mixing.py
+++ /dev/null
@@ -1,114 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-#
-
-
-import os
-import re
-from typing import List
-import legacy
-
-import click
-import dnnlib
-import numpy as np
-import PIL.Image
-import torch
-
-"""
-Style mixing using pretrained network pickle.
-
-Examples:
-
-\b
-python style_mixing.py --network=pretrained_models/stylegan_human_v2_1024.pkl --rows=85,100,75,458,1500 \\
- --cols=55,821,1789,293 --styles=0-3 --outdir=outputs/stylemixing
-"""
-
-
-@click.command()
-@click.option('--network', 'network_pkl', help='Network pickle filename', required=True)
-@click.option('--rows', 'row_seeds', type=legacy.num_range, help='Random seeds to use for image rows', required=True)
-@click.option('--cols', 'col_seeds', type=legacy.num_range, help='Random seeds to use for image columns', required=True)
-@click.option('--styles', 'col_styles', type=legacy.num_range, help='Style layer range', default='0-6', show_default=True)
-@click.option('--trunc', 'truncation_psi', type=float, help='Truncation psi', default=0.8, show_default=True)
-@click.option('--noise-mode', help='Noise mode', type=click.Choice(['const', 'random', 'none']), default='const', show_default=True)
-@click.option('--outdir', type=str, required=True, default='outputs/stylemixing')
-def generate_style_mix(
- network_pkl: str,
- row_seeds: List[int],
- col_seeds: List[int],
- col_styles: List[int],
- truncation_psi: float,
- noise_mode: str,
- outdir: str
-):
-
- print('Loading networks from "%s"...' % network_pkl)
- device = torch.device('cuda')
- with dnnlib.util.open_url(network_pkl) as f:
- G = legacy.load_network_pkl(f)['G_ema'].to(device)
-
- os.makedirs(outdir, exist_ok=True)
-
- print('Generating W vectors...')
- all_seeds = list(set(row_seeds + col_seeds))
- all_z = np.stack([np.random.RandomState(seed).randn(G.z_dim)
- for seed in all_seeds])
- all_w = G.mapping(torch.from_numpy(all_z).to(device), None)
- w_avg = G.mapping.w_avg
- all_w = w_avg + (all_w - w_avg) * truncation_psi
- w_dict = {seed: w for seed, w in zip(all_seeds, list(all_w))}
-
- print('Generating images...')
- all_images = G.synthesis(all_w, noise_mode=noise_mode)
- all_images = (all_images.permute(0, 2, 3, 1) * 127.5 +
- 128).clamp(0, 255).to(torch.uint8).cpu().numpy()
- image_dict = {(seed, seed): image for seed,
- image in zip(all_seeds, list(all_images))}
-
- print('Generating style-mixed images...')
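-    # For each (row, col) pair, copy the selected style layers from the column seed's W into the row seed's W before synthesis.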
- for row_seed in row_seeds:
- for col_seed in col_seeds:
- w = w_dict[row_seed].clone()
- w[col_styles] = w_dict[col_seed][col_styles]
- image = G.synthesis(w[np.newaxis], noise_mode=noise_mode)
- image = (image.permute(0, 2, 3, 1) * 127.5 +
- 128).clamp(0, 255).to(torch.uint8)
- image_dict[(row_seed, col_seed)] = image[0].cpu().numpy()
-
- os.makedirs(outdir, exist_ok=True)
- # print('Saving images...')
- # for (row_seed, col_seed), image in image_dict.items():
- # PIL.Image.fromarray(image, 'RGB').save(f'{outdir}/{row_seed}-{col_seed}.png')
-
- print('Saving image grid...')
- W = G.img_resolution // 2
- H = G.img_resolution
- canvas = PIL.Image.new(
- 'RGB', (W * (len(col_seeds) + 1), H * (len(row_seeds) + 1)), 'black')
- for row_idx, row_seed in enumerate([0] + row_seeds):
- for col_idx, col_seed in enumerate([0] + col_seeds):
- if row_idx == 0 and col_idx == 0:
- continue
- key = (row_seed, col_seed)
- if row_idx == 0:
- key = (col_seed, col_seed)
- if col_idx == 0:
- key = (row_seed, row_seed)
- canvas.paste(PIL.Image.fromarray(
- image_dict[key], 'RGB'), (W * col_idx, H * row_idx))
- canvas.save(f'{outdir}/grid.png')
-
-
-# ----------------------------------------------------------------------------
-
-if __name__ == "__main__":
- generate_style_mix() # pylint: disable=no-value-for-parameter
-
-# ----------------------------------------------------------------------------
diff --git a/spaces/h2oai/wave-tour/examples/canvas.py b/spaces/h2oai/wave-tour/examples/canvas.py
deleted file mode 100644
index e13b25f0a24223c07939e385e068f5750da33eb3..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/canvas.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Canvas
-# A card that displays a freeform drawing canvas.
-# A canvas card can synchronize its state with other canvas cards at the same URL.
-# Open `/demo` in multiple browsers and watch them synchronize in realtime.
-# #collaboration
-# ---
-from h2o_wave import site, data, ui
-
-page = site['/demo']
-page.drop()
-
-page.add('example', ui.canvas_card(
- box='1 1 4 7',
- title='Sample Canvas',
- width=500,
- height=500,
- data=dict(),
-))
-page.save()
diff --git a/spaces/h2oai/wave-tour/examples/table_pagination.py b/spaces/h2oai/wave-tour/examples/table_pagination.py
deleted file mode 100644
index e036809c1f4373bc4503a830e6ee7367c601284d..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/table_pagination.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Table / Pagination
-# Use a paginated #table to display large (100k+ rows) tabular data.
-# #form #table #pagination
-# ---
-
-import os
-from typing import Dict, List
-from h2o_wave import main, app, Q, ui
-from copy import deepcopy
-import csv
-
-
-# Create a dummy data blueprint.
-class Issue:
- def __init__(self, text: str, status: str):
- self.text = text
- self.status = status
-
-
-all_rows = [Issue(text=i + 1, status=('Closed' if i % 2 == 0 else 'Open')) for i in range(100)]
-rows_per_page = 10
-total_rows = len(all_rows)
-
-
-def get_rows(base: List, sort: Dict[str, bool] = None, search: Dict = None, filters: Dict[str, List[str]] = None) -> List:
- # Make a deep copy in order to not mutate the original `all_issues` which serves as our baseline.
- rows = deepcopy(base)
-
- # Sort by multiple columns.
- if sort:
- for col, reverse in sort.items():
- rows.sort(key=lambda i: getattr(i, col), reverse=reverse)
- # Filter out all rows that do not contain searched string.
- if search:
- search_val = search['value'].lower()
- cols = search['cols']
- rows = [row for row in rows if any(search_val in str(getattr(row, col)).lower() for col in cols)]
- # Filter out rows that do not contain filtered column value.
- if filters:
-        for col, col_filters in filters.items():
-            rows = [row for row in rows if not col_filters or any(f in getattr(row, col) for f in col_filters)]
-
- return rows
-
-
-@app('/demo')
-async def serve(q: Q):
- if not q.client.initialized:
- q.page['meta'] = ui.meta_card(box='')
- q.page['form'] = ui.form_card(box='1 1 -1 -1', items=[
- ui.table(
- name='table',
- columns=[
- ui.table_column(name='text', label='Text', sortable=True, searchable=True, link=False),
- ui.table_column(name='status', label='Status', filterable=True, filters=['Open', 'Closed']),
- ],
- rows=[ui.table_row(str(r.text), [str(r.text), r.status]) for r in get_rows(all_rows)[0:rows_per_page]],
- resettable=True,
- downloadable=True,
- pagination=ui.table_pagination(total_rows=len(all_rows), rows_per_page=rows_per_page),
- # Make sure to register the necessary events for the feature you want to support, e.g. sorting.
- # All the registered events have to be handled by the developer.
- # `page_change` event is required to be handled for pagination to work.
- events=['sort', 'filter', 'search', 'page_change', 'download', 'reset']
- )
- ])
- q.client.initialized = True
-
- # Check if user triggered any table action and save it to local state for allowing multiple
- # actions to be performed on the data at the same time, e.g. sort the filtered data etc.
- if q.events.table:
- table = q.page['form'].table
- if q.events.table.sort:
- q.client.sort = q.events.table.sort
- q.client.page_offset = 0
- if q.events.table.filter:
- q.client.filters = q.events.table.filter
- q.client.page_offset = 0
- if q.events.table.search is not None:
- q.client.search = q.events.table.search
- q.client.page_offset = 0
- if q.events.table.page_change:
- q.client.page_offset = q.events.table.page_change.get('offset', 0)
- if q.events.table.reset:
- q.client.search = None
- q.client.sort = None
- q.client.filters = None
- q.client.page_offset = 0
- table.pagination = ui.table_pagination(total_rows, rows_per_page)
-
- rows = get_rows(all_rows, q.client.sort, q.client.search, q.client.filters)
- offset = q.client.page_offset or 0
- table.rows = [ui.table_row(str(r.text), [str(r.text), r.status]) for r in rows[offset: offset + rows_per_page]]
-
- # Update table pagination according to the new row count.
- if q.client.search is not None or q.client.filters:
- table.pagination = ui.table_pagination(len(rows), rows_per_page)
-
- if q.events.table.download:
- # For multi-user apps, the tmp file name should be unique for each user, not hardcoded.
- with open('data_download.csv', 'w') as csvfile:
- csv_writer = csv.writer(csvfile, delimiter=',')
- for r in rows:
- csv_writer.writerow([r.text, r.status])
- download_url, = await q.site.upload(['data_download.csv'])
- # Clean up the file after upload.
- os.remove('data_download.csv')
- q.page['meta'].script = ui.inline_script(f'window.open("{download_url}")')
-
- await q.page.save()
diff --git a/spaces/hahahafofo/image2text_prompt_generator/utils/html.py b/spaces/hahahafofo/image2text_prompt_generator/utils/html.py
deleted file mode 100644
index b0edb1ae05b25f21b6e71756361acfc5a7c7efcb..0000000000000000000000000000000000000000
--- a/spaces/hahahafofo/image2text_prompt_generator/utils/html.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import html
-
-
-def plaintext_to_html(text):
- text = (
- "
" + " \n".join([f"{html.escape(x)}" for x in text.split("\n")]) + "
"
- )
- return text
diff --git a/spaces/hanstyle/tts/evaluation/gen_videos_from_filelist.py b/spaces/hanstyle/tts/evaluation/gen_videos_from_filelist.py
deleted file mode 100644
index bd666b93258d3da3143a63da742265ebeac2a8a3..0000000000000000000000000000000000000000
--- a/spaces/hanstyle/tts/evaluation/gen_videos_from_filelist.py
+++ /dev/null
@@ -1,238 +0,0 @@
-from os import listdir, path
-import numpy as np
-import scipy, cv2, os, sys, argparse
-import dlib, json, subprocess
-from tqdm import tqdm
-from glob import glob
-import torch
-
-sys.path.append('../')
-import audio
-import face_detection
-from models import Wav2Lip
-
-parser = argparse.ArgumentParser(description='Code to generate results for test filelists')
-
-parser.add_argument('--filelist', type=str,
- help='Filepath of filelist file to read', required=True)
-parser.add_argument('--results_dir', type=str, help='Folder to save all results into',
- required=True)
-parser.add_argument('--data_root', type=str, required=True)
-parser.add_argument('--checkpoint_path', type=str,
- help='Name of saved checkpoint to load weights from', required=True)
-
-parser.add_argument('--pads', nargs='+', type=int, default=[0, 0, 0, 0],
- help='Padding (top, bottom, left, right)')
-parser.add_argument('--face_det_batch_size', type=int,
- help='Single GPU batch size for face detection', default=64)
-parser.add_argument('--wav2lip_batch_size', type=int, help='Batch size for Wav2Lip', default=128)
-
-# parser.add_argument('--resize_factor', default=1, type=int)
-
-args = parser.parse_args()
-args.img_size = 96
-
-def get_smoothened_boxes(boxes, T):
- for i in range(len(boxes)):
- if i + T > len(boxes):
- window = boxes[len(boxes) - T:]
- else:
- window = boxes[i : i + T]
- boxes[i] = np.mean(window, axis=0)
- return boxes
-
-def face_detect(images):
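- # Detect faces in batches (halving the batch size on GPU OOM), then return smoothed face crops with their bounding-box coordinates.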
- batch_size = args.face_det_batch_size
-
- while 1:
- predictions = []
- try:
- for i in range(0, len(images), batch_size):
- predictions.extend(detector.get_detections_for_batch(np.array(images[i:i + batch_size])))
- except RuntimeError:
- if batch_size == 1:
- raise RuntimeError('Image too big to run face detection on GPU')
- batch_size //= 2
- args.face_det_batch_size = batch_size
- print('Recovering from OOM error; New batch size: {}'.format(batch_size))
- continue
- break
-
- results = []
- pady1, pady2, padx1, padx2 = args.pads
- for rect, image in zip(predictions, images):
- if rect is None:
- raise ValueError('Face not detected!')
-
- y1 = max(0, rect[1] - pady1)
- y2 = min(image.shape[0], rect[3] + pady2)
- x1 = max(0, rect[0] - padx1)
- x2 = min(image.shape[1], rect[2] + padx2)
-
- results.append([x1, y1, x2, y2])
-
- boxes = get_smoothened_boxes(np.array(results), T=5)
- results = [[image[y1: y2, x1:x2], (y1, y2, x1, x2), True] for image, (x1, y1, x2, y2) in zip(images, boxes)]
-
- return results
-
-def datagen(frames, face_det_results, mels):
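- # Yield batches of (masked + reference face crops, mel chunks, original frames, face coordinates) for Wav2Lip inference.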
- img_batch, mel_batch, frame_batch, coords_batch = [], [], [], []
-
- for i, m in enumerate(mels):
- if i >= len(frames): raise ValueError('Equal or less lengths only')
-
- frame_to_save = frames[i].copy()
- face, coords, valid_frame = face_det_results[i].copy()
- if not valid_frame:
- continue
-
- face = cv2.resize(face, (args.img_size, args.img_size))
-
- img_batch.append(face)
- mel_batch.append(m)
- frame_batch.append(frame_to_save)
- coords_batch.append(coords)
-
- if len(img_batch) >= args.wav2lip_batch_size:
- img_batch, mel_batch = np.asarray(img_batch), np.asarray(mel_batch)
-
- img_masked = img_batch.copy()
- img_masked[:, args.img_size//2:] = 0
-
- img_batch = np.concatenate((img_masked, img_batch), axis=3) / 255.
- mel_batch = np.reshape(mel_batch, [len(mel_batch), mel_batch.shape[1], mel_batch.shape[2], 1])
-
- yield img_batch, mel_batch, frame_batch, coords_batch
- img_batch, mel_batch, frame_batch, coords_batch = [], [], [], []
-
- if len(img_batch) > 0:
- img_batch, mel_batch = np.asarray(img_batch), np.asarray(mel_batch)
-
- img_masked = img_batch.copy()
- img_masked[:, args.img_size//2:] = 0
-
- img_batch = np.concatenate((img_masked, img_batch), axis=3) / 255.
- mel_batch = np.reshape(mel_batch, [len(mel_batch), mel_batch.shape[1], mel_batch.shape[2], 1])
-
- yield img_batch, mel_batch, frame_batch, coords_batch
-
-fps = 25
-mel_step_size = 16
-mel_idx_multiplier = 80./fps
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-print('Using {} for inference.'.format(device))
-
-detector = face_detection.FaceAlignment(face_detection.LandmarksType._2D,
- flip_input=False, device=device)
-
-def _load(checkpoint_path):
- if device == 'cuda':
- checkpoint = torch.load(checkpoint_path)
- else:
- checkpoint = torch.load(checkpoint_path,
- map_location=lambda storage, loc: storage)
- return checkpoint
-
-def load_model(path):
- model = Wav2Lip()
- print("Load checkpoint from: {}".format(path))
- checkpoint = _load(path)
- s = checkpoint["state_dict"]
- new_s = {}
- for k, v in s.items():
- new_s[k.replace('module.', '')] = v
- model.load_state_dict(new_s)
-
- model = model.to(device)
- return model.eval()
-
-model = load_model(args.checkpoint_path)
-
-def main():
- assert args.data_root is not None
- data_root = args.data_root
-
- if not os.path.isdir(args.results_dir): os.makedirs(args.results_dir)
-
- with open(args.filelist, 'r') as filelist:
- lines = filelist.readlines()
-
- for idx, line in enumerate(tqdm(lines)):
- audio_src, video = line.strip().split()
-
- audio_src = os.path.join(data_root, audio_src) + '.mp4'
- video = os.path.join(data_root, video) + '.mp4'
-
- command = 'ffmpeg -loglevel panic -y -i {} -strict -2 {}'.format(audio_src, '../temp/temp.wav')
- subprocess.call(command, shell=True)
- temp_audio = '../temp/temp.wav'
-
- wav = audio.load_wav(temp_audio, 16000)
- mel = audio.melspectrogram(wav)
- if np.isnan(mel.reshape(-1)).sum() > 0:
- continue
-
- mel_chunks = []
- i = 0
- while 1:
- start_idx = int(i * mel_idx_multiplier)
- if start_idx + mel_step_size > len(mel[0]):
- break
- mel_chunks.append(mel[:, start_idx : start_idx + mel_step_size])
- i += 1
-
- video_stream = cv2.VideoCapture(video)
-
- full_frames = []
- while 1:
- still_reading, frame = video_stream.read()
- if not still_reading or len(full_frames) > len(mel_chunks):
- video_stream.release()
- break
- full_frames.append(frame)
-
- if len(full_frames) < len(mel_chunks):
- continue
-
- full_frames = full_frames[:len(mel_chunks)]
-
- try:
- face_det_results = face_detect(full_frames.copy())
- except ValueError as e:
- continue
-
- batch_size = args.wav2lip_batch_size
- gen = datagen(full_frames.copy(), face_det_results, mel_chunks)
-
- for i, (img_batch, mel_batch, frames, coords) in enumerate(gen):
- if i == 0:
- frame_h, frame_w = full_frames[0].shape[:-1]
- out = cv2.VideoWriter('../temp/result.avi',
- cv2.VideoWriter_fourcc(*'DIVX'), fps, (frame_w, frame_h))
-
- img_batch = torch.FloatTensor(np.transpose(img_batch, (0, 3, 1, 2))).to(device)
- mel_batch = torch.FloatTensor(np.transpose(mel_batch, (0, 3, 1, 2))).to(device)
-
- with torch.no_grad():
- pred = model(mel_batch, img_batch)
-
-
- pred = pred.cpu().numpy().transpose(0, 2, 3, 1) * 255.
-
- for pl, f, c in zip(pred, frames, coords):
- y1, y2, x1, x2 = c
- pl = cv2.resize(pl.astype(np.uint8), (x2 - x1, y2 - y1))
- f[y1:y2, x1:x2] = pl
- out.write(f)
-
- out.release()
-
- vid = os.path.join(args.results_dir, '{}.mp4'.format(idx))
-
- command = 'ffmpeg -loglevel panic -y -i {} -i {} -strict -2 -q:v 1 {}'.format(temp_audio,
- '../temp/result.avi', vid)
- subprocess.call(command, shell=True)
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/hardon-server/dalle-mini/index.html b/spaces/hardon-server/dalle-mini/index.html
deleted file mode 100644
index 6882e87e6effaebbed14fb56d0f47b2a2214417d..0000000000000000000000000000000000000000
--- a/spaces/hardon-server/dalle-mini/index.html
+++ /dev/null
@@ -1,243 +0,0 @@
-
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/data/datasets/builtin.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/data/datasets/builtin.py
deleted file mode 100644
index e70f3d3e006d1801dcfb743c9c21b46ca54a3053..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/data/datasets/builtin.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from .coco import BASE_DATASETS as BASE_COCO_DATASETS
-from .coco import DATASETS as COCO_DATASETS
-from .coco import register_datasets as register_coco_datasets
-
-DEFAULT_DATASETS_ROOT = "data"
-
-
-register_coco_datasets(COCO_DATASETS, DEFAULT_DATASETS_ROOT)
-register_coco_datasets(BASE_COCO_DATASETS, DEFAULT_DATASETS_ROOT)
diff --git a/spaces/hasibzunair/fifa-tryon-demo/util/image_pool.py b/spaces/hasibzunair/fifa-tryon-demo/util/image_pool.py
deleted file mode 100644
index deeb7cca923330113afa0f460723b9d86e321c08..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/util/image_pool.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import random
-import torch
-from torch.autograd import Variable
-
-
-class ImagePool():
- def __init__(self, pool_size):
- self.pool_size = pool_size
- if self.pool_size > 0:
- self.num_imgs = 0
- self.images = []
-
- def query(self, images):
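-        # Once the buffer is full, with probability 0.5 return a previously stored image and replace it with the new one; this exposes the discriminator to a history of generated images.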
- if self.pool_size == 0:
- return images
- return_images = []
- for image in images.data:
- image = torch.unsqueeze(image, 0)
- if self.num_imgs < self.pool_size:
- self.num_imgs = self.num_imgs + 1
- self.images.append(image)
- return_images.append(image)
- else:
- p = random.uniform(0, 1)
- if p > 0.5:
- random_id = random.randint(0, self.pool_size-1)
- tmp = self.images[random_id].clone()
- self.images[random_id] = image
- return_images.append(tmp)
- else:
- return_images.append(image)
- return_images = Variable(torch.cat(return_images, 0))
- return return_images
diff --git a/spaces/hf-vision/detection_metrics/detection_metrics/pycocotools/mask.py b/spaces/hf-vision/detection_metrics/detection_metrics/pycocotools/mask.py
deleted file mode 100644
index f39db4fe729e2cc9c3372edaca6c40fd21525db2..0000000000000000000000000000000000000000
--- a/spaces/hf-vision/detection_metrics/detection_metrics/pycocotools/mask.py
+++ /dev/null
@@ -1,103 +0,0 @@
-__author__ = 'tsungyi'
-
-from detection_metrics.pycocotools import _mask
-
-# Interface for manipulating masks stored in RLE format.
-#
-# RLE is a simple yet efficient format for storing binary masks. RLE
-# first divides a vector (or vectorized image) into a series of piecewise
-# constant regions and then for each piece simply stores the length of
-# that piece. For example, given M=[0 0 1 1 1 0 1] the RLE counts would
-# be [2 3 1 1], or for M=[1 1 1 1 1 1 0] the counts would be [0 6 1]
-# (note that the odd counts are always the numbers of zeros). Instead of
-# storing the counts directly, additional compression is achieved with a
-# variable bitrate representation based on a common scheme called LEB128.
-#
-# Compression is greatest given large piecewise constant regions.
-# Specifically, the size of the RLE is proportional to the number of
-# *boundaries* in M (or for an image the number of boundaries in the y
-# direction). Assuming fairly simple shapes, the RLE representation is
-# O(sqrt(n)) where n is number of pixels in the object. Hence space usage
-# is substantially lower, especially for large simple objects (large n).
-#
-# Many common operations on masks can be computed directly using the RLE
-# (without need for decoding). This includes computations such as area,
-# union, intersection, etc. All of these operations are linear in the
-# size of the RLE, in other words they are O(sqrt(n)) where n is the area
-# of the object. Computing these operations on the original mask is O(n).
-# Thus, using the RLE can result in substantial computational savings.
-#
-# The following API functions are defined:
-# encode - Encode binary masks using RLE.
-# decode - Decode binary masks encoded via RLE.
-# merge - Compute union or intersection of encoded masks.
-# iou - Compute intersection over union between masks.
-# area - Compute area of encoded masks.
-# toBbox - Get bounding boxes surrounding encoded masks.
-# frPyObjects - Convert polygon, bbox, and uncompressed RLE to encoded RLE mask.
-#
-# Usage:
-# Rs = encode( masks )
-# masks = decode( Rs )
-# R = merge( Rs, intersect=false )
-# o = iou( dt, gt, iscrowd )
-# a = area( Rs )
-# bbs = toBbox( Rs )
-# Rs = frPyObjects( [pyObjects], h, w )
-#
-# In the API the following formats are used:
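-# A minimal round-trip sketch (assumes numpy is available and the compiled
-# _mask extension is importable; names follow the API above):
-#   import numpy as np
-#   M = np.asfortranarray(np.array([[0, 0, 1, 1, 1, 0, 1]], dtype=np.uint8).T)  # 7x1 column-major mask
-#   rle = encode(M)                  # RLE dict with 'size' and compressed 'counts'
-#   assert area(rle) == 4            # number of foreground pixels
-#   assert (decode(rle) == M).all()  # lossless round trip
-#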
-# Rs - [dict] Run-length encoding of binary masks
-# R - dict Run-length encoding of binary mask
-# masks - [hxwxn] Binary mask(s) (must have type np.ndarray(dtype=uint8) in column-major order)
-# iscrowd - [nx1] list of np.ndarray. 1 indicates corresponding gt image has crowd region to ignore
-# bbs - [nx4] Bounding box(es) stored as [x y w h]
-# poly - Polygon stored as [[x1 y1 x2 y2...],[x1 y1 ...],...] (2D list)
-# dt,gt - May be either bounding boxes or encoded masks
-# Both poly and bbs are 0-indexed (bbox=[0 0 1 1] encloses first pixel).
-#
-# Finally, a note about the intersection over union (iou) computation.
-# The standard iou of a ground truth (gt) and detected (dt) object is
-# iou(gt,dt) = area(intersect(gt,dt)) / area(union(gt,dt))
-# For "crowd" regions, we use a modified criteria. If a gt object is
-# marked as "iscrowd", we allow a dt to match any subregion of the gt.
-# Choosing gt' in the crowd gt that best matches the dt can be done using
-# gt'=intersect(dt,gt). Since by definition union(gt',dt)=dt, computing
-# iou(gt,dt,iscrowd) = iou(gt',dt) = area(intersect(gt,dt)) / area(dt)
-# For crowd gt regions we use this modified criteria above for the iou.
-#
-# To compile run "python setup.py build_ext --inplace"
-# Please do not contact us for help with compiling.
-#
-# Microsoft COCO Toolbox. version 2.0
-# Data, paper, and tutorials available at: http://mscoco.org/
-# Code written by Piotr Dollar and Tsung-Yi Lin, 2015.
-# Licensed under the Simplified BSD License [see coco/license.txt]
-
-iou = _mask.iou
-merge = _mask.merge
-frPyObjects = _mask.frPyObjects
-
-def encode(bimask):
- if len(bimask.shape) == 3:
- return _mask.encode(bimask)
- elif len(bimask.shape) == 2:
- h, w = bimask.shape
- return _mask.encode(bimask.reshape((h, w, 1), order='F'))[0]
-
-def decode(rleObjs):
- if type(rleObjs) == list:
- return _mask.decode(rleObjs)
- else:
- return _mask.decode([rleObjs])[:,:,0]
-
-def area(rleObjs):
- if type(rleObjs) == list:
- return _mask.area(rleObjs)
- else:
- return _mask.area([rleObjs])[0]
-
-def toBbox(rleObjs):
- if type(rleObjs) == list:
- return _mask.toBbox(rleObjs)
- else:
- return _mask.toBbox([rleObjs])[0]
\ No newline at end of file
diff --git a/spaces/hiraltalsaniya/YOLOv7_face_mask/README.md b/spaces/hiraltalsaniya/YOLOv7_face_mask/README.md
deleted file mode 100644
index 5886a43fad9f61a3c90ff2b47036167e57aed5be..0000000000000000000000000000000000000000
--- a/spaces/hiraltalsaniya/YOLOv7_face_mask/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: YOLOv7 Face Mask
-emoji: 🐠
-colorFrom: gray
-colorTo: red
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hololee/dreambooth-training/app.py b/spaces/hololee/dreambooth-training/app.py
deleted file mode 100644
index f7d90f7250ccac1b7d250062b6d3348124acdf4e..0000000000000000000000000000000000000000
--- a/spaces/hololee/dreambooth-training/app.py
+++ /dev/null
@@ -1,687 +0,0 @@
-from subprocess import getoutput
-import os
-
-gpu_info = getoutput('nvidia-smi')
-if("A10G" in gpu_info):
- which_gpu = "A10G"
- os.system(f"pip install --no-deps xformers==0.0.16rc425")
-elif("T4" in gpu_info):
- which_gpu = "T4"
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
-else:
- which_gpu = "CPU"
-
-import gradio as gr
-from pathlib import Path
-import argparse
-import shutil
-from train_dreambooth import run_training
-from convertosd import convert
-from PIL import Image
-from slugify import slugify
-import requests
-import torch
-import zipfile
-import tarfile
-import urllib.parse
-import gc
-from diffusers import StableDiffusionPipeline
-from huggingface_hub import snapshot_download, update_repo_visibility, HfApi
-
-is_spaces = True if "SPACE_ID" in os.environ else False
-if(is_spaces):
- is_shared_ui = True if "multimodalart/dreambooth-training" in os.environ['SPACE_ID'] else False
-else:
- is_shared_ui = False
-is_gpu_associated = torch.cuda.is_available()
-
-os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
-
-if(is_gpu_associated):
- model_v1 = snapshot_download(repo_id="multimodalart/sd-fine-tunable")
- model_v2 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1", ignore_patterns=["*.ckpt", "*.safetensors"])
- model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1-base", ignore_patterns=["*.ckpt", "*.safetensors"])
- safety_checker = snapshot_download(repo_id="multimodalart/sd-sc")
- model_to_load = model_v1
-
-def swap_base_model(selected_model):
- if(is_gpu_associated):
- global model_to_load
- if(selected_model == "v1-5"):
- model_to_load = model_v1
- elif(selected_model == "v2-1-768"):
- model_to_load = model_v2
- else:
- model_to_load = model_v2_512
-
-
-
-css = '''
- .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important}
- .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important}
- #component-4, #component-3, #component-10{min-height: 0}
- .duplicate-button img{margin: 0}
-'''
-maximum_concepts = 3
-
-def swap_text(option, base):
- resize_width = 768 if base == "v2-1-768" else 512
- mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:"
- if(option == "object"):
- instance_prompt_example = "cttoy"
- freeze_for = 30
- return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, gr.update(visible=False)]
- elif(option == "person"):
- instance_prompt_example = "julcto"
- freeze_for = 70
- #show_prior_preservation = True if base != "v2-1-768" else False
- show_prior_preservation=False
- if(show_prior_preservation):
- prior_preservation_box_update = gr.update(visible=show_prior_preservation)
- else:
- prior_preservation_box_update = gr.update(visible=show_prior_preservation, value=False)
- return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, prior_preservation_box_update]
- elif(option == "style"):
- instance_prompt_example = "trsldamrl"
- freeze_for = 10
- return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. You can use services like birme for smart cropping. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}", freeze_for, gr.update(visible=False)]
-
-def count_files(*inputs):
- file_counter = 0
- concept_counter = 0
- for i, input in enumerate(inputs):
- if(i < maximum_concepts):
- files = inputs[i]
- if(files):
- concept_counter+=1
- file_counter+=len(files)
- uses_custom = inputs[-1]
- type_of_thing = inputs[-4]
- selected_model = inputs[-5]
- experimental_faces = inputs[-6]
- if(uses_custom):
- Training_Steps = int(inputs[-3])
- else:
- Training_Steps = file_counter*150
- if(type_of_thing == "person" and Training_Steps > 2400):
- Training_Steps = 2400 #Avoid overfitting on person faces
- if(is_spaces):
- if(selected_model == "v1-5"):
- its = 1.1 if which_gpu == "T4" else 1.8
- if(experimental_faces):
- its = 1
- elif(selected_model == "v2-1-512"):
- its = 0.8 if which_gpu == "T4" else 1.5
- if(experimental_faces):
- its = 0.7
- elif(selected_model == "v2-1-768"):
- its = 0.48 if which_gpu == "T4" else 0.85
-
- gpu_price = 0.60 if which_gpu == "T4" else 1.10
- summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. The training should take around {round(Training_Steps/its, 2)} seconds, or {round((Training_Steps/its)/60, 2)} minutes.
- The setup, compression and uploading the model can take up to 20 minutes. As the {which_gpu}-Small GPU costs US${gpu_price} for 1h, the estimated cost for this training is below US${round((((Training_Steps/its)/3600)+0.3+0.1)*gpu_price, 2)}.
- If you check the box below the GPU attribution will be automatically removed after training is done and the model is uploaded. If not, don't forget to come back here and swap the hardware back to CPU.'''
- else:
- summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.'''
-
- return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)])
-
-def update_steps(*files_list):
- file_counter = 0
- for i, files in enumerate(files_list):
- if(files):
- file_counter+=len(files)
- return(gr.update(value=file_counter*200))
-
-def visualise_progress_bar():
- return gr.update(visible=True)
-
-def pad_image(image):
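-    # Pad the shorter side with black, centering the original, so the image becomes square.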
- w, h = image.size
- if w == h:
- return image
- elif w > h:
- new_image = Image.new(image.mode, (w, w), (0, 0, 0))
- new_image.paste(image, (0, (w - h) // 2))
- return new_image
- else:
- new_image = Image.new(image.mode, (h, h), (0, 0, 0))
- new_image.paste(image, ((h - w) // 2, 0))
- return new_image
-
-def validate_model_upload(hf_token, model_name):
- if(hf_token != ''):
- api = HfApi()
- try:
- _ = api.whoami(hf_token)
- except:
- raise gr.Error("You have inserted an invalid Hugging Face token")
- try:
- if(is_spaces):
- update_repo_visibility(repo_id=os.environ['SPACE_ID'], private=True, token=hf_token, repo_type="space")
- except:
- raise gr.Error("Oops, you created a Hugging Face token with read permissions only. You need one with write permissions")
- else:
- raise gr.Error("Please insert a Hugging Face Token (make sure to create it with write permissions)")
- if(model_name == ""):
- raise gr.Error("Please fill in your model's name")
-
-def swap_hardware(hf_token, hardware="cpu-basic"):
- hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware"
- headers = { "authorization" : f"Bearer {hf_token}"}
- body = {'flavor': hardware}
- requests.post(hardware_url, json = body, headers=headers)
-
-def swap_sleep_time(hf_token,sleep_time):
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}/sleeptime"
- headers = { "authorization" : f"Bearer {hf_token}"}
- body = {'seconds':sleep_time}
- requests.post(sleep_time_url,json=body,headers=headers)
-
-def get_sleep_time(hf_token):
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}"
- headers = { "authorization" : f"Bearer {hf_token}"}
- response = requests.get(sleep_time_url,headers=headers)
- try:
- gcTimeout = response.json()['runtime']['gcTimeout']
- except:
- gcTimeout = None
- return gcTimeout
-
-def write_to_community(title, description,hf_token):
- from huggingface_hub import HfApi
- api = HfApi()
- api.create_discussion(repo_id=os.environ['SPACE_ID'], title=title, description=description,repo_type="space", token=hf_token)
-
-def train(progress=gr.Progress(track_tqdm=True), *inputs):
- which_model = inputs[-10]
- if(which_model == ""):
- raise gr.Error("You forgot to select a base model to use")
-
- if is_shared_ui:
- raise gr.Error("This Space only works in duplicated instances")
- if not is_gpu_associated:
- raise gr.Error("Please associate a T4 or A10G GPU for this Space")
- hf_token = inputs[-5]
- model_name = inputs[-7]
- if(is_spaces):
- sleep_time = get_sleep_time(hf_token)
- if sleep_time:
- swap_sleep_time(hf_token, -1)
- remove_attribution_after = inputs[-6]
- else:
- remove_attribution_after = False
-
- if(remove_attribution_after):
- validate_model_upload(hf_token, model_name)
-
- torch.cuda.empty_cache()
- if 'pipe' in globals():
- global pipe, pipe_is_set
- del pipe
- pipe_is_set = False
- gc.collect()
-
- if os.path.exists("output_model"): shutil.rmtree('output_model')
- if os.path.exists("instance_images"): shutil.rmtree('instance_images')
- if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar")
- if os.path.exists("model.ckpt"): os.remove("model.ckpt")
- if os.path.exists("hastrained.success"): os.remove("hastrained.success")
- file_counter = 0
- resolution = 512 if which_model != "v2-1-768" else 768
- for i, input in enumerate(inputs):
- if(i < maximum_concepts-1):
- if(input):
- os.makedirs('instance_images',exist_ok=True)
- files = inputs[i+(maximum_concepts*2)]
- prompt = inputs[i+maximum_concepts]
- if(prompt == "" or prompt == None):
- raise gr.Error("You forgot to define your concept prompt")
- for j, file_temp in enumerate(files):
- file = Image.open(file_temp.name)
- image = pad_image(file)
- image = image.resize((resolution, resolution))
- extension = file_temp.name.split(".")[1]
- image = image.convert('RGB')
- image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100)
- file_counter += 1
-
- os.makedirs('output_model',exist_ok=True)
- uses_custom = inputs[-1]
- type_of_thing = inputs[-4]
- experimental_face_improvement = inputs[-9]
-
- if(uses_custom):
- Training_Steps = int(inputs[-3])
- Train_text_encoder_for = int(inputs[-2])
- else:
- if(type_of_thing == "object"):
- Train_text_encoder_for=30
-
- elif(type_of_thing == "style"):
- Train_text_encoder_for=15
-
- elif(type_of_thing == "person"):
- Train_text_encoder_for=70
-
- Training_Steps = file_counter*150
- if(type_of_thing == "person" and Training_Steps > 2600):
- Training_Steps = 2600 #Avoid overfitting on people's faces
- stptxt = int((Training_Steps*Train_text_encoder_for)/100)
- gradient_checkpointing = True if (experimental_face_improvement or which_model != "v1-5") else False
- cache_latents = True if which_model != "v1-5" else False
- if (type_of_thing == "object" or type_of_thing == "style" or (type_of_thing == "person" and not experimental_face_improvement)):
- args_general = argparse.Namespace(
- image_captions_filename = True,
- train_text_encoder = True if stptxt > 0 else False,
- stop_text_encoder_training = stptxt,
- save_n_steps = 0,
- pretrained_model_name_or_path = model_to_load,
- instance_data_dir="instance_images",
- class_data_dir=None,
- output_dir="output_model",
- instance_prompt="",
- seed=42,
- resolution=resolution,
- mixed_precision="fp16",
- train_batch_size=1,
- gradient_accumulation_steps=1,
- use_8bit_adam=True,
- learning_rate=2e-6,
- lr_scheduler="polynomial",
- lr_warmup_steps = 0,
- max_train_steps=Training_Steps,
- gradient_checkpointing=gradient_checkpointing,
- cache_latents=cache_latents,
- )
- print("Starting single training...")
- lock_file = open("intraining.lock", "w")
- lock_file.close()
- try:
- run_training(args_general)
- except Exception as e:
- if(is_spaces):
- title="There was an error on during your training"
- description=f'''
- Unfortunately there was an error during training your {model_name} model.
- Please check it out below. Feel free to report this issue to [Dreambooth Training](https://huggingface.co/spaces/multimodalart/dreambooth-training):
- ```
- {str(e)}
- ```
- '''
- swap_hardware(hf_token, "cpu-basic")
- write_to_community(title,description,hf_token)
-
-
- gc.collect()
- torch.cuda.empty_cache()
- if(which_model == "v1-5"):
- print("Adding Safety Checker to the model...")
- shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor", dirs_exist_ok=True)
- shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker", dirs_exist_ok=True)
- shutil.copy(f"model_index.json", "output_model/model_index.json")
-
- if(not remove_attribution_after):
- swap_sleep_time(hf_token, sleep_time)
- print("Archiving model file...")
- with tarfile.open("diffusers_model.tar", "w") as tar:
- tar.add("output_model", arcname=os.path.basename("output_model"))
- if os.path.exists("intraining.lock"): os.remove("intraining.lock")
- trained_file = open("hastrained.success", "w")
- trained_file.close()
- print("Training completed!")
- return [
- gr.update(visible=False), #progress_bar
- gr.update(visible=True, value=["diffusers_model.tar"]), #result
- gr.update(visible=True), #try_your_model
- gr.update(visible=True), #push_to_hub
- gr.update(visible=True), #convert_button
- gr.update(visible=False), #training_ongoing
- gr.update(visible=True) #completed_training
- ]
- else:
- where_to_upload = inputs[-8]
- push(model_name, where_to_upload, hf_token, which_model, True)
- swap_hardware(hf_token, "cpu-basic")
-
-pipe_is_set = False
-def generate(prompt, steps):
- torch.cuda.empty_cache()
- from diffusers import StableDiffusionPipeline
- global pipe_is_set
- if(not pipe_is_set):
- global pipe
- pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16)
- pipe = pipe.to("cuda")
- pipe_is_set = True
-
- image = pipe(prompt, num_inference_steps=steps).images[0]
- return(image)
-
-def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False):
- validate_model_upload(hf_token, model_name)
- if(not os.path.exists("model.ckpt")):
- convert("output_model", "model.ckpt")
- from huggingface_hub import HfApi, HfFolder, CommitOperationAdd
- from huggingface_hub import create_repo
- model_name_slug = slugify(model_name)
- api = HfApi()
- your_username = api.whoami(token=hf_token)["name"]
- if(where_to_upload == "My personal profile"):
- model_id = f"{your_username}/{model_name_slug}"
- else:
- model_id = f"sd-dreambooth-library/{model_name_slug}"
- headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"}
- response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers)
-
- print(f"Starting to upload the model {model_id}...")
- images_upload = os.listdir("instance_images")
- image_string = ""
- instance_prompt_list = []
- previous_instance_prompt = ''
- for i, image in enumerate(images_upload):
- instance_prompt = image.split("_")[0]
- if(instance_prompt != previous_instance_prompt):
- title_instance_prompt_string = instance_prompt
- instance_prompt_list.append(instance_prompt)
- else:
- title_instance_prompt_string = ''
- previous_instance_prompt = instance_prompt
- image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""}
-{image_string}'''
- readme_text = f'''---
-license: creativeml-openrail-m
-tags:
-- text-to-image
-widget:
-- text: {instance_prompt_list[0]}
----
-### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model
-
-You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
-
-Sample pictures of:
-{image_string}
-'''
- #Save the readme to a file
- readme_file = open("model.README.md", "w")
- readme_file.write(readme_text)
- readme_file.close()
- #Save the token identifier to a file
- text_file = open("token_identifier.txt", "w")
- text_file.write(', '.join(instance_prompt_list))
- text_file.close()
- try:
- create_repo(model_id, private=True, token=hf_token)
- except:
- # The repo likely exists already: fall back to a timestamped name and keep using it below
- import time
- epoch_time = str(int(time.time()))
- model_id = f"{model_id}-{epoch_time}"
- create_repo(model_id, private=True, token=hf_token)
- operations = [
- CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"),
- CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"),
- CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt")
- ]
- api.create_commit(
- repo_id=model_id,
- operations=operations,
- commit_message=f"Upload the model {model_name}",
- token=hf_token
- )
- api.upload_folder(
- folder_path="output_model",
- repo_id=model_id,
- token=hf_token
- )
- api.upload_folder(
- folder_path="instance_images",
- path_in_repo="concept_images",
- repo_id=model_id,
- token=hf_token
- )
- if is_spaces:
- if(not comes_from_automated):
- extra_message = "Don't forget to remove the GPU attribution after you play with it."
- else:
- extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page"
- title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!"
- description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}"
- write_to_community(title, description, hf_token)
- #api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token)
- print("Model uploaded successfully!")
- return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])]
-
-def convert_to_ckpt():
- if 'pipe' in globals():
- global pipe, pipe_is_set
- del pipe
- pipe_is_set = False
- gc.collect()
- convert("output_model", "model.ckpt")
- return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])
-
-def check_status(top_description):
- if os.path.exists("hastrained.success"):
- if is_spaces:
- update_top_tag = gr.update(value=f'''
- ## Your model has finished training ✅
- Yay, congratulations on training your model. Scroll down to play with it and save it (either by downloading it or by pushing it to the Hugging Face Hub). Once you are done, your model is safe; if you don't want to train a new one, go to the settings page and downgrade your Space to a CPU Basic.
- You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above to check the training status. Once training is done, reload this tab to interact with your model.
- ## Attention - This Space doesn't work in this shared UI
- For it to work, you can either run it locally or duplicate the Space and run it on your own profile using a (paid) private T4-small or A10G-small GPU for training. A T4 costs US$0.60/h, so it should cost < US$1 to train most models using the default settings!
- ## You have successfully cloned the Dreambooth Training Space locally 🎉
- Do a `pip install -r requirements-local.txt`
- ''')
- gr.Markdown("# Dreambooth Training UI 💭")
- gr.Markdown("Customize Stable Diffusion v1 or v2 (ⁿᵉʷ!) by giving it a few examples of a concept. Based on the [🧨 diffusers](https://github.com/huggingface/diffusers) implementation, additional techniques from [TheLastBen](https://github.com/TheLastBen/diffusers) and [ShivamShrirao](https://github.com/ShivamShrirao/diffusers)")
-
- with gr.Row() as what_are_you_training:
- type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True)
- with gr.Column():
- base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-5", "v2-1-512", "v2-1-768"], value="v1-5", interactive=True)
-
- #Very hacky approach to emulate dynamically created Gradio components
- with gr.Row() as upload_your_concept:
- with gr.Column():
- thing_description = gr.Markdown("You are going to train an `object`, please upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, example")
- thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - can take longer training but can improve faces", visible=False, value=False)
- thing_image_example = gr.HTML('''''')
- things_naming = gr.Markdown("You should name your concept with a unique made-up word that has a low chance of already being known to the model (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.")
-
- with gr.Column():
- file_collection = []
- concept_collection = []
- buttons_collection = []
- delete_collection = []
- is_visible = []
-
- row = [None] * maximum_concepts
- for x in range(maximum_concepts):
- # 1 -> "1st", 2 -> "2nd", 3 -> "3rd", 4 -> "4th", 11 -> "11th", 21 -> "21st", ...
- ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4])
- if(x == 0):
- visible = True
- is_visible.append(gr.State(value=True))
- else:
- visible = False
- is_visible.append(gr.State(value=False))
-
- file_collection.append(gr.File(file_types=["image"], label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible))
- with gr.Column(visible=visible) as row[x]:
- concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept prompt - use a unique, made up word to avoid collisions'''))
- with gr.Row():
- if(x < maximum_concepts-1):
- buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible))
- if(x > 0):
- delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept"))
-
- counter_add = 1
- for button in buttons_collection:
- if(counter_add < len(buttons_collection)):
- button.click(lambda:
- [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None],
- None,
- [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False)
- else:
- button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False)
- counter_add += 1
-
- counter_delete = 1
- for delete_button in delete_collection:
- if(counter_delete < len(delete_collection)+1):
- delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False)
- counter_delete += 1
-
- with gr.Accordion("Custom Settings", open=False):
- swap_auto_calculated = gr.Checkbox(label="Use custom settings")
- gr.Markdown("If not checked, the % of frozen encoder will be tuned automatically to whether you are training an `object`, `person` or `style`. The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and 75% trained for persons. The number of steps varies between 1400 and 2400 depending on how many images uploaded. If you see too many artifacts in your output, it means it may have overfit and you need less steps. If your results aren't really what you wanted, it may be underfitting and you need more steps.")
- steps = gr.Number(label="How many steps", value=2400)
- perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30)
-
- with gr.Box(visible=False) as training_summary:
- training_summary_text = gr.HTML("", visible=True, label="Training Summary")
- is_advanced_visible = True if is_spaces else False
- training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=True, visible=is_advanced_visible)
- training_summary_model_name = gr.Textbox(label="Name of your model", visible=True)
- training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], value="My personal profile", label="Upload to", visible=True)
- training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=True)
- training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=True)
-
- train_btn = gr.Button("Start Training")
- progress_bar = gr.Textbox(visible=False)
- if(is_shared_ui):
- training_ongoing = gr.Markdown("## This Space only works in duplicated instances. Please duplicate it and try again!", visible=False)
- elif(not is_gpu_associated):
- training_ongoing = gr.Markdown("## Oops, you haven't associated your T4 or A10G GPU to this Space. Visit the Settings tab, associate and try again.", visible=False)
- else:
- training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU After training`, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done. ", visible=False)
-
-
- #Post-training UI
- completed_training = gr.Markdown('''# ✅ Training completed.
- ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False)
-
- with gr.Row():
- with gr.Box(visible=False) as try_your_model:
- gr.Markdown("## Try your model")
- prompt = gr.Textbox(label="Type your prompt")
- result_image = gr.Image()
- inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1)
- generate_button = gr.Button("Generate Image")
-
- with gr.Box(visible=False) as push_to_hub:
- gr.Markdown("## Push to Hugging Face Hub")
- model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style")
- where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to")
- gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.")
- hf_token = gr.Textbox(label="Hugging Face Write Token", type="password")
-
- push_button = gr.Button("Push to the Hub")
-
- result = gr.File(label="Download the uploaded models in the diffusers format", visible=True)
- success_message_upload = gr.Markdown(visible=False)
- convert_button = gr.Button("Convert to CKPT", visible=False)
-
- #Swap the examples and the % of text encoder trained depending if it is an object, person or style
- type_of_thing.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
-
- #Swap the base model
-
- base_model_to_use.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
- #base_model_to_use.change(fn=visualise_progress_bar, inputs=[], outputs=progress_bar)
- base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[])
- #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not
- for file in file_collection:
- #file.change(fn=update_steps,inputs=file_collection, outputs=steps)
- file.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
-
- thing_experimental.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
- base_model_to_use.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
- steps.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
- perc_txt_encoder.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
-
- #Give more options if the user wants to finish everything after training
- if(is_spaces):
- training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False)
- #Add a message for while it is in training
-
- #train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing)
-
- #The main train function
- train_btn.click(lambda:gr.update(visible=True), inputs=[], outputs=progress_bar)
- train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[progress_bar, result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False)
-
- #Button to generate an image from your trained model after training
- generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False)
- #Button to push the model to the Hugging Face Hub
- push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False)
- #Button to convert the model to ckpt format
- convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False)
-
- #Checks if the training is running
- demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False)
-
-demo.queue(default_enabled=False).launch(debug=True)
\ No newline at end of file
diff --git a/spaces/hylee/apdrawing/APDrawingGAN2/util/image_pool.py b/spaces/hylee/apdrawing/APDrawingGAN2/util/image_pool.py
deleted file mode 100644
index 52413e0f8a45a8c8511bf103d3aabd537fac97b9..0000000000000000000000000000000000000000
--- a/spaces/hylee/apdrawing/APDrawingGAN2/util/image_pool.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import random
-import torch
-
-
-class ImagePool():
- def __init__(self, pool_size):
- self.pool_size = pool_size
- if self.pool_size > 0:
- self.num_imgs = 0
- self.images = []
-
- def query(self, images):
- if self.pool_size == 0:
- return images
- return_images = []
- for image in images:
- image = torch.unsqueeze(image.data, 0)
- if self.num_imgs < self.pool_size:
- self.num_imgs = self.num_imgs + 1
- self.images.append(image)
- return_images.append(image)
- else:
- p = random.uniform(0, 1)
- if p > 0.5:
- random_id = random.randint(0, self.pool_size - 1) # randint is inclusive
- tmp = self.images[random_id].clone()
- self.images[random_id] = image
- return_images.append(tmp)
- else:
- return_images.append(image)
- return_images = torch.cat(return_images, 0)
- return return_images
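
For context, `ImagePool` is the image buffer used in pix2pix/CycleGAN-style training: with 50% probability `query` returns the incoming fake image, otherwise it swaps it with an older one from the pool, so the discriminator also sees stale generator outputs. A minimal usage sketch (the pool size and tensor shapes are illustrative assumptions):

```python
import torch

pool = ImagePool(pool_size=50)       # illustrative pool size

for step in range(3):
    fake_images = torch.randn(4, 3, 256, 256)   # stand-in for generator output (N, C, H, W)
    mixed = pool.query(fake_images)              # mix of current and previously generated fakes
    # `mixed` is what gets fed to the discriminator instead of fake_images
    print(step, mixed.shape)                     # torch.Size([4, 3, 256, 256])
```
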
diff --git a/spaces/hysts/DDNM-HQ/style.css b/spaces/hysts/DDNM-HQ/style.css
deleted file mode 100644
index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000
--- a/spaces/hysts/DDNM-HQ/style.css
+++ /dev/null
@@ -1,3 +0,0 @@
-h1 {
- text-align: center;
-}
diff --git a/spaces/hysts/Kandinsky-2-1/app.py b/spaces/hysts/Kandinsky-2-1/app.py
deleted file mode 100644
index 7dd489cfc466c261d54a58265befffe3747de3f1..0000000000000000000000000000000000000000
--- a/spaces/hysts/Kandinsky-2-1/app.py
+++ /dev/null
@@ -1,202 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-import random
-
-import gradio as gr
-import numpy as np
-import PIL.Image
-import spaces
-import torch
-from diffusers import DDPMScheduler, DiffusionPipeline
-
-DESCRIPTION = "# Kandinsky 2.1"
-if not torch.cuda.is_available():
- DESCRIPTION += "\n
-
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Auto Keyboard Murgee Full Crack Kid.md b/spaces/inreVtussa/clothingai/Examples/Auto Keyboard Murgee Full Crack Kid.md
deleted file mode 100644
index 04248370b9e42a61a476bc9f26cf7858044291f5..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Auto Keyboard Murgee Full Crack Kid.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Murgee Auto Clicker Crack is used for the automatically clicking of the Left Mouse Button by the usage of the Keyboard Shortcut. Users can ... 4d29de3e1b
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Bangalore Days 1080p Movie Download.md b/spaces/inreVtussa/clothingai/Examples/Bangalore Days 1080p Movie Download.md
deleted file mode 100644
index cee16f2d5368f7f1b8e3fa93315e9d249aa97888..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Bangalore Days 1080p Movie Download.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
-cracker Activation Code:- 10372602/10372603/10372604/10372605/10372606/10372607/10372608/10372609/10372610/10372611/10372612/10372613/10372614/10372615/10372616/10372617/10372618/10372619/10372620/10372621/10372622/10372623/10372624/10372625/10372626/10372627/10372628/10372629/10372630/10372631/10372632/10372633/10372634/10372635/10372636/10372637/10372638/10372639/10372640/10372641/10372642/10372643/10372644/10372645/10372646/10372647/10372648/10372649/10372650/10372651/10372652/10372653/10372654/10372655/10372656/10372657/10372658/10372659/10372660/10372661/10372662/10372663/10372664/10372665/10372666/10372667/10372668/10372669/10372670/10372671/10372672/10372673/10372674/10372675/10372676/10372677/10372678/10372679/10372680/10372681/10372682/10372683/10372684/10372685/10372686/10372687/10372688/10372689/10372690/10372691/10372692/10372693/10372694/10372695/10372696/10372697/10372698/10372699/10372700/10372701/10372702/10 4fefd39f24
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/CorelDraw X4 [torrent By R0bY90] Working TOP Keygen.md b/spaces/lincquiQcaudo/Top-20-Diffusion/CorelDraw X4 [torrent By R0bY90] Working TOP Keygen.md
deleted file mode 100644
index 1ba4cde7aa51f83f4826b94dd8c3df7b2c8344fa..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/CorelDraw X4 [torrent By R0bY90] Working TOP Keygen.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-A keygen CorelDRAW x4 RAR often looks like an ordinary archive, ... Use all the ... CorelDraw X4 [torrent By R0bY90] Working Keygen … CorelDraw X4 [torrent ... 1fdad05405
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Counter-Strike Global Offensive V1.33.1.0 AutoUpdate Download For Computer !LINK!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Counter-Strike Global Offensive V1.33.1.0 AutoUpdate Download For Computer !LINK!.md
deleted file mode 100644
index 9b7d0408a573ab2788384692b45d7329b25f25c7..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Counter-Strike Global Offensive V1.33.1.0 AutoUpdate Download For Computer !LINK!.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
How to Download and Install Counter-Strike: Global Offensive V1.33.1.0 AutoUpdate for PC
-
-
Counter-Strike: Global Offensive (CS:GO) is one of the most popular and competitive first-person shooter games in the world. It features team-based action gameplay with various maps and game modes, as well as weapon customization and a vibrant multiplayer community. If you want to experience the thrill of CS:GO on your PC, you need to download and install the latest version of the game, which is V1.33.1.0 AutoUpdate.
-
Counter-Strike Global Offensive V1.33.1.0 AutoUpdate Download For Computer
In this article, we will show you how to download and install CS:GO V1.33.1.0 AutoUpdate for PC using two methods: torrent or launcher. Both methods are easy and fast, and will allow you to enjoy the game in no time.
-
-
Method 1: Torrent
-
-
A torrent is a file that contains information about other files that are distributed over a peer-to-peer network. You can use a torrent client, such as uTorrent or BitTorrent, to download the files you want from other users who have them. This way, you can download large files faster and more efficiently.
-
-
To download CS:GO V1.33.1.0 AutoUpdate for PC using torrent, follow these steps:
Click on the "Download Now" button and save the torrent file to your computer.
-
Open the torrent file with your torrent client and choose a location to save the game files.
-
Wait for the download to finish. It may take some time depending on your internet speed and the number of seeders (users who have the complete file and are sharing it).
-
Once the download is complete, open the folder where you saved the game files and run the setup.exe file.
-
Follow the instructions on the screen to install the game on your PC.
-
Launch the game from your desktop or start menu and enjoy!
-
-
-
Method 2: Launcher
-
-
A launcher is a program that allows you to download and install games directly from their official sources. You can use a launcher, such as Steam or SE7EN.ws, to download and install CS:GO V1.33.1.0 AutoUpdate for PC without using a torrent.
-
-
-
To download CS:GO V1.33.1.0 AutoUpdate for PC using launcher, follow these steps:
You will be redirected to a page where you can choose between Steam or SE7EN.ws launcher.
-
If you choose Steam, you will need to have a Steam account and install Steam on your PC. If you don't have one, you can create one for free at https://store.steampowered.com/join/. Once you have Steam installed, log in with your account and click on "Install Game" to download and install CS:GO on your PC.
-
If you choose SE7EN.ws, you will need to download and install their launcher on your PC. Once you have it installed, run it and click on "Install Game" to download and install CS:GO on your PC.
-
Wait for the download and installation to finish. It may take some time depending on your internet speed and the server availability.
-
Once the installation is complete, launch the game from your desktop or start menu and enjoy!
-
-
-
Conclusion
-
-
Counter-Strike: Global Offensive V1.33.1.0 AutoUpdate is the latest version of the game that offers improved performance, stability, and security
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Download The Odyssey Full Movie 1997 176 !!BETTER!!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Download The Odyssey Full Movie 1997 176 !!BETTER!!.md
deleted file mode 100644
index 7cd1135579c1fd4e2d5ecb6d7a27804382f40d7a..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Download The Odyssey Full Movie 1997 176 !!BETTER!!.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-
How to Download The Odyssey Full Movie 1997 176
-
The Odyssey is a 1997 TV miniseries based on Homer's epic poem of the same name. It stars Armand Assante as Odysseus, the Greek hero who faces many trials and adventures on his way back home after the Trojan War. The miniseries also features Greta Scacchi, Isabella Rossellini, Vanessa Williams, and Christopher Lee.
If you want to watch this classic adaptation of The Odyssey, you might be wondering how to download it online. There are many websites that claim to offer free downloads of The Odyssey full movie 1997 176, but not all of them are safe or legal. Some of them might contain viruses, malware, or spyware that can harm your device or compromise your privacy. Others might require you to sign up for a subscription or pay a fee to access the download link.
-
To avoid these risks and enjoy The Odyssey full movie 1997 176 without any hassle, we recommend you to use a reliable and reputable streaming service that has the rights to stream or download this miniseries. One of such services is Netflix, which offers a wide range of movies and shows for a monthly fee. You can watch The Odyssey full movie 1997 176 on Netflix by following these steps:
-
-
Create an account on Netflix if you don't have one already. You can choose from different plans depending on your preferences and budget.
-
Log in to your Netflix account and search for The Odyssey in the search bar.
-
Select The Odyssey from the results and click on the play button to start streaming it online.
-
If you want to download The Odyssey full movie 1997 176 for offline viewing, you can do so by clicking on the download icon next to the play button. This will save the miniseries to your device's storage so you can watch it anytime without an internet connection.
-
-
That's it! You can now enjoy The Odyssey full movie 1997 176 on your device with Netflix. Remember to respect the copyright laws and only download or stream content from authorized sources.
-
-
The Odyssey full movie 1997 176 is a faithful adaptation of Homer's epic poem, which is considered one of the greatest works of literature in history. The miniseries covers the entire story of Odysseus and his journey home, from the fall of Troy to the reunion with his wife Penelope and son Telemachus. Along the way, he encounters many mythical creatures and gods, such as the Cyclops, the Sirens, the Lotus-eaters, Circe, Calypso, Poseidon, Athena, and Zeus.
-
-
The Odyssey full movie 1997 176 is not only a thrilling adventure, but also a profound exploration of human nature, morality, loyalty, courage, love, and fate. It shows how Odysseus struggles to overcome his flaws and temptations, and how he grows as a leader and a hero. It also depicts the challenges and hardships that his family and friends face in his absence, and how they cope with the uncertainty and danger of his return.
-
The Odyssey full movie 1997 176 is a masterpiece of storytelling that will captivate you from start to finish. It has received critical acclaim and numerous awards, including four Emmys and a Golden Globe. It has also been praised for its stunning cinematography, costumes, music, and special effects. The cast delivers outstanding performances that bring the characters to life with emotion and authenticity.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Khelar Putul Bangla Movie Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Khelar Putul Bangla Movie Download.md
deleted file mode 100644
index ccc061ee4d5a52dbf7258740434733daa2c16753..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Khelar Putul Bangla Movie Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
- );
-}
diff --git a/spaces/manhngolibo/manhngo/README.md b/spaces/manhngolibo/manhngo/README.md
deleted file mode 100644
index dbde2df8d37c74e14759e30e58c607e4869bb5bd..0000000000000000000000000000000000000000
--- a/spaces/manhngolibo/manhngo/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Manhngo
-emoji: 😻
-colorFrom: yellow
-colorTo: gray
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/matthoffner/starchat-ggml/dialogue.py b/spaces/matthoffner/starchat-ggml/dialogue.py
deleted file mode 100644
index a43d54709b64fefea446aa92e6ab33d1e697d1ea..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/starchat-ggml/dialogue.py
+++ /dev/null
@@ -1,241 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import json
-import os
-from dataclasses import asdict, dataclass
-from pathlib import Path
-from typing import Any, Dict, List, Optional, Type, TypeVar, Union
-
-from huggingface_hub import ModelHubMixin, hf_hub_download
-
-# Generic variable that is either ModelHubMixin or a subclass thereof
-T = TypeVar("T", bound="ModelHubMixin")
-
-TEMPLATE_FILENAME = "dialogue_template.json"
-IGNORE_INDEX = -100
-
-
-@dataclass
-class DialogueTemplate(ModelHubMixin):
- """Converts all turns of a dialogue between a user and assistant to a standardized format.
-
- Adapted from OpenAI's ChatML (https://github.com/openai/openai-python/blob/main/chatml.md) and Vicuna (https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py)
- """
-
- system: str
- messages: List[Dict[str, str]] = None
- system_token: str = "<|system|>"
- user_token: str = "<|user|>"
- assistant_token: str = "<|assistant|>"
- end_token: str = "<|end|>"
-
- def get_training_prompt(self) -> str:
- prompt = self.system_token + "\n" + self.system + self.end_token + "\n"
- if self.messages is None:
- raise ValueError("Dialogue template must have at least one message.")
- for message in self.messages:
- if message["role"] == "user":
- prompt += self.user_token + "\n" + message["content"] + self.end_token + "\n"
- else:
- prompt += self.assistant_token + "\n" + message["content"] + self.end_token + "\n"
- return prompt
-
- def get_inference_prompt(self) -> str:
- prompt = self.system_token + "\n" + self.system + self.end_token + "\n"
- if self.messages is None:
- raise ValueError("Dialogue template must have at least one message.")
- for message in self.messages:
- if message["role"] == "user":
- prompt += self.user_token + "\n" + message["content"] + self.end_token + "\n"
- else:
- prompt += self.assistant_token + "\n" + message["content"] + self.end_token + "\n"
- prompt += self.assistant_token
- return prompt
-
- def get_dialogue(self):
- """Helper function to format the messages as an easy-to-read dialogue."""
- prompt = ""
- if self.messages is None:
- raise ValueError("Dialogue template must have at least one message.")
- for message in self.messages:
- if message["role"] == "user":
- prompt += "\n\nHuman: " + message["content"]
- else:
- prompt += "\n\nAssistant: " + message["content"]
- return prompt
-
- def get_special_tokens(self) -> List[str]:
- return [self.system_token, self.user_token, self.assistant_token, self.end_token]
-
- def copy(self):
- return DialogueTemplate(
- system=self.system,
- messages=self.messages,
- system_token=self.system_token,
- user_token=self.user_token,
- assistant_token=self.assistant_token,
- end_token=self.end_token,
- )
-
- def to_dict(self) -> Dict[str, Any]:
- return {k: v for k, v in asdict(self).items()}
-
- @classmethod
- def from_dict(cls, data):
- return DialogueTemplate(
- system=data["system"] if "system" in data else "",
- messages=data["messages"] if "messages" in data else None,
- system_token=data["system_token"] if "system_token" in data else "<|system|>",
- user_token=data["user_token"] if "user_token" in data else "<|user|>",
- assistant_token=data["assistant_token"] if "assistant_token" in data else "<|assistant|>",
- end_token=data["end_token"] if "end_token" in data else "<|end|>",
- )
-
- def _save_pretrained(self, save_directory: Union[str, Path]) -> None:
- save_directory = Path(save_directory)
- save_directory.mkdir(exist_ok=True)
- with open(save_directory / "dialogue_template.json", "w") as f:
- json.dump(self.to_dict(), f, indent=2)
-
- @classmethod
- def _from_pretrained(
- cls: Type[T],
- *,
- model_id: str,
- revision: Optional[str],
- cache_dir: Optional[Union[str, Path]],
- force_download: bool,
- proxies: Optional[Dict],
- resume_download: bool,
- local_files_only: bool,
- token: Optional[Union[str, bool]],
- **model_kwargs,
- ) -> T:
- """Loads the dialogue template from a local directory or the Huggingface Hub.
-
- Args:
- model_id (`str`):
- ID of the model to load from the Huggingface Hub (e.g. `bigscience/bloom`).
- revision (`str`, *optional*):
- Revision of the model on the Hub. Can be a branch name, a git tag or any commit id. Defaults to the
- latest commit on `main` branch.
- force_download (`bool`, *optional*, defaults to `False`):
- Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding
- the existing cache.
- resume_download (`bool`, *optional*, defaults to `False`):
- Whether to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (`Dict[str, str]`, *optional*):
- A dictionary of proxy servers to use by protocol or endpoint (e.g., `{'http': 'foo.bar:3128',
- 'http://hostname': 'foo.bar:4012'}`).
- token (`str` or `bool`, *optional*):
- The token to use as HTTP bearer authorization for remote files. By default, it will use the token
- cached when running `huggingface-cli login`.
- cache_dir (`str`, `Path`, *optional*):
- Path to the folder where cached files are stored.
- local_files_only (`bool`, *optional*, defaults to `False`):
- If `True`, avoid downloading the file and return the path to the local cached file if it exists.
- model_kwargs:
- Additional keyword arguments passed along to the [`~ModelHubMixin._from_pretrained`] method.
- """
- if os.path.isdir(model_id): # Can either be a local directory
- print("Loading dialogue template from local directory")
- template_file = os.path.join(model_id, TEMPLATE_FILENAME)
- else: # Or a template on the Hub
- template_file = hf_hub_download( # Download from the hub, passing same input args
- repo_id=model_id,
- filename=TEMPLATE_FILENAME,
- revision=revision,
- cache_dir=cache_dir,
- force_download=force_download,
- proxies=proxies,
- resume_download=resume_download,
- token=token,
- local_files_only=local_files_only,
- )
-
- # Load template
- with open(template_file, "r") as f:
- data = json.load(f)
- return cls.from_dict(data=data)
-
-
-# A shortened version of the system message in Anthropic's HHH prompt: https://gist.github.com/jareddk/2509330f8ef3d787fc5aaac67aab5f11#file-hhh_prompt-txt
-default_template = DialogueTemplate(
- system="Below is a dialogue between a human user and an AI assistant. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed.",
-)
-
-# OpenAI and OpenAssistant train on few to no system messages.
-# TODO: consider defining this as the `default` template
-no_system_template = DialogueTemplate(
- system="",
-)
-
-alpaca_template = DialogueTemplate(
- system="Below is an instruction that describes a task. Write a response that appropriately completes the request.",
- user_token="### Instruction:",
- assistant_token="### Response:",
-)
-
-SUPPORTED_DIALOGUE_TEMPLATES = {
- "default": default_template,
- "no_system": no_system_template,
- "alpaca": alpaca_template,
-}
-
-
-def get_dialogue_template(template: str) -> DialogueTemplate:
- if template not in SUPPORTED_DIALOGUE_TEMPLATES.keys():
- raise ValueError(f"Template {template} is not supported!")
- return SUPPORTED_DIALOGUE_TEMPLATES[template].copy()
-
-
-def prepare_dialogue(example, dialogue_template, is_train=True):
- """Format example to single- or multi-turn dialogue."""
- # TODO: make this simpler by just ensuring every dataset has a messages column
- if "messages" in example.keys() and example["messages"] is not None:
- dialogue_template.messages = example["messages"]
- elif all(k in example.keys() for k in ("prompt", "completion")):
- # Construct single-turn dialogue from prompt and completion
- dialogue_template.messages = [
- {"role": "user", "content": example["prompt"]},
- {"role": "assistant", "content": example["completion"]},
- ]
- elif "prompt" in example.keys():
- # Construct single-turn dialogue from prompt (inference only)
- dialogue_template.messages = [
- {"role": "user", "content": example["prompt"]},
- ]
- else:
- raise ValueError(
- f"Could not format example as dialogue! Require either `messages` or `[prompt, completion]` or `[prompt]` keys but found {list(example.keys())}"
- )
- if is_train:
- example["text"] = dialogue_template.get_training_prompt()
- else:
- example["text"] = dialogue_template.get_inference_prompt()
- return example
-
-
-def mask_user_labels(tokenizer, dialogue_template, labels):
- """Masks the user turns of a dialogue from the loss"""
- user_token_id = tokenizer.convert_tokens_to_ids(dialogue_template.user_token)
- assistant_token_id = tokenizer.convert_tokens_to_ids(dialogue_template.assistant_token)
- for idx, label_id in enumerate(labels):
- if label_id == user_token_id:
- current_idx = idx
- while labels[current_idx] != assistant_token_id and current_idx < len(labels):
- labels[current_idx] = IGNORE_INDEX
- current_idx += 1
\ No newline at end of file
diff --git a/spaces/mattricesound/RemFx/scripts/download_ckpts.sh b/spaces/mattricesound/RemFx/scripts/download_ckpts.sh
deleted file mode 100644
index 8c05a9fd44a0bc500cb2c2624b97bf46eac7d20d..0000000000000000000000000000000000000000
--- a/spaces/mattricesound/RemFx/scripts/download_ckpts.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#! /bin/bash
-
-# make ckpts directory if not exist
-mkdir -p ckpts
-
-# download ckpts and save to ckpts directory
-wget https://zenodo.org/record/8218621/files/classifier.ckpt?download=1 -O ckpts/classifier.ckpt
-wget https://zenodo.org/record/8218621/files/dcunet_chorus_aug.ckpt?download=1 -O ckpts/dcunet_chorus_aug.ckpt
-wget https://zenodo.org/record/8218621/files/dcunet_delay_aug.ckpt?download=1 -O ckpts/dcunet_delay_aug.ckpt
-wget https://zenodo.org/record/8218621/files/dcunet_reverb_aug.ckpt?download=1 -O ckpts/dcunet_reverb_aug.ckpt
-wget https://zenodo.org/record/8218621/files/demucs_compressor_aug.ckpt?download=1 -O ckpts/demucs_compressor_aug.ckpt
-wget https://zenodo.org/record/8218621/files/demucs_distortion_aug.ckpt?download=1 -O ckpts/demucs_distortion_aug.ckpt
\ No newline at end of file
diff --git a/spaces/merve/data-leak/source/measuring-fairness/init.js b/spaces/merve/data-leak/source/measuring-fairness/init.js
deleted file mode 100644
index 5a8df63793d90464eb148443787eb91e2b34180b..0000000000000000000000000000000000000000
--- a/spaces/merve/data-leak/source/measuring-fairness/init.js
+++ /dev/null
@@ -1,200 +0,0 @@
-/* Copyright 2020 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-
-
-
-d3.select('body').selectAppend('div.tooltip.tooltip-hidden')
-
-nCols = 12
-
-window.colors = {
- well: d3.color('#669399') + '',
- sick: d3.color('#EE2A2A') + '',
-
- // well: d3.color('green') + '',
- // sick: d3.color('purple'),
-
- // well: d3.color('#e9a3c9') + '',
- // sick: d3.color('#a1d76a'),
-
- // well: d3.color('#e9a3c9') + '',
- // sick: d3.color('#a1d76a'),
-
- // well: d3.color('#e9a3c9') + '',
- // sick: d3.color('#a1d76a'),
-
- // well: d3.color('#865327') + '',
- // sick: d3.color('#012394'),
-
- // well: d3.color('#012394') + '',
- // sick: d3.color('#FBC20F') + '',
-
- // well: d3.color('#012394') + '',
- // sick: d3.color('#E71E24') + '',
-
- // well: d3.color('#A9159C') + '',
- // sick: d3.color('#E71E24') + '',
-
- // well: d3.color('#A9159C') + '',
- // sick: d3.color('#012394') + '',
-
- // well: d3.color('orange') + '',
- // sick: d3.color('#012394') + '',
-
-
-}
-
-window.colors = {
- well: d3.interpolate(colors.well, '#fff')(.5),
- sick: d3.interpolate(colors.sick, '#fff')(.2),
-}
-
-window.lcolors = {
- well: d3.interpolate(colors.well, '#fff')(.5),
- sick: d3.interpolate(colors.sick, '#fff')(.35)
-}
-window.llcolors = {
- well: d3.interpolate(colors.well, '#fff')(.5),
- sick: d3.interpolate(colors.sick, '#fff')(1)
-}
-window.dcolors = {
- well: d3.interpolate(colors.well, '#000')(.65),
- sick: d3.interpolate(colors.sick, '#000')(.65)
-}
-
-// window.colors = {
-// well: d3.color('#BEF5FF') + '',
-// sick: d3.color('#FCC5C3') + '',
-// }
-
-// window.colors = {
-// well: d3.color('#669399') + '',
-// sick: d3.color('#EE2A2A') + '',
-// }
-
-// window.lcolors = {
-// well: d3.interpolate(colors.well, '#fff')(.3),
-// sick: d3.interpolate(colors.sick, '#fff')(.3)
-// }
-// window.llcolors = {
-// well: d3.interpolate(colors.well, '#fff')(.2),
-// sick: d3.interpolate(colors.sick, '#fff')(.2)
-// }
-
-// window.lcolors = {
-// well: '#CFFCF6',
-// sick: '#FFBD96'
-// }
-
-// copy(logColors())
-function logColors(){
- return `
- body{
- --colors-well: ${d3.rgb(colors.well)};
- --colors-sick: ${d3.rgb(colors.sick)};
- --lcolors-well: ${d3.rgb(lcolors.well)};
- --lcolors-sick: ${d3.rgb(lcolors.sick)};
- --dcolors-well: ${d3.rgb(dcolors.well)};
- --dcolors-sick: ${d3.rgb(dcolors.sick)};
- }
- `
-}
-
-
-
-window.init = function(){
- console.clear()
-
- graphSel = d3.select('#graph').html('').append('div')
- totalWidth = graphSel.node().offsetWidth
- totalWidth = 400
-
- c = d3.conventions({
- sel: graphSel.st({marginTop: 40}),
- margin: {top: 20},
- totalWidth,
- totalHeight: totalWidth,
- })
-
- students = makeStudents()
- sel = makeSel()
- mini = makeMini()
- slider = makeSlider()
- slides = makeSlides()
- gs = makeGS()
-
- function sizeGraphSel(){
- var scale = (totalWidth + 35)/(innerWidth - 10) // off by one, s is 35
- scale = d3.clamp(1, scale, 2)
-
- graphSel.st({
- transform: `scale(${1/scale})`,
- transformOrigin: '0px 0px',
-
- })
- }
- sizeGraphSel()
- d3.select(window).on('resize', sizeGraphSel)
-
-}
-init()
-
-
-
-
-
-!(function(){
- var footnums = '¹²³'
-
- d3.selectAll('.footstart').each(function(d, i){
- d3.select(this)
- .at({
- href: '#footend-' + i,
- })
- .text(footnums[i])
- .parent().at({id: 'footstart-' + i})
- })
-
- d3.selectAll('.footend').each(function(d, i){
- d3.select(this)
- .at({
- href: '#footstart-' + i,
- id: 'footend-' + i,
- })
- .text(footnums[i])
- })
-
-
- d3.selectAll('#sections wee, #graph .weepeople').attr('aria-hidden', true)
-
-})()
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/merve/measuring-fairness/source/_posts/2020-09-27-diversity-metrics.md b/spaces/merve/measuring-fairness/source/_posts/2020-09-27-diversity-metrics.md
deleted file mode 100644
index 4c84423fe9a6f8566a0b7182bc378feec97d9654..0000000000000000000000000000000000000000
--- a/spaces/merve/measuring-fairness/source/_posts/2020-09-27-diversity-metrics.md
+++ /dev/null
@@ -1,100 +0,0 @@
----
-template: post.html
-title: Measuring Diversity
-titlex: Diversity and Inclusion Metrics
-summary: Search results that reflect historic inequities can amplify stereotypes and perpetuate under-representation. Carefully measuring diversity in data sets can help.
-shareimg: https://pair.withgoogle.com/explorables/images/measuring-diversity.png
-permalink: /measuring-diversity/
-date: 2021-03-01
----
-
-
-
-Search, ranking and recommendation systems can help find useful documents in large datasets. However, these datasets reflect the biases of the society in which they were created and the systems risk re-entrenching those biases. For example, if someone who is not a white man searches for "CEO pictures" and sees a [page of white men](https://www.nytimes.com/interactive/2018/04/24/upshot/women-and-men-named-john.html), they may feel that only white men can be CEOs, further perpetuating lack of representation at companies' executive levels.
-
-Using the careful quantification outlined in a recent paper, [Diversity and Inclusion Metrics in Subset Selection](https://arxiv.org/pdf/2002.03256.pdf), we can quantify biases and push these systems to return a wider range of results.
-
-The mathematics of all this is a little easier to follow with abstract shapes. Let's take a look at some of them:
-
-
-
-Suppose we want to return about 30% green boxes to reflect the distribution of some larger universe of shapes. Try clicking on the shapes below to select some of them — can you find a better subset to return?
-
-
-
-Another diversity metric we care about is the percentage of dots... how close to 35% dots can you get?
-
-
-
-If we can only return a single subset, how should we consider multiple diversity metrics? Sometimes it isn't possible to reduce the difference of every metric to zero. One natural approach: find the selection with the **lowest mean difference** across all the metrics to get as close as possible to all the targets.
-
-In other circumstances, like picking a panel of speakers, avoiding badly representing any single category might be more important. This can be done by finding the subset with the **lowest max difference**. Try minimizing both below:
-
-
-
-Notice that minimizing the mean results in a different subset than minimizing the max; how else might using one over the other change the results?
-
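-To make the two aggregation rules concrete, here is a small Python sketch that scores a candidate subset against the targets discussed above; the 30%/35%/50% targets and the four-shape subset are illustrative assumptions, not the article's exact data:

```python
# Score one candidate subset against diversity targets, as described above.
# The targets and the example subset are illustrative, not the article's exact data.
targets = {"green": 0.30, "dots": 0.35, "small": 0.50}

subset = [  # each selected shape, described by its attributes
    {"green": True,  "dots": False, "small": True},
    {"green": False, "dots": True,  "small": False},
    {"green": False, "dots": True,  "small": True},
    {"green": True,  "dots": False, "small": False},
]

def differences(subset, targets):
    """Absolute gap between each attribute's share of the subset and its target."""
    n = len(subset)
    return {attr: abs(sum(item[attr] for item in subset) / n - goal)
            for attr, goal in targets.items()}

diffs = differences(subset, targets)
mean_diff = sum(diffs.values()) / len(diffs)  # the "lowest mean difference" criterion
max_diff = max(diffs.values())                # the "lowest max difference" criterion
print(diffs, mean_diff, max_diff)
# Ranking many candidate subsets by mean_diff vs. max_diff can pick different winners.
```
-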
-### Ranking Measures
-
-We can pull out more detail by showing how the mean difference and maximum difference rank lots of sets. Below, there are 20 sets of 10 shapes sorted by the two measures. Try adjusting the target slider on the left to see how the rankings change; each set's percentage of green, dots and small shapes are shown in the small histograms.
-
-
-
-At the extremes, the choice of measure can have a big impact: if we want to try to return all green results, we can shift the green target up to 100%. With this target, the max difference basically sorts the sets by the number of green items and uses the other targets as a tiebreaker. In contrast, sorting by the mean difference balances the green target more with the dot and small targets.
-
-
-
-Beyond mean and max differences, there are more ways to combine diversity metrics, like taking the cross of two metrics to account for [intersectionality](https://en.wikipedia.org/wiki/Intersectionality). The absolute value of the difference in target and actual percentages can also be quantified in other ways — you might want to penalize undershooting more than overshooting, for example. It's important to keep in mind what exactly you're trying to maximize and the dataset that you're operating on.
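-
-As a tiny illustration of that last point, an asymmetric penalty might weight undershooting a target more heavily than overshooting it; the weights below are arbitrary and only for illustration:
-
-```python
-def asymmetric_difference(actual, target, under_weight=2.0, over_weight=1.0):
-    """Penalize falling short of the target more heavily than exceeding it."""
-    gap = target - actual
-    return under_weight * gap if gap > 0 else over_weight * -gap
-
-# undershooting the 35% dots target by 10 points costs more than overshooting it by 10
-print(asymmetric_difference(0.25, 0.35))  # ~0.2
-print(asymmetric_difference(0.45, 0.35))  # ~0.1
-```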
-
-### Which Measure is Best?
-
-In a vacuum, all of these ranking methods are defensible. Picking one requires knowledge of the dataset and broader societal context.
-
-For example, the doctors on the left have more variance along the shirt color attribute, but they're less diverse by gender than the doctors on the right. With the shirt color and gender targets we've picked, the two subsets have the same mean and max differences. However, in most applications, it's more important to have a representative sample of socially relevant characteristics, like gender, than of something less salient, like clothing color.
-
-
-
-Just selecting a diverse sample isn't sufficient either. [Diversity and Inclusion Metrics in Subset Selection](https://arxiv.org/pdf/2002.03256.pdf) introduces a way of measuring "inclusion" - how well does the searcher feel represented in the results?
-
-Below, we have gender diversity, without inclusion for women, in the “construction worker” image domain. Masculine-presenting individuals are shown in realistic, modern construction worker situations, while feminine-presenting individuals and other gender presentations are depicted as historic nostalgia, toys, clipart, or passive.
-
-
-
-The context of the query and the searcher also plays into the quality of search results. A search for "work clothing" that shows a mixed palette of colors for men's clothing and only pink women's clothing might make the searcher feel that women need to appear stereotypically feminine in a professional setting. But the same set of women's clothes might be appropriate to show for a "pink women work clothes" search or if the searcher had previously expressed a preference for pink.
-
-We saw how a small switch from mean to max made a huge difference in what abstract shapes are returned – and how things can get even more complex when socially salient characteristics are layered in. Defaults and small decisions can encode our priorities and values; intentionally thinking about how diversity and inclusion are being measured and which characteristics are emphasized is a step towards designing more equitable systems.
-
-### More Reading
-
-The [Diversity and Inclusion Metrics](https://arxiv.org/pdf/2002.03256.pdf) paper has a [Colab](https://colab.research.google.com/github/PAIR-code/ai-explorables/blob/master/source/measuring-diversity/diversity-and-inclusion.ipynb) with a detailed description of the metrics, additional visualizations and a reference Python implementation.
-
-The difficulties of [measuring fairness](https://pair.withgoogle.com/explorables/measuring-fairness/) in general have been well studied; subset selection is still an active area of research. [Fairness of Exposure in Rankings](https://www.cs.cornell.edu/~tj/publications/singh_joachims_18a.pdf) proposes a ranking algorithm that incorporates fairness constraints. [Toward creating a fairer ranking in search engine results](https://www.ilab.cs.rutgers.edu/~rg522/publication/gao-2020-ipm/gao-2020-ipm.pdf) measures diversity bias in actual search results.
-
-Inferring user preferences is also tricky; you can check out ways to design for user feedback and control over queries in the [People + AI Guidebook](https://pair.withgoogle.com/chapter/feedback-controls/).
-
-### Credits
-
-Adam Pearce, Dylan Baker, Ellen Jiang, Meg Mitchell\* and Timnit Gebru\* // March 2021
-
-*Work done while at Google
-
-Thanks to Alex Hanna, Carey Radebaugh, Emily Denton, Fernanda Viégas, James Wexler, Jess Holbrook, Ludovic Peran, Martin Wattenberg, Michael Terry, Yannick Assogba and Zan Armstrong for their help with this piece.
-
-
-
-### More Explorables
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/miesnerjacob/Multi-task-NLP/app.py b/spaces/miesnerjacob/Multi-task-NLP/app.py
deleted file mode 100644
index 0d0704cce399e8311c98b41c660fef3c7ea737cb..0000000000000000000000000000000000000000
--- a/spaces/miesnerjacob/Multi-task-NLP/app.py
+++ /dev/null
@@ -1,254 +0,0 @@
-import pandas as pd
-import streamlit as st
-from annotated_text import annotated_text
-from streamlit_option_menu import option_menu
-from sentiment_analysis import SentimentAnalysis
-from keyword_extraction import KeywordExtractor
-from part_of_speech_tagging import POSTagging
-from emotion_detection import EmotionDetection
-from named_entity_recognition import NamedEntityRecognition
-
-hide_streamlit_style = """
-    <style>
-    /* hide Streamlit's default hamburger menu and footer */
-    #MainMenu {visibility: hidden;}
-    footer {visibility: hidden;}
-    </style>
-    """
-st.markdown(hide_streamlit_style, unsafe_allow_html=True)
-
-
-@st.cache(allow_output_mutation=True)
-def load_sentiment_model():
- return SentimentAnalysis()
-
-@st.cache(allow_output_mutation=True)
-def load_keyword_model():
- return KeywordExtractor()
-
-@st.cache(allow_output_mutation=True)
-def load_pos_model():
- return POSTagging()
-
-@st.cache(allow_output_mutation=True)
-def load_emotion_model():
- return EmotionDetection()
-
-@st.cache(allow_output_mutation=True)
-def load_ner_model():
- return NamedEntityRecognition()
-
-
-sentiment_analyzer = load_sentiment_model()
-keyword_extractor = load_keyword_model()
-pos_tagger = load_pos_model()
-emotion_detector = load_emotion_model()
-ner = load_ner_model()
-
-example_text = "This is example text that contains both names of organizations like Hugging Face and cities like New York, all while portraying an upbeat attitude."
-
-with st.sidebar:
- page = option_menu(menu_title='Menu',
- menu_icon="robot",
- options=["Welcome!",
- "Sentiment Analysis",
- "Keyword Extraction",
- "Part of Speech Tagging",
- "Emotion Detection",
- "Named Entity Recognition"],
- icons=["house-door",
- "chat-dots",
- "key",
- "tag",
- "emoji-heart-eyes",
- "building"],
- default_index=0
- )
-
-st.title('Open-source NLP')
-
-if page == "Welcome!":
- st.header('Welcome!')
-
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- st.subheader("Quickstart")
- st.write(
- """
- Replace the example text below and flip through the pages in the menu to perform NLP tasks on-demand!
- Feel free to use the example text for a test run.
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- st.subheader("Introduction")
- st.write("""
- Hello! This application is a celebration of open-source and the power that programmers have been granted today
- by those who give back to the community. This tool was constructed using Streamlit, Huggingface Transformers,
- Transformers-Interpret, NLTK, Spacy, amongst other open-source Python libraries and models.
-
-    Utilizing this tool you will be able to perform a multitude of Natural Language Processing tasks on a range of
-    different texts. All you need to do is paste your input, select your task, and hit the start button!
-
- * This application currently supports:
- * Sentiment Analysis
- * Keyword Extraction
- * Part of Speech Tagging
- * Emotion Detection
- * Named Entity Recognition
-
-    More features may be added in the future, including article/tweet/youtube input, improved text annotation, and model quality improvements,
- depending on community feedback. Please reach out to me at miesner.jacob@gmail.com or at my Linkedin page listed
- below if you have ideas or suggestions for improvement.
-
- If you would like to contribute yourself, feel free to fork the Github repository listed below and submit a merge request.
- """
- )
- st.subheader("Notes")
- st.write(
- """
- * This dashboard was constructed by myself, but every resource used is open-source! If you are interested in my other works you can view them here:
-
- [Project Github](https://github.com/MiesnerJacob/Multi-task-NLP-dashboard)
-
- [Jacob Miesner's Github](https://github.com/MiesnerJacob)
-
- [Jacob Miesner's Linkedin](https://www.linkedin.com/in/jacob-miesner-885050125/)
-
- [Jacob Miesner's Website](https://www.jacobmiesner.com)
-
-    * The prediction justification for some of the tasks is printed as the model views it. For this reason the text may contain special tokens like [CLS] or [SEP], or even hash symbols splitting words. If you are familiar with language models you will recognize these; if you do not have prior experience with language models, you can ignore these characters.
- """
- )
-
-elif page == "Sentiment Analysis":
- st.header('Sentiment Analysis')
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- if st.button('🔥 Run!'):
- with st.spinner("Loading..."):
- preds, html = sentiment_analyzer.run(text)
- st.success('All done!')
- st.write("")
- st.subheader("Sentiment Predictions")
- st.bar_chart(data=preds, width=0, height=0, use_container_width=True)
- st.write("")
- st.subheader("Sentiment Justification")
- raw_html = html._repr_html_()
- st.components.v1.html(raw_html, height=500)
-
-elif page == "Keyword Extraction":
- st.header('Keyword Extraction')
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- max_keywords = st.slider('# of Keywords Max Limit', min_value=1, max_value=10, value=5, step=1)
-
- if st.button('🔥 Run!'):
- with st.spinner("Loading..."):
- annotation, keywords = keyword_extractor.generate(text, max_keywords)
- st.success('All done!')
-
- if annotation:
- st.subheader("Keyword Annotation")
- st.write("")
- annotated_text(*annotation)
- st.text("")
-
- st.subheader("Extracted Keywords")
- st.write("")
- df = pd.DataFrame(keywords, columns=['Extracted Keywords'])
- csv = df.to_csv(index=False).encode('utf-8')
- st.download_button('Download Keywords to CSV', csv, file_name='news_intelligence_keywords.csv')
-
- data_table = st.table(df)
-
-elif page == "Part of Speech Tagging":
- st.header('Part of Speech Tagging')
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- if st.button('🔥 Run!'):
- with st.spinner("Loading..."):
- preds = pos_tagger.classify(text)
- st.success('All done!')
- st.write("")
- st.subheader("Part of Speech tags")
- annotated_text(*preds)
- st.write("")
- st.components.v1.iframe('https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html', height=1000)
-
-elif page == "Emotion Detection":
- st.header('Emotion Detection')
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- if st.button('🔥 Run!'):
- with st.spinner("Loading..."):
- preds, html = emotion_detector.run(text)
- st.success('All done!')
- st.write("")
- st.subheader("Emotion Predictions")
- st.bar_chart(data=preds, width=0, height=0, use_container_width=True)
- raw_html = html._repr_html_()
- st.write("")
- st.subheader("Emotion Justification")
- st.components.v1.html(raw_html, height=500)
-
-elif page == "Named Entity Recognition":
- st.header('Named Entity Recognition')
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- if st.button('🔥 Run!'):
- with st.spinner("Loading..."):
- preds, ner_annotation = ner.classify(text)
- st.success('All done!')
- st.write("")
- st.subheader("NER Predictions")
- annotated_text(*ner_annotation)
- st.write("")
- st.subheader("NER Prediction Metadata")
- st.write(preds)
diff --git a/spaces/mikkoar/marco/src/components/voice.tsx b/spaces/mikkoar/marco/src/components/voice.tsx
deleted file mode 100644
index 074d0e145229947282a472bd84f6578cf0b3c71c..0000000000000000000000000000000000000000
--- a/spaces/mikkoar/marco/src/components/voice.tsx
+++ /dev/null
@@ -1,52 +0,0 @@
-import React, { useEffect } from 'react'
-import { useSetAtom } from 'jotai'
-import { useBing } from '@/lib/hooks/use-bing'
-import Image from 'next/image'
-import VoiceIcon from '@/assets/images/voice.svg'
-import VoiceButton from './ui/voice'
-import { SR } from '@/lib/bots/bing/sr'
-import { voiceListenAtom } from '@/state'
-
-const sr = new SR(['发送', '清空', '退出'])
-
-const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick<ReturnType<typeof useBing>, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => {
- const setListen = useSetAtom(voiceListenAtom)
- useEffect(() => {
- if (sr.listening) return
- sr.transcript = !isSpeaking
- }, [isSpeaking])
-
- useEffect(() => {
- sr.onchange = (msg: string, command?: string) => {
- switch (command) {
- case '退出':
- sr.stop()
- break;
- case '发送':
- sendMessage(input)
- case '清空':
- setInput('')
- break;
- default:
- setInput(input + msg)
- }
- }
- }, [input])
-
- const switchSR = (enable: boolean = false) => {
- setListen(enable)
- if (enable) {
- sr.start()
- } else {
- sr.stop()
- }
- }
-
- return sr.listening ? (
-    <VoiceButton onClick={() => switchSR(false)} />
-  ) : (
-    <VoiceButton onClick={() => switchSR(true)} />
- )
-};
-
-export default Voice;
diff --git a/spaces/mipbkhn/PaddyDoctorPublic/README.md b/spaces/mipbkhn/PaddyDoctorPublic/README.md
deleted file mode 100644
index 0a1a41b8416a1ccb7f69cfa515d0797f4c9a63a8..0000000000000000000000000000000000000000
--- a/spaces/mipbkhn/PaddyDoctorPublic/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Paddy Doctor
-emoji: 📉
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/mjdolan/Holiday-StyleGAN-NADA/e4e/configs/paths_config.py b/spaces/mjdolan/Holiday-StyleGAN-NADA/e4e/configs/paths_config.py
deleted file mode 100644
index 4604f6063b8125364a52a492de52fcc54004f373..0000000000000000000000000000000000000000
--- a/spaces/mjdolan/Holiday-StyleGAN-NADA/e4e/configs/paths_config.py
+++ /dev/null
@@ -1,28 +0,0 @@
-dataset_paths = {
- # Face Datasets (In the paper: FFHQ - train, CelebAHQ - test)
- 'ffhq': '',
- 'celeba_test': '',
-
- # Cars Dataset (In the paper: Stanford cars)
- 'cars_train': '',
- 'cars_test': '',
-
- # Horse Dataset (In the paper: LSUN Horse)
- 'horse_train': '',
- 'horse_test': '',
-
- # Church Dataset (In the paper: LSUN Church)
- 'church_train': '',
- 'church_test': '',
-
- # Cats Dataset (In the paper: LSUN Cat)
- 'cats_train': '',
- 'cats_test': ''
-}
-
-model_paths = {
- 'stylegan_ffhq': 'pretrained_models/stylegan2-ffhq-config-f.pt',
- 'ir_se50': 'pretrained_models/model_ir_se50.pth',
- 'shape_predictor': 'pretrained_models/shape_predictor_68_face_landmarks.dat',
- 'moco': 'pretrained_models/moco_v2_800ep_pretrain.pth'
-}
diff --git a/spaces/monra/freegpt-webui-chimera/g4f/typing.py b/spaces/monra/freegpt-webui-chimera/g4f/typing.py
deleted file mode 100644
index e41a567ae49dd26d2ace2a3732b0e8f0bbbaa4b0..0000000000000000000000000000000000000000
--- a/spaces/monra/freegpt-webui-chimera/g4f/typing.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from typing import Dict, NewType, Union, Optional, List, get_type_hints
-
-sha256 = NewType('sha_256_hash', str)
\ No newline at end of file
diff --git a/spaces/mrLarry/image_variation/app.py b/spaces/mrLarry/image_variation/app.py
deleted file mode 100644
index 1b231076955bbdd7ad8cadcd1de0f825a7c41a32..0000000000000000000000000000000000000000
--- a/spaces/mrLarry/image_variation/app.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import torch
-from PIL import Image
-from torchvision import transforms
-from transformers import CLIPProcessor, CLIPModel
-
-# load the CLIP model
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
-model.to(device)
-
-# load the CLIP processor
-processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
-
-# load the image
-image_path = "path/to/image.jpg"
-image = Image.open(image_path).convert("RGB")
-
-# preprocess with the CLIP processor (it handles resizing and normalization)
-inputs = processor(images=image, return_tensors="pt").to(device)
-
-# get the image features using the CLIP model
-with torch.no_grad():
-    features = model.get_image_features(**inputs)
-
-# generate variations of the image
-# NOTE: transformers' CLIPModel only produces embeddings and has no image-generation
-# method, so the `generate_images` call below is a placeholder that will raise an
-# AttributeError; a dedicated image-variation model would be needed for this step.
-with torch.no_grad():
-    outputs = model.generate_images(
- features=features,
- num_images=5, # number of different variations to generate
- max_length=50, # maximum length of the generated caption for the variation
- clip=processor,
- temperature=1.0, # temperature of the sampling process
- top_p=0.9, # top-p probability for the sampling process
- batch_size=1,
- device=device,
- )
-
-# save the generated images
-for i, output in enumerate(outputs):
- generated_image = transforms.functional.to_pil_image(output[0])
- generated_image.save(f"output/image_variation_{i}.jpg")
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/paraphraser/paraphrase.py b/spaces/mshukor/UnIVAL/fairseq/examples/paraphraser/paraphrase.py
deleted file mode 100644
index d3422fb3db9a381b73a854d2379df214ebe544a2..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/paraphraser/paraphrase.py
+++ /dev/null
@@ -1,85 +0,0 @@
-#!/usr/bin/env python3 -u
-
-import argparse
-import fileinput
-import logging
-import os
-import sys
-
-from fairseq.models.transformer import TransformerModel
-
-
-logging.getLogger().setLevel(logging.INFO)
-
-
-def main():
- parser = argparse.ArgumentParser(description="")
- parser.add_argument("--en2fr", required=True, help="path to en2fr model")
- parser.add_argument(
- "--fr2en", required=True, help="path to fr2en mixture of experts model"
- )
- parser.add_argument(
- "--user-dir", help="path to fairseq examples/translation_moe/src directory"
- )
- parser.add_argument(
- "--num-experts",
- type=int,
- default=10,
- help="(keep at 10 unless using a different model)",
- )
- parser.add_argument(
- "files",
- nargs="*",
- default=["-"],
- help='input files to paraphrase; "-" for stdin',
- )
- args = parser.parse_args()
-
- if args.user_dir is None:
- args.user_dir = os.path.join(
- os.path.dirname(os.path.dirname(os.path.abspath(__file__))), # examples/
- "translation_moe",
- "src",
- )
- if os.path.exists(args.user_dir):
- logging.info("found user_dir:" + args.user_dir)
- else:
- raise RuntimeError(
- "cannot find fairseq examples/translation_moe/src "
- "(tried looking here: {})".format(args.user_dir)
- )
-
- logging.info("loading en2fr model from:" + args.en2fr)
- en2fr = TransformerModel.from_pretrained(
- model_name_or_path=args.en2fr,
- tokenizer="moses",
- bpe="sentencepiece",
- ).eval()
-
- logging.info("loading fr2en model from:" + args.fr2en)
- fr2en = TransformerModel.from_pretrained(
- model_name_or_path=args.fr2en,
- tokenizer="moses",
- bpe="sentencepiece",
- user_dir=args.user_dir,
- task="translation_moe",
- ).eval()
-
- def gen_paraphrases(en):
- fr = en2fr.translate(en)
- return [
- fr2en.translate(fr, inference_step_args={"expert": i})
- for i in range(args.num_experts)
- ]
-
- logging.info("Type the input sentence and press return:")
- for line in fileinput.input(args.files):
- line = line.strip()
- if len(line) == 0:
- continue
- for paraphrase in gen_paraphrases(line):
- print(paraphrase)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/mueller-franzes/medfusion-app/tests/dataset/test_dataset_chexpert.py b/spaces/mueller-franzes/medfusion-app/tests/dataset/test_dataset_chexpert.py
deleted file mode 100644
index 73019ae362094a2bf80b9c09cf41478585266f16..0000000000000000000000000000000000000000
--- a/spaces/mueller-franzes/medfusion-app/tests/dataset/test_dataset_chexpert.py
+++ /dev/null
@@ -1,51 +0,0 @@
-
-
-from pathlib import Path
-from torchvision.utils import save_image
-import pandas as pd
-import torch
-import torch.nn.functional as F
-from medical_diffusion.data.datasets import CheXpert_Dataset
-import math
-
-path_out = Path().cwd()/'results'/'test'/'CheXpert'
-path_out.mkdir(parents=True, exist_ok=True)
-
-# path_root = Path('/mnt/hdd/datasets/chest/CheXpert/ChecXpert-v10/train')
-path_root = Path('/media/NAS/Chexpert_dataset/CheXpert-v1.0/train')
-mode = path_root.name
-labels = pd.read_csv(path_root.parent/f'{mode}.csv', index_col='Path')
-labels = labels[labels['Frontal/Lateral'] == 'Frontal']
-labels.loc[labels['Sex'] == 'Unknown', 'Sex'] = 'Female' # Must be "female" to match paper data
-labels.fillna(3, inplace=True)
-str_2_int = {'Sex': {'Male':0, 'Female':1}, 'Frontal/Lateral':{'Frontal':0, 'Lateral':1}, 'AP/PA':{'AP':0, 'PA':1, 'LL':2, 'RL':3}}
-labels.replace(str_2_int, inplace=True)
-
-# Get patients
-labels['patient'] = labels.index.str.split('/').str[2]
-labels.set_index('patient',drop=True, append=True, inplace=True)
-
-for c in labels.columns:
- print(labels[c].value_counts(dropna=False))
-
-ds = CheXpert_Dataset(
- crawler_ext='jpg',
- image_resize=(256, 256),
- # image_crop=(256, 256),
- path_root=path_root,
-)
-
-
-
-
-x = torch.stack([ds[n]['source'] for n in range(4)])
-b = x.shape[0]
-save_image(x, path_out/'samples_down_0.png', nrow=int(math.sqrt(b)), normalize=True, scale_each=True)
-
-size_0 = torch.tensor(x.shape[2:])
-
-for i in range(3):
- new_size = torch.div(size_0, 2**(i+1), rounding_mode='floor' )
- x_i = F.interpolate(x, size=tuple(new_size), mode='nearest', align_corners=None)
- print(x_i.shape)
-    save_image(x_i, path_out/f'samples_down_{i+1}.png', nrow=int(math.sqrt(b)), normalize=True, scale_each=True)
\ No newline at end of file
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/segment_anything/modeling/transformer.py b/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/segment_anything/modeling/transformer.py
deleted file mode 100644
index 28fafea52288603fea275f3a100790471825c34a..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/segment_anything/modeling/transformer.py
+++ /dev/null
@@ -1,240 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from torch import Tensor, nn
-
-import math
-from typing import Tuple, Type
-
-from .common import MLPBlock
-
-
-class TwoWayTransformer(nn.Module):
- def __init__(
- self,
- depth: int,
- embedding_dim: int,
- num_heads: int,
- mlp_dim: int,
- activation: Type[nn.Module] = nn.ReLU,
- attention_downsample_rate: int = 2,
- ) -> None:
- """
- A transformer decoder that attends to an input image using
- queries whose positional embedding is supplied.
-
- Args:
- depth (int): number of layers in the transformer
- embedding_dim (int): the channel dimension for the input embeddings
- num_heads (int): the number of heads for multihead attention. Must
- divide embedding_dim
- mlp_dim (int): the channel dimension internal to the MLP block
- activation (nn.Module): the activation to use in the MLP block
- """
- super().__init__()
- self.depth = depth
- self.embedding_dim = embedding_dim
- self.num_heads = num_heads
- self.mlp_dim = mlp_dim
- self.layers = nn.ModuleList()
-
- for i in range(depth):
- self.layers.append(
- TwoWayAttentionBlock(
- embedding_dim=embedding_dim,
- num_heads=num_heads,
- mlp_dim=mlp_dim,
- activation=activation,
- attention_downsample_rate=attention_downsample_rate,
- skip_first_layer_pe=(i == 0),
- )
- )
-
- self.final_attn_token_to_image = Attention(
- embedding_dim, num_heads, downsample_rate=attention_downsample_rate
- )
- self.norm_final_attn = nn.LayerNorm(embedding_dim)
-
- def forward(
- self,
- image_embedding: Tensor,
- image_pe: Tensor,
- point_embedding: Tensor,
- ) -> Tuple[Tensor, Tensor]:
- """
- Args:
- image_embedding (torch.Tensor): image to attend to. Should be shape
- B x embedding_dim x h x w for any h and w.
- image_pe (torch.Tensor): the positional encoding to add to the image. Must
- have the same shape as image_embedding.
- point_embedding (torch.Tensor): the embedding to add to the query points.
- Must have shape B x N_points x embedding_dim for any N_points.
-
- Returns:
- torch.Tensor: the processed point_embedding
- torch.Tensor: the processed image_embedding
- """
- # BxCxHxW -> BxHWxC == B x N_image_tokens x C
- bs, c, h, w = image_embedding.shape
- image_embedding = image_embedding.flatten(2).permute(0, 2, 1)
- image_pe = image_pe.flatten(2).permute(0, 2, 1)
-
- # Prepare queries
- queries = point_embedding
- keys = image_embedding
-
- # Apply transformer blocks and final layernorm
- for layer in self.layers:
- queries, keys = layer(
- queries=queries,
- keys=keys,
- query_pe=point_embedding,
- key_pe=image_pe,
- )
-
- # Apply the final attention layer from the points to the image
- q = queries + point_embedding
- k = keys + image_pe
- attn_out = self.final_attn_token_to_image(q=q, k=k, v=keys)
- queries = queries + attn_out
- queries = self.norm_final_attn(queries)
-
- return queries, keys
-
-
-class TwoWayAttentionBlock(nn.Module):
- def __init__(
- self,
- embedding_dim: int,
- num_heads: int,
- mlp_dim: int = 2048,
- activation: Type[nn.Module] = nn.ReLU,
- attention_downsample_rate: int = 2,
- skip_first_layer_pe: bool = False,
- ) -> None:
- """
- A transformer block with four layers: (1) self-attention of sparse
- inputs, (2) cross attention of sparse inputs to dense inputs, (3) mlp
- block on sparse inputs, and (4) cross attention of dense inputs to sparse
- inputs.
-
- Arguments:
- embedding_dim (int): the channel dimension of the embeddings
- num_heads (int): the number of heads in the attention layers
- mlp_dim (int): the hidden dimension of the mlp block
- activation (nn.Module): the activation of the mlp block
- skip_first_layer_pe (bool): skip the PE on the first layer
- """
- super().__init__()
- self.self_attn = Attention(embedding_dim, num_heads)
- self.norm1 = nn.LayerNorm(embedding_dim)
-
- self.cross_attn_token_to_image = Attention(
- embedding_dim, num_heads, downsample_rate=attention_downsample_rate
- )
- self.norm2 = nn.LayerNorm(embedding_dim)
-
- self.mlp = MLPBlock(embedding_dim, mlp_dim, activation)
- self.norm3 = nn.LayerNorm(embedding_dim)
-
- self.norm4 = nn.LayerNorm(embedding_dim)
- self.cross_attn_image_to_token = Attention(
- embedding_dim, num_heads, downsample_rate=attention_downsample_rate
- )
-
- self.skip_first_layer_pe = skip_first_layer_pe
-
- def forward(
- self, queries: Tensor, keys: Tensor, query_pe: Tensor, key_pe: Tensor
- ) -> Tuple[Tensor, Tensor]:
- # Self attention block
- if self.skip_first_layer_pe:
- queries = self.self_attn(q=queries, k=queries, v=queries)
- else:
- q = queries + query_pe
- attn_out = self.self_attn(q=q, k=q, v=queries)
- queries = queries + attn_out
- queries = self.norm1(queries)
-
- # Cross attention block, tokens attending to image embedding
- q = queries + query_pe
- k = keys + key_pe
- attn_out = self.cross_attn_token_to_image(q=q, k=k, v=keys)
- queries = queries + attn_out
- queries = self.norm2(queries)
-
- # MLP block
- mlp_out = self.mlp(queries)
- queries = queries + mlp_out
- queries = self.norm3(queries)
-
- # Cross attention block, image embedding attending to tokens
- q = queries + query_pe
- k = keys + key_pe
- attn_out = self.cross_attn_image_to_token(q=k, k=q, v=queries)
- keys = keys + attn_out
- keys = self.norm4(keys)
-
- return queries, keys
-
-
-class Attention(nn.Module):
- """
- An attention layer that allows for downscaling the size of the embedding
- after projection to queries, keys, and values.
- """
-
- def __init__(
- self,
- embedding_dim: int,
- num_heads: int,
- downsample_rate: int = 1,
- ) -> None:
- super().__init__()
- self.embedding_dim = embedding_dim
- self.internal_dim = embedding_dim // downsample_rate
- self.num_heads = num_heads
- assert self.internal_dim % num_heads == 0, "num_heads must divide embedding_dim."
-
- self.q_proj = nn.Linear(embedding_dim, self.internal_dim)
- self.k_proj = nn.Linear(embedding_dim, self.internal_dim)
- self.v_proj = nn.Linear(embedding_dim, self.internal_dim)
- self.out_proj = nn.Linear(self.internal_dim, embedding_dim)
-
- def _separate_heads(self, x: Tensor, num_heads: int) -> Tensor:
- b, n, c = x.shape
- x = x.reshape(b, n, num_heads, c // num_heads)
- return x.transpose(1, 2) # B x N_heads x N_tokens x C_per_head
-
- def _recombine_heads(self, x: Tensor) -> Tensor:
- b, n_heads, n_tokens, c_per_head = x.shape
- x = x.transpose(1, 2)
- return x.reshape(b, n_tokens, n_heads * c_per_head) # B x N_tokens x C
-
- def forward(self, q: Tensor, k: Tensor, v: Tensor) -> Tensor:
- # Input projections
- q = self.q_proj(q)
- k = self.k_proj(k)
- v = self.v_proj(v)
-
- # Separate into heads
- q = self._separate_heads(q, self.num_heads)
- k = self._separate_heads(k, self.num_heads)
- v = self._separate_heads(v, self.num_heads)
-
- # Attention
- _, _, _, c_per_head = q.shape
- attn = q @ k.permute(0, 1, 3, 2) # B x N_heads x N_tokens x N_tokens
- attn = attn / math.sqrt(c_per_head)
- attn = torch.softmax(attn, dim=-1)
-
- # Get output
- out = attn @ v
- out = self._recombine_heads(out)
- out = self.out_proj(out)
-
- return out
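-
-
-if __name__ == "__main__":
-    # Minimal usage sketch; run it in the package context (e.g.
-    # `python -m segment_anything.modeling.transformer`) since the module uses a
-    # relative import. Shapes follow the TwoWayTransformer.forward docstring.
-    b, c, h, w, n_points = 1, 256, 64, 64, 5
-    transformer = TwoWayTransformer(depth=2, embedding_dim=c, num_heads=8, mlp_dim=2048)
-    image_embedding = torch.randn(b, c, h, w)      # B x embedding_dim x h x w
-    image_pe = torch.randn(b, c, h, w)             # same shape as image_embedding
-    point_embedding = torch.randn(b, n_points, c)  # B x N_points x embedding_dim
-    queries, keys = transformer(image_embedding, image_pe, point_embedding)
-    print(queries.shape, keys.shape)               # (B, N_points, C) and (B, H*W, C)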
diff --git a/spaces/naotakigawa/qatool/README.md b/spaces/naotakigawa/qatool/README.md
deleted file mode 100644
index 5a52f3e1ad34bf51c4dbe584941d29c861b557ba..0000000000000000000000000000000000000000
--- a/spaces/naotakigawa/qatool/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Qatool
-emoji: 🦀
-colorFrom: purple
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.24.0
-pinned: false
-app_port: 8501
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/models/decoders/unet_decoder.py b/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/models/decoders/unet_decoder.py
deleted file mode 100644
index b55fcb30e1e9d38cca77aa4589794f415bdf4317..0000000000000000000000000000000000000000
--- a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/models/decoders/unet_decoder.py
+++ /dev/null
@@ -1,181 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from segmentation_models_pytorch.base import modules as md
-
-
-class DecoderBlock(nn.Module):
- def __init__(
- self,
- in_channels,
- skip_channels,
- out_channels,
- use_batchnorm=True,
- attention_type=None,
- ):
- super().__init__()
-
- self.conv1 = md.Conv2dReLU(
- in_channels + skip_channels,
- out_channels,
- kernel_size=3,
- padding=1,
- use_batchnorm=use_batchnorm,
- )
-
- in_and_skip_channels = in_channels + skip_channels
-
- self.attention1 = md.Attention(attention_type,
- in_channels=in_and_skip_channels)
-
- self.conv2 = md.Conv2dReLU(
- out_channels,
- out_channels,
- kernel_size=3,
- padding=1,
- use_batchnorm=use_batchnorm,
- )
-
- self.attention2 = md.Attention(attention_type,
- in_channels=out_channels)
-
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.skip_channels = skip_channels
-
- def forward(self, x, skip=None):
-
- if skip is None:
- x = F.interpolate(x, scale_factor=2, mode="nearest")
-
- else:
-
- if x.shape[-1] != skip.shape[-1]:
- x = F.interpolate(x, scale_factor=2, mode="nearest")
-
- if skip is not None:
-
- x = torch.cat([x, skip], dim=1)
- x = self.attention1(x)
-
- x = self.conv1(x)
- x = self.conv2(x)
- x = self.attention2(x)
-
- return x
-
-
-class CenterBlock(nn.Sequential):
- def __init__(self, in_channels, out_channels, use_batchnorm=True):
- conv1 = md.Conv2dReLU(
- in_channels,
- out_channels,
- kernel_size=3,
- padding=1,
- use_batchnorm=use_batchnorm,
- )
- conv2 = md.Conv2dReLU(
- out_channels,
- out_channels,
- kernel_size=3,
- padding=1,
- use_batchnorm=use_batchnorm,
- )
- super().__init__(conv1, conv2)
-
-
-class UnetDecoder(nn.Module):
- def __init__(self,
- encoder_channels,
- decoder_channels,
- n_blocks=5,
- use_batchnorm=True,
- attention_type=None,
- center=False):
- super().__init__()
-
- if n_blocks != len(decoder_channels):
- raise ValueError(
- f"Model depth is {n_blocks}, but you provided "
- f"decoder_channels for {len(decoder_channels)} blocks."
- )
-
- # remove first skip with same spatial resolution
- encoder_channels = encoder_channels[1:]
-
- # reverse channels to start from head of encoder
- encoder_channels = encoder_channels[::-1]
-
- # computing blocks input and output channels
- head_channels = encoder_channels[0]
-
- in_channels = [head_channels] + list(decoder_channels[:-1])
-
- skip_channels = list(encoder_channels[1:]) + [0]
-
- out_channels = decoder_channels
-
- if center:
-
- self.center = CenterBlock(
- head_channels, head_channels, use_batchnorm=use_batchnorm)
-
- else:
-
- self.center = nn.Identity()
-
- # combine decoder keyword arguments
- kwargs = dict(use_batchnorm=use_batchnorm,
- attention_type=attention_type)
-
- blocks = [
- DecoderBlock(in_ch, skip_ch, out_ch, **kwargs)
- for in_ch, skip_ch, out_ch in zip(in_channels,
- skip_channels,
- out_channels)
- ]
-
- self.blocks = nn.ModuleList(blocks)
-
- def forward(self, *features):
-
-        # remove first skip with same spatial resolution
-        features = features[1:]
-
-        # reverse channels to start from head of encoder
-        features = features[::-1]
-
- head = features[0]
-
- skips = features[1:]
-
- x = self.center(head)
-
- for i, decoder_block in enumerate(self.blocks):
-
- skip = skips[i] if i < len(skips) else None
-
- x = decoder_block(x, skip)
-
- return x
-
-
-class SegmentationHead(nn.Sequential):
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size=3,
- upsampling=1):
-
- conv2d = nn.Conv2d(in_channels,
- out_channels,
- kernel_size=kernel_size,
- padding=kernel_size // 2)
-
- upsampling = nn.UpsamplingBilinear2d(
- scale_factor=upsampling) if upsampling > 1 else nn.Identity()
-
- super().__init__(conv2d, upsampling)
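-
-
-if __name__ == "__main__":
-    # Minimal usage sketch with a made-up 5-stage encoder pyramid: encoder_channels
-    # lists the channels of every encoder output (shallowest first), decoder_channels
-    # the output channels of each decoder block.
-    encoder_channels = (3, 64, 128, 256, 512, 1024)
-    decoder_channels = (256, 128, 64, 32, 16)
-    decoder = UnetDecoder(encoder_channels, decoder_channels, n_blocks=5)
-    head = SegmentationHead(in_channels=decoder_channels[-1], out_channels=1)
-
-    # fake features: spatial size halves and channel count grows at every stage
-    sizes = (256, 128, 64, 32, 16, 8)
-    features = [torch.randn(1, ch, s, s) for ch, s in zip(encoder_channels, sizes)]
-    mask = head(decoder(*features))
-    print(mask.shape)  # (1, 1, 256, 256)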
diff --git a/spaces/nateraw/lavila/docs/MODEL_ZOO.md b/spaces/nateraw/lavila/docs/MODEL_ZOO.md
deleted file mode 100644
index 1db4cc6e5df45214b262f3a7439683c77498454a..0000000000000000000000000000000000000000
--- a/spaces/nateraw/lavila/docs/MODEL_ZOO.md
+++ /dev/null
@@ -1,311 +0,0 @@
-# LAVILA Model Zoo
-
-## Multi-node Training
-We use multi-node training on a SLURM cluster with [submitit](https://github.com/facebookincubator/submitit) for producing the results and models in the paper.
-Please install `submitit` in your conda environment:
-```bash
-pip install submitit
-```
-
-
-## Pre-training
-
-Please refer to [PRETRAIN.md](./PRETRAIN.md).
-
-
-## Narrator
-
-| Visual Encoder | Text Decoder | METEOR | ROUGE-L | CIDEr | Pre-trained Vis. Encoder (md5) | checkpoint (md5) |
-| :------------: | :----------: | :----: | :-----: | :---: | :-------------------------------: | :--------: |
-| TSF-B | GPT-2 | 0.282 | 0.517 | 0.833 | [download](https://dl.fbaipublicfiles.com/lavila/checkpoints/dual_encoders/ego4d/clip_openai_timesformer_base.baseline.ep_0003.pth) (dbcc4d) | [download](https://dl.fbaipublicfiles.com/lavila/checkpoints/narrator/vclm_openai_timesformer_base_gpt2_base.pt_ego4d.jobid_319630.ep_0002.md5sum_68a71f.pth) (68a71f) |
-| TSF-L@HR | GPT-2 XL | 0.298 | 0.539 | 0.977 | [download](https://dl.fbaipublicfiles.com/lavila/checkpoints/dual_encoders/ego4d/clip_openai_timesformer_large_336px_distilbert_base.baseline.ep_0003.pth) (5c69b8) | [download](https://dl.fbaipublicfiles.com/lavila/checkpoints/narrator/vclm_openai_timesformer_large_336px_gpt2_xl.pt_ego4d.jobid_246897.ep_0003.md5sum_443263.pth) (443263) |
-
-
-Ego4D val split
-
-
-^ Note that the pre-trained checkpoint used to evaluate CharadesEgo is different from the one used to evaluate other datasets.
-Specifically, we use the checkpoint at epoch 1 to zero-shot evaluate CharadesEgo and the checkpoint that achieves best average mAP on EK-100 MIR to evaluate other datasets, as is done in [EgoVLP](https://arxiv.org/pdf/2206.01670.pdf).
-Our guess is that since CharadesEgo videos (captured by head-mounted mobile cameras) are visually different from Ego4D/EPIC-Kitchens videos (captured by professional action cameras, e.g. GoPro), pre-training on Ego4D videos for longer can introduce some domain discrepancy.
-
-^^ We use the checkpoints released by [EgoVLP](https://github.com/showlab/EgoVLP) and convert them to be compatible with this codebase. Also note that our reproduced numbers are better than the reported numbers, especially on EK-100 MIR since we evaluate on raw videos directly (for more details, check out Appendix F & Table 10 in our paper).
-
-1. EK-100 MIR
-
-
-```bash
-python eval_zeroshot.py --dataset ek100_mir --root datasets/EK100/video_ht256px/ --clip-length 4 --resume $PATH
-```
-By increasing the number of frames per clip, e.g. `--clip-length 16`, you can expect to see better performance.
-
-
-
-### Multi-node training (Slurm)
-```bash
-# TimeSformer-Base
-python run_with_submitit_finetune_retrieval.py \
- --pretrain-model $PATH \
- --use-checkpoint --nodes 4
-
-# TimeSformer-Large
-python run_with_submitit_finetune_retrieval.py \
- --pretrain-model $PATH \
- --batch-size 4 \
- --use-checkpoint --nodes 4
-```
-
-### Single-machine training
-```bash
-torchrun --nproc_per_node=8 \
- main_finetune_retrieval.py \
- --output-dir $OUT_DIR \
- --pretrain-model $PATH \
- --use-checkpoint
-```
-
-Note that you might see a slight drop in performance when training on a single node compared to multiple nodes (everything else being the same) because of a smaller total batch size.
-
-### Evaluation
-
-Evaluation is done every `--eval-freq 5` epochs by default during fine-tuning.
-If you want to evaluate any checkpoint after fine-tuning, please switch to `--evaluate` mode and specify the path to the checkpoint by `--resume $FINETUNED_CHECKPOINT`.
-```bash
-torchrun --nproc_per_node=1 \
- main_finetune_retrieval.py \
- --output-dir $OUT_DIR \
- --pretrain-model $PATH \
- --use-checkpoint \
- --evaluate \
- --resume $FINETUNED_CHECKPOINT
-```
-
-
-
-
-^ Note that the pre-trained checkpoint for fine-tuning CharadesEgo is different from that for fine-tuning EK-100 or EGTEA. Same reason stated above.
-
-Training and evaluating scripts
-
diff --git a/spaces/naver/SuperFeatures/how/networks/how_net.py b/spaces/naver/SuperFeatures/how/networks/how_net.py
deleted file mode 100644
index f3ccd39958d06ce2dcf58449fc60f767d0f5fa59..0000000000000000000000000000000000000000
--- a/spaces/naver/SuperFeatures/how/networks/how_net.py
+++ /dev/null
@@ -1,94 +0,0 @@
-"""Module of the HOW method"""
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torchvision
-
-
-class HOWNet(nn.Module):
- """Network for the HOW method
-
- :param list features: A list of torch.nn.Module which act as feature extractor
- :param torch.nn.Module attention: Attention layer
- :param torch.nn.Module smoothing: Smoothing layer
- :param torch.nn.Module dim_reduction: Dimensionality reduction layer
- :param dict meta: Metadata that are stored with the network
- :param dict runtime: Runtime options that can be used as default for e.g. inference
- """
-
- def __init__(self, features, attention, smoothing, dim_reduction, meta, runtime):
- super().__init__()
-
- self.features = features
- self.attention = attention
- self.smoothing = smoothing
- self.dim_reduction = dim_reduction
-
- self.meta = meta
- self.runtime = runtime
-
-
- def copy_excluding_dim_reduction(self):
- """Return a copy of this network without the dim_reduction layer"""
- meta = {**self.meta, "outputdim": self.meta['backbone_dim']}
- return self.__class__(self.features, self.attention, self.smoothing, None, meta, self.runtime)
-
- def copy_with_runtime(self, runtime):
- """Return a copy of this network with a different runtime dict"""
- return self.__class__(self.features, self.attention, self.smoothing, self.dim_reduction, self.meta, runtime)
-
-
- # Methods of nn.Module
-
- @staticmethod
- def _set_batchnorm_eval(mod):
- if mod.__class__.__name__.find('BatchNorm') != -1:
- # freeze running mean and std
- mod.eval()
-
- def train(self, mode=True):
- res = super().train(mode)
- if mode:
- self.apply(HOWNet._set_batchnorm_eval)
- return res
-
- def parameter_groups(self, optimizer_opts):
- """Return torch parameter groups"""
- layers = [self.features, self.attention, self.smoothing]
- parameters = [{'params': x.parameters()} for x in layers if x is not None]
- if self.dim_reduction:
- # Do not update dimensionality reduction layer
- parameters.append({'params': self.dim_reduction.parameters(), 'lr': 0.0})
- return parameters
-
-
- # Forward
- def features_attentions(self, x, *, scales):
- """Return a tuple (features, attentions) where each is a list containing requested scales"""
- feats = []
- masks = []
- for s in scales:
- xs = nn.functional.interpolate(x, scale_factor=s, mode='bilinear', align_corners=False)
- o = self.features(xs)
- m = self.attention(o)
- if self.smoothing:
- o = self.smoothing(o)
- if self.dim_reduction:
- o = self.dim_reduction(o)
- feats.append(o)
- masks.append(m)
-
- # Normalize max weight to 1
- mx = max(x.max() for x in masks)
- masks = [x/mx for x in masks]
-
- return feats, masks
-
- def __repr__(self):
- meta_str = "\n".join(" %s: %s" % x for x in self.meta.items())
- return "%s(meta={\n%s\n})" % (self.__class__.__name__, meta_str)
-
- def meta_repr(self):
- """Return meta representation"""
- return str(self)
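-
-
-if __name__ == "__main__":
-    # Minimal usage sketch; the backbone and attention modules below are made-up
-    # stand-ins (a truncated ResNet-18 and a 1x1 conv) just to show how the pieces
-    # plug together.
-    backbone = nn.Sequential(*list(torchvision.models.resnet18().children())[:-2])
-    attention = nn.Sequential(nn.Conv2d(512, 1, kernel_size=1), nn.Softplus())
-    meta = {"backbone_dim": 512, "outputdim": 512}
-    runtime = {"scales": [1.0, 0.707, 0.5]}
-    net = HOWNet(backbone, attention, smoothing=None, dim_reduction=None,
-                 meta=meta, runtime=runtime).eval()
-
-    with torch.no_grad():
-        feats, masks = net.features_attentions(torch.randn(1, 3, 224, 224),
-                                                scales=runtime["scales"])
-    print([f.shape for f in feats], [m.shape for m in masks])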
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Omar-Series-Mbc-English-Subtitles-PORTABLE-Download-For-15.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Omar-Series-Mbc-English-Subtitles-PORTABLE-Download-For-15.md
deleted file mode 100644
index e4ebdb2ee5b01c5bf91da4f6db604e7f974b5e3d..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Omar-Series-Mbc-English-Subtitles-PORTABLE-Download-For-15.md
+++ /dev/null
@@ -1,59 +0,0 @@
-## Omar Series Mbc English Subtitles Download For 15
-
-
-
- 
-
-
-
-**Click Here [https://kneedacexbrew.blogspot.com/?d=2tw0Ec](https://kneedacexbrew.blogspot.com/?d=2tw0Ec)**
-
-
-
- Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Omar Series Mbc English Subtitles Download For 15":
-
-# How to Download Omar Series MBC English Subtitles for Episode 15
-
-
-
-If you are looking for a way to download Omar Series MBC English subtitles for episode 15, you have come to the right place. Omar Series is a historical drama series that depicts the life of Omar Ibn Al-Khattab (may Allah be pleased with him), the second caliph of Islam, and his companions. The series is produced by MBC and has 30 episodes in total.
-
-
-
-Omar Series is one of the most popular and acclaimed Islamic series in the world, as it showcases the values of equality, justice, tolerance, wisdom, compassion and leadership that made Omar Ibn Al-Khattab (may Allah be pleased with him) one of the most revered men in our tradition. The series is also praised for its authenticity, accuracy and high production quality.
-
-
-
-However, Omar Series is originally in Arabic language, and many viewers who do not understand Arabic may want to watch it with English subtitles. Fortunately, there are some websites that offer Omar Series MBC English subtitles for download, such as Archive.org[^1^], Children of the Ummah[^2^] and IslamiCity[^3^]. These websites provide free and easy access to Omar Series MBC English subtitles for all episodes, including episode 15.
-
-
-
-To download Omar Series MBC English subtitles for episode 15, you can follow these simple steps:
-
-
-
-1. Go to one of the websites that offer Omar Series MBC English subtitles for download, such as Archive.org[^1^], Children of the Ummah[^2^] or IslamiCity[^3^].
-
-2. Find the episode 15 of Omar Series on the website. You can use the search function or browse through the categories.
-
-3. Click on the episode 15 of Omar Series to open it. You will see a video player and a download button below it.
-
-4. Click on the download button to download the video file of episode 15 of Omar Series. You may need to choose a format and a quality before downloading.
-
-5. After downloading the video file of episode 15 of Omar Series, you will also need to download the subtitle file of episode 15 of Omar Series. The subtitle file is usually in .srt format and has the same name as the video file.
-
-6. You can find the subtitle file of episode 15 of Omar Series on the same website where you downloaded the video file. It may be under a separate section or a link below the video player.
-
-7. Click on the subtitle file of episode 15 of Omar Series to download it. You may need to right-click and choose "Save link as" or "Save target as" depending on your browser.
-
-8. After downloading both the video file and the subtitle file of episode 15 of Omar Series, you can watch it on your computer or device using a media player that supports subtitles, such as VLC Media Player or MX Player.
-
-9. Make sure that both the video file and the subtitle file of episode 15 of Omar Series have the same name and are in the same folder. Then, open the video file with your media player and enable subtitles from the settings or menu.
-
-10. Enjoy watching episode 15 of Omar Series with English subtitles!
-
-
-
-We hope this article has helped you to download Omar Series MBC English subtitles for episode 15. If you have any questions or feedback, please leave a comment below. Thank you for reading!
-
- dfd1c89656
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Red Gate Sql Compare 11 BEST Crack.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Red Gate Sql Compare 11 BEST Crack.md
deleted file mode 100644
index 58343d71fe91008f43b8a50355a8ffaa5ced77f9..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Red Gate Sql Compare 11 BEST Crack.md
+++ /dev/null
@@ -1,202 +0,0 @@
-
-
Red Gate Sql Compare 11 Crack: What Is It and How to Avoid It?
-
If you work with SQL Server databases, you may have heard of Red Gate Sql Compare 11, a powerful tool that helps you compare and deploy SQL Server schemas quickly and accurately. But you may also have come across some websites or files that claim to offer a crack for this software, which supposedly allows you to use it for free or without a license. In this article, we will explain what a crack is, why it is risky and illegal to use one, and how you can avoid it. We will also show you some alternatives to using a crack for Red Gate Sql Compare 11, so you can enjoy its features and benefits without compromising your security or integrity.
Before we dive into the details of what a crack is and how to avoid it, let's first understand what Red Gate Sql Compare 11 is and what it does.
-
What is Red Gate Sql Compare 11 and what does it do?
-
Red Gate Sql Compare 11 is a software product developed by Red Gate Software, a leading provider of tools for working with databases and data. It is part of the SQL Toolbelt, a suite of tools that covers every aspect of SQL Server development and administration.
-
Red Gate Sql Compare 11 is designed to help you compare and deploy SQL Server database schemas, which are the structures that define the tables, views, stored procedures, functions, triggers, indexes, constraints, and other objects in your database. By comparing two database schemas, you can see what has changed between them, down to individual lines of SQL code. You can also generate scripts to synchronize the schemas, either by applying the changes directly to the target database or by saving them as files for later use. You can also compare backups, scripts folders, snapshots, or source control projects with live databases or vice versa.
-
Red Gate Sql Compare 11 is useful for various scenarios, such as:
-
-
Migrating database changes from development to testing to production environments
-
Auditing database changes and ensuring compliance with standards
-
Resolving database conflicts and errors
-
Updating legacy databases or integrating new features
-
Documenting database changes and creating reports
Red Gate Sql Compare 11 is compatible with all versions of SQL Server from 2005 to 2019, as well as Azure SQL Database and SQL Server on Amazon RDS. It supports Windows authentication and SQL Server authentication, as well as encrypted connections. It also integrates with popular source control systems, such as Git, Subversion, Team Foundation Server, and Mercurial.
-
-
What is a crack and why do some people use it?
-
A crack is a type of software that modifies or bypasses the security features of another software, such as a license key, a serial number, a trial period, or a digital signature. A crack is usually created by hackers or crackers who reverse-engineer the original software and alter its code or behavior. A crack is often distributed as a file or a program that can be downloaded from the internet or shared through peer-to-peer networks.
-
Some people use a crack for various reasons, such as:
-
-
They want to use a software for free or without paying for a license
-
They want to extend the trial period or remove the limitations of a software
-
They want to access features or functions that are not available in their version of a software
-
They want to bypass the activation or registration process of a software
-
They want to experiment with hacking or cracking techniques
-
-
However, using a crack is not only unethical and illegal, but also risky and dangerous. In the next section, we will explain what are the risks and consequences of using a crack.
-
What are the risks and consequences of using a crack?
-
Using a crack for Red Gate Sql Compare 11 or any other software can expose you to various risks and consequences, such as:
-
-
Legal issues: Using a crack violates the terms and conditions of the software license agreement, which is a legally binding contract between you and the software vendor. By using a crack, you are infringing the intellectual property rights of the software vendor and potentially committing software piracy, which is a criminal offense in many countries. You could face legal actions, such as lawsuits, fines, or even imprisonment, from the software vendor or the authorities.
-
Security issues: Using a crack can compromise the security of your computer and your data. A crack can contain malware, such as viruses, worms, trojans, spyware, ransomware, or keyloggers, that can infect your computer and cause damage, such as deleting or encrypting your files, stealing your personal or financial information, logging your keystrokes, monitoring your online activity, or taking control of your computer. A crack can also create vulnerabilities in your computer that can be exploited by hackers or cybercriminals.
-
Quality issues: Using a crack can affect the quality and performance of the software. A crack can introduce errors, bugs, glitches, or conflicts in the software that can cause it to malfunction, crash, freeze, or slow down. A crack can also prevent you from receiving updates, patches, fixes, or support from the software vendor. A crack can also interfere with other programs or processes on your computer and cause compatibility or stability issues.
-
Ethical issues: Using a crack can harm the reputation and revenue of the software vendor and the industry. A crack can reduce the incentive and motivation of the software vendor to invest in research and development, innovation, improvement, or customer service. A crack can also discourage other potential customers from buying or using the software legitimately. A crack can also undermine the trust and respect between the software vendor and the users.
-
-
As you can see, using a crack for Red Gate Sql Compare 11 is not worth it. You are putting yourself at risk of legal troubles, security breaches, quality problems, and ethical dilemmas. You are also missing out on the features and benefits of Red Gate Sql Compare 11 that we will discuss in the next section.
-
Features and Benefits of Red Gate Sql Compare 11
-
Now that we have explained what a crack is and how to avoid it, let's focus on the positive side of Red Gate Sql Compare 11. In this section, we will show you how Red Gate Sql Compare 11 can help you compare and deploy SQL Server schemas easily and efficiently. We will also highlight some of the main features and benefits of Red Gate Sql Compare 11 that make it stand out from other similar tools. Finally, we will tell you how you can get a free trial or buy a license for Red Gate Sql Compare 11.
-
How does Red Gate Sql Compare 11 help you compare and deploy SQL Server schemas?
-
Red Gate Sql
Red Gate Sql Compare 11 helps you compare and deploy SQL Server schemas in a few simple steps:
-
-
Select the source and target databases that you want to compare. You can choose from live databases, backups, scripts folders, snapshots, or source control projects.
-
Click on the Compare button to start the comparison. Red Gate Sql Compare 11 will analyze the schemas of both databases and show you the differences in a clear and intuitive interface.
-
Review the differences and select the ones that you want to synchronize. You can filter, group, sort, or exclude the differences according to various criteria. You can also edit the SQL scripts that are generated by Red Gate Sql Compare 11 to customize the synchronization.
-
Click on the Deploy button to execute the synchronization. Red Gate Sql Compare 11 will apply the changes to the target database or save them as files for later use. You can also generate a deployment report that summarizes the changes and the results.
-
-
With Red Gate Sql Compare 11, you can compare and deploy SQL Server schemas in minutes, instead of hours or days. You can also automate the process by using the command line interface or the API of Red Gate Sql Compare 11. You can also integrate Red Gate Sql Compare 11 with your existing tools and workflows, such as Visual Studio, SSMS, PowerShell, or TeamCity.
-
What are the main features and benefits of Red Gate Sql Compare 11?
-
Red Gate Sql Compare 11 has many features and benefits that make it a superior tool for comparing and deploying SQL Server schemas. Here are some of them:
-
-
Accuracy: Red Gate Sql Compare 11 compares SQL Server schemas with high precision and reliability. It detects every change, even minor ones, and shows you exactly what has changed and where. It also handles complex scenarios, such as renamed objects, dependencies, permissions, collations, extended properties, CLR objects, and more.
-
Speed: Red Gate Sql Compare 11 compares SQL Server schemas with high performance and efficiency. It uses a smart comparison engine that optimizes the comparison process and reduces the load on your servers. It also allows you to compare multiple databases or objects at once, or run comparisons in parallel.
-
Flexibility: Red Gate Sql Compare 11 compares SQL Server schemas with high versatility and adaptability. It supports all versions of SQL Server from 2005 to 2019, as well as Azure SQL Database and SQL Server on Amazon RDS. It also supports various types of sources and targets, such as live databases, backups, scripts folders, snapshots, or source control projects.
-
Simplicity: Red Gate Sql Compare 11 compares SQL Server schemas with high ease of use and convenience. It has a user-friendly interface that guides you through the comparison and deployment process. It also has a comprehensive documentation and help system that provides you with tips, tutorials, videos, examples, and support.
-
Security: Red Gate Sql Compare 11 compares SQL Server schemas with high security and safety. It uses encrypted connections and authentication methods to protect your data and credentials. It also allows you to backup your databases before deploying changes, or create rollback scripts in case of errors or failures.
-
-
With Red Gate Sql Compare 11, you can enjoy all these features and benefits without worrying about cracks or malware. You can also get a free trial or buy a license for Red Gate Sql Compare 11 in the next section.
-
How can you get a free trial or buy a license for Red Gate Sql Compare 11?
-
If you want to try Red Gate Sql Compare 11 for yourself, you can download a free trial from here. The free trial lasts for 14 days and gives you access to all the features and functions of Red Gate Sql Compare 11. You can also extend the trial period by contacting Red Gate Software.
-
If you want to buy a license for Red Gate Sql Compare 11, you can do so from here. The price of a license depends on the number of users and servers that you need. You can also get discounts for bulk purchases or renewals. A license for Red Gate Sql Compare 11 includes:
-
-
A perpetual license key that allows you to use Red Gate Sql Compare 11 indefinitely
-
A one-year support and upgrades contract that gives you access to updates, patches, fixes, new features, and customer support
-
One year of access to Redgate University, an online learning platform that offers courses, certifications, webinars, and events on Red Gate Software products and topics
-
One year of access to the SQL Compare Learning Hub, a collection of resources, articles, videos, and forums that help you learn and master Red Gate Sql Compare 11
-
-
By buying a license for Red Gate Sql Compare 11, you are supporting the development and improvement of this software, as well as the software industry in general. You are also ensuring that you get the best quality and service from Red Gate Software.
-
How to Detect and Remove a Crack for Red Gate Sql Compare 11
-
In case you have already installed or used a crack for Red Gate Sql Compare 11, or you suspect that someone else has done so on your computer, you need to take immediate action to detect and remove it. In this section, we will show you how to do that.
-
How can you tell if you have a crack for Red Gate Sql Compare 11 installed on your computer?
-
There are some signs that can indicate that you have a crack for Red Gate Sql Compare 11 installed on your computer, such as:
-
-
You see a message or a pop-up window that says something like "Red Gate Sql Compare 11 cracked by XYZ" or "Red Gate Sql Compare 11 activated by ABC" when you launch the software.
-
You see a file or a folder that has a name similar to "Red Gate Sql Compare 11 crack", "Red Gate Sql Compare 11 keygen", "Red Gate Sql Compare 11 patch", or "Red Gate Sql Compare 11 activator" on your computer.
-
You see an icon or a shortcut that has a name or an image related to cracking or hacking on your desktop or in your start menu.
-
You notice that the software does not ask you for a license key or a serial number when you install or run it.
-
You notice that the software does not connect to the internet or check for updates when you use it.
-
You notice that the software has some features or functions that are not supposed to be available in your version or edition of it.
-
You notice that the software has some errors, bugs, glitches, or conflicts that affect its functionality or performance.
-
You notice that your computer has some problems, such as slow speed, frequent crashes, strange behavior, or unusual activity.
-
You notice that your antivirus or firewall software detects or blocks some suspicious files or programs related to the software.
-
-
If you notice any of these signs, you should assume that you have a crack for Red Gate Sql Compare 11 installed on your computer and proceed to remove it as soon as possible.
-
How can you remove a crack for Red Gate Sql Compare 11 from your computer?
-
To remove a crack for Red Gate Sql Compare 11 from your computer, you need to follow these steps:
-
-
Uninstall Red Gate Sql Compare 11 from your computer. You can do this by using the Windows Control Panel or the uninstaller program that comes with the software. Make sure to delete all the files and folders associated with the software from your computer.
-
Delete any files or folders that are related to the crack from your computer. You can use the Windows File Explorer or a third-party file manager program to search for and delete them. You can also use the Windows Disk Cleanup tool or a third-party disk cleaner program to remove any temporary or junk files from your computer.
-
Scan your computer with an antivirus or anti-malware program. You can use the Windows Defender program that comes with Windows, or a third-party antivirus or anti-malware program of your choice. Make sure to update the program and its database before running the scan. Follow the instructions of the program to detect and remove any malware that may have been installed by the crack.
-
Restore your computer to a previous state. You can use the Windows System Restore tool or a third-party backup and restore program to restore your computer to a state before you installed or used the crack. This can help you undo any changes or damage that may have been caused by the crack.
-
Reinstall Red Gate Sql Compare 11 on your computer. You can download the latest version of the software from here. Make sure to use a valid license key or serial number when installing the software. You can also contact Red Gate Software for assistance if you have any issues with reinstalling the software.
-
-
By following these steps, you can remove a crack for Red Gate Sql Compare 11 from your computer and restore its normal functioning. You can also prevent future infections by following some best practices in the next section.
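-
If you prefer to script the file search in step 2 rather than hunt through folders by hand, a small sketch like the one below can list files whose names look crack-related so that you can review them manually. The keyword list and the starting folder are illustrative only; this is not a definitive detection method, and nothing should be deleted without checking it first.

```python
# Sketch: list files whose names suggest leftover crack tooling so they can be
# reviewed manually. The keywords and the starting folder are examples only.
import os

KEYWORDS = ("crack", "keygen", "activator")

def find_suspicious_files(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if any(keyword in name.lower() for keyword in KEYWORDS):
                print(os.path.join(dirpath, name))

if __name__ == "__main__":
    find_suspicious_files(r"C:\Program Files (x86)")
```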
-
How can you protect your computer from cracks and malware in the future?
-
To protect your computer from cracks and malware in the future, you need to follow some best practices, such as:
-
-
Avoid downloading or using cracks or pirated software: This is the most important and effective way to prevent cracks and malware from infecting your computer. Cracks and pirated software are illegal, unethical, and risky. They can expose you to various problems, such as legal troubles, security breaches, quality issues, and ethical dilemmas. They can also harm the software vendor and the industry. Instead of using cracks or pirated software, you should use legitimate software that is licensed, updated, and supported by the software vendor.
-
Use a reputable antivirus or anti-malware program: This is another essential way to protect your computer from cracks and malware. An antivirus or anti-malware program can help you detect and remove any malware that may have been installed by a crack or other sources. You should use a reputable antivirus or anti-malware program that has good reviews, ratings, and recommendations from experts and users. You should also update the program and its database regularly to keep up with the latest threats and solutions.
-
Use a firewall or a VPN: This is another useful way to protect your computer from cracks and malware. A firewall can block unwanted incoming and outgoing traffic on your computer, and a VPN can encrypt your traffic so that it cannot be easily intercepted. Together, they make it harder for hackers or cybercriminals to access your computer or data, or to inject malware into your system. Choose a firewall or VPN with strong security features, and configure it according to your needs and preferences.
-
Use common sense and caution: This is another important way to protect your computer from cracks and malware. You should use common sense and caution when browsing the internet, downloading files, opening attachments, clicking links, or entering information. You should avoid visiting suspicious or untrustworthy websites, downloading files from unknown or unreliable sources, opening attachments from unfamiliar or unsolicited emails, clicking links that look suspicious or misleading, or entering information on websites that are not secure or verified. You should also check the reputation and credibility of the websites, files, emails, links, or information that you encounter online.
-
-
By following these best practices, you can protect your computer from cracks and malware in the future. You can also enjoy the benefits of using legitimate software, such as Red Gate Sql Compare 11.
-
Alternatives to Using a Crack for Red Gate Sql Compare 11
-
If you are looking for alternatives to using a crack for Red Gate Sql Compare 11, you have some options to choose from. In this section, we will show you some free or low-cost alternatives to using a crack for Red Gate Sql Compare 11. We will also compare their advantages and disadvantages with Red Gate Sql Compare 11. Finally, we will help you choose the best alternative for your needs and budget.
-
What are some free or low-cost alternatives to using a crack for Red Gate Sql Compare 11?
-
Some of the free or low-cost alternatives to using a crack for Red Gate Sql Compare 11 are:
-
ApexSQL Diff, a commercial schema comparison tool for SQL Server
-
dbForge Schema Compare for SQL Server, a commercial schema comparison tool for SQL Server
-
IDERA DB Schema Compare, a free tool that compares live SQL Server databases
-
SQL Workbench/J Database Comparison Tool, a free tool that compares database schemas for various DBMSs, including SQL Server
-
SchemaCrawler, a free tool that compares database schemas for various DBMSs, including SQL Server
-
These are some of the free or low-cost alternatives to using a crack for Red Gate Sql Compare 11 that you can consider. However, they are not necessarily equivalent or superior to Red Gate Sql Compare 11. In the next section, we will compare their advantages and disadvantages with Red Gate Sql Compare 11.
-
What are some advantages and disadvantages of using these alternatives?
-
Each of these alternatives has its own advantages and disadvantages when compared to Red Gate Sql Compare 11. Here are some of them:
-
-
ApexSQL Diff
Advantages: Supports SQL Server 2019 and Azure SQL Database, various types of sources and targets, a command line interface and API, integration with source control systems, backup comparison and synchronization, and data comparison and synchronization.
Disadvantages: More expensive than Red Gate Sql Compare 11; less user-friendly and intuitive, less accurate and reliable, less flexible and adaptable, and less secure and safe than Red Gate Sql Compare 11.
-
dbForge Schema Compare for SQL Server
Advantages: Supports SQL Server 2019 and Azure SQL Database, various types of sources and targets, a command line interface and API, integration with source control systems, backup comparison and synchronization, and data comparison and synchronization.
Disadvantages: More expensive than Red Gate Sql Compare 11; less user-friendly and intuitive, less accurate and reliable, less flexible and adaptable, and less secure and safe than Red Gate Sql Compare 11.
-
IDERA DB Schema Compare
Advantages: Free to use; supports SQL Server 2019 and Azure SQL Database; supports live database comparison only.
Disadvantages: Less user-friendly and intuitive, less accurate and reliable, less flexible and adaptable, and less secure and safe than Red Gate Sql Compare 11; no command line interface or API; no integration with source control systems; no backup comparison or synchronization; no data comparison or synchronization.
-
SQL Workbench/J Database Comparison Tool
Advantages: Free to use; supports various DBMSs, including SQL Server; supports live database comparison only.
Disadvantages: Less user-friendly and intuitive, less accurate and reliable, less flexible and adaptable, and less secure and safe than Red Gate Sql Compare 11; no command line interface or API; no integration with source control systems; no backup comparison or synchronization; no data comparison or synchronization.
-
SchemaCrawler
Advantages: Free to use; supports various DBMSs, including SQL Server; supports live database comparison only.
Disadvantages: Less user-friendly and intuitive, less accurate and reliable, less flexible and adaptable, and less secure and safe than Red Gate Sql Compare 11; no command line interface or API; no integration with source control systems; no backup comparison or synchronization; no data comparison or synchronization.
-
As you can see, none of these alternatives matches the quality and performance of Red Gate Sql Compare 11: they have fewer features, higher prices, lower reliability, or higher risks. We therefore recommend sticking with Red Gate Sql Compare 11 as your tool of choice for comparing and deploying SQL Server schemas. If you still want to consider an alternative, the next section will help you choose the best one for your needs and budget.
-
How can you choose the best alternative for your needs and budget?
-
If you still want to explore other options besides Red Gate Sql Compare 11, you need to consider some factors that can help you choose the best alternative for your needs and budget. Here are some of them:
-
-
Your requirements: You need to define your requirements for comparing and deploying SQL Server schemas, such as the frequency, complexity, scope, and purpose of your comparisons and deployments. You also need to consider the features and functions that you need from a tool, such as the types of sources and targets, the level of accuracy and reliability, the integration with other tools and systems, the automation and customization options, the backup and rollback capabilities, the data comparison and synchronization features, and more.
-
Your budget: You need to determine your budget for buying or using a tool for comparing and deploying SQL Server schemas. You also need to consider the total cost of ownership of a tool, which includes not only the initial price or license fee, but also the maintenance, support, upgrade, training, and security costs.
-
Your preferences: You need to consider your preferences for using a tool for comparing and deploying SQL Server schemas, such as the ease of use, convenience, performance, security, quality, reputation, and customer service of a tool. You also need to consider the feedback and reviews of other users who have used or are using a tool.
-
-
By considering these factors, you can narrow down your choices and select the best alternative for your needs and budget. However, we still believe that Red Gate Sql Compare 11 is the best option for most users who want to compare and deploy SQL Server schemas easily and efficiently.
-
Conclusion
-
In this article, we have discussed what a crack is and how to avoid it. We have also shown you how Red Gate Sql Compare 11 can help you compare and deploy SQL Server schemas quickly and accurately. We have also highlighted some of the main features and benefits of Red Gate Sql Compare 11 that make it stand out from other similar tools. Finally, we have shown you how you can get a free trial or buy a license for Red Gate Sql Compare 11.
-
We hope that this article has helped you understand why using a crack for Red Gate Sql Compare 11 is not worth it. You are putting yourself at risk of legal troubles, security breaches, quality problems, and ethical dilemmas. You are also missing out on the features and benefits of Red Gate Sql Compare 11 that can help you compare and deploy SQL Server schemas easily and efficiently, and you are harming the software vendor and the industry that provide you with valuable tools and services.
-
Instead of using a crack for Red Gate Sql Compare 11, you should use legitimate software that is licensed, updated, and supported by the software vendor. You can also try some free or low-cost alternatives to using a crack for Red Gate Sql Compare 11, but they may not be as good or as reliable as Red Gate Sql Compare 11. You should also follow some best practices to protect your computer from cracks and malware in the future.
-
By doing so, you can enjoy the advantages of using Red Gate Sql Compare 11 without compromising your security or integrity. You can also support the development and improvement of this software, as well as the software industry in general. You can also build trust and respect between yourself and the software vendor and the users.
-
If you want to learn more about Red Gate Sql Compare 11, you can visit here. If you want to download a free trial or buy a license for Red Gate Sql Compare 11, you can visit here. If you have any questions or feedback about Red Gate Sql Compare 11, you can contact Red Gate Software here.
-
Thank you for reading this article. We hope that you have found it useful and informative. Please share it with your friends and colleagues who may be interested in this topic. Please also leave us a comment below and let us know what you think about this article and Red Gate Sql Compare 11.
-
FAQs
-
Here are some frequently asked questions about Red Gate Sql Compare 11 and cracks:
-
What is the difference between a crack and a keygen?
-
A crack is a type of software that modifies or bypasses the security features of another software, such as a license key, a serial number, a trial period, or a digital signature. A keygen is a type of software that generates valid license keys or serial numbers for another software. Both are illegal and risky to use, as they violate the terms and conditions of the software license agreement and may contain malware.
-
Is it illegal to use a crack for Red Gate Sql Compare 11?
-
Yes, it is illegal to use a crack for Red Gate Sql Compare 11, as it violates the terms and conditions of the software license agreement, which is a legally binding contract between you and the software vendor. By using a crack, you are infringing the intellectual property rights of the software vendor and potentially committing software piracy, which is a criminal offense in many countries. You could face legal actions, such as lawsuits, fines, or even imprisonment, from the software vendor or the authorities.
-
Can a crack for Red Gate Sql Compare 11 damage my database or data?
-
Yes, a crack for Red Gate Sql Compare 11 can damage your database or data, as it can introduce errors, bugs, glitches, or conflicts in the software that can cause it to malfunction, crash, freeze, or slow down. A crack can also prevent you from receiving updates, patches, fixes, or support from the software vendor. It can interfere with other programs or processes on your computer and cause compatibility or stability issues. Finally, a crack can contain malware (such as viruses, worms, trojans, spyware, ransomware, or keyloggers) that can infect your computer and cause damage, such as deleting or encrypting your files, stealing your personal or financial information, logging your keystrokes, monitoring your online activity, or taking control of your computer.
-
How can I contact Red Gate Software for support or feedback?
-
You can contact Red Gate Software for support or feedback by visiting here. You can also find various resources, such as documentation, help articles, videos, forums, webinars, and events on their website. You can also follow them on social media platforms, such as Twitter, Facebook, LinkedIn, and YouTube.
-
Where can I find more information about Red Gate Sql Compare 11?
-
You can find more information about Red Gate Sql Compare 11 by visiting here. You can also download a free trial or buy a license for Red Gate Sql Compare 11 by visiting here. You can also access Redgate University and SQL Compare Learning Hub by visiting here and here respectively.
-
-
\ No newline at end of file
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tools/lightning_train_net.py b/spaces/nikitaPDL2023/assignment4/detectron2/tools/lightning_train_net.py
deleted file mode 100644
index 7a8c5d851649d05710b128b13d1d339fb0b7b125..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/tools/lightning_train_net.py
+++ /dev/null
@@ -1,239 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Lightning Trainer should be considered beta at this point
-# We have confirmed that training and validation run correctly and produce correct results
-# Depending on how you launch the trainer, there are issues with processes terminating correctly
-# This module is still dependent on D2 logging, but could be transferred to use Lightning logging
-
-import logging
-import os
-import time
-import weakref
-from collections import OrderedDict
-from typing import Any, Dict, List
-import pytorch_lightning as pl # type: ignore
-from pytorch_lightning import LightningDataModule, LightningModule
-
-import detectron2.utils.comm as comm
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import get_cfg
-from detectron2.data import build_detection_test_loader, build_detection_train_loader
-from detectron2.engine import (
- DefaultTrainer,
- SimpleTrainer,
- default_argument_parser,
- default_setup,
- default_writers,
- hooks,
-)
-from detectron2.evaluation import print_csv_format
-from detectron2.evaluation.testing import flatten_results_dict
-from detectron2.modeling import build_model
-from detectron2.solver import build_lr_scheduler, build_optimizer
-from detectron2.utils.events import EventStorage
-from detectron2.utils.logger import setup_logger
-
-from train_net import build_evaluator
-
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger("detectron2")
-
-
-class TrainingModule(LightningModule):
- def __init__(self, cfg):
- super().__init__()
- if not logger.isEnabledFor(logging.INFO): # setup_logger is not called for d2
- setup_logger()
- self.cfg = DefaultTrainer.auto_scale_workers(cfg, comm.get_world_size())
- self.storage: EventStorage = None
- self.model = build_model(self.cfg)
-
- self.start_iter = 0
- self.max_iter = cfg.SOLVER.MAX_ITER
-
- def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None:
- checkpoint["iteration"] = self.storage.iter
-
- def on_load_checkpoint(self, checkpointed_state: Dict[str, Any]) -> None:
- self.start_iter = checkpointed_state["iteration"]
- self.storage.iter = self.start_iter
-
- def setup(self, stage: str):
- if self.cfg.MODEL.WEIGHTS:
- self.checkpointer = DetectionCheckpointer(
- # Assume you want to save checkpoints together with logs/statistics
- self.model,
- self.cfg.OUTPUT_DIR,
- )
- logger.info(f"Load model weights from checkpoint: {self.cfg.MODEL.WEIGHTS}.")
- # Only load weights, use lightning checkpointing if you want to resume
- self.checkpointer.load(self.cfg.MODEL.WEIGHTS)
-
- self.iteration_timer = hooks.IterationTimer()
- self.iteration_timer.before_train()
- self.data_start = time.perf_counter()
- self.writers = None
-
- def training_step(self, batch, batch_idx):
- data_time = time.perf_counter() - self.data_start
- # Need to manually enter/exit since trainer may launch processes
- # This ideally belongs in setup, but setup seems to run before processes are spawned
- if self.storage is None:
- self.storage = EventStorage(0)
- self.storage.__enter__()
- self.iteration_timer.trainer = weakref.proxy(self)
- self.iteration_timer.before_step()
- self.writers = (
- default_writers(self.cfg.OUTPUT_DIR, self.max_iter)
- if comm.is_main_process()
- else {}
- )
-
- loss_dict = self.model(batch)
- SimpleTrainer.write_metrics(loss_dict, data_time)
-
- opt = self.optimizers()
- self.storage.put_scalar(
- "lr", opt.param_groups[self._best_param_group_id]["lr"], smoothing_hint=False
- )
- self.iteration_timer.after_step()
- self.storage.step()
- # A little odd to put before step here, but it's the best way to get a proper timing
- self.iteration_timer.before_step()
-
- if self.storage.iter % 20 == 0:
- for writer in self.writers:
- writer.write()
- return sum(loss_dict.values())
-
- def training_step_end(self, training_step_outputs):
- self.data_start = time.perf_counter()
- return training_step_outputs
-
- def training_epoch_end(self, training_step_outputs):
- self.iteration_timer.after_train()
- if comm.is_main_process():
- self.checkpointer.save("model_final")
- for writer in self.writers:
- writer.write()
- writer.close()
- self.storage.__exit__(None, None, None)
-
- def _process_dataset_evaluation_results(self) -> OrderedDict:
- results = OrderedDict()
- for idx, dataset_name in enumerate(self.cfg.DATASETS.TEST):
- results[dataset_name] = self._evaluators[idx].evaluate()
- if comm.is_main_process():
- print_csv_format(results[dataset_name])
-
- if len(results) == 1:
- results = list(results.values())[0]
- return results
-
- def _reset_dataset_evaluators(self):
- self._evaluators = []
- for dataset_name in self.cfg.DATASETS.TEST:
- evaluator = build_evaluator(self.cfg, dataset_name)
- evaluator.reset()
- self._evaluators.append(evaluator)
-
- def on_validation_epoch_start(self):
- self._reset_dataset_evaluators()
-
- def validation_epoch_end(self, _outputs):
- results = self._process_dataset_evaluation_results()
-
- flattened_results = flatten_results_dict(results)
- for k, v in flattened_results.items():
- try:
- v = float(v)
- except Exception as e:
- raise ValueError(
- "[EvalHook] eval_function should return a nested dict of float. "
- "Got '{}: {}' instead.".format(k, v)
- ) from e
- self.storage.put_scalars(**flattened_results, smoothing_hint=False)
-
- def validation_step(self, batch, batch_idx: int, dataloader_idx: int = 0) -> None:
- if not isinstance(batch, List):
- batch = [batch]
- outputs = self.model(batch)
- self._evaluators[dataloader_idx].process(batch, outputs)
-
- def configure_optimizers(self):
- optimizer = build_optimizer(self.cfg, self.model)
- self._best_param_group_id = hooks.LRScheduler.get_best_param_group_id(optimizer)
- scheduler = build_lr_scheduler(self.cfg, optimizer)
- return [optimizer], [{"scheduler": scheduler, "interval": "step"}]
-
-
-class DataModule(LightningDataModule):
- def __init__(self, cfg):
- super().__init__()
- self.cfg = DefaultTrainer.auto_scale_workers(cfg, comm.get_world_size())
-
- def train_dataloader(self):
- return build_detection_train_loader(self.cfg)
-
- def val_dataloader(self):
- dataloaders = []
- for dataset_name in self.cfg.DATASETS.TEST:
- dataloaders.append(build_detection_test_loader(self.cfg, dataset_name))
- return dataloaders
-
-
-def main(args):
- cfg = setup(args)
- train(cfg, args)
-
-
-def train(cfg, args):
- trainer_params = {
- # training loop is bounded by max steps, use a large max_epochs to make
- # sure max_steps is met first
- "max_epochs": 10**8,
- "max_steps": cfg.SOLVER.MAX_ITER,
- "val_check_interval": cfg.TEST.EVAL_PERIOD if cfg.TEST.EVAL_PERIOD > 0 else 10**8,
- "num_nodes": args.num_machines,
- "gpus": args.num_gpus,
- "num_sanity_val_steps": 0,
- }
- if cfg.SOLVER.AMP.ENABLED:
- trainer_params["precision"] = 16
-
- last_checkpoint = os.path.join(cfg.OUTPUT_DIR, "last.ckpt")
- if args.resume:
- # resume training from checkpoint
- trainer_params["resume_from_checkpoint"] = last_checkpoint
- logger.info(f"Resuming training from checkpoint: {last_checkpoint}.")
-
- trainer = pl.Trainer(**trainer_params)
- logger.info(f"start to train with {args.num_machines} nodes and {args.num_gpus} GPUs")
-
- module = TrainingModule(cfg)
- data_module = DataModule(cfg)
- if args.eval_only:
- logger.info("Running inference")
- trainer.validate(module, data_module)
- else:
- logger.info("Running training")
- trainer.fit(module, data_module)
-
-
-def setup(args):
- """
- Create configs and perform basic setups.
- """
- cfg = get_cfg()
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- default_setup(cfg, args)
- return cfg
-
-
-if __name__ == "__main__":
- parser = default_argument_parser()
- args = parser.parse_args()
- logger.info("Command Line Args: %s", args)
- main(args)
diff --git a/spaces/niks-salodkar/Age-Prediction-Demo/README.md b/spaces/niks-salodkar/Age-Prediction-Demo/README.md
deleted file mode 100644
index 5ad84ac094d98269efdd72863312dd55a207de9c..0000000000000000000000000000000000000000
--- a/spaces/niks-salodkar/Age-Prediction-Demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Age Prediction Demo
-emoji: 📉
-colorFrom: blue
-colorTo: red
-sdk: streamlit
-sdk_version: 1.15.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/start_up/start_up_rbs.py b/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/start_up/start_up_rbs.py
deleted file mode 100644
index e8aa879b42649117c4309023ab9a3f141315fd5e..0000000000000000000000000000000000000000
--- a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/start_up/start_up_rbs.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import re
-
-from urlextract import URLExtract
-
-from src.start_up.start_up_bad_words_rule import create_bad_word_rule
-from src.config import config
-from src.rule_based_system.HTMLRule import HTMLRule
-from src.rule_based_system.PersonalDetailsRule import PersonalDetailsRule
-from src.rule_based_system.RuleBasedSystem import RuleBasedSystem
-from src.rule_based_system.TextLengthRule import TextLengthRule
-from src.rule_based_system.UrlRule import UrlRule
-
-
-def create_strong_rbs() -> RuleBasedSystem:
- text_length_rule = TextLengthRule()
-
- url_rule = UrlRule(URLExtract())
-
- mail_rule = PersonalDetailsRule([r'[\w.+-]+@[\w-]+\.[\w.-]+'], True)
-
- strict_bad_word_rule = create_bad_word_rule(config['bad_words_strict'], True)
-
- return RuleBasedSystem([
- text_length_rule, # todo: check if this makes sense to add here, 500 was our own chosen max length
- url_rule,
- mail_rule,
- strict_bad_word_rule
- ])
-
-
-def create_weak_rbs() -> RuleBasedSystem:
- phone_regex = r"(^\+[0-9]{2}|^\+[0-9]{2}\(0\)|^\(\+[0-9]{2}\)\(0\)|^00[0-9]{2}|^0)([0-9]{9}$|[0-9\-\s]{10}$)"
- phone_home_local = re.compile(r".*?(\(?\d{3}\D{0,3}\d{2}\D{0,3}\d{2}).*?", re.S)
- phone_home = re.compile(r".*?(\(?\d{3}\D{0,3}\d{3}\D{0,3}\d{2}\D{0,3}\d{2}).*?", re.S)
- phone_mobile = re.compile(r".*?(\(?\d{2}\D{0,3}\d{3}\D{0,3}\d{3}\D{0,3}\d{2}).*?", re.S)
- phone_mobile_international = re.compile(r".*?(\(?\d{3}\D{0,3}\d{3}\D{0,3}\d{3}\D{0,3}\d{2}).*?", re.S)
-
- phone_regexes = [phone_regex, phone_home_local, phone_home, phone_mobile, phone_mobile_international]
- phone_number_rule = PersonalDetailsRule(phone_regexes, False)
-
- html_rule = HTMLRule()
-
- ambiguous_bad_word_rule = create_bad_word_rule(config['bad_words_ambiguous'], False)
-
- # rule systems
- return RuleBasedSystem([
- phone_number_rule,
- html_rule,
- ambiguous_bad_word_rule
- ])
diff --git a/spaces/nugrahatheo/Credit_Card_Fraud_Detection/prediction.py b/spaces/nugrahatheo/Credit_Card_Fraud_Detection/prediction.py
deleted file mode 100644
index 10036f76e708fe87197284efec6b4c95906003cb..0000000000000000000000000000000000000000
--- a/spaces/nugrahatheo/Credit_Card_Fraud_Detection/prediction.py
+++ /dev/null
@@ -1,171 +0,0 @@
-import streamlit as st
-import pandas as pd
-import numpy as np
-import pickle
-import json
-
-# Load all files
-
-with open('list_num_cols.txt', 'r') as file_1:
- list_num_cols = json.load(file_1)
-
-with open('list_cat_cols_1.txt', 'r') as file_2:
- list_cat_cols_1 = json.load(file_2)
-
-with open('list_cat_cols_2.txt', 'r') as file_3:
- list_cat_cols_2 = json.load(file_3)
-
-with open('list_cat_cols_3.txt', 'r') as file_4:
- list_cat_cols_3 = json.load(file_4)
-
-with open('model_scaler.pkl', 'rb') as file_5:
- scaler = pickle.load(file_5)
-
-with open('model_encoder_1.pkl', 'rb') as file_6:
- OHE_1 = pickle.load(file_6)
-
-with open('model_encoder_2.pkl', 'rb') as file_7:
- OHE_2 = pickle.load(file_7)
-
-with open('model_encoder_3.pkl', 'rb') as file_8:
- OHE_3 = pickle.load(file_8)
-
-with open('model_logreg.pkl', 'rb') as file_9:
- model_logreg = pickle.load(file_9)
-
-with open('model_dtc.pkl', 'rb') as file_10:
- model_dtc = pickle.load(file_10)
-
-with open('model_rfc.pkl', 'rb') as file_11:
- model_rfc = pickle.load(file_11)
-
-with open('model_gbc.pkl', 'rb') as file_12:
- model_gbc = pickle.load(file_12)
-
-def run():
- st.write('##### Form Credit Card Fraud Detection')
- # Making Form
- with st.form(key='Form Credit Card Fraud Detection'):
- cc_num = st.text_input('CC Number')
- merchant = st.text_input('Merchant')
- category = st.selectbox('Category', ('misc_net', 'grocery_pos', 'entertainment', 'gas_transport', 'misc_pos',
- 'grocery_net', 'shopping_net', 'shopping_pos', 'food_dining', 'personal_care',
- 'health_fitness', 'travel', 'kids_pets', 'home'), index=0)
- amt = st.number_input('Amount', min_value=1, max_value=999999999, value=1)
- first = st.text_input('First Name')
- last = st.text_input('Last Name')
- gender = st.selectbox('Gender', ('M','F'), index=0)
- street = st.text_input('Street')
- city = st.text_input('City')
- state = st.selectbox('State', ('NC','WA','ID','MT','VA','PA','KS','TN','IA','WV','FL','CA','NM','NJ',
- 'OK','IN','MA','TX','WI','MI','WY','HI','NE','OR','LA','DC','KY','NY',
- 'MS','UT','AL','AR','MD','GA','ME','AZ','MN','OH','CO','VT','MO','SC',
- 'NV','IL','NH','SD','AK','ND','CT','RI','DE'), index=0)
- zip = st.number_input('ZIP', min_value=10000, max_value=99999, value=25456)
- city_pop = st.number_input('City Population', min_value=100, max_value=9999999, value=1000)
- job = st.text_input('Job')
- dob = st.text_input('Date Of Birth', help=('YYYY-MM-dd'))
-
- st.markdown('---')
-
- submited_1 = st.form_submit_button('Detection using Logistic Regression')
- submited_2 = st.form_submit_button('Detection using Decision Tree Classifier')
- submited_3 = st.form_submit_button('Detection using Random Forest Classifier')
- submited_4 = st.form_submit_button('Detection using Gradient Boosting Classifier')
-
- data_inf = {
- 'cc_num' : cc_num,
- 'merchant' : merchant,
- 'category' : category,
- 'amt' : amt,
- 'first' : first,
- 'last' : last,
- 'gender' : gender,
- 'street' : street,
- 'city' : city,
- 'state' : state,
- 'zip' : zip,
- 'city_pop' : city_pop,
- 'job' : job,
- 'dob' : dob
- }
-
- data_inf = pd.DataFrame([data_inf])
- st.dataframe(data_inf)
-
- if submited_1:
- # Split between numerical columns and categorical columns
- data_inf_num = data_inf[list_num_cols]
- data_inf_cat_1 = data_inf[list_cat_cols_1]
- data_inf_cat_2 = data_inf[list_cat_cols_2]
- data_inf_cat_3 = data_inf[list_cat_cols_3]
- #Feature scaling and feature encoding
- data_inf_num_scaled = scaler.transform(data_inf_num)
- data_inf_cat_encoded_1 = OHE_1.transform(data_inf_cat_1)
- data_inf_cat_encoded_2 = OHE_2.transform(data_inf_cat_2)
- data_inf_cat_encoded_3 = OHE_3.transform(data_inf_cat_3)
- data_inf_final = np.concatenate([data_inf_num_scaled, data_inf_cat_encoded_1, data_inf_cat_encoded_2, data_inf_cat_encoded_3], axis = 1)
- #Predict using Logistic Regression
- y_pred_inf_logreg = model_logreg.predict(data_inf_final)
- if y_pred_inf_logreg[0] == 0:
- st.write('# Non-Fraud')
- else:
- st.write('# Fraud')
- elif submited_2:
- # Split between numerical columns and categorical columns
- data_inf_num = data_inf[list_num_cols]
- data_inf_cat_1 = data_inf[list_cat_cols_1]
- data_inf_cat_2 = data_inf[list_cat_cols_2]
- data_inf_cat_3 = data_inf[list_cat_cols_3]
- #Feature scaling and feature encoding
- data_inf_num_scaled = scaler.transform(data_inf_num)
- data_inf_cat_encoded_1 = OHE_1.transform(data_inf_cat_1)
- data_inf_cat_encoded_2 = OHE_2.transform(data_inf_cat_2)
- data_inf_cat_encoded_3 = OHE_3.transform(data_inf_cat_3)
- data_inf_final = np.concatenate([data_inf_num_scaled, data_inf_cat_encoded_1, data_inf_cat_encoded_2, data_inf_cat_encoded_3], axis = 1)
- #Predict using Decision Tree Classifier
- y_pred_inf_dtc = model_dtc.predict(data_inf_final)
- if y_pred_inf_dtc[0] == 0:
- st.write('# Non-Fraud')
- else:
- st.write('# Fraud')
- elif submited_3:
- # Split between numerical columns and categorical columns
- data_inf_num = data_inf[list_num_cols]
- data_inf_cat_1 = data_inf[list_cat_cols_1]
- data_inf_cat_2 = data_inf[list_cat_cols_2]
- data_inf_cat_3 = data_inf[list_cat_cols_3]
- #Feature scaling and feature encoding
- data_inf_num_scaled = scaler.transform(data_inf_num)
- data_inf_cat_encoded_1 = OHE_1.transform(data_inf_cat_1)
- data_inf_cat_encoded_2 = OHE_2.transform(data_inf_cat_2)
- data_inf_cat_encoded_3 = OHE_3.transform(data_inf_cat_3)
- data_inf_final = np.concatenate([data_inf_num_scaled, data_inf_cat_encoded_1, data_inf_cat_encoded_2, data_inf_cat_encoded_3], axis = 1)
- #Predict using Random Forest Classifier
- y_pred_inf_rfc = model_rfc.predict(data_inf_final)
- if y_pred_inf_rfc[0] == 0:
- st.write('# Non-Fraud')
- else:
- st.write('# Fraud')
- else:
- # Split between numerical columns and categorical columns
- data_inf_num = data_inf[list_num_cols]
- data_inf_cat_1 = data_inf[list_cat_cols_1]
- data_inf_cat_2 = data_inf[list_cat_cols_2]
- data_inf_cat_3 = data_inf[list_cat_cols_3]
- #Feature scaling and feature encoding
- data_inf_num_scaled = scaler.transform(data_inf_num)
- data_inf_cat_encoded_1 = OHE_1.transform(data_inf_cat_1)
- data_inf_cat_encoded_2 = OHE_2.transform(data_inf_cat_2)
- data_inf_cat_encoded_3 = OHE_3.transform(data_inf_cat_3)
- data_inf_final = np.concatenate([data_inf_num_scaled, data_inf_cat_encoded_1, data_inf_cat_encoded_2, data_inf_cat_encoded_3], axis = 1)
- #Predict using GradientBoosting Classifier
- y_pred_inf_gbc = model_gbc.predict(data_inf_final)
- if y_pred_inf_gbc[0] == 0:
- st.write('# Non-Fraud')
- else:
- st.write('# Fraud')
-
-
-if __name__ == '__main__':
- run()
\ No newline at end of file
diff --git a/spaces/oliver2023/chatgpt-on-wechat/docker/build.debian.sh b/spaces/oliver2023/chatgpt-on-wechat/docker/build.debian.sh
deleted file mode 100644
index e7caa15dc1796f0f2e4d0d1e5709fc196794f803..0000000000000000000000000000000000000000
--- a/spaces/oliver2023/chatgpt-on-wechat/docker/build.debian.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/bin/bash
-
-# fetch latest release tag
-CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
- grep '"tag_name":' | \
- sed -E 's/.*"([^"]+)".*/\1/'`
-
-# build image
-docker build -f Dockerfile.debian \
- --build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
- -t zhayujie/chatgpt-on-wechat .
-
-# tag image
-docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:debian
-docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-debian
\ No newline at end of file
diff --git a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/clevrtex/resnet/baseline.py b/spaces/ondrejbiza/isa/invariant_slot_attention/configs/clevrtex/resnet/baseline.py
deleted file mode 100644
index fab9b2f23244de5ca43185481da8eabb4ee130e3..0000000000000000000000000000000000000000
--- a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/clevrtex/resnet/baseline.py
+++ /dev/null
@@ -1,198 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The Google Research Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-r"""Config for unsupervised training on CLEVRTex."""
-
-import ml_collections
-
-
-def get_config():
- """Get the default hyperparameter configuration."""
- config = ml_collections.ConfigDict()
-
- config.seed = 42
- config.seed_data = True
-
- config.batch_size = 64
- config.num_train_steps = 500000 # from the original Slot Attention
- config.init_checkpoint = ml_collections.ConfigDict()
- config.init_checkpoint.xid = 0 # Disabled by default.
- config.init_checkpoint.wid = 1
-
- config.optimizer_configs = ml_collections.ConfigDict()
- config.optimizer_configs.optimizer = "adam"
-
- config.optimizer_configs.grad_clip = ml_collections.ConfigDict()
- config.optimizer_configs.grad_clip.clip_method = "clip_by_global_norm"
- config.optimizer_configs.grad_clip.clip_value = 0.05
-
- config.lr_configs = ml_collections.ConfigDict()
- config.lr_configs.learning_rate_schedule = "compound"
- config.lr_configs.factors = "constant * cosine_decay * linear_warmup"
- config.lr_configs.warmup_steps = 10000 # from the original Slot Attention
- config.lr_configs.steps_per_cycle = config.get_ref("num_train_steps")
- # from the original Slot Attention
- config.lr_configs.base_learning_rate = 4e-4
-
- config.eval_pad_last_batch = False # True
- config.log_loss_every_steps = 50
- config.eval_every_steps = 5000
- config.checkpoint_every_steps = 5000
-
- config.train_metrics_spec = {
- "loss": "loss",
- "ari": "ari",
- "ari_nobg": "ari_nobg",
- }
- config.eval_metrics_spec = {
- "eval_loss": "loss",
- "eval_ari": "ari",
- "eval_ari_nobg": "ari_nobg",
- }
-
- config.data = ml_collections.ConfigDict({
- "dataset_name": "tfds",
- # The TFDS dataset will be created in the directory below
- # if you follow the README in datasets/clevrtex.
- "data_dir": "~/tensorflow_datasets",
- "tfds_name": "clevr_tex",
- "shuffle_buffer_size": config.batch_size * 8,
- "resolution": (128, 128)
- })
-
- config.max_instances = 11
- config.num_slots = config.max_instances # Only used for metrics.
- config.logging_min_n_colors = config.max_instances
-
- config.preproc_train = [
- "tfds_image_to_tfds_video",
- "video_from_tfds",
- "central_crop(height=192,width=192)",
- "resize_small({size})".format(size=min(*config.data.resolution))
- ]
-
- config.preproc_eval = [
- "tfds_image_to_tfds_video",
- "video_from_tfds",
- "central_crop(height=192,width=192)",
- "resize_small({size})".format(size=min(*config.data.resolution))
- ]
-
- config.eval_slice_size = 1
- config.eval_slice_keys = ["video", "segmentations_video"]
-
- # Dictionary of targets and corresponding channels. Losses need to match.
- targets = {"video": 3}
- config.losses = {"recon": {"targets": list(targets)}}
- config.losses = ml_collections.ConfigDict({
- f"recon_{target}": {"loss_type": "recon", "key": target}
- for target in targets})
-
- config.model = ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SAVi",
-
- # Encoder.
- "encoder": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.FrameEncoder",
- "reduction": "spatial_flatten",
- "backbone": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.ResNet34",
- "num_classes": None,
- "axis_name": "time",
- "norm_type": "group",
- "small_inputs": True
- }),
- "pos_emb": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.PositionEmbedding",
- "embedding_type": "linear",
- "update_type": "project_add",
- "output_transform": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.MLP",
- "hidden_size": 128,
- "layernorm": "pre"
- }),
- }),
- }),
-
- # Corrector.
- "corrector": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SlotAttention",
- "num_iterations": 3,
- "qkv_size": 64,
- "mlp_size": 128,
- }),
-
- # Predictor.
- # Removed since we are running a single frame.
- "predictor": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.Identity"
- }),
-
- # Initializer.
- "initializer": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.ParamStateInit",
- "shape": (11, 64), # (num_slots, slot_size)
- }),
-
- # Decoder.
- "decoder": ml_collections.ConfigDict({
- "module":
- "invariant_slot_attention.modules.SiameseSpatialBroadcastDecoder",
- "resolution": (16, 16), # Update if data resolution or strides change
- "backbone": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.CNN",
- "features": [64, 64, 64, 64, 64],
- "kernel_size": [(5, 5), (5, 5), (5, 5), (5, 5), (5, 5)],
- "strides": [(2, 2), (2, 2), (2, 2), (1, 1), (1, 1)],
- "max_pool_strides": [(1, 1), (1, 1), (1, 1), (1, 1), (1, 1)],
- "layer_transpose": [True, True, True, False, False]
- }),
- "target_readout": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.Readout",
- "keys": list(targets),
- "readout_modules": [ml_collections.ConfigDict({ # pylint: disable=g-complex-comprehension
- "module": "invariant_slot_attention.modules.MLP",
- "num_hidden_layers": 0,
- "hidden_size": 0,
- "output_size": targets[k]}) for k in targets],
- }),
- "pos_emb": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.PositionEmbedding",
- "embedding_type": "linear",
- "update_type": "project_add"
- }),
- }),
- "decode_corrected": True,
- "decode_predicted": False,
- })
-
- # Which video-shaped variables to visualize.
- config.debug_var_video_paths = {
- "recon_masks": "decoder/alphas_softmaxed/__call__/0", # pylint: disable=line-too-long
- }
-
- # Define which attention matrices to log/visualize.
- config.debug_var_attn_paths = {
- "corrector_attn": "corrector/InvertedDotProductAttention_0/GeneralizedDotProductAttention_0/attn" # pylint: disable=line-too-long
- }
-
- # Widths of attention matrices (for reshaping to image grid).
- config.debug_var_attn_widths = {
- "corrector_attn": 16,
- }
-
- return config
-
-
diff --git a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/multishapenet_easy/equiv_transl.py b/spaces/ondrejbiza/isa/invariant_slot_attention/configs/multishapenet_easy/equiv_transl.py
deleted file mode 100644
index f5658b121974dc0c68cfb00b09b658eea066f17d..0000000000000000000000000000000000000000
--- a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/multishapenet_easy/equiv_transl.py
+++ /dev/null
@@ -1,203 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The Google Research Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-r"""Config for unsupervised training on MultiShapeNet-Easy."""
-
-import ml_collections
-
-
-def get_config():
- """Get the default hyperparameter configuration."""
- config = ml_collections.ConfigDict()
-
- config.seed = 42
- config.seed_data = True
-
- config.batch_size = 64
- config.num_train_steps = 500000 # from the original Slot Attention
- config.init_checkpoint = ml_collections.ConfigDict()
- config.init_checkpoint.xid = 0 # Disabled by default.
- config.init_checkpoint.wid = 1
-
- config.optimizer_configs = ml_collections.ConfigDict()
- config.optimizer_configs.optimizer = "adam"
-
- config.optimizer_configs.grad_clip = ml_collections.ConfigDict()
- config.optimizer_configs.grad_clip.clip_method = "clip_by_global_norm"
- config.optimizer_configs.grad_clip.clip_value = 0.05
-
- config.lr_configs = ml_collections.ConfigDict()
- config.lr_configs.learning_rate_schedule = "compound"
- config.lr_configs.factors = "constant * cosine_decay * linear_warmup"
- config.lr_configs.warmup_steps = 10000 # from the original Slot Attention
- config.lr_configs.steps_per_cycle = config.get_ref("num_train_steps")
- # from the original Slot Attention
- config.lr_configs.base_learning_rate = 4e-4
-
- config.eval_pad_last_batch = False # True
- config.log_loss_every_steps = 50
- config.eval_every_steps = 5000
- config.checkpoint_every_steps = 5000
-
- config.train_metrics_spec = {
- "loss": "loss",
- "ari": "ari",
- "ari_nobg": "ari_nobg",
- }
- config.eval_metrics_spec = {
- "eval_loss": "loss",
- "eval_ari": "ari",
- "eval_ari_nobg": "ari_nobg",
- }
-
- config.data = ml_collections.ConfigDict({
- "dataset_name": "multishapenet_easy",
- "shuffle_buffer_size": config.batch_size * 8,
- "resolution": (128, 128)
- })
-
- config.max_instances = 11
- config.num_slots = config.max_instances # Only used for metrics.
- config.logging_min_n_colors = config.max_instances
-
- config.preproc_train = [
- "sunds_to_tfds_video",
- "video_from_tfds",
- "subtract_one_from_segmentations",
- "central_crop(height=240, width=240)",
- "resize_small({size})".format(size=min(*config.data.resolution))
- ]
-
- config.preproc_eval = [
- "sunds_to_tfds_video",
- "video_from_tfds",
- "subtract_one_from_segmentations",
- "central_crop(height=240, width=240)",
- "resize_small({size})".format(size=min(*config.data.resolution))
- ]
-
- config.eval_slice_size = 1
- config.eval_slice_keys = ["video", "segmentations_video"]
-
- # Dictionary of targets and corresponding channels. Losses need to match.
- targets = {"video": 3}
- config.losses = {"recon": {"targets": list(targets)}}
- config.losses = ml_collections.ConfigDict({
- f"recon_{target}": {"loss_type": "recon", "key": target}
- for target in targets})
-
- config.model = ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SAVi",
-
- # Encoder.
- "encoder": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.FrameEncoder",
- "reduction": "spatial_flatten",
- "backbone": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SimpleCNN",
- "features": [64, 64, 64, 64],
- "kernel_size": [(5, 5), (5, 5), (5, 5), (5, 5)],
- "strides": [(2, 2), (2, 2), (2, 2), (1, 1)]
- }),
- "pos_emb": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.PositionEmbedding",
- "embedding_type": "linear",
- "update_type": "concat"
- }),
- }),
-
- # Corrector.
- "corrector": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SlotAttentionTranslEquiv",
- "num_iterations": 3,
- "qkv_size": 64,
- "mlp_size": 128,
- "grid_encoder": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.MLP",
- "hidden_size": 128,
- "layernorm": "pre"
- }),
- "add_rel_pos_to_values": True, # V3
- "zero_position_init": False, # Random positions.
- }),
-
- # Predictor.
- # Removed since we are running a single frame.
- "predictor": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.Identity"
- }),
-
- # Initializer.
- "initializer": ml_collections.ConfigDict({
- "module":
- "invariant_slot_attention.modules.ParamStateInitRandomPositions",
- "shape":
- (11, 64), # (num_slots, slot_size)
- }),
-
- # Decoder.
- "decoder": ml_collections.ConfigDict({
- "module":
- "invariant_slot_attention.modules.SiameseSpatialBroadcastDecoder",
- "resolution": (16, 16), # Update if data resolution or strides change
- "backbone": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.CNN",
- "features": [64, 64, 64, 64, 64],
- "kernel_size": [(5, 5), (5, 5), (5, 5), (5, 5), (5, 5)],
- "strides": [(2, 2), (2, 2), (2, 2), (1, 1), (1, 1)],
- "max_pool_strides": [(1, 1), (1, 1), (1, 1), (1, 1), (1, 1)],
- "layer_transpose": [True, True, True, False, False]
- }),
- "target_readout": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.Readout",
- "keys": list(targets),
- "readout_modules": [ml_collections.ConfigDict({ # pylint: disable=g-complex-comprehension
- "module": "invariant_slot_attention.modules.MLP",
- "num_hidden_layers": 0,
- "hidden_size": 0,
- "output_size": targets[k]}) for k in targets],
- }),
- "relative_positions": True,
- "pos_emb": ml_collections.ConfigDict({
- "module":
- "invariant_slot_attention.modules.RelativePositionEmbedding",
- "embedding_type":
- "linear",
- "update_type":
- "project_add",
- }),
- }),
- "decode_corrected": True,
- "decode_predicted": False,
- })
-
- # Which video-shaped variables to visualize.
- config.debug_var_video_paths = {
- "recon_masks": "decoder/alphas_softmaxed/__call__/0", # pylint: disable=line-too-long
- }
-
- # Define which attention matrices to log/visualize.
- config.debug_var_attn_paths = {
- "corrector_attn": "corrector/InvertedDotProductAttentionKeyPerQuery_0/attn" # pylint: disable=line-too-long
- }
-
- # Widths of attention matrices (for reshaping to image grid).
- config.debug_var_attn_widths = {
- "corrector_attn": 16,
- }
-
- return config
-
-
diff --git a/spaces/onnx/sub_pixel_cnn_2016/app.py b/spaces/onnx/sub_pixel_cnn_2016/app.py
deleted file mode 100644
index fc04ec04eb940ea5b94d765ed4f22027014cca35..0000000000000000000000000000000000000000
--- a/spaces/onnx/sub_pixel_cnn_2016/app.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import io
-import numpy as np
-import onnxruntime
-import torch.utils.model_zoo as model_zoo
-import torch.onnx
-import torch.nn as nn
-import torch.nn.init as init
-import matplotlib.pyplot as plt
-import json
-from PIL import Image, ImageDraw, ImageFont
-from resizeimage import resizeimage
-import pdb
-import onnx
-import gradio as gr
-import os
-
-class SuperResolutionNet(nn.Module):
- def __init__(self, upscale_factor, inplace=False):
- super(SuperResolutionNet, self).__init__()
-
- self.relu = nn.ReLU(inplace=inplace)
- self.conv1 = nn.Conv2d(1, 64, (5, 5), (1, 1), (2, 2))
- self.conv2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))
- self.conv3 = nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1))
- self.conv4 = nn.Conv2d(32, upscale_factor ** 2, (3, 3), (1, 1), (1, 1))
- self.pixel_shuffle = nn.PixelShuffle(upscale_factor)
-
- self._initialize_weights()
-
- def forward(self, x):
- x = self.relu(self.conv1(x))
- x = self.relu(self.conv2(x))
- x = self.relu(self.conv3(x))
- x = self.pixel_shuffle(self.conv4(x))
- return x
-
- def _initialize_weights(self):
- init.orthogonal_(self.conv1.weight, init.calculate_gain('relu'))
- init.orthogonal_(self.conv2.weight, init.calculate_gain('relu'))
- init.orthogonal_(self.conv3.weight, init.calculate_gain('relu'))
- init.orthogonal_(self.conv4.weight)
-
-# Create the super-resolution model by using the above model definition.
-torch_model = SuperResolutionNet(upscale_factor=3)
-
-model_url = 'https://s3.amazonaws.com/pytorch/test_data/export/superres_epoch100-44c6958e.pth'
-batch_size = 1 # just a random number
-
-# Initialize model with the pretrained weights
-map_location = lambda storage, loc: storage
-if torch.cuda.is_available():
- map_location = None
-torch_model.load_state_dict(model_zoo.load_url(model_url, map_location=map_location))
-
-
-
-x = torch.randn(1, 1, 224, 224, requires_grad=True)
-torch_model.eval()
-
-
-
-os.system("wget https://github.com/AK391/models/raw/main/vision/super_resolution/sub_pixel_cnn_2016/model/super-resolution-10.onnx")
-
-# Start from ORT 1.10, ORT requires explicitly setting the providers parameter if you want to use execution providers
-# other than the default CPU provider (as opposed to the previous behavior of providers getting set/registered by default
-# based on the build flags) when instantiating InferenceSession.
-# For example, if NVIDIA GPU is available and ORT Python package is built with CUDA, then call API as following:
-# onnxruntime.InferenceSession(path/to/model, providers=['CUDAExecutionProvider'])
-ort_session = onnxruntime.InferenceSession("super-resolution-10.onnx")
-
-
-def inference(img):
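- # Load the input image and resize it to the 224x224 resolution the ONNX model expects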
- orig_img = Image.open(img)
- img = resizeimage.resize_cover(orig_img, [224,224], validate=False)
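- # Convert to YCbCr and keep only the luminance (Y) channel for the model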
- img_ycbcr = img.convert('YCbCr')
- img_y_0, img_cb, img_cr = img_ycbcr.split()
- img_ndarray = np.asarray(img_y_0)
-
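- # Add batch and channel dimensions and scale pixel values to [0, 1]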
- img_4 = np.expand_dims(np.expand_dims(img_ndarray, axis=0), axis=0)
- img_5 = img_4.astype(np.float32) / 255.0
-
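- # Run the ONNX Runtime session on the preprocessed Y channel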
- ort_inputs = {ort_session.get_inputs()[0].name: img_5}
- ort_outs = ort_session.run(None, ort_inputs)
- img_out_y = ort_outs[0]
-
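- # Rescale the output to 8-bit, upscale Cb/Cr with bicubic interpolation, and merge back to RGB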
- img_out_y = Image.fromarray(np.uint8((img_out_y[0] * 255.0).clip(0, 255)[0]), mode='L')
- final_img = Image.merge(
- "YCbCr", [
- img_out_y,
- img_cb.resize(img_out_y.size, Image.BICUBIC),
- img_cr.resize(img_out_y.size, Image.BICUBIC),
- ]).convert("RGB")
- return final_img
-
-title="sub_pixel_cnn_2016"
-description="The Super Resolution machine learning model sharpens and upscales the input image to refine the details and improve quality."
-gr.Interface(inference,gr.inputs.Image(type="filepath"),gr.outputs.Image(type="pil"),title=title,description=description).launch()
\ No newline at end of file
diff --git a/spaces/osanseviero/hugging_eats/app.py b/spaces/osanseviero/hugging_eats/app.py
deleted file mode 100644
index 7efb362fc755be345cf9ca778657fe2f15bdbf63..0000000000000000000000000000000000000000
--- a/spaces/osanseviero/hugging_eats/app.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import googlemaps
-import os
-from huggingface_hub import Repository
-import gradio as gr
-import csv
-from datasets import load_dataset
-
-from bokeh.io import show
-from bokeh.plotting import gmap
-from bokeh.models import GMapOptions, ColumnDataSource, HoverTool
-from bokeh.embed import json_item
-
-
-
-MAPS_API = os.environ['MAPS_API']
-OS_API_KEY = os.environ['OS_API_KEY']
-HF_TOKEN = os.environ['HF_TOKEN']
-
-google_maps_client = googlemaps.Client(key=MAPS_API)
-
-DATASET_REPO_URL = "https://huggingface.co/datasets/osanseviero/hugging_eats"
-DATA_FILENAME = "data.csv"
-DATA_FILE = os.path.join("data", DATA_FILENAME)
-
-repo = Repository(
- local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN
-)
-
-def predict(place, hugging_secret):
- if hugging_secret != OS_API_KEY:
- return "INVALID SECRET - you cannot save places"
-
- geocode_result = google_maps_client.geocode(place)
-    # geocode() returns an empty list when nothing matches the query
-    if not geocode_result:
- return "PLACE NOT FOUND"
-
- print("Saving place")
- lat = geocode_result[0]["geometry"]["location"]["lat"]
- lng = geocode_result[0]["geometry"]["location"]["lng"]
-
- repo.git_pull(rebase=True)
- with open(DATA_FILE, "a") as csvfile:
- writer = csv.DictWriter(csvfile, fieldnames=["name", "lat", "lng"])
- writer.writerow(
- {"name": place, "lat": lat, "lng": lng}
- )
- print("Pushing place")
-
- repo.push_to_hub()
- return "PLACE SAVED!"
-
-iface_submit = gr.Interface(
- predict,
- inputs=[
- gr.inputs.Textbox(label="Address or place name"),
- gr.inputs.Textbox(label="Hugging Secret"),
- ],
- outputs="text"
-)
-
-
-def plot_map():
- dataset = load_dataset('osanseviero/hugging_eats')
- data = dataset["train"].to_pandas()
- data = data.drop_duplicates()
-
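-    # Center the map on the first saved place; the title and zoom level are hard-coded for this demo.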
- gmap_options = GMapOptions(lat=data["lat"][0], lng=data["lng"][0],
- map_type="satellite", zoom=12)
- # the tools are defined below:
- p = gmap(MAPS_API, gmap_options, title='Pays de Gex',
- tools=['reset', 'wheel_zoom', 'pan', 'zoom_in'])
-
-
- data_source = ColumnDataSource(data)
-
- center = p.circle('lng', 'lat', size=10, alpha=0.5,
- color='yellow', source=data_source)
-
- TOOLTIPS = [
- ("name", "@name"),
- ]
- p.add_tools(HoverTool(tooltips = TOOLTIPS))
-
- return json_item(p)
-
-
-iface_display = gr.Interface(
- plot_map,
- inputs=None,
- outputs=gr.Plot(type="bokeh")
-)
-
-demo = gr.TabbedInterface([iface_display, iface_submit], ["Browse Places", "Submit Places (HF only)"]).launch()
\ No newline at end of file
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/self_attention_guidance.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/self_attention_guidance.md
deleted file mode 100644
index 854505f182021bd0630d537e86494e7c1638d373..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/self_attention_guidance.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-# Self-Attention Guidance
-
-[Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://huggingface.co/papers/2210.00939) is by Susung Hong et al.
-
-The abstract from the paper is:
-
-*Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement.*
-
-You can find additional information about Self-Attention Guidance on the [project page](https://ku-cvlab.github.io/Self-Attention-Guidance), [original codebase](https://github.com/KU-CVLAB/Self-Attention-Guidance), and try it out in a [demo](https://huggingface.co/spaces/susunghong/Self-Attention-Guidance) or [notebook](https://colab.research.google.com/github/SusungHong/Self-Attention-Guidance/blob/main/SAG_Stable.ipynb).
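-
-For orientation, the following is a minimal usage sketch, assuming 🤗 Diffusers is installed and a CUDA GPU is available; the checkpoint name and the `sag_scale` value are illustrative choices rather than values prescribed by this page:
-
-```py
-import torch
-from diffusers import StableDiffusionSAGPipeline
-
-# Load a Stable Diffusion checkpoint into the SAG pipeline (checkpoint name is illustrative).
-pipe = StableDiffusionSAGPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
-).to("cuda")
-
-# sag_scale controls the strength of self-attention guidance (0.0 disables it);
-# guidance_scale is the usual classifier-free guidance weight.
-image = pipe(
-    "a photo of an astronaut riding a horse on the moon",
-    guidance_scale=7.5,
-    sag_scale=0.75,
-).images[0]
-image.save("sag_example.png")
-```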
-
-<Tip>
-
-Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
-
-</Tip>
-
-## StableDiffusionSAGPipeline
-[[autodoc]] StableDiffusionSAGPipeline
- - __call__
- - all
-
-## StableDiffusionOutput
-[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
\ No newline at end of file
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/index.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/index.md
deleted file mode 100644
index a83dd0d0b29e5eee20b3d66b950d1b064aa9e964..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/index.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
-
-
-
-
-
-
-
-# Diffusers
-
-🤗 Diffusers is a library of state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you are looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](conceptual/philosophy#usability-over-performance), [simple over easy](conceptual/philosophy#simple-over-easy), and [customizability over abstraction](conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).
-
-The library has three main components (a minimal usage example follows the list):
-
-- State-of-the-art [diffusion pipelines](api/pipelines/overview) that can run inference with just a few lines of code.
-- Interchangeable [noise schedulers](api/schedulers/overview) that can be used to balance generation speed and quality.
-- Pretrained [models](api/models) that can be used as building blocks and combined with schedulers to create your own end-to-end diffusion systems.
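-
-As a quick illustration of how these pieces fit together, here is a minimal inference sketch; the checkpoint name is only an example, and any compatible pipeline checkpoint can be used:
-
-```py
-from diffusers import DiffusionPipeline
-
-# Download a pretrained pipeline (model weights, scheduler, and pipeline code).
-pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-
-# Generate an image from a text prompt and save it.
-image = pipeline("An astronaut riding a horse on Mars").images[0]
-image.save("astronaut.png")
-```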
-
-
diff --git a/spaces/perilli/tortoise-tts-v2/tortoise/read.py b/spaces/perilli/tortoise-tts-v2/tortoise/read.py
deleted file mode 100644
index 9ee9ad6e1c79d4a1953f7f80fcb511f8296554ab..0000000000000000000000000000000000000000
--- a/spaces/perilli/tortoise-tts-v2/tortoise/read.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import argparse
-import os
-
-import torch
-import torchaudio
-
-from api import TextToSpeech
-from tortoise.utils.audio import load_audio, get_voices, load_voices
-
-
-def split_and_recombine_text(texts, desired_length=200, max_len=300):
- # TODO: also split across '!' and '?'. Attempt to keep quotations together.
- texts = [s.strip() + "." for s in texts.split('.')]
-
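-    # Greedily merge consecutive sentences until a chunk reaches desired_length
-    # characters, never letting a merged chunk exceed max_len.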
- i = 0
- while i < len(texts):
- ltxt = texts[i]
- if len(ltxt) >= desired_length or i == len(texts)-1:
- i += 1
- continue
- if len(ltxt) + len(texts[i+1]) > max_len:
- i += 1
- continue
- texts[i] = f'{ltxt} {texts[i+1]}'
- texts.pop(i+1)
- return texts
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--textfile', type=str, help='A file containing the text to read.', default="tortoise/data/riding_hood.txt")
- parser.add_argument('--voice', type=str, help='Selects the voice to use for generation. See options in voices/ directory (and add your own!) '
- 'Use the & character to join two voices together. Use a comma to perform inference on multiple voices.', default='pat')
- parser.add_argument('--output_path', type=str, help='Where to store outputs.', default='results/longform/')
- parser.add_argument('--preset', type=str, help='Which voice preset to use.', default='standard')
- parser.add_argument('--regenerate', type=str, help='Comma-separated list of clip numbers to re-generate, or nothing.', default=None)
- parser.add_argument('--voice_diversity_intelligibility_slider', type=float,
- help='How to balance vocal diversity with the quality/intelligibility of the spoken text. 0 means highly diverse voice (not recommended), 1 means maximize intellibility',
- default=.5)
- parser.add_argument('--model_dir', type=str, help='Where to find pretrained model checkpoints. Tortoise automatically downloads these to .models, so this'
- 'should only be specified if you have custom checkpoints.', default='.models')
- args = parser.parse_args()
- tts = TextToSpeech(models_dir=args.model_dir)
-
- outpath = args.output_path
- selected_voices = args.voice.split(',')
- regenerate = args.regenerate
- if regenerate is not None:
- regenerate = [int(e) for e in regenerate.split(',')]
-
- for selected_voice in selected_voices:
- voice_outpath = os.path.join(outpath, selected_voice)
- os.makedirs(voice_outpath, exist_ok=True)
-
- with open(args.textfile, 'r', encoding='utf-8') as f:
- text = ''.join([l for l in f.readlines()])
- texts = split_and_recombine_text(text)
-
- if '&' in selected_voice:
- voice_sel = selected_voice.split('&')
- else:
- voice_sel = [selected_voice]
-
- voice_samples, conditioning_latents = load_voices(voice_sel)
- all_parts = []
- for j, text in enumerate(texts):
- if regenerate is not None and j not in regenerate:
- all_parts.append(load_audio(os.path.join(voice_outpath, f'{j}.wav'), 24000))
- continue
- gen = tts.tts_with_preset(text, voice_samples=voice_samples, conditioning_latents=conditioning_latents,
- preset=args.preset, clvp_cvvp_slider=args.voice_diversity_intelligibility_slider)
- gen = gen.squeeze(0).cpu()
- torchaudio.save(os.path.join(voice_outpath, f'{j}.wav'), gen, 24000)
- all_parts.append(gen)
- full_audio = torch.cat(all_parts, dim=-1)
- torchaudio.save(os.path.join(voice_outpath, 'combined.wav'), full_audio, 24000)
-
diff --git a/spaces/pierluigizagaria/crysis-voice-cloning/Makefile b/spaces/pierluigizagaria/crysis-voice-cloning/Makefile
deleted file mode 100644
index ad23323414bd2175956f6aef92f223a02f7258be..0000000000000000000000000000000000000000
--- a/spaces/pierluigizagaria/crysis-voice-cloning/Makefile
+++ /dev/null
@@ -1,11 +0,0 @@
-.PHONY: quality style
-
-# Check that source code meets quality standards
-quality:
- black --check --diff .
- ruff .
-
-# Format source code automatically
-style:
- black .
- ruff . --fix
diff --git a/spaces/pinkq/Newbing/src/app/page.tsx b/spaces/pinkq/Newbing/src/app/page.tsx
deleted file mode 100644
index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000
--- a/spaces/pinkq/Newbing/src/app/page.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-import dynamic from 'next/dynamic'
-
-const DynamicComponentWithNoSSR = dynamic(
- () => import('../components/chat'),
- { ssr: false }
-)
-
-export default function IndexPage() {
- return (
- <>
-      <DynamicComponentWithNoSSR />
-    </>
- )
-}
diff --git a/spaces/pinkq/Newbing/src/components/chat-header.tsx b/spaces/pinkq/Newbing/src/components/chat-header.tsx
deleted file mode 100644
index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000
--- a/spaces/pinkq/Newbing/src/components/chat-header.tsx
+++ /dev/null
@@ -1,12 +0,0 @@
-import LogoIcon from '@/assets/images/logo.svg'
-import Image from 'next/image'
-
-export function ChatHeader() {
- return (
-    <div>
-      <Image alt="logo" src={LogoIcon} />
-      <div>欢迎使用新必应</div>
-      <div>由 AI 支持的网页版 Copilot</div>
-    </div>
- )
-}
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/helpers.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/helpers.py
deleted file mode 100644
index 018f0d6ac863f2e4a27636c721669061887ae554..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/helpers.py
+++ /dev/null
@@ -1,1100 +0,0 @@
-# helpers.py
-import html.entities
-import re
-import sys
-import typing
-
-from . import __diag__
-from .core import *
-from .util import (
- _bslash,
- _flatten,
- _escape_regex_range_chars,
- replaced_by_pep8,
-)
-
-
-#
-# global helpers
-#
-def counted_array(
- expr: ParserElement,
- int_expr: typing.Optional[ParserElement] = None,
- *,
- intExpr: typing.Optional[ParserElement] = None,
-) -> ParserElement:
- """Helper to define a counted list of expressions.
-
- This helper defines a pattern of the form::
-
- integer expr expr expr...
-
- where the leading integer tells how many expr expressions follow.
- The matched tokens returns the array of expr tokens as a list - the
- leading count token is suppressed.
-
- If ``int_expr`` is specified, it should be a pyparsing expression
- that produces an integer value.
-
- Example::
-
- counted_array(Word(alphas)).parse_string('2 ab cd ef') # -> ['ab', 'cd']
-
- # in this parser, the leading integer value is given in binary,
- # '10' indicating that 2 values are in the array
- binary_constant = Word('01').set_parse_action(lambda t: int(t[0], 2))
- counted_array(Word(alphas), int_expr=binary_constant).parse_string('10 ab cd ef') # -> ['ab', 'cd']
-
- # if other fields must be parsed after the count but before the
- # list items, give the fields results names and they will
- # be preserved in the returned ParseResults:
- count_with_metadata = integer + Word(alphas)("type")
- typed_array = counted_array(Word(alphanums), int_expr=count_with_metadata)("items")
- result = typed_array.parse_string("3 bool True True False")
- print(result.dump())
-
- # prints
- # ['True', 'True', 'False']
- # - items: ['True', 'True', 'False']
- # - type: 'bool'
- """
- intExpr = intExpr or int_expr
- array_expr = Forward()
-
- def count_field_parse_action(s, l, t):
- nonlocal array_expr
- n = t[0]
- array_expr <<= (expr * n) if n else Empty()
- # clear list contents, but keep any named results
- del t[:]
-
- if intExpr is None:
- intExpr = Word(nums).set_parse_action(lambda t: int(t[0]))
- else:
- intExpr = intExpr.copy()
- intExpr.set_name("arrayLen")
- intExpr.add_parse_action(count_field_parse_action, call_during_try=True)
- return (intExpr + array_expr).set_name("(len) " + str(expr) + "...")
-
-
-def match_previous_literal(expr: ParserElement) -> ParserElement:
- """Helper to define an expression that is indirectly defined from
- the tokens matched in a previous expression, that is, it looks for
- a 'repeat' of a previous expression. For example::
-
- first = Word(nums)
- second = match_previous_literal(first)
- match_expr = first + ":" + second
-
- will match ``"1:1"``, but not ``"1:2"``. Because this
- matches a previous literal, will also match the leading
- ``"1:1"`` in ``"1:10"``. If this is not desired, use
- :class:`match_previous_expr`. Do *not* use with packrat parsing
- enabled.
- """
- rep = Forward()
-
- def copy_token_to_repeater(s, l, t):
- if t:
- if len(t) == 1:
- rep << t[0]
- else:
- # flatten t tokens
- tflat = _flatten(t.as_list())
- rep << And(Literal(tt) for tt in tflat)
- else:
- rep << Empty()
-
- expr.add_parse_action(copy_token_to_repeater, callDuringTry=True)
- rep.set_name("(prev) " + str(expr))
- return rep
-
-
-def match_previous_expr(expr: ParserElement) -> ParserElement:
- """Helper to define an expression that is indirectly defined from
- the tokens matched in a previous expression, that is, it looks for
- a 'repeat' of a previous expression. For example::
-
- first = Word(nums)
- second = match_previous_expr(first)
- match_expr = first + ":" + second
-
- will match ``"1:1"``, but not ``"1:2"``. Because this
- matches by expressions, will *not* match the leading ``"1:1"``
- in ``"1:10"``; the expressions are evaluated first, and then
- compared, so ``"1"`` is compared with ``"10"``. Do *not* use
- with packrat parsing enabled.
- """
- rep = Forward()
- e2 = expr.copy()
- rep <<= e2
-
- def copy_token_to_repeater(s, l, t):
- matchTokens = _flatten(t.as_list())
-
- def must_match_these_tokens(s, l, t):
- theseTokens = _flatten(t.as_list())
- if theseTokens != matchTokens:
- raise ParseException(
-                    s, l, f"Expected {matchTokens}, found {theseTokens}"
- )
-
- rep.set_parse_action(must_match_these_tokens, callDuringTry=True)
-
- expr.add_parse_action(copy_token_to_repeater, callDuringTry=True)
- rep.set_name("(prev) " + str(expr))
- return rep
-
-
-def one_of(
- strs: Union[typing.Iterable[str], str],
- caseless: bool = False,
- use_regex: bool = True,
- as_keyword: bool = False,
- *,
- useRegex: bool = True,
- asKeyword: bool = False,
-) -> ParserElement:
- """Helper to quickly define a set of alternative :class:`Literal` s,
- and makes sure to do longest-first testing when there is a conflict,
- regardless of the input order, but returns
- a :class:`MatchFirst` for best performance.
-
- Parameters:
-
- - ``strs`` - a string of space-delimited literals, or a collection of
- string literals
- - ``caseless`` - treat all literals as caseless - (default= ``False``)
- - ``use_regex`` - as an optimization, will
- generate a :class:`Regex` object; otherwise, will generate
- a :class:`MatchFirst` object (if ``caseless=True`` or ``as_keyword=True``, or if
- creating a :class:`Regex` raises an exception) - (default= ``True``)
- - ``as_keyword`` - enforce :class:`Keyword`-style matching on the
- generated expressions - (default= ``False``)
- - ``asKeyword`` and ``useRegex`` are retained for pre-PEP8 compatibility,
- but will be removed in a future release
-
- Example::
-
- comp_oper = one_of("< = > <= >= !=")
- var = Word(alphas)
- number = Word(nums)
- term = var | number
- comparison_expr = term + comp_oper + term
- print(comparison_expr.search_string("B = 12 AA=23 B<=AA AA>12"))
-
- prints::
-
- [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']]
- """
- asKeyword = asKeyword or as_keyword
- useRegex = useRegex and use_regex
-
- if (
- isinstance(caseless, str_type)
- and __diag__.warn_on_multiple_string_args_to_oneof
- ):
- warnings.warn(
- "More than one string argument passed to one_of, pass"
- " choices as a list or space-delimited string",
- stacklevel=2,
- )
-
- if caseless:
- isequal = lambda a, b: a.upper() == b.upper()
- masks = lambda a, b: b.upper().startswith(a.upper())
- parseElementClass = CaselessKeyword if asKeyword else CaselessLiteral
- else:
- isequal = lambda a, b: a == b
- masks = lambda a, b: b.startswith(a)
- parseElementClass = Keyword if asKeyword else Literal
-
- symbols: List[str] = []
- if isinstance(strs, str_type):
- strs = typing.cast(str, strs)
- symbols = strs.split()
- elif isinstance(strs, Iterable):
- symbols = list(strs)
- else:
- raise TypeError("Invalid argument to one_of, expected string or iterable")
- if not symbols:
- return NoMatch()
-
- # reorder given symbols to take care to avoid masking longer choices with shorter ones
- # (but only if the given symbols are not just single characters)
- if any(len(sym) > 1 for sym in symbols):
- i = 0
- while i < len(symbols) - 1:
- cur = symbols[i]
- for j, other in enumerate(symbols[i + 1 :]):
- if isequal(other, cur):
- del symbols[i + j + 1]
- break
- elif masks(cur, other):
- del symbols[i + j + 1]
- symbols.insert(i, other)
- break
- else:
- i += 1
-
- if useRegex:
- re_flags: int = re.IGNORECASE if caseless else 0
-
- try:
- if all(len(sym) == 1 for sym in symbols):
- # symbols are just single characters, create range regex pattern
- patt = f"[{''.join(_escape_regex_range_chars(sym) for sym in symbols)}]"
- else:
- patt = "|".join(re.escape(sym) for sym in symbols)
-
- # wrap with \b word break markers if defining as keywords
- if asKeyword:
- patt = rf"\b(?:{patt})\b"
-
- ret = Regex(patt, flags=re_flags).set_name(" | ".join(symbols))
-
- if caseless:
- # add parse action to return symbols as specified, not in random
- # casing as found in input string
- symbol_map = {sym.lower(): sym for sym in symbols}
- ret.add_parse_action(lambda s, l, t: symbol_map[t[0].lower()])
-
- return ret
-
- except re.error:
- warnings.warn(
- "Exception creating Regex for one_of, building MatchFirst", stacklevel=2
- )
-
- # last resort, just use MatchFirst
- return MatchFirst(parseElementClass(sym) for sym in symbols).set_name(
- " | ".join(symbols)
- )
-
-
-def dict_of(key: ParserElement, value: ParserElement) -> ParserElement:
- """Helper to easily and clearly define a dictionary by specifying
- the respective patterns for the key and value. Takes care of
- defining the :class:`Dict`, :class:`ZeroOrMore`, and
- :class:`Group` tokens in the proper order. The key pattern
- can include delimiting markers or punctuation, as long as they are
- suppressed, thereby leaving the significant key text. The value
- pattern can include named results, so that the :class:`Dict` results
- can include named token fields.
-
- Example::
-
- text = "shape: SQUARE posn: upper left color: light blue texture: burlap"
- attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
- print(attr_expr[1, ...].parse_string(text).dump())
-
- attr_label = label
- attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)
-
- # similar to Dict, but simpler call format
- result = dict_of(attr_label, attr_value).parse_string(text)
- print(result.dump())
- print(result['shape'])
- print(result.shape) # object attribute access works too
- print(result.as_dict())
-
- prints::
-
- [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']]
- - color: 'light blue'
- - posn: 'upper left'
- - shape: 'SQUARE'
- - texture: 'burlap'
- SQUARE
- SQUARE
- {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'}
- """
- return Dict(OneOrMore(Group(key + value)))
-
-
-def original_text_for(
- expr: ParserElement, as_string: bool = True, *, asString: bool = True
-) -> ParserElement:
- """Helper to return the original, untokenized text for a given
- expression. Useful to restore the parsed fields of an HTML start
- tag into the raw tag text itself, or to revert separate tokens with
- intervening whitespace back to the original matching input text. By
- default, returns a string containing the original parsed text.
-
- If the optional ``as_string`` argument is passed as
- ``False``, then the return value is
- a :class:`ParseResults` containing any results names that
- were originally matched, and a single token containing the original
- matched text from the input string. So if the expression passed to
- :class:`original_text_for` contains expressions with defined
- results names, you must set ``as_string`` to ``False`` if you
- want to preserve those results name values.
-
- The ``asString`` pre-PEP8 argument is retained for compatibility,
- but will be removed in a future release.
-
- Example::
-
-        src = "this is test <b> bold <i>text</i> </b> normal text "
- for tag in ("b", "i"):
- opener, closer = make_html_tags(tag)
- patt = original_text_for(opener + ... + closer)
- print(patt.search_string(src)[0])
-
- prints::
-
-        ['<b> bold <i>text</i> </b>']
-        ['<i>text</i>']
- """
- asString = asString and as_string
-
- locMarker = Empty().set_parse_action(lambda s, loc, t: loc)
- endlocMarker = locMarker.copy()
- endlocMarker.callPreparse = False
- matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end")
- if asString:
- extractText = lambda s, l, t: s[t._original_start : t._original_end]
- else:
-
- def extractText(s, l, t):
- t[:] = [s[t.pop("_original_start") : t.pop("_original_end")]]
-
- matchExpr.set_parse_action(extractText)
- matchExpr.ignoreExprs = expr.ignoreExprs
- matchExpr.suppress_warning(Diagnostics.warn_ungrouped_named_tokens_in_collection)
- return matchExpr
-
-
-def ungroup(expr: ParserElement) -> ParserElement:
- """Helper to undo pyparsing's default grouping of And expressions,
- even if all but one are non-empty.
- """
- return TokenConverter(expr).add_parse_action(lambda t: t[0])
-
-
-def locatedExpr(expr: ParserElement) -> ParserElement:
- """
- (DEPRECATED - future code should use the :class:`Located` class)
- Helper to decorate a returned token with its starting and ending
- locations in the input string.
-
- This helper adds the following results names:
-
- - ``locn_start`` - location where matched expression begins
- - ``locn_end`` - location where matched expression ends
- - ``value`` - the actual parsed results
-
-    Be careful if the input text contains ``<TAB>`` characters, you
- may want to call :class:`ParserElement.parse_with_tabs`
-
- Example::
-
- wd = Word(alphas)
- for match in locatedExpr(wd).search_string("ljsdf123lksdjjf123lkkjj1222"):
- print(match)
-
- prints::
-
- [[0, 'ljsdf', 5]]
- [[8, 'lksdjjf', 15]]
- [[18, 'lkkjj', 23]]
- """
- locator = Empty().set_parse_action(lambda ss, ll, tt: ll)
- return Group(
- locator("locn_start")
- + expr("value")
- + locator.copy().leaveWhitespace()("locn_end")
- )
-
-
-def nested_expr(
- opener: Union[str, ParserElement] = "(",
- closer: Union[str, ParserElement] = ")",
- content: typing.Optional[ParserElement] = None,
- ignore_expr: ParserElement = quoted_string(),
- *,
- ignoreExpr: ParserElement = quoted_string(),
-) -> ParserElement:
- """Helper method for defining nested lists enclosed in opening and
- closing delimiters (``"("`` and ``")"`` are the default).
-
- Parameters:
-
- - ``opener`` - opening character for a nested list
- (default= ``"("``); can also be a pyparsing expression
- - ``closer`` - closing character for a nested list
- (default= ``")"``); can also be a pyparsing expression
- - ``content`` - expression for items within the nested lists
- (default= ``None``)
- - ``ignore_expr`` - expression for ignoring opening and closing delimiters
- (default= :class:`quoted_string`)
- - ``ignoreExpr`` - this pre-PEP8 argument is retained for compatibility
- but will be removed in a future release
-
- If an expression is not provided for the content argument, the
- nested expression will capture all whitespace-delimited content
- between delimiters as a list of separate values.
-
- Use the ``ignore_expr`` argument to define expressions that may
- contain opening or closing characters that should not be treated as
- opening or closing characters for nesting, such as quoted_string or
- a comment expression. Specify multiple expressions using an
- :class:`Or` or :class:`MatchFirst`. The default is
- :class:`quoted_string`, but if no expressions are to be ignored, then
- pass ``None`` for this argument.
-
- Example::
-
- data_type = one_of("void int short long char float double")
- decl_data_type = Combine(data_type + Opt(Word('*')))
- ident = Word(alphas+'_', alphanums+'_')
- number = pyparsing_common.number
- arg = Group(decl_data_type + ident)
- LPAR, RPAR = map(Suppress, "()")
-
- code_body = nested_expr('{', '}', ignore_expr=(quoted_string | c_style_comment))
-
- c_function = (decl_data_type("type")
- + ident("name")
- + LPAR + Opt(DelimitedList(arg), [])("args") + RPAR
- + code_body("body"))
- c_function.ignore(c_style_comment)
-
- source_code = '''
- int is_odd(int x) {
- return (x%2);
- }
-
- int dec_to_hex(char hchar) {
- if (hchar >= '0' && hchar <= '9') {
- return (ord(hchar)-ord('0'));
- } else {
- return (10+ord(hchar)-ord('A'));
- }
- }
- '''
- for func in c_function.search_string(source_code):
- print("%(name)s (%(type)s) args: %(args)s" % func)
-
-
- prints::
-
- is_odd (int) args: [['int', 'x']]
- dec_to_hex (int) args: [['char', 'hchar']]
- """
- if ignoreExpr != ignore_expr:
- ignoreExpr = ignore_expr if ignoreExpr == quoted_string() else ignoreExpr
- if opener == closer:
- raise ValueError("opening and closing strings cannot be the same")
- if content is None:
- if isinstance(opener, str_type) and isinstance(closer, str_type):
- opener = typing.cast(str, opener)
- closer = typing.cast(str, closer)
- if len(opener) == 1 and len(closer) == 1:
- if ignoreExpr is not None:
- content = Combine(
- OneOrMore(
- ~ignoreExpr
- + CharsNotIn(
- opener + closer + ParserElement.DEFAULT_WHITE_CHARS,
- exact=1,
- )
- )
- ).set_parse_action(lambda t: t[0].strip())
- else:
- content = empty.copy() + CharsNotIn(
- opener + closer + ParserElement.DEFAULT_WHITE_CHARS
- ).set_parse_action(lambda t: t[0].strip())
- else:
- if ignoreExpr is not None:
- content = Combine(
- OneOrMore(
- ~ignoreExpr
- + ~Literal(opener)
- + ~Literal(closer)
- + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1)
- )
- ).set_parse_action(lambda t: t[0].strip())
- else:
- content = Combine(
- OneOrMore(
- ~Literal(opener)
- + ~Literal(closer)
- + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1)
- )
- ).set_parse_action(lambda t: t[0].strip())
- else:
- raise ValueError(
- "opening and closing arguments must be strings if no content expression is given"
- )
- ret = Forward()
- if ignoreExpr is not None:
- ret <<= Group(
- Suppress(opener) + ZeroOrMore(ignoreExpr | ret | content) + Suppress(closer)
- )
- else:
- ret <<= Group(Suppress(opener) + ZeroOrMore(ret | content) + Suppress(closer))
- ret.set_name("nested %s%s expression" % (opener, closer))
- return ret
-
-
-def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")):
- """Internal helper to construct opening and closing tag expressions, given a tag name"""
- if isinstance(tagStr, str_type):
- resname = tagStr
- tagStr = Keyword(tagStr, caseless=not xml)
- else:
- resname = tagStr.name
-
- tagAttrName = Word(alphas, alphanums + "_-:")
- if xml:
- tagAttrValue = dbl_quoted_string.copy().set_parse_action(remove_quotes)
- openTag = (
- suppress_LT
- + tagStr("tag")
- + Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue)))
- + Opt("/", default=[False])("empty").set_parse_action(
- lambda s, l, t: t[0] == "/"
- )
- + suppress_GT
- )
- else:
- tagAttrValue = quoted_string.copy().set_parse_action(remove_quotes) | Word(
- printables, exclude_chars=">"
- )
- openTag = (
- suppress_LT
- + tagStr("tag")
- + Dict(
- ZeroOrMore(
- Group(
- tagAttrName.set_parse_action(lambda t: t[0].lower())
- + Opt(Suppress("=") + tagAttrValue)
- )
- )
- )
- + Opt("/", default=[False])("empty").set_parse_action(
- lambda s, l, t: t[0] == "/"
- )
- + suppress_GT
- )
-    closeTag = Combine(Literal("</") + tagStr + ">", adjacent=False)
-
- openTag.set_name("<%s>" % resname)
- # add start results name in parse action now that ungrouped names are not reported at two levels
- openTag.add_parse_action(
- lambda t: t.__setitem__(
- "start" + "".join(resname.replace(":", " ").title().split()), t.copy()
- )
- )
- closeTag = closeTag(
- "end" + "".join(resname.replace(":", " ").title().split())
-    ).set_name("</%s>" % resname)
- openTag.tag = resname
- closeTag.tag = resname
- openTag.tag_body = SkipTo(closeTag())
- return openTag, closeTag
-
-
-def make_html_tags(
- tag_str: Union[str, ParserElement]
-) -> Tuple[ParserElement, ParserElement]:
- """Helper to construct opening and closing tag expressions for HTML,
- given a tag name. Matches tags in either upper or lower case,
- attributes with namespaces and with quoted or unquoted values.
-
- Example::
-
-        text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>'
-
-The best free DU Meter alternative is GlassWire (available as an installer or a portable app). Other interesting free DU Meter alternatives are NetSpeedMonitor (Free) and BlueByte (Free).
-NetSpeedMonitor (Free) is a free utility for monitoring your Internet connection.
-It allows you to monitor the download and upload speed of all ports connected to the PC, for each connection individually and as a whole. 8a78ff9644
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Excelrecoverytoolboxcrackserialkeygen __FULL__.md b/spaces/quidiaMuxgu/Expedit-SAM/Excelrecoverytoolboxcrackserialkeygen __FULL__.md
deleted file mode 100644
index ab12e71837e25be9382f97c5393ee38c4b35cbab..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Excelrecoverytoolboxcrackserialkeygen __FULL__.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-BlueSoleil is offered as a free download with limitations. Faster PC? ... BlueSoleil support is available ONLY from its developer IVT Corporation. Popular in ... 4d29de3e1b
-
-
-
diff --git a/spaces/radames/MusicGen-Continuation/tests/quantization/test_vq.py b/spaces/radames/MusicGen-Continuation/tests/quantization/test_vq.py
deleted file mode 100644
index c215099fedacae35c6798fdd9b8420a447aa16bb..0000000000000000000000000000000000000000
--- a/spaces/radames/MusicGen-Continuation/tests/quantization/test_vq.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from audiocraft.quantization.vq import ResidualVectorQuantizer
-
-
-class TestResidualVectorQuantizer:
-
- def test_rvq(self):
- x = torch.randn(1, 16, 2048)
- vq = ResidualVectorQuantizer(n_q=8, dimension=16, bins=8)
- res = vq(x, 1.)
- assert res.x.shape == torch.Size([1, 16, 2048])
diff --git a/spaces/radames/candle-segment-anything-wasm/style.css b/spaces/radames/candle-segment-anything-wasm/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/radames/candle-segment-anything-wasm/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Command and conquer red alert 3 registration code Where to find it if you bought the game legally.md b/spaces/raedeXanto/academic-chatgpt-beta/Command and conquer red alert 3 registration code Where to find it if you bought the game legally.md
deleted file mode 100644
index 30e5e0706523f7fd4bb0009bb06b9017e3fe83f8..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Command and conquer red alert 3 registration code Where to find it if you bought the game legally.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
Command and Conquer Red Alert 3 Registration Code Crack: How to Play the Game for Free
-
If you are a fan of real-time strategy games, you might have heard of Command and Conquer Red Alert 3, a popular game that was released in 2008 by Electronic Arts. The game is set in an alternate history where World War III is raging between three factions: the Allies, the Soviet Union, and the Empire of the Rising Sun. The game features a co-operative campaign mode, where you can team up with another player or an AI-controlled commander, as well as a competitive multiplayer mode, where you can battle against other players online.
-
command and conquer red alert 3 registration code crack
However, to play the game, you need a registration code, which is a unique serial number that verifies your ownership of the game. The registration code is usually provided when you purchase the game from an official source, such as EA's website or Steam. But what if you don't have a registration code, or you lost it, or you want to play the game without paying for it? Is there a way to crack the registration code and play the game for free?
-
The answer is yes, but it comes with some risks. In this article, we will show you two methods to get a registration code crack for Command and Conquer Red Alert 3, and explain the pros and cons of each method. We will also answer some frequently asked questions about the game and its registration code. Let's get started!
-
What is Command and Conquer Red Alert 3?
-
Command and Conquer Red Alert 3 is a real-time strategy game that was developed by EA Los Angeles and published by Electronic Arts in 2008. It is the third installment in the Red Alert series, which is a spin-off of the Command and Conquer franchise. The game is set in an alternate history where World War III is raging between three factions: the Allies, led by US President Howard T. Ackerman; the Soviet Union, led by Premier Anatoly Cherdenko; and the Empire of the Rising Sun, led by Emperor Yoshiro.
-
The game features a co-operative campaign mode, where you can team up with another player or an AI-controlled commander to complete missions across different locations around the world. Each faction has its own storyline, units, buildings, and special abilities. The game also features a competitive multiplayer mode, where you can battle against other players online using one of the three factions. The game has received generally positive reviews from critics and players alike, who praised its gameplay, graphics, humor, and co-op mode.
-
Why do you need a registration code?
-
A registration code is a unique serial number that verifies your ownership of the game. The registration code is usually provided when you purchase the game from an official source, such as EA's website or Steam. You need to enter the registration code when you install the game on your computer or when you launch it for the first time. The registration code is also required to access online features of the game, such as multiplayer mode or updates.
-
The purpose of the registration code is to prevent piracy and unauthorized distribution of the game. By requiring a registration code, EA hopes to ensure that only legitimate customers can play the game and enjoy its full features. However, some people may not have a registration code for various reasons, such as losing it, buying a second-hand copy of the game, such as buying a second-hand copy of the game, downloading it from an unofficial source, or borrowing it from a friend. In that case, you might want to use a crack to bypass the registration code and play the game for free.
-
red alert 3 serial key generator
-command and conquer 3 activation code free
-red alert 3 crack no cd download
-command and conquer red alert 3 keygen
-red alert 3 registration bypass
-command and conquer 3 license code
-red alert 3 crack only
-command and conquer red alert 3 serial number
-red alert 3 activation fix
-command and conquer 3 product key
-red alert 3 crack file
-command and conquer red alert 3 reloaded
-red alert 3 registration code not working
-command and conquer 3 registration key
-red alert 3 crack skidrow
-command and conquer red alert 3 patch
-red alert 3 activation code generator
-command and conquer 3 cd key crack
-red alert 3 crack razor1911
-command and conquer red alert 3 mods
-red alert 3 registration code generator online
-command and conquer 3 serial number generator
-red alert 3 crack download free
-command and conquer red alert 3 cheats
-red alert 3 registration code bypasser.exe download
-command and conquer 3 activation code generator online
-red alert 3 crack offline mode
-command and conquer red alert 3 trainer
-red alert 3 registration code invalid fix
-command and conquer 3 cd key generator online
-red alert 3 crack multiplayer lan
-command and conquer red alert 3 uprising keygen
-red alert 3 registration code already in use fix
-command and conquer 3 product key generator online
-red alert 3 crack windows 10
-command and conquer red alert 3 steam key
-red alert 3 activation code not working fix
-command and conquer 3 cd key changer
-red alert 3 crack no survey no password download
-command and conquer red alert 3 origin key
-red alert 3 registration code recovery tool download
-command and conquer 3 serial number recovery tool download
-red alert 3 crack direct download link
-command and conquer red alert 3 gameplay video download link
-red alert 3 registration code generator offline mode
-command and conquer 3 activation code generator offline mode
-red alert 3 crack for mac os x download
-command and conquer red alert 3 mac os x keygen
-red alert 3 registration code for mac os x
-command and conquer red alert 3 mac os x patch
-
What are the risks of using a crack?
-
A crack is a modified version of the game's executable file or a program that alters the game's behavior to bypass the registration code. A crack can be downloaded from various websites that offer pirated games or software. However, using a crack is not without risks. Some of the risks are:
-
-
Legal risk: Using a crack is illegal and violates the game's end user license agreement (EULA). You could face legal consequences if you are caught using a crack or distributing it to others.
-
Security risk: A crack could contain malware, such as viruses, trojans, worms, spyware, or ransomware, that could harm your computer or steal your personal information. You could also expose your computer to hackers or cybercriminals who could access your files or accounts.
-
Performance risk: A crack could cause the game to malfunction, crash, freeze, or lag. You could also experience graphical glitches, audio problems, or compatibility issues with your operating system or hardware.
-
Update risk: A crack could prevent you from updating the game to the latest version or accessing online features of the game, such as multiplayer mode or patches. You could also miss out on new content, bug fixes, or improvements that are released by the game developers.
-
-
Therefore, using a crack is not recommended and could have negative consequences for you and your computer. You should always use a legitimate registration code and purchase the game from an official source if you want to enjoy the game safely and legally.
-
How to get a registration code crack for Command and Conquer Red Alert 3
-
If you still want to use a crack to play Command and Conquer Red Alert 3 for free, despite the risks involved, there are two methods that you can try. However, we do not endorse or support these methods and we are not responsible for any damage or loss that may result from using them. Use them at your own risk and discretion.
-
Method 1: Use a serial number generator
-
A serial number generator is a program that generates random serial numbers that can be used as registration codes for various games or software. You can download a serial number generator for Command and Conquer Red Alert 3 from some websites that offer pirated games or software. Here are the steps to use this method:
-
Step 1: Download a serial number generator
-
You can search for a serial number generator for Command and Conquer Red Alert 3 on Google or other search engines. You will find many websites that claim to offer such a program. However, be careful and avoid clicking on suspicious links or downloading files from untrusted sources. Some of these websites could be scams or contain malware that could harm your computer.
-
One of the websites that offers a serial number generator for Command and Conquer Red Alert 3 is Smart Serials. You can visit this website and click on the "Download Command & Conquer Red Alert 3 Serial number" button. You will be redirected to another page where you will have to verify that you are human by completing a captcha. After that, you will be able to download the serial number generator as a ZIP file.
-
Step 2: Run the generator and copy a serial number
-
After downloading the ZIP file, you will have to extract it using a program like WinRAR or 7-Zip. You will find an executable file named "Command & Conquer Red Alert 3 Keygen.exe" inside the extracted folder. Double-click on this file to run the serial number generator.
-
You will see a window with a button that says "Generate". Click on this button and wait for a few seconds. The generator will produce a random serial number that looks like this: "XXXX-XXXX-XXXX-XXXX-XXXX". Copy this serial number by selecting it and pressing Ctrl+C on your keyboard.
-
Step 3: Enter the serial number when prompted by the game
-
Now that you have copied a serial number from the generator, you can use it to install or launch Command and Conquer Red Alert 3 on your computer. If you have already installed the game, you can launch it by double-clicking on its icon on your desktop or in your Start menu. If you have not installed the game yet, you can insert the game disc into your CD/DVD drive or mount the game image using a program like Daemon Tools or PowerISO.
-
When you install or launch the game, you will be prompted to enter your registration code. Paste the serial number that you copied from the generator by pressing Ctrl+V on your keyboard. Click on "Done" and wait for the game to verify your registration code. If the serial number is valid, you will be able to play the game without any problems.
-
However, this method has some drawbacks. First of all, the serial number generator may not work for all versions of the game or for all regions. You may have to try different generators or serial numbers until you find one that works for you. Second, the serial number generator may contain malware or viruses that could harm your computer. You should scan the file with an antivirus program before running it. Third, the serial number generator may not be compatible with the latest updates or patches of the game. You may have to disable your internet connection or firewall to prevent the game from checking for updates or verifying your registration code online.
-
Method 2: Use a pre-cracked version of the game
-
A pre-cracked version of the game is a modified version of the game that has already been cracked by someone else and does not require a registration code to play. You can download a pre-cracked version of Command and Conquer Red Alert 3 from some websites that offer pirated games or software. Here are the steps to use this method:
-
Step 1: Download a pre-cracked version of the game from a reputable source
-
You can search for a pre-cracked version of Command and Conquer Red Alert 3 on Google or other search engines. You will find many websites that claim to offer such a version. However, be careful and avoid clicking on suspicious links or downloading files from untrusted sources. Some of these websites could be scams or contain malware that could harm your computer.
-
One of the websites that offers a pre-cracked version of Command and Conquer Red Alert 3 is MrPcGamer. You can visit this website and click on the "Go To Donwload" button. You will be redirected to another page where you will have to choose one of the download links from different hosts. You can use any link that works for you, but we recommend using MegaUp.net or Mega.nz for faster and safer downloads. You will have to download 3 parts of the game, each around 2.5 GB in size.
-
Step 2: Install the game and follow the instructions
-
After downloading all 3 parts of the game, you will have to extract them using a program like WinRAR or 7-Zip. You will find an ISO file named "Command.and.Conquer.Red.Alert.3.MULTi12-PROPHET" inside the extracted folder. This is an image file that contains all the data of the game. You will have to mount this image file using a program like Daemon Tools or PowerISO.
-
After mounting the image file, you will see a virtual CD/DVD drive on your computer that contains the game's setup files. Double-click on this drive to open it and run the "Setup.exe" file to start installing the game. Follow the instructions on the screen and choose your preferred language and installation directory. The installation process may take some time depending on your computer's speed.
-
After installing the game, you will have to copy some files from the image file to your installation directory. Open the image file again and go to the "PROPHET" folder inside it. You will see two files named "ra3_1.12.game" and "ra3_1.12.game.bak". Copy these two files and paste them into your installation directory, which is usually located at "C:\Program Files (x86)\Electronic Arts\Command & Conquer Red Alert 3". When prompted, choose to replace or overwrite the existing files.
-
Step 3: Enjoy the game without needing a registration code
-
Now that you have installed and cracked the game, you can launch it by double-clicking on its icon on your desktop or in your Start menu. You will not be asked to enter a registration code when you launch the game. You will be able to play the game without any problems.
-
However, this method also has some drawbacks. First of all, downloading a pre-cracked version of the game is illegal and violates the game's end user license agreement (EULA). You could face legal consequences if you are caught using or distributing a pre-cracked version of the game. Second, downloading a pre-cracked version of the game could expose your computer to malware or viruses that could harm your computer. You should scan the file with an antivirus program before installing it. Third, downloading a pre-cracked version of the game could prevent you from updating the game to the latest version or accessing online features of the game, such as multiplayer mode or patches. You could also miss out on new content, bug fixes, or improvements that are released by the game developers.
-
Conclusion
-
In this article, we have shown you two methods to get a registration code crack for Command and Conquer Red Alert 3, and explained the pros and cons of each method. We have also answered some frequently asked questions about the game and its registration code. However, we do not endorse or support these methods and we are not responsible for any damage or loss that may result from using them. Use them at your own risk and discretion.
-
The best way to play Command and Conquer Red Alert 3 is to use a legitimate registration code and purchase the game from an official source, such as EA's website or Steam. This way, you can enjoy the game safely and legally, and support the game developers who worked hard to create this amazing game. You can also access online features of the game, such as multiplayer mode or updates, and get new content, bug fixes, or improvements that are released by the game developers.
-
Command and Conquer Red Alert 3 is a fun and exciting real-time strategy game that will keep you entertained for hours. Whether you play as the Allies, the Soviet Union, or the Empire of the Rising Sun, you will experience a unique storyline, units, buildings, and special abilities. You can also team up with another player or an AI-controlled commander in co-operative campaign mode, or battle against other players online in competitive multiplayer mode. The game features stunning graphics, humor, and co-op mode.
-
We hope you enjoyed this article and learned something new. If you have any questions or comments, feel free to leave them below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions about Command and Conquer Red Alert 3 and its registration code:
-
-
Q: How can I find my lost registration code for Command and Conquer Red Alert 3?
-
A: If you have purchased the game from an official source, such as EA's website or Steam, you can find your registration code in your account settings or library. You can also contact EA's customer support and provide proof of purchase to retrieve your registration code.
-
Q: How can I update Command and Conquer Red Alert 3 to the latest version?
-
A: If you have purchased the game from an official source, such as EA's website or Steam, you can update the game automatically through your account settings or library. You can also download the latest patch manually from EA's website or other websites that offer official patches.
-
Q: How can I play Command and Conquer Red Alert 3 online with other players?
-
A: If you have purchased the game from an official source, such as EA's website or Steam, you can play online with other players through your account settings or library. You will need a valid registration code and an internet connection to access online features of the game.
-
Q: How can I play Command and Conquer Red Alert 3 offline without a registration code?
-
A: If you have purchased the game from an official source, such as EA's website or Steam, you can play offline without a registration code by using offline mode in your account settings or library. However, you will not be able to access online features of the game.
-
Q: How can I get more maps for Command and Conquer Red Alert 3?
-
A: If you have purchased the game from an official source, such as EA's website or Steam, you can get more maps for Command and Conquer Red Alert 3 by downloading them from EA's website or other websites that offer official maps. You can also create your own maps using the World Builder tool that comes with the game.
Fidelity: where you can adjust the level of fidelity enhancement that restores the lost detail and clarity of compressed audio files.
-
Ambience: where you can adjust the level of ambience enhancement that adds depth and spaciousness to your audio.
-
3D Surround: where you can adjust the level of 3D surround sound enhancement that creates a realistic sound field around you.
-
Dynamic Boost: where you can adjust the level of dynamic boost enhancement that increases the loudness and punch of your audio without distorting it.
-
HyperBass: where you can adjust the level of hyperbass enhancement that adds deep and rich bass to your audio.
-
-
You can also access more advanced options by clicking on the Advanced button on the bottom right corner of the Processing tab. You can access options such as High Frequency Restoration, Harmonic Fidelity Restoration, Spectrum Analyzer, etc.
-
How to troubleshoot and optimize DFX Audio Enhancer
-
DFX Audio Enhancer is a very reliable and stable software that works smoothly with most audio devices and applications. However, sometimes you may encounter some issues or problems that affect your audio quality or performance. Here are some tips on how to troubleshoot and optimize DFX Audio Enhancer:
The common issues and solutions of DFX Audio Enhancer
-
Some of the common issues that you may face while using DFX Audio Enhancer are:
-
-
No sound or distorted sound: This may happen if your output device or format is not compatible with DFX Audio Enhancer. To fix this, you can try changing your output device or format in the Audio Device Settings in the Options tab. You can also try disabling or enabling other audio enhancers or effects that may interfere with DFX Audio Enhancer.
-
No effect or weak effect: This may happen if your audio source or content is not compatible with DFX Audio Enhancer. To fix this, you can try changing your audio source or content to a different format or quality. You can also try increasing the level of effects in the Processing tab or selecting a different mode or preset.
-
No integration or conflict with media player or browser: This may happen if your media player or browser is not supported by DFX Audio Enhancer. To fix this, you can try updating your media player or browser to the latest version or installing a different media player or browser that is supported by DFX Audio Enhancer. You can also try disabling or enabling the integration of DFX Audio Enhancer with your media player or browser in the Options tab.
-
-
The best practices and tips for using DFX Audio Enhancer
-
Some of the best practices and tips for using DFX Audio Enhancer are:
-
-
Experiment with different modes and presets: DFX Audio Enhancer offers a variety of modes and presets that suit different types of audio content and preferences. You can experiment with different modes and presets to find the one that sounds best for you.
-
Adjust the effects according to your taste: DFX Audio Enhancer allows you to adjust the effects manually in the Processing tab. You can fine-tune the effects according to your taste and preference.
-
Use headphones or speakers for optimal sound quality: DFX Audio Enhancer can optimize your headphones or speakers for the best sound experience. You can select your headphones or speakers type in the Options tab and enjoy the enhanced sound quality.
-
Update DFX Audio Enhancer regularly: DFX Audio Enhancer is constantly updated with new features and improvements. You can check for updates in the Options tab and install them to enjoy the latest version of DFX Audio Enhancer.
-
-
Conclusion
-
In conclusion, DFX Audio Enhancer is a powerful software that enhances the sound quality of your PC by applying various effects and features. It works with any media player, browser, or application that plays audio on your PC. It can also optimize your headphones, speakers, or other audio devices for the best sound experience.
-
If you want to enjoy the best sound quality on your PC, you should download and install DFX Audio Enhancer 10.113.0.0 MASTER PACK [32 64] CORE setup free from this link: https://www.dfxaudioenhancersetupfree.com. This is a trusted and verified source that offers the latest version of DFX Audio Enhancer 10.113.0.0 MASTER PACK [32 64] CORE setup free.
-
You can also use DFX Audio Enhancer to improve your audio quality by selecting from different modes and presets, adjusting the effects manually, troubleshooting and optimizing DFX Audio Enhancer, etc.
-
I hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
Frequently Asked Questions
-
Here are some frequently asked questions about DFX Audio Enhancer:
-
-
-
Is DFX Audio Enhancer safe to use?
-
Yes, DFX Audio Enhancer is safe to use. It does not contain any viruses, malware, spyware, adware, etc. It does not harm your PC or audio files in any way. It only enhances your sound quality by applying various effects and features.
-
Is DFX Audio Enhancer free to use?
-
Yes, DFX Audio Enhancer is free to use. You can download and install it for free from this link: https://www.dfxaudioenhancersetupfree.com. This is a trusted and verified source that offers the latest version of DFX Audio Enhancer 10.113.0.0 MASTER PACK [32 64] CORE setup free.
-
What are the system requirements for using DFX Audio Enhancer?
-
The system requirements for using DFX Audio Enhancer are:
-
A Windows PC running Windows XP/Vista/7/8/10 (32-bit or 64-bit)
-
A sound card compatible with DirectX 9.0c or higher
-
A media player, browser, or application that plays audio on your PC
-
Headphones, speakers, or another audio device connected to your PC
-
-
How do I uninstall DFX Audio Enhancer?
-
If you want to uninstall DFX Audio Enhancer from your PC, you can follow these steps:
-
Go to Start > Control Panel > Programs > Uninstall a program
-
Select DFX Audio Enhancer from the list of programs and click Uninstall
-
Follow the instructions on the screen to complete the uninstallation process
-
-
Where can I get more information about DFX Audio Enhancer?
-
If you want to get more information about DFX Audio Enhancer, you can visit these websites:
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Lagu India Mann Mp3 PATCHED.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Lagu India Mann Mp3 PATCHED.md
deleted file mode 100644
index 97ab1387bf28009f0ad39e43e768edb2c4872d9f..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Download Lagu India Mann Mp3 PATCHED.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-
Download Lagu India Mann Mp3
-
If you are a fan of romantic Bollywood movies, you might want to download Lagu India Mann Mp3, a collection of songs from the 1999 film Mann. Mann is a remake of the 1957 classic An Affair to Remember, starring Aamir Khan and Manisha Koirala as two lovers who meet on a cruise and fall in love, despite being engaged to other people. The film features some of the most popular songs of the late 90s, composed by Sanjeev Darshan and sung by Udit Narayan, Alka Yagnik, Anuradha Paudwal and others.
-
Some of the songs that you can download from Lagu India Mann Mp3 are:
Kehna Hai Tumse: A romantic duet that expresses the feelings of the protagonists for each other.
-
Nasha Yeh Pyar Ka: A peppy song that shows Aamir Khan's character trying to woo Manisha Koirala's character.
-
Tinak Tin Tana: A playful song that depicts the fun moments between the couple on the cruise.
-
Chaha Hai Tujhko: A sad song that portrays the pain of separation and longing for the beloved.
-
Mera Mann: A soulful song that reflects the inner turmoil and conflict of the characters.
-
-
You can download Lagu India Mann Mp3 from various websites that offer free or paid downloads of Bollywood songs. However, you should be careful about the quality and legality of the downloads, as some websites may contain viruses or malware, or violate copyright laws. You can also stream Lagu India Mann Mp3 online on platforms like YouTube[^1^] or Spotify.
-
Lagu India Mann Mp3 is a great way to enjoy the music and emotions of Mann, one of the most successful and acclaimed films of Aamir Khan and Manisha Koirala. Download Lagu India Mann Mp3 today and relive the magic of Mann!
If you want to know more about the film Mann, you can also watch it online or on DVD. Mann is a film that explores the themes of love, fate, sacrifice and destiny. It is a film that will make you laugh, cry and feel the emotions of the characters. Mann is a film that will stay with you for a long time.
-
-
Mann is also a film that showcases the acting talents of Aamir Khan and Manisha Koirala, who have given some of their best performances in this film. Aamir Khan is known for his versatility and perfectionism, and he brings out the charm and charisma of his character Dev Karan Singh, a playboy who falls in love for the first time. Manisha Koirala is known for her beauty and grace, and she portrays the innocence and dignity of her character Priya Verma, a painter who is engaged to a wealthy businessman. The chemistry between the two actors is palpable and realistic, and they make you root for their love story.
Mann is a film that you should not miss if you are a fan of Bollywood romance. It is a film that will touch your heart and soul. Download Lagu India Mann Mp3 and watch Mann today!
7b8c122e87
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Dammu Telugu Movie Download Dvdrip 2) The Fastest and Easiest Way to Watch the Awesome Film.md b/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Dammu Telugu Movie Download Dvdrip 2) The Fastest and Easiest Way to Watch the Awesome Film.md
deleted file mode 100644
index 2e1934c48d188d5620c923f2f07bbf49bf6ac452..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Dammu Telugu Movie Download Dvdrip 2) The Fastest and Easiest Way to Watch the Awesome Film.md
+++ /dev/null
@@ -1,127 +0,0 @@
-
-
HD Online Player (Dammu Telugu Movie Download Dvdrip 2)
-
If you are a fan of Telugu movies, you might have heard of Dammu, a 2012 action drama film starring Jr. NTR, Trisha and Karthika Nair. The film was a hit at the box office and received positive reviews from critics and audiences alike. But what if you missed watching it on the big screen or want to watch it again at your convenience? Well, you are in luck because you can download Dammu Telugu movie in HD quality using an online player.
-
In this article, we will tell you everything you need to know about Dammu Telugu movie, how to download it in dvdrip format using an online player, where to watch it online legally and for free or with a subscription, and some frequently asked questions related to the topic. So, without further ado, let's get started.
-
HD Online Player (Dammu Telugu Movie Download Dvdrip 2)
Dammu (also known as Dhammu) is a 2012 Indian Telugu-language action film written and directed by Boyapati Srinu. The film follows Rama Chandra (Jr. NTR), an orphan who agrees to pose as the heir to a rich and powerful royal family called Suryavamsi, who are looking for an heir after their son dies in an accident. However, Rama Chandra soon finds himself in a conflict with another royal family called Chandravamsi, who are their rivals for generations.
-
The film features Jr. NTR as Rama Chandra / Raja Vasireddy Vijayadwaja Srisimha, Trisha as Sathya, Karthika Nair as Neelaveni, Nassar as Chandravamsi King, Brahmanandam as Jaanaki, Bhanupriya as Rama Chandra's mother, Venu Thottempudi as Rama Chandra's brother-in-law, Abhinaya as Rama Chandra's elder sister, Hari Teja as Rama Chandra's elder sister, Chitralekha as Rama Chandra's elder sister, Suman as Suryavamsi King / Raja Surya Pratap Singh, Kota Srinivasa Rao as Raja of Veera Durgam Fort, Ahuti Prasad as Neelaveni's father, Ali as Rama Chandra's friend, Rahul Dev as a police officer, Sampath Raj as Chandravamsi King's elder son, Kishore as Chandravamsi King's middle son, and Venu Madhav as Chandravamsi King's youngest son.
-
The film was released theatrically on 27 April 2012 along with a Tamil dubbed version titled Singamagan. The film received mostly positive reviews from critics who praised Jr. NTR's performance, action sequences, music by M.M. Keeravani and cinematography by Arthur A. Wilson. The film was also a commercial success grossing over ₹55 crore worldwide.
-
Why should you watch Dammu Telugu movie?
-
There are many reasons why you should watch Dammu Telugu movie if you haven't already or want to watch it again. Here are some of them:
-
-
-
The film is a perfect blend of action, drama, romance and comedy that will keep you entertained throughout.
-
The film showcases Jr. NTR's versatility as an actor who can pull off different roles with ease.
-
The film has some stunning action scenes that will leave you breathless.
-
The film has some melodious songs composed by M.M. Keeravani that will touch your heart.
-
The film has some beautiful visuals captured by Arthur A. Wilson that will make you admire the scenic locations.
-
The film has some powerful dialogues written by M. Rathnam that will make you clap your hands.
-
The film has some memorable performances by Trisha, Karthika Nair, Nassar, Brahmanandam and others that will make you laugh and cry.
-
-
How to download Dammu Telugu movie in HD quality?
-
If you want to download Dammu Telugu movie in HD quality using an online player then follow these simple steps:
-
-
Find an online player that allows you to download movies in dvdrip format.
-
Search for Dammu Telugu movie on the online player.
-
Select the desired quality (720p or 1080p) and click on the download button.
-
Wait for the download to complete.
-
Enjoy watching Dammu Telugu movie on your device.
-
-
What is an online player?
-
An online player is a software application or website that allows you to stream or download movies online without having to install any additional software or plugins on your device. An online player usually has a large collection of movies from different genres, languages, countries, and years that you can choose from. An online player also offers various features such as subtitles, multiple audio tracks, resume playback, speed control, etc.
-
What is dvdrip format?
-
Dvdrip format is a video file format that is ripped from a DVD source. Dvdrip format usually has a high-quality resolution (720p or 1080p), a low file size (700 MB or 1 GB), compatibility with most devices (PCs, smartphones, tablets, etc.), and convenience (no need for DVD players or discs).
-
What are the benefits of downloading movies in dvdrip format?
-
There are many benefits of downloading movies in dvdrip format such as:
-
-
You can enjoy watching movies in high quality resolution without compromising on speed or storage space.
-
You can watch movies offline without having to worry about internet connection or buffering issues.
-
You can watch movies on any device that supports dvdrip format without having to convert them into other formats.
-
You can save money by not having to buy or rent DVDs or pay for streaming services.
-
-
What are the drawbacks of downloading movies in dvdrip format?
-
However, there are also some drawbacks of downloading movies in dvdrip format such as:
-
-
You may be violating the copyrights of the movie makers and distributors by downloading movies illegally.
-
You may be exposing your device to malware or viruses by downloading movies from untrusted sources.
-
You may be facing legal consequences or penalties by downloading movies from banned or blocked websites.
-
You may be missing out on some features or extras that are available on DVDs or streaming platforms.
-
-
What are some tips to download movies safely and legally?
-
If you want to download movies safely and legally then follow these tips:
-
-
Use a VPN (virtual private network) to hide your IP address and location from the authorities and hackers.
-
Check the source of the movie before downloading it. Look for reviews, ratings, comments, and feedback from other users.
-
Avoid downloading movies from torrent sites or peer-to-peer networks as they are prone to malware and viruses.
-
Respect the copyrights of the movie makers and distributors by downloading movies only from authorized websites or platforms.
-
-
Where can you watch Dammu Telugu movie online?
-
If you don't want to download Dammu Telugu movie then you can also watch it online legally and for free or with a subscription. Here are some of the websites and platforms where you can watch Dammu Telugu movie online:
-
ZEE5
-
ZEE5 is a video-on-demand platform that offers a variety of content in different languages, genres, and formats. You can watch Dammu Telugu movie on ZEE5 for free with ads or with a subscription that starts from ₹99 per month. ZEE5 also offers features such as offline download, subtitles, multiple audio tracks, resume playback, etc. You can watch ZEE5 on your PC, smartphone, tablet, smart TV, etc.
-
YouTube
-
YouTube is a video-sharing platform that allows users to upload, watch, share, and comment on videos. You can watch Dammu Telugu movie on YouTube for free with ads or with a YouTube Premium subscription that costs ₹129 per month. YouTube also offers features such as offline download, subtitles, multiple audio tracks, resume playback, etc. You can watch YouTube on your PC, smartphone, tablet, smart TV, etc.
-
Other options
-
There are also other options to watch Dammu Telugu movie online, such as Netflix, Amazon Prime Video, and Hotstar. However, these platforms require a subscription that ranges from ₹199 to ₹999 per month. These platforms also offer features such as offline download, subtitles, multiple audio tracks, resume playback, etc. You can watch these platforms on your PC, smartphone, tablet, smart TV, etc.
-
Conclusion
-
Dammu Telugu movie is a 2012 action drama film starring Jr. NTR, Trisha, and Karthika Nair. The film is about an orphan who poses as the heir to a royal family and gets involved in a feud with another royal family. The film was a hit at the box office and received positive reviews from critics and audiences alike. You can download Dammu Telugu movie in HD quality using an online player or watch it online legally, either for free or with a subscription. We hope this article has helped you to know more about Dammu Telugu movie and how to download or watch it online. If you have any questions or feedback, please feel free to leave them in the comments section below.
-
FAQs
-
Here are some frequently asked questions and their answers related to Dammu Telugu movie and how to download or watch it online:
-
-
Who is the director of Dammu Telugu movie? The director of Dammu Telugu movie is Boyapati Srinu, who is known for his action films such as Simha, Legend, Sarrainodu, Jaya Janaki Nayaka, Vinaya Vidheya Rama, and Ala Vaikunthapurramuloo.
-
Who is the music director of Dammu Telugu movie? The music director of Dammu Telugu movie is M.M. Keeravani, who is one of the most acclaimed composers in Telugu cinema. He has composed music for films such as Baahubali, Eega, Magadheera, Sye, Vikramarkudu, Chatrapathi, etc.
-
What are some of the hit songs from Dammu Telugu movie? Some of the hit songs from Dammu Telugu movie are Ruler, Sound Of Vel, Vaasthu Bagunde, Raja Vasi Reddy, O Lilly, etc.
-
Is Dammu Telugu movie available on Netflix? No, Dammu Telugu movie is not available on Netflix as of now. However, you can watch it on ZEE5, YouTube, or the other platforms mentioned above.
-
Is Dammu Telugu movie dubbed in Hindi? Yes, Dammu Telugu movie is dubbed in Hindi as Dhammu, which was released in 2016 by Goldmines Telefilms.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/ramiin2/AutoGPT/autogpt/json_utils/json_fix_llm.py b/spaces/ramiin2/AutoGPT/autogpt/json_utils/json_fix_llm.py
deleted file mode 100644
index 869aed125cfb8cd7a69ed02eeb389cc72a3e296b..0000000000000000000000000000000000000000
--- a/spaces/ramiin2/AutoGPT/autogpt/json_utils/json_fix_llm.py
+++ /dev/null
@@ -1,220 +0,0 @@
-"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance
-of the ChatGPT API or LLM models."""
-from __future__ import annotations
-
-import contextlib
-import json
-from typing import Any, Dict
-
-from colorama import Fore
-from regex import regex
-
-from autogpt.config import Config
-from autogpt.json_utils.json_fix_general import correct_json
-from autogpt.llm_utils import call_ai_function
-from autogpt.logs import logger
-from autogpt.speech import say_text
-
-JSON_SCHEMA = """
-{
- "command": {
- "name": "command name",
- "args": {
- "arg name": "value"
- }
- },
- "thoughts":
- {
- "text": "thought",
- "reasoning": "reasoning",
- "plan": "- short bulleted\n- list that conveys\n- long-term plan",
- "criticism": "constructive self-criticism",
- "speak": "thoughts summary to say to user"
- }
-}
-"""
-
-CFG = Config()
-
-
-def auto_fix_json(json_string: str, schema: str) -> str:
- """Fix the given JSON string to make it parseable and fully compliant with
- the provided schema using GPT-3.
-
- Args:
- json_string (str): The JSON string to fix.
- schema (str): The schema to use to fix the JSON.
- Returns:
- str: The fixed JSON string.
- """
- # Try to fix the JSON using GPT:
- function_string = "def fix_json(json_string: str, schema:str=None) -> str:"
- args = [f"'''{json_string}'''", f"'''{schema}'''"]
- description_string = (
- "This function takes a JSON string and ensures that it"
- " is parseable and fully compliant with the provided schema. If an object"
- " or field specified in the schema isn't contained within the correct JSON,"
- " it is omitted. The function also escapes any double quotes within JSON"
- " string values to ensure that they are valid. If the JSON string contains"
- " any None or NaN values, they are replaced with null before being parsed."
- )
-
- # If it doesn't already start with a "`", add one:
- if not json_string.startswith("`"):
- json_string = "```json\n" + json_string + "\n```"
- result_string = call_ai_function(
- function_string, args, description_string, model=CFG.fast_llm_model
- )
- logger.debug("------------ JSON FIX ATTEMPT ---------------")
- logger.debug(f"Original JSON: {json_string}")
- logger.debug("-----------")
- logger.debug(f"Fixed JSON: {result_string}")
- logger.debug("----------- END OF FIX ATTEMPT ----------------")
-
- try:
- json.loads(result_string) # just check the validity
- return result_string
- except json.JSONDecodeError: # noqa: E722
- # Get the call stack:
- # import traceback
- # call_stack = traceback.format_exc()
- # print(f"Failed to fix JSON: '{json_string}' "+call_stack)
- return "failed"
-
-
-def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]:
- """Fix the given JSON string to make it parseable and fully compliant with two techniques.
-
- Args:
- json_string (str): The JSON string to fix.
-
- Returns:
- str: The fixed JSON string.
- """
-
- # Parse and print Assistant response
- assistant_reply_json = fix_and_parse_json(assistant_reply)
- if assistant_reply_json == {}:
- assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets(
- assistant_reply
- )
-
- if assistant_reply_json != {}:
- return assistant_reply_json
-
- logger.error(
- "Error: The following AI output couldn't be converted to a JSON:\n",
- assistant_reply,
- )
- if CFG.speak_mode:
- say_text("I have received an invalid JSON response from the OpenAI API.")
-
- return {}
-
-
-def fix_and_parse_json(
- json_to_load: str, try_to_fix_with_gpt: bool = True
-) -> Dict[Any, Any]:
- """Fix and parse JSON string
-
- Args:
- json_to_load (str): The JSON string.
- try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT.
- Defaults to True.
-
- Returns:
- str or dict[Any, Any]: The parsed JSON.
- """
-
- with contextlib.suppress(json.JSONDecodeError):
- json_to_load = json_to_load.replace("\t", "")
- return json.loads(json_to_load)
-
- with contextlib.suppress(json.JSONDecodeError):
- json_to_load = correct_json(json_to_load)
- return json.loads(json_to_load)
- # Let's do something manually:
- # sometimes GPT responds with something BEFORE the braces:
- # "I'm sorry, I don't understand. Please try again."
- # {"text": "I'm sorry, I don't understand. Please try again.",
- # "confidence": 0.0}
- # So let's try to find the first brace and then parse the rest
- # of the string
- try:
- brace_index = json_to_load.index("{")
- maybe_fixed_json = json_to_load[brace_index:]
- last_brace_index = maybe_fixed_json.rindex("}")
- maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1]
- return json.loads(maybe_fixed_json)
- except (json.JSONDecodeError, ValueError) as e:
- return try_ai_fix(try_to_fix_with_gpt, e, json_to_load)
-
-
-def try_ai_fix(
- try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str
-) -> Dict[Any, Any]:
- """Try to fix the JSON with the AI
-
- Args:
- try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI.
- exception (Exception): The exception that was raised.
- json_to_load (str): The JSON string to load.
-
- Raises:
- exception: If try_to_fix_with_gpt is False.
-
- Returns:
- str or dict[Any, Any]: The JSON string or dictionary.
- """
- if not try_to_fix_with_gpt:
- raise exception
- if CFG.debug_mode:
- logger.warn(
- "Warning: Failed to parse AI output, attempting to fix."
- "\n If you see this warning frequently, it's likely that"
- " your prompt is confusing the AI. Try changing it up"
- " slightly."
- )
- # Now try to fix this up using the ai_functions
- ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA)
-
- if ai_fixed_json != "failed":
- return json.loads(ai_fixed_json)
- # This allows the AI to react to the error message,
- # which usually results in it correcting its ways.
- # logger.error("Failed to fix AI output, telling the AI.")
- return {}
-
-
-def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str):
- if CFG.speak_mode and CFG.debug_mode:
- say_text(
- "I have received an invalid JSON response from the OpenAI API. "
- "Trying to fix it now."
- )
- logger.error("Attempting to fix JSON by finding outermost brackets\n")
-
- try:
- json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}")
- json_match = json_pattern.search(json_string)
-
- if json_match:
- # Extract the valid JSON object from the string
- json_string = json_match.group(0)
- logger.typewriter_log(
- title="Apparently json was fixed.", title_color=Fore.GREEN
- )
- if CFG.speak_mode and CFG.debug_mode:
- say_text("Apparently json was fixed.")
- else:
- return {}
-
- except (json.JSONDecodeError, ValueError):
- if CFG.debug_mode:
- logger.error(f"Error: Invalid JSON: {json_string}\n")
- if CFG.speak_mode:
- say_text("Didn't work. I will have to ignore this response then.")
- logger.error("Error: Invalid JSON, setting it to empty JSON now.\n")
- json_string = {}
-
- return fix_and_parse_json(json_string)
diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/content-disposition/README.md b/spaces/rayan-saleh/whisper2notion/server/node_modules/content-disposition/README.md
deleted file mode 100644
index 3a0bb055949cdaed008f0f85e111624214213873..0000000000000000000000000000000000000000
--- a/spaces/rayan-saleh/whisper2notion/server/node_modules/content-disposition/README.md
+++ /dev/null
@@ -1,142 +0,0 @@
-# content-disposition
-
-[![NPM Version][npm-image]][npm-url]
-[![NPM Downloads][downloads-image]][downloads-url]
-[![Node.js Version][node-version-image]][node-version-url]
-[![Build Status][github-actions-ci-image]][github-actions-ci-url]
-[![Test Coverage][coveralls-image]][coveralls-url]
-
-Create and parse HTTP `Content-Disposition` header
-
-## Installation
-
-```sh
-$ npm install content-disposition
-```
-
-## API
-
-```js
-var contentDisposition = require('content-disposition')
-```
-
-### contentDisposition(filename, options)
-
-Create an attachment `Content-Disposition` header value using the given file name,
-if supplied. The `filename` is optional and if no file name is desired, but you
-want to specify `options`, set `filename` to `undefined`.
-
-```js
-res.setHeader('Content-Disposition', contentDisposition('∫ maths.pdf'))
-```
-
-**note** HTTP headers are of the ISO-8859-1 character set. If you are writing this
-header through a means different from `setHeader` in Node.js, you'll want to specify
-the `'binary'` encoding in Node.js.
-
-#### Options
-
-`contentDisposition` accepts these properties in the options object.
-
-##### fallback
-
-If the `filename` option is outside ISO-8859-1, then the file name is actually
-stored in a supplemental field for clients that support Unicode file names and
-a ISO-8859-1 version of the file name is automatically generated.
-
-This specifies the ISO-8859-1 file name to override the automatic generation or
-disables the generation all together, defaults to `true`.
-
- - A string will specify the ISO-8859-1 file name to use in place of automatic
- generation.
- - `false` will disable including a ISO-8859-1 file name and only include the
- Unicode version (unless the file name is already ISO-8859-1).
- - `true` will enable automatic generation if the file name is outside ISO-8859-1.
-
-If the `filename` option is ISO-8859-1 and this option is specified and has a
-different value, then the `filename` option is encoded in the extended field
-and this set as the fallback field, even though they are both ISO-8859-1.
-
-##### type
-
-Specifies the disposition type, defaults to `"attachment"`. This can also be
-`"inline"`, or any other value (all values except inline are treated like
-`attachment`, but can convey additional information if both parties agree to
-it). The type is normalized to lower-case.
-
-### contentDisposition.parse(string)
-
-```js
-var disposition = contentDisposition.parse('attachment; filename="EURO rates.txt"; filename*=UTF-8\'\'%e2%82%ac%20rates.txt')
-```
-
-Parse a `Content-Disposition` header string. This automatically handles extended
-("Unicode") parameters by decoding them and providing them under the standard
-parameter name. This will return an object with the following properties (examples
-are shown for the string `'attachment; filename="EURO rates.txt"; filename*=UTF-8\'\'%e2%82%ac%20rates.txt'`):
-
- - `type`: The disposition type (always lower case). Example: `'attachment'`
-
- - `parameters`: An object of the parameters in the disposition (name of parameter
- always lower case and extended versions replace non-extended versions). Example:
- `{filename: "€ rates.txt"}`
-
-## Examples
-
-### Send a file for download
-
-```js
-var contentDisposition = require('content-disposition')
-var destroy = require('destroy')
-var fs = require('fs')
-var http = require('http')
-var onFinished = require('on-finished')
-
-var filePath = '/path/to/public/plans.pdf'
-
-http.createServer(function onRequest (req, res) {
- // set headers
- res.setHeader('Content-Type', 'application/pdf')
- res.setHeader('Content-Disposition', contentDisposition(filePath))
-
- // send file
- var stream = fs.createReadStream(filePath)
- stream.pipe(res)
- onFinished(res, function () {
- destroy(stream)
- })
-})
-```
-
-## Testing
-
-```sh
-$ npm test
-```
-
-## References
-
-- [RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1][rfc-2616]
-- [RFC 5987: Character Set and Language Encoding for Hypertext Transfer Protocol (HTTP) Header Field Parameters][rfc-5987]
-- [RFC 6266: Use of the Content-Disposition Header Field in the Hypertext Transfer Protocol (HTTP)][rfc-6266]
-- [Test Cases for HTTP Content-Disposition header field (RFC 6266) and the Encodings defined in RFCs 2047, 2231 and 5987][tc-2231]
-
-[rfc-2616]: https://tools.ietf.org/html/rfc2616
-[rfc-5987]: https://tools.ietf.org/html/rfc5987
-[rfc-6266]: https://tools.ietf.org/html/rfc6266
-[tc-2231]: http://greenbytes.de/tech/tc2231/
-
-## License
-
-[MIT](LICENSE)
-
-[npm-image]: https://img.shields.io/npm/v/content-disposition.svg
-[npm-url]: https://npmjs.org/package/content-disposition
-[node-version-image]: https://img.shields.io/node/v/content-disposition.svg
-[node-version-url]: https://nodejs.org/en/download
-[coveralls-image]: https://img.shields.io/coveralls/jshttp/content-disposition.svg
-[coveralls-url]: https://coveralls.io/r/jshttp/content-disposition?branch=master
-[downloads-image]: https://img.shields.io/npm/dm/content-disposition.svg
-[downloads-url]: https://npmjs.org/package/content-disposition
-[github-actions-ci-image]: https://img.shields.io/github/workflow/status/jshttp/content-disposition/ci/master?label=ci
-[github-actions-ci-url]: https://github.com/jshttp/content-disposition?query=workflow%3Aci
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Callofdutymodernwarfare3errorunabletocreatesteamappidtxt.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Callofdutymodernwarfare3errorunabletocreatesteamappidtxt.md
deleted file mode 100644
index 8f7b56f4bc66cd74662437dfc3f3e5b0a1f2cdc6..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Callofdutymodernwarfare3errorunabletocreatesteamappidtxt.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
How to Fix Call of Duty Modern Warfare 3 Error Unable to Create Steam_appid.txt
-
Call of Duty Modern Warfare 3 is a popular first-person shooter game that was released in 2011. However, some players may encounter an error message when they try to launch the game. The error message says that the game cannot run because it cannot see the steam API, and that it is usually caused by not having a steam_appid.txt file, or that Steam itself is not running.
-
This error can be frustrating and prevent you from enjoying the game. Fortunately, there are some possible solutions that you can try to fix this issue. Here are some steps that you can follow:
Make sure that Steam is running and that you are logged in to your account.
-
Verify the integrity of the game files. To do this, right-click on Call of Duty Modern Warfare 3 in your Steam library, select Properties, then go to the Local Files tab and click on Verify Integrity of Game Files. This will check for any missing or corrupted files and download them if necessary.
-
Create a steam_appid.txt file and place it in the game folder (see the sketch after this list). To do this, open Notepad or any other text editor and type in 42690, which is the app ID for Call of Duty Modern Warfare 3. Save the file as steam_appid.txt and copy it to the game folder, which is usually located at C:\Program Files (x86)\Steam\steamapps\common\Call of Duty Modern Warfare 3.
-
Run the game as an administrator. To do this, right-click on the game's executable file (iw5sp.exe) or its shortcut, select Properties, then go to the Compatibility tab and check the box that says Run this program as an administrator. Click on Apply and OK, then try launching the game again.
-
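If you are comfortable with a scripting language, the steam_appid.txt step above can also be automated. The snippet below is only a minimal sketch in Python, not part of the original fix guide; the install path is an assumption and should be changed to match your own Steam library.

```python
# Minimal sketch: create steam_appid.txt inside the MW3 install folder.
# The install path is an assumption; point it at your own installation.
from pathlib import Path

game_dir = Path(r"C:\Program Files (x86)\Steam\steamapps\common\Call of Duty Modern Warfare 3")
appid_file = game_dir / "steam_appid.txt"
appid_file.write_text("42690")  # 42690 is the app ID quoted in the step above
print(f"Wrote {appid_file}")
```

Note that writing into Program Files usually requires an elevated (administrator) prompt, so run the script accordingly.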
-
If none of these steps work, you may need to contact Steam support or Activision support for further assistance. Hopefully, one of these solutions will help you fix the error and enjoy Call of Duty Modern Warfare 3 without any problems.
Call of Duty Modern Warfare 3 is a thrilling game that offers a variety of modes and features. You can play the single-player campaign, which follows the events of the previous games and takes you to different locations around the world. You can also play the multiplayer mode, which lets you compete with other players online in various modes such as Team Deathmatch, Domination, Search and Destroy, and more. You can also play the Spec Ops mode, which lets you team up with a friend or a bot and complete various missions and challenges.
-
Call of Duty Modern Warfare 3 is a game that will keep you entertained for hours with its fast-paced action and immersive graphics. However, if you encounter the error unable to create steam_appid.txt, you may not be able to enjoy the game at all. That's why it is important to follow the steps above and try to fix the issue as soon as possible. Once you do that, you can launch the game and have fun.
Call of Duty Modern Warfare 3 is one of the best-selling games of all time, and for good reason. It offers a thrilling and immersive experience that will make you feel like you are in the middle of a war. However, some players may face a frustrating error that prevents them from launching the game. The error says that the game cannot see the steam API and that it is usually caused by not having a steam_appid.txt file or by Steam not running.
-
If you are one of those players, don't worry. There are some simple solutions that you can try to fix this error and play the game without any issues. In this article, we will show you how to create a steam_appid.txt file, verify the integrity of the game files, run the game as an administrator, and contact support if none of these steps work. By following these steps, you should be able to fix the error and enjoy Call of Duty Modern Warfare 3 to the fullest.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Bullett Raja Movies 1080p Torrent.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Bullett Raja Movies 1080p Torrent.md
deleted file mode 100644
index ff5bc45e2f9b0285b69941e7262a757dd5f0efc3..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Bullett Raja Movies 1080p Torrent.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
I would like to keep it short... Remove all the songs. Of course, not the one starring Mahi Gill. The movie could have been shorter. Seriously, after the interval I had no motivation to see the other half. It was quite predictable that Raja Mishra would have his revenge in the second half and the movie would end. But I did not quit, and finally I was presented with a nice ending. A good thought has gone into the movie and it has been seasoned with quite a masala. I would say, on a Saturday when you are exhausted after working for the last 5 days, just go and watch the movie. The star cast has been chosen nicely and I totally appreciate the acting of Saif, Jimmy, Raj Babbar, Gulshan Grover....
-
Just take the movie out of this world. At the end of the movie, when Raja and Rudra are riding in a pickup, the truck fails. It suddenly turns into a rocket and they are travelling at a very high speed. The movie has the elements of good acting. Saif and Jimmy are simply fantastic. The chemistry between the two is brilliant. It's the third time I have seen Jimmy Shergil and Saif Ali Khan together. This time the movie is more commercial. Mythril has been playing his kind of character. I am quite satisfied with the movie. It gives one a feel of an average Indian crime plot in the US. As expected, nothing great has been done. I feel quite disappointed with the second half. It is a nice movie but one is not compelled to go watch the movie.
This is another disappointing movie by Nana Patekar. He is just back as Raja Mishra from the last movie Bullet Raja (2013). So what makes him come back to play another Raja Mishra? He is a good actor and can act, no doubt. But in this movie he doesn't show any empathy or sentiment for his character. At the end of the day, he is a villain. There is no suspense or anxiety during the first half of the movie. It is the second half that is interesting. Seeing Saif Ali Khan and Jimmy Shergil in action is interesting enough. Overall, this movie is a plain and simple movie. I doubt if people will go out to see this movie on its first day. Most actors who come out with their own movies tend to put their best foot forward. If they fail this time, it will be a big fail and won't do them any good. It is disappointing that Nana Patekar is back after so many years with one more disappointing movie. He has been disappointing since his mother's death. Disappointed with Bullet Raja (2013). This is not one of the better movies. He is a great actor but he failed this time. This time it is the other actor, Jimmy Shergil, who is leading the movie. He is an actor who makes a movie better than it is.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Frontschweine Vollversion Kostenlos Chip.epub.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Frontschweine Vollversion Kostenlos Chip.epub.md
deleted file mode 100644
index b90f8c88d2161e40b467298fc580482fc5a15a7b..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Frontschweine Vollversion Kostenlos Chip.epub.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
Frontschweine: A Classic of Strategy Games, Now as an E-Book
-
Frontschweine is a strategy game from 1999 that puts players in the role of anthropomorphic pigs fighting a war against other animal species. The game stands out for its black humor, its varied missions, and its inventive weapons.
Now fans of this cult game can experience the adventure of the Frontschweine in an e-book that contains every detail of the story, the characters, and the game mechanics. The e-book is available in EPUB format and can be read on any compatible device.
-
The best part is that the e-book is completely free. Fans only need to visit the Chip website and click the download link. There they will also find further information about the game and its developers.
-
Frontschweine is a must for all lovers of strategy games and offbeat humor. The e-book offers the perfect opportunity to rediscover this masterpiece or to enjoy it for the first time.
-
-
The Frontschweine e-book contains not only the complete game guide but also plenty of background information about the game's world and story. Readers learn more about the different pig factions, their motives, and their personalities. The other animal species that appear as enemies or allies of the pigs are introduced as well.
-
-
The e-book also offers an insight into the game's development, its inspirations, and its challenges. Readers can look at concept drawings, sketches, and screenshots that show how the game came to be. In addition, the e-book contains several interviews with the developers, who share their experiences and anecdotes.
-
Frontschweine is an e-book that is both entertaining and informative. It is a tribute to a game that has thrilled fans all over the world. It is also a chance to learn more about the art and craft of video game design. The e-book is a gift for all Frontschweine lovers and for everyone who wants to become one.
-
-
The Frontschweine e-book is not only intended for fans of the game but also for anyone interested in strategy games in general. The e-book explains the basics of the genre, its history, and its evolution. It also shows how Frontschweine differs from other strategy games and what makes it so unique.
-
The e-book also offers some tips and tricks for players who want to master the game. It contains a detailed description of all the weapons, vehicles, and items available in the game. There is also some advice on how to complete the various missions successfully. The e-book is a valuable resource for anyone who wants to get the most out of the game.
-
The Frontschweine e-book is a comprehensive and fascinating work that examines the game in all its aspects. It is a read that will delight every strategy game fan. It is also a way to explore and understand the world of the Frontschweine. The e-book is a must for everyone who loves this legendary game or wants to get to know it.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Yeh Jawaani Hai Deewani Full Movie Download In Hd Mp4golkes) Fixed.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Yeh Jawaani Hai Deewani Full Movie Download In Hd Mp4golkes) Fixed.md
deleted file mode 100644
index c86b8f474ed7793e1baca8913afeb4c9d300b816..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Yeh Jawaani Hai Deewani Full Movie Download In Hd Mp4golkes) Fixed.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-```html
-
How to Watch Yeh Jawaani Hai Deewani Full Movie Online in HD Quality
-
Yeh Jawaani Hai Deewani is a 2013 Bollywood romantic comedy film starring Ranbir Kapoor, Deepika Padukone, Aditya Roy Kapur and Kalki Koechlin. The film follows the lives of four friends who go on a trip to Manali and then reunite after eight years. The film was a huge box office hit and received positive reviews from critics and audiences alike.
-
HD Online Player (Yeh Jawaani Hai Deewani full movie download in hd mp4golkes)
If you want to watch Yeh Jawaani Hai Deewani full movie online in HD quality, you have several options. You can either stream it on a legal platform like Netflix, Amazon Prime Video, Hotstar or Zee5, or you can download it from a torrent site like Filmywap, Tamilrockers or Movierulz. However, we do not recommend downloading from illegal sources as it may harm your device and violate the copyright laws.
-
The best way to watch Yeh Jawaani Hai Deewani full movie online in HD quality is to use an HD online player that can play any video format without any hassle. One such player is VLC Media Player, which is free and easy to use. Here are the steps to watch Yeh Jawaani Hai Deewani full movie online in HD quality using VLC Media Player (a scripted alternative is sketched after these steps):
-
-
Download VLC Media Player from its official website and install it on your device.
-
Download Yeh Jawaani Hai Deewani full movie in HD mp4 format from a legal source or a torrent site.
-
Open VLC Media Player and click on Media > Open File.
-
Browse to the location where you saved the movie file and select it.
-
Click on Play and enjoy watching Yeh Jawaani Hai Deewani full movie online in HD quality.
-
-
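If you would rather script the playback step than click through VLC's menus, the snippet below is a minimal sketch in Python; it assumes the vlc executable is installed and reachable on your PATH, and the file name is only a placeholder for whatever file you downloaded.

```python
# Minimal sketch: hand a downloaded video file to VLC from Python.
# Assumes "vlc" is on PATH; the file name below is a placeholder.
import subprocess

movie_file = "Yeh Jawaani Hai Deewani.mp4"  # replace with the actual path of your file
subprocess.run(["vlc", movie_file], check=False)  # starts VLC; returns when the player is closed
```

This does the same thing as steps 3 to 5 above, just without opening the menus by hand.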
We hope this article helped you watch Yeh Jawaani Hai Deewani full movie online in HD quality. If you liked this article, please share it with your friends and family who love Bollywood movies. Thank you for reading!
-```
-
-```html
-
Yeh Jawaani Hai Deewani is a film that celebrates friendship, love and life. The film has many memorable scenes and dialogues that have become iconic in Bollywood. Some of the most popular scenes are the trekking scene where Bunny and Naina bond over their dreams, the wedding scene where Bunny and Naina dance to the song "Badtameez Dil", the airport scene where Bunny confesses his love to Naina and the final scene where they reunite in Udaipur.
-
The film also has a stellar soundtrack composed by Pritam and sung by various artists like Arijit Singh, Benny Dayal, Shalmali Kholgade and Mohit Chauhan. The songs are catchy, romantic and energetic and suit the mood of the film. Some of the most popular songs are "Badtameez Dil", "Balam Pichkari", "Kabira", "Ilahi" and "Subhanallah".
-
-
Yeh Jawaani Hai Deewani is a film that will make you laugh, cry and fall in love. It is a film that you can watch with your friends, family or partner and enjoy every moment of it. It is a film that will stay with you for a long time and make you nostalgic for your own adventures. It is a film that you should not miss.
-
-4d29de3e1b
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Xilisoft Photo Slideshow Maker Serial Number VERIFIED.md b/spaces/scedlatioru/img-to-music/example/Xilisoft Photo Slideshow Maker Serial Number VERIFIED.md
deleted file mode 100644
index 928b2fb60fe63eec9c6787c974c89f6b6586da99..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Xilisoft Photo Slideshow Maker Serial Number VERIFIED.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Movavi Slideshow Maker Crack is an amazing tool for creating professional slideshow videos on your computer. Its advanced features make it easy and fast to produce a ready-to-play slideshow in the form of a video, which you can then share on the web or with friends. The program lets you improve the quality of the inserted images by adjusting the lighting, colour, contrast, and other available options. You can also crop and rotate the photographs as needed so that they fit better into your slideshow. A preview section is provided, so you can keep an eye on the result at every step. Photos, videos, and audio files you load are stored in a panel that you can set to show only specific types.
Movavi Slideshow Maker Serial Key is a tool that makes creating a slideshow fast and straightforward. You get the slideshow in no time, and you can create it in standard or custom formats. You can make slideshows from extensive collections of photos, and you can crop and rotate pictures. The program also has flexible design and editing tools.
-
Movavi Slideshow Maker Serial Key places no limits on the number of photos that you can trim, cut, combine, and turn into new videos. A very simple interface lets you work quickly. A variety of video and photo editing tools are available, allowing you to build a gallery of the final results or export them to your hard drive. Movavi Slideshow Maker Serial Key can also add synchronised music and adjust the brightness, contrast, colour, and other options of your photos.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/models/__init__.py b/spaces/sczhou/CodeFormer/CodeFormer/basicsr/models/__init__.py
deleted file mode 100644
index 00bde45f003698a5b15d3517ae47b59ef1d86e0c..0000000000000000000000000000000000000000
--- a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/models/__init__.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import importlib
-from copy import deepcopy
-from os import path as osp
-
-from basicsr.utils import get_root_logger, scandir
-from basicsr.utils.registry import MODEL_REGISTRY
-
-__all__ = ['build_model']
-
-# automatically scan and import model modules for registry
-# scan all the files under the 'models' folder and collect files ending with
-# '_model.py'
-model_folder = osp.dirname(osp.abspath(__file__))
-model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]
-# import all the model modules
-_model_modules = [importlib.import_module(f'basicsr.models.{file_name}') for file_name in model_filenames]
-
-
-def build_model(opt):
- """Build model from options.
-
- Args:
- opt (dict): Configuration. It must constain:
- model_type (str): Model type.
- """
- opt = deepcopy(opt)
- model = MODEL_REGISTRY.get(opt['model_type'])(opt)
- logger = get_root_logger()
- logger.info(f'Model [{model.__class__.__name__}] is created.')
- return model
diff --git a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp b/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp
deleted file mode 100644
index 41c6df6f721bd95a525fd6a03dd9882e863de042..0000000000000000000000000000000000000000
--- a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp
+++ /dev/null
@@ -1,164 +0,0 @@
-// modify from
-// https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda.c
-
-#include
-#include
-
-#include
-#include
-
-#define WITH_CUDA // always use cuda
-#ifdef WITH_CUDA
-int deform_conv_forward_cuda(at::Tensor input, at::Tensor weight,
- at::Tensor offset, at::Tensor output,
- at::Tensor columns, at::Tensor ones, int kW,
- int kH, int dW, int dH, int padW, int padH,
- int dilationW, int dilationH, int group,
- int deformable_group, int im2col_step);
-
-int deform_conv_backward_input_cuda(at::Tensor input, at::Tensor offset,
- at::Tensor gradOutput, at::Tensor gradInput,
- at::Tensor gradOffset, at::Tensor weight,
- at::Tensor columns, int kW, int kH, int dW,
- int dH, int padW, int padH, int dilationW,
- int dilationH, int group,
- int deformable_group, int im2col_step);
-
-int deform_conv_backward_parameters_cuda(
- at::Tensor input, at::Tensor offset, at::Tensor gradOutput,
- at::Tensor gradWeight, // at::Tensor gradBias,
- at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH,
- int padW, int padH, int dilationW, int dilationH, int group,
- int deformable_group, float scale, int im2col_step);
-
-void modulated_deform_conv_cuda_forward(
- at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones,
- at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns,
- int kernel_h, int kernel_w, const int stride_h, const int stride_w,
- const int pad_h, const int pad_w, const int dilation_h,
- const int dilation_w, const int group, const int deformable_group,
- const bool with_bias);
-
-void modulated_deform_conv_cuda_backward(
- at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones,
- at::Tensor offset, at::Tensor mask, at::Tensor columns,
- at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias,
- at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output,
- int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h,
- int pad_w, int dilation_h, int dilation_w, int group, int deformable_group,
- const bool with_bias);
-#endif
-
-int deform_conv_forward(at::Tensor input, at::Tensor weight,
- at::Tensor offset, at::Tensor output,
- at::Tensor columns, at::Tensor ones, int kW,
- int kH, int dW, int dH, int padW, int padH,
- int dilationW, int dilationH, int group,
- int deformable_group, int im2col_step) {
- if (input.device().is_cuda()) {
-#ifdef WITH_CUDA
- return deform_conv_forward_cuda(input, weight, offset, output, columns,
- ones, kW, kH, dW, dH, padW, padH, dilationW, dilationH, group,
- deformable_group, im2col_step);
-#else
- AT_ERROR("deform conv is not compiled with GPU support");
-#endif
- }
- AT_ERROR("deform conv is not implemented on CPU");
-}
-
-int deform_conv_backward_input(at::Tensor input, at::Tensor offset,
- at::Tensor gradOutput, at::Tensor gradInput,
- at::Tensor gradOffset, at::Tensor weight,
- at::Tensor columns, int kW, int kH, int dW,
- int dH, int padW, int padH, int dilationW,
- int dilationH, int group,
- int deformable_group, int im2col_step) {
- if (input.device().is_cuda()) {
-#ifdef WITH_CUDA
- return deform_conv_backward_input_cuda(input, offset, gradOutput,
- gradInput, gradOffset, weight, columns, kW, kH, dW, dH, padW, padH,
- dilationW, dilationH, group, deformable_group, im2col_step);
-#else
- AT_ERROR("deform conv is not compiled with GPU support");
-#endif
- }
- AT_ERROR("deform conv is not implemented on CPU");
-}
-
-int deform_conv_backward_parameters(
- at::Tensor input, at::Tensor offset, at::Tensor gradOutput,
- at::Tensor gradWeight, // at::Tensor gradBias,
- at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH,
- int padW, int padH, int dilationW, int dilationH, int group,
- int deformable_group, float scale, int im2col_step) {
- if (input.device().is_cuda()) {
-#ifdef WITH_CUDA
- return deform_conv_backward_parameters_cuda(input, offset, gradOutput,
- gradWeight, columns, ones, kW, kH, dW, dH, padW, padH, dilationW,
- dilationH, group, deformable_group, scale, im2col_step);
-#else
- AT_ERROR("deform conv is not compiled with GPU support");
-#endif
- }
- AT_ERROR("deform conv is not implemented on CPU");
-}
-
-void modulated_deform_conv_forward(
- at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones,
- at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns,
- int kernel_h, int kernel_w, const int stride_h, const int stride_w,
- const int pad_h, const int pad_w, const int dilation_h,
- const int dilation_w, const int group, const int deformable_group,
- const bool with_bias) {
- if (input.device().is_cuda()) {
-#ifdef WITH_CUDA
- return modulated_deform_conv_cuda_forward(input, weight, bias, ones,
- offset, mask, output, columns, kernel_h, kernel_w, stride_h,
- stride_w, pad_h, pad_w, dilation_h, dilation_w, group,
- deformable_group, with_bias);
-#else
- AT_ERROR("modulated deform conv is not compiled with GPU support");
-#endif
- }
- AT_ERROR("modulated deform conv is not implemented on CPU");
-}
-
-void modulated_deform_conv_backward(
- at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones,
- at::Tensor offset, at::Tensor mask, at::Tensor columns,
- at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias,
- at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output,
- int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h,
- int pad_w, int dilation_h, int dilation_w, int group, int deformable_group,
- const bool with_bias) {
- if (input.device().is_cuda()) {
-#ifdef WITH_CUDA
- return modulated_deform_conv_cuda_backward(input, weight, bias, ones,
- offset, mask, columns, grad_input, grad_weight, grad_bias, grad_offset,
- grad_mask, grad_output, kernel_h, kernel_w, stride_h, stride_w,
- pad_h, pad_w, dilation_h, dilation_w, group, deformable_group,
- with_bias);
-#else
- AT_ERROR("modulated deform conv is not compiled with GPU support");
-#endif
- }
- AT_ERROR("modulated deform conv is not implemented on CPU");
-}
-
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("deform_conv_forward", &deform_conv_forward,
- "deform forward");
- m.def("deform_conv_backward_input", &deform_conv_backward_input,
- "deform_conv_backward_input");
- m.def("deform_conv_backward_parameters",
- &deform_conv_backward_parameters,
- "deform_conv_backward_parameters");
- m.def("modulated_deform_conv_forward",
- &modulated_deform_conv_forward,
- "modulated deform conv forward");
- m.def("modulated_deform_conv_backward",
- &modulated_deform_conv_backward,
- "modulated deform conv backward");
-}
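
The bindings above expose the deformable-convolution kernels to Python as a torch extension. Below is a minimal sketch of how such an extension is typically compiled and invoked with `torch.utils.cpp_extension.load`; the companion CUDA source file names, the empty-buffer allocation convention, and the chosen `im2col_step` are assumptions rather than values fixed by this file.

```python
# Hypothetical build-and-call sketch; source file names and shapes are
# assumptions, and a CUDA device is required (the CPU path raises AT_ERROR).
import torch
from torch.utils.cpp_extension import load

deform_conv_ext = load(
    name="deform_conv_ext",
    sources=[  # assumed companion sources providing the *_cuda implementations
        "deform_conv_ext.cpp",
        "deform_conv_cuda.cu",
        "deform_conv_cuda_kernel.cu",
    ],
    extra_cflags=["-DWITH_CUDA"],
    verbose=True,
)

N, C_in, H, W = 2, 16, 32, 32
C_out, kH, kW = 32, 3, 3
deformable_group = 1

x = torch.randn(N, C_in, H, W, device="cuda")
weight = torch.randn(C_out, C_in, kH, kW, device="cuda")
# Offsets: one (dy, dx) pair per kernel location and deformable group.
offset = torch.zeros(N, 2 * deformable_group * kH * kW, H, W, device="cuda")

# The kernels resize and fill these buffers in place.
output = x.new_empty(0)
columns = x.new_empty(0)
ones = x.new_empty(0)

deform_conv_ext.deform_conv_forward(
    x, weight, offset, output, columns, ones,
    kW, kH,            # kernel size
    1, 1,              # stride
    1, 1,              # padding
    1, 1,              # dilation
    1,                 # group
    deformable_group,
    2,                 # im2col_step (must divide the batch size)
)
print(output.shape)  # expected (2, 32, 32, 32) with stride 1 and padding 1
```

In practice these raw entry points are wrapped in a `torch.autograd.Function`, so that the backward bindings registered above are invoked automatically during backpropagation.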
diff --git a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/utils/download_util.py b/spaces/sczhou/CodeFormer/CodeFormer/basicsr/utils/download_util.py
deleted file mode 100644
index 2a267915743ee3f3232bc8fe992466b52468979a..0000000000000000000000000000000000000000
--- a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/utils/download_util.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import math
-import os
-import requests
-from torch.hub import download_url_to_file, get_dir
-from tqdm import tqdm
-from urllib.parse import urlparse
-
-from .misc import sizeof_fmt
-
-
-def download_file_from_google_drive(file_id, save_path):
- """Download files from google drive.
- Ref:
- https://stackoverflow.com/questions/25010369/wget-curl-large-file-from-google-drive # noqa E501
- Args:
- file_id (str): File id.
- save_path (str): Save path.
- """
-
- session = requests.Session()
- URL = 'https://docs.google.com/uc?export=download'
- params = {'id': file_id}
-
- response = session.get(URL, params=params, stream=True)
- token = get_confirm_token(response)
- if token:
- params['confirm'] = token
- response = session.get(URL, params=params, stream=True)
-
- # get file size
- response_file_size = session.get(URL, params=params, stream=True, headers={'Range': 'bytes=0-2'})
- print(response_file_size)
- if 'Content-Range' in response_file_size.headers:
- file_size = int(response_file_size.headers['Content-Range'].split('/')[1])
- else:
- file_size = None
-
- save_response_content(response, save_path, file_size)
-
-
-def get_confirm_token(response):
- for key, value in response.cookies.items():
- if key.startswith('download_warning'):
- return value
- return None
-
-
-def save_response_content(response, destination, file_size=None, chunk_size=32768):
- if file_size is not None:
- pbar = tqdm(total=math.ceil(file_size / chunk_size), unit='chunk')
-
- readable_file_size = sizeof_fmt(file_size)
- else:
- pbar = None
-
- with open(destination, 'wb') as f:
- downloaded_size = 0
- for chunk in response.iter_content(chunk_size):
-            downloaded_size += len(chunk)  # count bytes actually received; the last chunk may be shorter than chunk_size
- if pbar is not None:
- pbar.update(1)
- pbar.set_description(f'Download {sizeof_fmt(downloaded_size)} / {readable_file_size}')
- if chunk: # filter out keep-alive new chunks
- f.write(chunk)
- if pbar is not None:
- pbar.close()
-
-
-def load_file_from_url(url, model_dir=None, progress=True, file_name=None):
-    """Load a file from an HTTP URL, downloading the model if necessary.
- Ref:https://github.com/1adrianb/face-alignment/blob/master/face_alignment/utils.py
- Args:
- url (str): URL to be downloaded.
- model_dir (str): The path to save the downloaded model. Should be a full path. If None, use pytorch hub_dir.
- Default: None.
- progress (bool): Whether to show the download progress. Default: True.
- file_name (str): The downloaded file name. If None, use the file name in the url. Default: None.
- Returns:
- str: The path to the downloaded file.
- """
- if model_dir is None: # use the pytorch hub_dir
- hub_dir = get_dir()
- model_dir = os.path.join(hub_dir, 'checkpoints')
-
- os.makedirs(model_dir, exist_ok=True)
-
- parts = urlparse(url)
- filename = os.path.basename(parts.path)
- if file_name is not None:
- filename = file_name
- cached_file = os.path.abspath(os.path.join(model_dir, filename))
- if not os.path.exists(cached_file):
- print(f'Downloading: "{url}" to {cached_file}\n')
- download_url_to_file(url, cached_file, hash_prefix=None, progress=progress)
- return cached_file
\ No newline at end of file
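
Both helpers above follow a simple call pattern; the sketch below illustrates it with a placeholder URL and Drive file id, and with the import path assumed from this file's location in the repository.

```python
# Usage sketch for the helpers above; the URL, the Drive id, and the import
# path are placeholders/assumptions rather than values taken from this diff.
from basicsr.utils.download_util import (
    download_file_from_google_drive,
    load_file_from_url,
)

# Download a checkpoint (or reuse a cached copy) into ./weights.
ckpt_path = load_file_from_url(
    url="https://example.com/models/codeformer.pth",  # placeholder URL
    model_dir="weights",
    progress=True,
)
print(ckpt_path)  # absolute path to weights/codeformer.pth

# Fetch a file shared on Google Drive by its file id (placeholder id).
download_file_from_google_drive("PLACEHOLDER_FILE_ID", "weights/facelib.pth")
```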
diff --git a/spaces/seduerr/text_analytics/text_analytics/indices/referential_cohesion_indices.py b/spaces/seduerr/text_analytics/text_analytics/indices/referential_cohesion_indices.py
deleted file mode 100644
index 38152bd8981854ca49774e7b81481502e4472453..0000000000000000000000000000000000000000
--- a/spaces/seduerr/text_analytics/text_analytics/indices/referential_cohesion_indices.py
+++ /dev/null
@@ -1,402 +0,0 @@
-import multiprocessing
-import spacy
-import statistics
-
-from itertools import combinations
-from spacy.tokens import Span
-from text_analytics.constants import ACCEPTED_LANGUAGES
-from text_analytics.utils.statistics_results import StatisticsResults
-from text_analytics.utils.utils import is_word
-from text_analytics.utils.utils import is_content_word
-from text_analytics.utils.utils import split_text_into_paragraphs
-from typing import Callable
-from typing import List
-
-
-class ReferentialCohesionIndices:
- '''
-    This class handles all operations needed to compute the referential cohesion indices of a text according to Coh-Metrix.
- '''
- # TODO: Implement multiprocessing
- def __init__(self, nlp, language: str='en') -> None:
- '''
-        The constructor will initialize this object, which calculates the referential cohesion indices for one of the available languages.
-
- Parameters:
- nlp: The spacy model that corresponds to a language.
- language(str): The language that the texts to process will have.
-
- Returns:
- None.
- '''
- if not language in ACCEPTED_LANGUAGES:
- raise ValueError(f'Language {language} is not supported yet')
-
- self.language = language
- self._nlp = nlp
-
- def _calculate_overlap_for_adjacent_sentences(self, text: str, disable_pipeline: List, sentence_analyzer: Callable, statistic_type: str='mean', workers: int=-1) -> StatisticsResults:
- '''
- This method calculates the overlap for adjacent sentences in a text. MULTIPROCESSING STILL NOT IMPLEMENTED.
-
- Parameters:
- text(str): The text to be analyzed.
- disable_pipeline(List): The pipeline elements to be disabled.
- sentence_analyzer(Callable): The function that analyzes sentences to check cohesion.
- statistic_type(str): Whether to calculate the mean and/or the standard deviation. It accepts 'mean', 'std' or 'all'.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
- StatisticsResults: The standard deviation and mean of the overlap.
- '''
- # TODO MULTIPROCESSING. WORKERS IS JUST A PLACEHOLDER
- if len(text) == 0:
- raise ValueError('The text is empty.')
- elif statistic_type not in ['mean', 'std', 'all']:
- raise ValueError('\'statistic_type\' can only take \'mean\', \'std\' or \'all\'.')
- else:
- self._nlp.get_pipe('referential cohesion adjacent sentences analyzer').sentence_analyzer = sentence_analyzer
- doc = self._nlp(text, disable=disable_pipeline)
- stat_results = StatisticsResults() # Create empty container
-
- if len(doc._.referential_cohesion_adjacent) == 0:
- return stat_results
- else:
- if statistic_type in ['mean', 'all']:
- stat_results.mean = statistics.mean(doc._.referential_cohesion_adjacent)
-
- if statistic_type in ['std', 'all']:
- stat_results.std = statistics.pstdev(doc._.referential_cohesion_adjacent)
-
- return stat_results
-
- def _calculate_overlap_for_all_sentences(self, text: str, disable_pipeline: List, sentence_analyzer: Callable, statistic_type: str='all', workers: int=-1) -> StatisticsResults:
- '''
- This method calculates the overlap for all sentences in a text. MULTIPROCESSING STILL NOT IMPLEMENTED.
-
- Parameters:
- text(str): The text to be analyzed.
- disable_pipeline(List): The pipeline elements to be disabled.
- sentence_analyzer(Callable): The function that analyzes sentences to check cohesion.
- statistic_type(str): Whether to calculate the mean and/or the standard deviation. It accepts 'mean', 'std' or 'all'.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
- StatisticsResults: The standard deviation and mean of the overlap.
- '''
- # TODO MULTIPROCESSING. WORKERS IS JUST A PLACEHOLDER.
- if len(text) == 0:
- raise ValueError('The text is empty.')
- elif statistic_type not in ['mean', 'std', 'all']:
- raise ValueError('\'statistic_type\' can only take \'mean\', \'std\' or \'all\'.')
- else:
- self._nlp.get_pipe('referential cohesion all sentences analyzer').sentence_analyzer = sentence_analyzer
- doc = self._nlp(text, disable=disable_pipeline)
- stat_results = StatisticsResults() # Create empty container
-
- if len(doc._.referential_cohesion_all) == 0:
- return stat_results
- else:
- if statistic_type in ['mean', 'all']:
- stat_results.mean = statistics.mean(doc._.referential_cohesion_all)
-
- if statistic_type in ['std', 'all']:
- stat_results.std = statistics.pstdev(doc._.referential_cohesion_all)
-
- return stat_results
-
- def get_noun_overlap_adjacent_sentences(self, text: str, workers: int=-1) -> float:
- '''
- This method calculates the noun overlap for adjacent sentences in a text.
-
- Parameters:
- text(str): The text to be analyzed.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
- float: The mean noun overlap.
- '''
- disable_pipeline = [pipe
- for pipe in self._nlp.pipe_names
- if pipe not in ['sentencizer', 'tagger', 'referential cohesion adjacent sentences analyzer']]
- return self._calculate_overlap_for_adjacent_sentences(text=text, workers=workers, disable_pipeline=disable_pipeline, sentence_analyzer=analyze_noun_overlap, statistic_type='mean').mean
-
- def get_noun_overlap_all_sentences(self, text: str, workers: int=-1) -> float:
- '''
- This method calculates the noun overlap for all sentences in a text.
-
- Parameters:
- text(str): The text to be analyzed.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
- float: The mean noun overlap.
- '''
- disable_pipeline = [pipe
- for pipe in self._nlp.pipe_names
- if pipe not in ['sentencizer', 'tagger', 'referential cohesion all sentences analyzer']]
- return self._calculate_overlap_for_all_sentences(text=text, workers=workers, disable_pipeline=disable_pipeline, sentence_analyzer=analyze_noun_overlap, statistic_type='mean').mean
-
- def get_argument_overlap_adjacent_sentences(self, text: str, workers: int=-1) -> float:
- '''
- This method calculates the argument overlap for adjacent sentences in a text.
-
- Parameters:
- text(str): The text to be analyzed.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
- float: The mean argument overlap.
- '''
- disable_pipeline = [pipe
- for pipe in self._nlp.pipe_names
- if pipe not in ['sentencizer', 'tagger', 'referential cohesion adjacent sentences analyzer']]
- return self._calculate_overlap_for_adjacent_sentences(text=text, workers=workers, disable_pipeline=disable_pipeline, sentence_analyzer=analyze_argument_overlap, statistic_type='mean').mean
-
- def get_argument_overlap_all_sentences(self, text: str, workers: int=-1) -> float:
- '''
- This method calculates the argument overlap for all sentences in a text.
-
- Parameters:
- text(str): The text to be analyzed.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
- float: The mean argument overlap.
- '''
- disable_pipeline = [pipe
- for pipe in self._nlp.pipe_names
- if pipe not in ['sentencizer', 'tagger', 'referential cohesion all sentences analyzer']]
- return self._calculate_overlap_for_all_sentences(text=text, workers=workers, disable_pipeline=disable_pipeline, sentence_analyzer=analyze_argument_overlap, statistic_type='mean').mean
-
- def get_stem_overlap_adjacent_sentences(self, text: str, workers: int=-1) -> float:
- '''
- This method calculates the stem overlap for adjacent sentences in a text.
-
- Parameters:
- text(str): The text to be analyzed.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
- float: The mean stem overlap.
- '''
- disable_pipeline = [pipe
- for pipe in self._nlp.pipe_names
- if pipe not in ['sentencizer', 'tagger', 'referential cohesion adjacent sentences analyzer']]
- return self._calculate_overlap_for_adjacent_sentences(text=text, workers=workers, disable_pipeline=disable_pipeline, sentence_analyzer=analyze_stem_overlap, statistic_type='mean').mean
-
- def get_stem_overlap_all_sentences(self, text: str, workers: int=-1) -> float:
- '''
- This method calculates the stem overlap for all sentences in a text.
-
- Parameters:
- text(str): The text to be analyzed.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
- float: The mean stem overlap.
- '''
- disable_pipeline = [pipe
- for pipe in self._nlp.pipe_names
- if pipe not in ['sentencizer', 'tagger', 'referential cohesion all sentences analyzer']]
- return self._calculate_overlap_for_all_sentences(text=text, workers=workers, disable_pipeline=disable_pipeline, sentence_analyzer=analyze_stem_overlap, statistic_type='mean').mean
-
-    def get_content_word_overlap_adjacent_sentences(self, text: str, workers: int=-1) -> StatisticsResults:
- '''
- This method calculates the mean and standard deviation of the content word overlap for adjacent sentences in a text.
-
- Parameters:
- text(str): The text to be analyzed.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
-            StatisticsResults: The mean and standard deviation of the content word overlap.
- '''
- disable_pipeline = [pipe
- for pipe in self._nlp.pipe_names
- if pipe not in ['sentencizer', 'tagger', 'referential cohesion adjacent sentences analyzer']]
- return self._calculate_overlap_for_adjacent_sentences(text=text, workers=workers, disable_pipeline=disable_pipeline, sentence_analyzer=analyze_content_word_overlap, statistic_type='all')
-
- def get_content_word_overlap_all_sentences(self, text: str, workers: int=-1) -> StatisticsResults:
- '''
- This method calculates the mean and standard deviation of the content word overlap for all sentences in a text.
-
- Parameters:
- text(str): The text to be analyzed.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
-            StatisticsResults: The mean and standard deviation of the content word overlap.
- '''
- disable_pipeline = [pipe
- for pipe in self._nlp.pipe_names
- if pipe not in ['sentencizer', 'tagger', 'referential cohesion all sentences analyzer']]
- return self._calculate_overlap_for_all_sentences(text=text, workers=workers, disable_pipeline=disable_pipeline, sentence_analyzer=analyze_content_word_overlap, statistic_type='all')
-
- def get_anaphore_overlap_adjacent_sentences(self, text: str, workers: int=-1) -> float:
- '''
- This method calculates the mean of the anaphore overlap for adjacent sentences in a text.
-
- Parameters:
- text(str): The text to be analyzed.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
-            float: The mean of the anaphore overlap.
- '''
- disable_pipeline = [pipe
- for pipe in self._nlp.pipe_names
- if pipe not in ['sentencizer', 'tagger', 'referential cohesion adjacent sentences analyzer']]
- return self._calculate_overlap_for_adjacent_sentences(text=text, workers=workers, disable_pipeline=disable_pipeline, sentence_analyzer=analyze_anaphore_overlap, statistic_type='all').mean
-
- def get_anaphore_overlap_all_sentences(self, text: str, workers: int=-1) -> float:
- '''
- This method calculates the mean of the anaphore overlap for all sentences in a text.
-
- Parameters:
- text(str): The text to be analyzed.
- workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used.
-
- Returns:
-            float: The mean of the anaphore overlap.
- '''
- disable_pipeline = [pipe
- for pipe in self._nlp.pipe_names
- if pipe not in ['sentencizer', 'tagger', 'referential cohesion all sentences analyzer']]
- return self._calculate_overlap_for_all_sentences(text=text, workers=workers, disable_pipeline=disable_pipeline, sentence_analyzer=analyze_anaphore_overlap, statistic_type='all').mean
-
-def analyze_noun_overlap(prev_sentence: Span, cur_sentence: Span, language: str='en') -> int:
- '''
- This function analyzes whether or not there's noun overlap between two sentences for a language.
-
- Parameters:
- prev_sentence(Span): The previous sentence to analyze.
- cur_sentence(Span): The current sentence to analyze.
- language(str): The language of the sentences.
-
- Returns:
- int: 1 if there's overlap between the two sentences and 0 if no.
- '''
- # Place the tokens in a dictionary for search efficiency
- prev_sentence_noun_tokens = {token.text.lower(): None
- for token in prev_sentence
- if is_word(token) and token.pos_ == 'NOUN'}
-
- for token in cur_sentence:
- if language == 'en':
- if is_word(token) and token.pos_ == 'NOUN' and token.text.lower() in prev_sentence_noun_tokens:
- return 1 # There's cohesion
-
- return 0 # No cohesion
-
-
-def analyze_argument_overlap(prev_sentence: Span, cur_sentence: Span, language: str='en') -> int:
- '''
- This function analyzes whether or not there's argument overlap between two sentences.
-
- Parameters:
- prev_sentence(Span): The previous sentence to analyze.
- cur_sentence(Span): The current sentence to analyze.
- language(str): The language of the sentences.
-
- Returns:
- int: 1 if there's overlap between the two sentences and 0 if no.
- '''
- # Place the tokens in a dictionary for search efficiency
- prev_sentence_noun_tokens = {token.lemma_.lower(): None
- for token in prev_sentence
- if is_word(token) and token.pos_ == 'NOUN'}
-
- prev_sentence_personal_pronouns_tokens = {token.text.lower(): None
- for token in prev_sentence
- if is_word(token) and 'PronType=Prs' in token.tag_}
-
- for token in cur_sentence: # Iterate every token of the current sentence
- if language == 'en':
- if is_word(token) and token.pos_ == 'NOUN' and token.lemma_.lower() in prev_sentence_noun_tokens:
- return 1 # There's cohesion by noun lemma
-
- if is_word(token) and 'PronType=Prs' in token.tag_ and token.text.lower() in prev_sentence_personal_pronouns_tokens:
- return 1 # There's cohesion by personal pronoun
-
- return 0 # No cohesion
-
-
-def analyze_stem_overlap(prev_sentence: Span, cur_sentence: Span, language: str='en') -> int:
- '''
- This function analyzes whether or not there's stem overlap between two sentences.
-
- Parameters:
- prev_sentence(Span): The previous sentence to analyze.
- cur_sentence(Span): The current sentence to analyze.
- language(str): The language of the sentences.
-
- Returns:
- int: 1 if there's overlap between the two sentences and 0 if no.
- '''
- # Place the tokens in a dictionary for search efficiency
- prev_sentence_content_stem_tokens = {token.lemma_.lower(): None
- for token in prev_sentence
- if is_content_word(token)}
-
- for token in cur_sentence:
- if language == 'en':
- if is_word(token) and token.pos_ in ['NOUN', 'PROPN'] and token.lemma_.lower() in prev_sentence_content_stem_tokens:
- return 1 # There's cohesion
-
- return 0 # No cohesion
-
-
-def analyze_content_word_overlap(prev_sentence: Span, cur_sentence: Span, language='en') -> float:
- '''
- This function calculates the proportional content word overlap between two sentences.
-
- Parameters:
- prev_sentence(Span): The previous sentence to analyze.
- cur_sentence(Span): The current sentence to analyze.
- language(str): The language of the sentences.
-
- Returns:
- float: Proportion of tokens that overlap between the current and previous sentences
- '''
- total_tokens = len([token for token in prev_sentence if is_content_word(token)]) + len([token for token in cur_sentence if is_content_word(token)])
-
- if total_tokens == 0: # Nothing to compute
- return 0
- else:
- prev_sentence_content_words_tokens = {token.text.lower(): None
- for token in prev_sentence
- if is_content_word(token)}
- matches = 0 # Matcher counter
-
- for token in cur_sentence:
- if language == 'en':
- if is_content_word(token) and token.text.lower() in prev_sentence_content_words_tokens:
- matches += 2 # There's cohesion
-
- return matches / total_tokens
-
-
-def analyze_anaphore_overlap(prev_sentence: Span, cur_sentence: Span, language: str='en') -> int:
- '''
- This function analyzes whether or not there's anaphore overlap between two sentences.
-
- Parameters:
- prev_sentence(Span): The previous sentence to analyze.
- cur_sentence(Span): The current sentence to analyze.
- language(str): The language of the sentences.
-
- Returns:
- int: 1 if there's overlap between the two sentences and 0 if no.
- '''
- # Place the tokens in a dictionary for search efficiency
- prev_sentence_pronoun_tokens = {token.text.lower(): None
- for token in prev_sentence
- if is_word(token) and token.pos_ == 'PRON'}
-
- for token in cur_sentence:
- if language == 'en':
- if is_word(token) and token.pos_ == 'PRON' and token.text.lower() in prev_sentence_pronoun_tokens:
- return 1 # There's cohesion
-
- return 0 # No cohesion
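
A minimal sketch of driving this module, assuming the custom spaCy components it toggles ('referential cohesion adjacent sentences analyzer' and 'referential cohesion all sentences analyzer') are registered on the pipeline elsewhere in the text_analytics package; the spaCy model name and the example text are placeholders.

```python
# Sketch only: assumes the custom 'referential cohesion ... analyzer' pipes
# are already added to `nlp` by other modules of the text_analytics package.
import spacy

from text_analytics.indices.referential_cohesion_indices import (
    ReferentialCohesionIndices,
)

nlp = spacy.load("en_core_web_sm")  # model name is an assumption
rci = ReferentialCohesionIndices(nlp, language="en")

text = (
    "The committee approved the budget on Monday. "
    "The budget funds three new research programs. "
    "They are expected to start next year."
)

print(rci.get_noun_overlap_adjacent_sentences(text))      # mean of 0/1 overlaps
print(rci.get_argument_overlap_adjacent_sentences(text))  # lemma/pronoun overlap
content = rci.get_content_word_overlap_all_sentences(text)
print(content.mean, content.std)                          # StatisticsResults fields
```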
diff --git a/spaces/seecuecue/text_generator/app.py b/spaces/seecuecue/text_generator/app.py
deleted file mode 100644
index f1d4beb0a8f3cee27903f527b6bf8daa485a75a0..0000000000000000000000000000000000000000
--- a/spaces/seecuecue/text_generator/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("huggingface/gpt2").launch()
\ No newline at end of file
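
The three-line Space above relies on `gr.Interface.load`, which later Gradio releases deprecate in favour of `gr.load`; a roughly equivalent app on a current install would look like the sketch below, where the `models/gpt2` path is the form the newer loader documents.

```python
# Equivalent app on a recent Gradio version, assuming the gr.load entry point.
import gradio as gr

gr.load("models/gpt2").launch()
```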
diff --git a/spaces/sgxz/bingo/src/components/providers.tsx b/spaces/sgxz/bingo/src/components/providers.tsx
deleted file mode 100644
index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000
--- a/spaces/sgxz/bingo/src/components/providers.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { ThemeProvider as NextThemesProvider } from 'next-themes'
-import { ThemeProviderProps } from 'next-themes/dist/types'
-
-import { TooltipProvider } from '@/components/ui/tooltip'
-
-export function Providers({ children, ...props }: ThemeProviderProps) {
- return (
-    <NextThemesProvider {...props}>
-      <TooltipProvider>{children}</TooltipProvider>
-    </NextThemesProvider>
- )
-}
diff --git a/spaces/shi-labs/FcF-Inpainting/training/data/augment.py b/spaces/shi-labs/FcF-Inpainting/training/data/augment.py
deleted file mode 100644
index 22d4ff22e3c01de0b05d5fdf9503be555263b92b..0000000000000000000000000000000000000000
--- a/spaces/shi-labs/FcF-Inpainting/training/data/augment.py
+++ /dev/null
@@ -1,423 +0,0 @@
-import numpy as np
-import scipy.signal
-import torch
-from torch_utils import persistence
-from torch_utils import misc
-from torch_utils.ops import upfirdn2d
-from torch_utils.ops import grid_sample_gradfix
-from torch_utils.ops import conv2d_gradfix
-
-#----------------------------------------------------------------------------
-# Coefficients of various wavelet decomposition low-pass filters.
-
-wavelets = {
- 'haar': [0.7071067811865476, 0.7071067811865476],
- 'db1': [0.7071067811865476, 0.7071067811865476],
- 'db2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025],
- 'db3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569],
- 'db4': [-0.010597401784997278, 0.032883011666982945, 0.030841381835986965, -0.18703481171888114, -0.02798376941698385, 0.6308807679295904, 0.7148465705525415, 0.23037781330885523],
- 'db5': [0.003335725285001549, -0.012580751999015526, -0.006241490213011705, 0.07757149384006515, -0.03224486958502952, -0.24229488706619015, 0.13842814590110342, 0.7243085284385744, 0.6038292697974729, 0.160102397974125],
- 'db6': [-0.00107730108499558, 0.004777257511010651, 0.0005538422009938016, -0.031582039318031156, 0.02752286553001629, 0.09750160558707936, -0.12976686756709563, -0.22626469396516913, 0.3152503517092432, 0.7511339080215775, 0.4946238903983854, 0.11154074335008017],
- 'db7': [0.0003537138000010399, -0.0018016407039998328, 0.00042957797300470274, 0.012550998556013784, -0.01657454163101562, -0.03802993693503463, 0.0806126091510659, 0.07130921926705004, -0.22403618499416572, -0.14390600392910627, 0.4697822874053586, 0.7291320908465551, 0.39653931948230575, 0.07785205408506236],
- 'db8': [-0.00011747678400228192, 0.0006754494059985568, -0.0003917403729959771, -0.00487035299301066, 0.008746094047015655, 0.013981027917015516, -0.04408825393106472, -0.01736930100202211, 0.128747426620186, 0.00047248457399797254, -0.2840155429624281, -0.015829105256023893, 0.5853546836548691, 0.6756307362980128, 0.3128715909144659, 0.05441584224308161],
- 'sym2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025],
- 'sym3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569],
- 'sym4': [-0.07576571478927333, -0.02963552764599851, 0.49761866763201545, 0.8037387518059161, 0.29785779560527736, -0.09921954357684722, -0.012603967262037833, 0.0322231006040427],
- 'sym5': [0.027333068345077982, 0.029519490925774643, -0.039134249302383094, 0.1993975339773936, 0.7234076904024206, 0.6339789634582119, 0.01660210576452232, -0.17532808990845047, -0.021101834024758855, 0.019538882735286728],
- 'sym6': [0.015404109327027373, 0.0034907120842174702, -0.11799011114819057, -0.048311742585633, 0.4910559419267466, 0.787641141030194, 0.3379294217276218, -0.07263752278646252, -0.021060292512300564, 0.04472490177066578, 0.0017677118642428036, -0.007800708325034148],
- 'sym7': [0.002681814568257878, -0.0010473848886829163, -0.01263630340325193, 0.03051551316596357, 0.0678926935013727, -0.049552834937127255, 0.017441255086855827, 0.5361019170917628, 0.767764317003164, 0.2886296317515146, -0.14004724044296152, -0.10780823770381774, 0.004010244871533663, 0.010268176708511255],
- 'sym8': [-0.0033824159510061256, -0.0005421323317911481, 0.03169508781149298, 0.007607487324917605, -0.1432942383508097, -0.061273359067658524, 0.4813596512583722, 0.7771857517005235, 0.3644418948353314, -0.05194583810770904, -0.027219029917056003, 0.049137179673607506, 0.003808752013890615, -0.01495225833704823, -0.0003029205147213668, 0.0018899503327594609],
-}
-
-#----------------------------------------------------------------------------
-# Helpers for constructing transformation matrices.
-
-def matrix(*rows, device=None):
- assert all(len(row) == len(rows[0]) for row in rows)
- elems = [x for row in rows for x in row]
- ref = [x for x in elems if isinstance(x, torch.Tensor)]
- if len(ref) == 0:
- return misc.constant(np.asarray(rows), device=device)
- assert device is None or device == ref[0].device
- elems = [x if isinstance(x, torch.Tensor) else misc.constant(x, shape=ref[0].shape, device=ref[0].device) for x in elems]
- return torch.stack(elems, dim=-1).reshape(ref[0].shape + (len(rows), -1))
-
-def translate2d(tx, ty, **kwargs):
- return matrix(
- [1, 0, tx],
- [0, 1, ty],
- [0, 0, 1],
- **kwargs)
-
-def translate3d(tx, ty, tz, **kwargs):
- return matrix(
- [1, 0, 0, tx],
- [0, 1, 0, ty],
- [0, 0, 1, tz],
- [0, 0, 0, 1],
- **kwargs)
-
-def scale2d(sx, sy, **kwargs):
- return matrix(
- [sx, 0, 0],
- [0, sy, 0],
- [0, 0, 1],
- **kwargs)
-
-def scale3d(sx, sy, sz, **kwargs):
- return matrix(
- [sx, 0, 0, 0],
- [0, sy, 0, 0],
- [0, 0, sz, 0],
- [0, 0, 0, 1],
- **kwargs)
-
-def rotate2d(theta, **kwargs):
- return matrix(
- [torch.cos(theta), torch.sin(-theta), 0],
- [torch.sin(theta), torch.cos(theta), 0],
- [0, 0, 1],
- **kwargs)
-
-def rotate3d(v, theta, **kwargs):
- vx = v[..., 0]; vy = v[..., 1]; vz = v[..., 2]
- s = torch.sin(theta); c = torch.cos(theta); cc = 1 - c
- return matrix(
- [vx*vx*cc+c, vx*vy*cc-vz*s, vx*vz*cc+vy*s, 0],
- [vy*vx*cc+vz*s, vy*vy*cc+c, vy*vz*cc-vx*s, 0],
- [vz*vx*cc-vy*s, vz*vy*cc+vx*s, vz*vz*cc+c, 0],
- [0, 0, 0, 1],
- **kwargs)
-
-def translate2d_inv(tx, ty, **kwargs):
- return translate2d(-tx, -ty, **kwargs)
-
-def scale2d_inv(sx, sy, **kwargs):
- return scale2d(1 / sx, 1 / sy, **kwargs)
-
-def rotate2d_inv(theta, **kwargs):
- return rotate2d(-theta, **kwargs)
-
-#----------------------------------------------------------------------------
-# Versatile image augmentation pipeline from the paper
-# "Training Generative Adversarial Networks with Limited Data".
-#
-# All augmentations are disabled by default; individual augmentations can
-# be enabled by setting their probability multipliers to 1.
-
-@persistence.persistent_class
-class AugmentPipe(torch.nn.Module):
- def __init__(self,
- xflip=0, rotate90=0, xint=0, xint_max=0.125,
- scale=0, rotate=0, aniso=0, xfrac=0, scale_std=0.2, rotate_max=1, aniso_std=0.2, xfrac_std=0.125,
- brightness=0, contrast=0, lumaflip=0, hue=0, saturation=0, brightness_std=0.2, contrast_std=0.5, hue_max=1, saturation_std=1,
- imgfilter=0, imgfilter_bands=[1,1,1,1], imgfilter_std=1,
- noise=0, cutout=0, noise_std=0.1, cutout_size=0.5,
- ):
- super().__init__()
- self.register_buffer('p', torch.ones([])) # Overall multiplier for augmentation probability.
-
- # Pixel blitting.
- self.xflip = float(xflip) # Probability multiplier for x-flip.
- self.rotate90 = float(rotate90) # Probability multiplier for 90 degree rotations.
- self.xint = float(xint) # Probability multiplier for integer translation.
- self.xint_max = float(xint_max) # Range of integer translation, relative to image dimensions.
-
- # General geometric transformations.
- self.scale = float(scale) # Probability multiplier for isotropic scaling.
- self.rotate = float(rotate) # Probability multiplier for arbitrary rotation.
- self.aniso = float(aniso) # Probability multiplier for anisotropic scaling.
- self.xfrac = float(xfrac) # Probability multiplier for fractional translation.
- self.scale_std = float(scale_std) # Log2 standard deviation of isotropic scaling.
- self.rotate_max = float(rotate_max) # Range of arbitrary rotation, 1 = full circle.
- self.aniso_std = float(aniso_std) # Log2 standard deviation of anisotropic scaling.
-        self.xfrac_std        = float(xfrac_std)       # Standard deviation of fractional translation, relative to image dimensions.
-
- # Color transformations.
- self.brightness = float(brightness) # Probability multiplier for brightness.
- self.contrast = float(contrast) # Probability multiplier for contrast.
- self.lumaflip = float(lumaflip) # Probability multiplier for luma flip.
- self.hue = float(hue) # Probability multiplier for hue rotation.
- self.saturation = float(saturation) # Probability multiplier for saturation.
- self.brightness_std = float(brightness_std) # Standard deviation of brightness.
- self.contrast_std = float(contrast_std) # Log2 standard deviation of contrast.
- self.hue_max = float(hue_max) # Range of hue rotation, 1 = full circle.
- self.saturation_std = float(saturation_std) # Log2 standard deviation of saturation.
-
- # Image-space filtering.
- self.imgfilter = float(imgfilter) # Probability multiplier for image-space filtering.
- self.imgfilter_bands = list(imgfilter_bands) # Probability multipliers for individual frequency bands.
- self.imgfilter_std = float(imgfilter_std) # Log2 standard deviation of image-space filter amplification.
-
- # Image-space corruptions.
- self.noise = float(noise) # Probability multiplier for additive RGB noise.
- self.cutout = float(cutout) # Probability multiplier for cutout.
- self.noise_std = float(noise_std) # Standard deviation of additive RGB noise.
- self.cutout_size = float(cutout_size) # Size of the cutout rectangle, relative to image dimensions.
-
- # Setup orthogonal lowpass filter for geometric augmentations.
- self.register_buffer('Hz_geom', upfirdn2d.setup_filter(wavelets['sym6']))
-
- # Construct filter bank for image-space filtering.
- Hz_lo = np.asarray(wavelets['sym2']) # H(z)
- Hz_hi = Hz_lo * ((-1) ** np.arange(Hz_lo.size)) # H(-z)
- Hz_lo2 = np.convolve(Hz_lo, Hz_lo[::-1]) / 2 # H(z) * H(z^-1) / 2
- Hz_hi2 = np.convolve(Hz_hi, Hz_hi[::-1]) / 2 # H(-z) * H(-z^-1) / 2
- Hz_fbank = np.eye(4, 1) # Bandpass(H(z), b_i)
- for i in range(1, Hz_fbank.shape[0]):
- Hz_fbank = np.dstack([Hz_fbank, np.zeros_like(Hz_fbank)]).reshape(Hz_fbank.shape[0], -1)[:, :-1]
- Hz_fbank = scipy.signal.convolve(Hz_fbank, [Hz_lo2])
- Hz_fbank[i, (Hz_fbank.shape[1] - Hz_hi2.size) // 2 : (Hz_fbank.shape[1] + Hz_hi2.size) // 2] += Hz_hi2
- self.register_buffer('Hz_fbank', torch.as_tensor(Hz_fbank, dtype=torch.float32))
-
- def forward(self, images, debug_percentile=None):
- assert isinstance(images, torch.Tensor) and images.ndim == 4
- batch_size, num_channels, height, width = images.shape
- device = images.device
- if debug_percentile is not None:
- debug_percentile = torch.as_tensor(debug_percentile, dtype=torch.float32, device=device)
-
- # -------------------------------------
- # Select parameters for pixel blitting.
- # -------------------------------------
-
- # Initialize inverse homogeneous 2D transform: G_inv @ pixel_out ==> pixel_in
- I_3 = torch.eye(3, device=device)
- G_inv = I_3
-
- # Apply x-flip with probability (xflip * strength).
- if self.xflip > 0:
- i = torch.floor(torch.rand([batch_size], device=device) * 2)
- i = torch.where(torch.rand([batch_size], device=device) < self.xflip * self.p, i, torch.zeros_like(i))
- if debug_percentile is not None:
- i = torch.full_like(i, torch.floor(debug_percentile * 2))
- G_inv = G_inv @ scale2d_inv(1 - 2 * i, 1)
-
- # Apply 90 degree rotations with probability (rotate90 * strength).
- if self.rotate90 > 0:
- i = torch.floor(torch.rand([batch_size], device=device) * 4)
- i = torch.where(torch.rand([batch_size], device=device) < self.rotate90 * self.p, i, torch.zeros_like(i))
- if debug_percentile is not None:
- i = torch.full_like(i, torch.floor(debug_percentile * 4))
- G_inv = G_inv @ rotate2d_inv(-np.pi / 2 * i)
-
- # Apply integer translation with probability (xint * strength).
- if self.xint > 0:
- t = (torch.rand([batch_size, 2], device=device) * 2 - 1) * self.xint_max
- t = torch.where(torch.rand([batch_size, 1], device=device) < self.xint * self.p, t, torch.zeros_like(t))
- if debug_percentile is not None:
- t = torch.full_like(t, (debug_percentile * 2 - 1) * self.xint_max)
- G_inv = G_inv @ translate2d_inv(torch.round(t[:,0] * width), torch.round(t[:,1] * height))
-
- # --------------------------------------------------------
- # Select parameters for general geometric transformations.
- # --------------------------------------------------------
-
- # Apply isotropic scaling with probability (scale * strength).
- if self.scale > 0:
- s = torch.exp2(torch.randn([batch_size], device=device) * self.scale_std)
- s = torch.where(torch.rand([batch_size], device=device) < self.scale * self.p, s, torch.ones_like(s))
- if debug_percentile is not None:
- s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.scale_std))
- G_inv = G_inv @ scale2d_inv(s, s)
-
- # Apply pre-rotation with probability p_rot.
- p_rot = 1 - torch.sqrt((1 - self.rotate * self.p).clamp(0, 1)) # P(pre OR post) = p
- if self.rotate > 0:
- theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max
- theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta))
- if debug_percentile is not None:
- theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.rotate_max)
- G_inv = G_inv @ rotate2d_inv(-theta) # Before anisotropic scaling.
-
- # Apply anisotropic scaling with probability (aniso * strength).
- if self.aniso > 0:
- s = torch.exp2(torch.randn([batch_size], device=device) * self.aniso_std)
- s = torch.where(torch.rand([batch_size], device=device) < self.aniso * self.p, s, torch.ones_like(s))
- if debug_percentile is not None:
- s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.aniso_std))
- G_inv = G_inv @ scale2d_inv(s, 1 / s)
-
- # Apply post-rotation with probability p_rot.
- if self.rotate > 0:
- theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max
- theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta))
- if debug_percentile is not None:
- theta = torch.zeros_like(theta)
- G_inv = G_inv @ rotate2d_inv(-theta) # After anisotropic scaling.
-
- # Apply fractional translation with probability (xfrac * strength).
- if self.xfrac > 0:
- t = torch.randn([batch_size, 2], device=device) * self.xfrac_std
- t = torch.where(torch.rand([batch_size, 1], device=device) < self.xfrac * self.p, t, torch.zeros_like(t))
- if debug_percentile is not None:
- t = torch.full_like(t, torch.erfinv(debug_percentile * 2 - 1) * self.xfrac_std)
- G_inv = G_inv @ translate2d_inv(t[:,0] * width, t[:,1] * height)
-
- # ----------------------------------
- # Execute geometric transformations.
- # ----------------------------------
-
- # Execute if the transform is not identity.
- if G_inv is not I_3:
-
- # Calculate padding.
- cx = (width - 1) / 2
- cy = (height - 1) / 2
- cp = matrix([-cx, -cy, 1], [cx, -cy, 1], [cx, cy, 1], [-cx, cy, 1], device=device) # [idx, xyz]
- cp = G_inv @ cp.t() # [batch, xyz, idx]
- Hz_pad = self.Hz_geom.shape[0] // 4
- margin = cp[:, :2, :].permute(1, 0, 2).flatten(1) # [xy, batch * idx]
- margin = torch.cat([-margin, margin]).max(dim=1).values # [x0, y0, x1, y1]
- margin = margin + misc.constant([Hz_pad * 2 - cx, Hz_pad * 2 - cy] * 2, device=device)
- margin = margin.max(misc.constant([0, 0] * 2, device=device))
- margin = margin.min(misc.constant([width-1, height-1] * 2, device=device))
- mx0, my0, mx1, my1 = margin.ceil().to(torch.int32)
-
- # Pad image and adjust origin.
- images = torch.nn.functional.pad(input=images, pad=[mx0,mx1,my0,my1], mode='reflect')
- G_inv = translate2d((mx0 - mx1) / 2, (my0 - my1) / 2) @ G_inv
-
- # Upsample.
- images = upfirdn2d.upsample2d(x=images, f=self.Hz_geom, up=2)
- G_inv = scale2d(2, 2, device=device) @ G_inv @ scale2d_inv(2, 2, device=device)
- G_inv = translate2d(-0.5, -0.5, device=device) @ G_inv @ translate2d_inv(-0.5, -0.5, device=device)
-
- # Execute transformation.
- shape = [batch_size, num_channels, (height + Hz_pad * 2) * 2, (width + Hz_pad * 2) * 2]
- G_inv = scale2d(2 / images.shape[3], 2 / images.shape[2], device=device) @ G_inv @ scale2d_inv(2 / shape[3], 2 / shape[2], device=device)
- grid = torch.nn.functional.affine_grid(theta=G_inv[:,:2,:], size=shape, align_corners=False)
- images = grid_sample_gradfix.grid_sample(images, grid)
-
- # Downsample and crop.
- images = upfirdn2d.downsample2d(x=images, f=self.Hz_geom, down=2, padding=-Hz_pad*2, flip_filter=True)
-
- # --------------------------------------------
- # Select parameters for color transformations.
- # --------------------------------------------
-
- # Initialize homogeneous 3D transformation matrix: C @ color_in ==> color_out
- I_4 = torch.eye(4, device=device)
- C = I_4
-
- # Apply brightness with probability (brightness * strength).
- if self.brightness > 0:
- b = torch.randn([batch_size], device=device) * self.brightness_std
- b = torch.where(torch.rand([batch_size], device=device) < self.brightness * self.p, b, torch.zeros_like(b))
- if debug_percentile is not None:
- b = torch.full_like(b, torch.erfinv(debug_percentile * 2 - 1) * self.brightness_std)
- C = translate3d(b, b, b) @ C
-
- # Apply contrast with probability (contrast * strength).
- if self.contrast > 0:
- c = torch.exp2(torch.randn([batch_size], device=device) * self.contrast_std)
- c = torch.where(torch.rand([batch_size], device=device) < self.contrast * self.p, c, torch.ones_like(c))
- if debug_percentile is not None:
- c = torch.full_like(c, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.contrast_std))
- C = scale3d(c, c, c) @ C
-
- # Apply luma flip with probability (lumaflip * strength).
- v = misc.constant(np.asarray([1, 1, 1, 0]) / np.sqrt(3), device=device) # Luma axis.
- if self.lumaflip > 0:
- i = torch.floor(torch.rand([batch_size, 1, 1], device=device) * 2)
- i = torch.where(torch.rand([batch_size, 1, 1], device=device) < self.lumaflip * self.p, i, torch.zeros_like(i))
- if debug_percentile is not None:
- i = torch.full_like(i, torch.floor(debug_percentile * 2))
- C = (I_4 - 2 * v.ger(v) * i) @ C # Householder reflection.
-
- # Apply hue rotation with probability (hue * strength).
- if self.hue > 0 and num_channels > 1:
- theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.hue_max
- theta = torch.where(torch.rand([batch_size], device=device) < self.hue * self.p, theta, torch.zeros_like(theta))
- if debug_percentile is not None:
- theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.hue_max)
- C = rotate3d(v, theta) @ C # Rotate around v.
-
- # Apply saturation with probability (saturation * strength).
- if self.saturation > 0 and num_channels > 1:
- s = torch.exp2(torch.randn([batch_size, 1, 1], device=device) * self.saturation_std)
- s = torch.where(torch.rand([batch_size, 1, 1], device=device) < self.saturation * self.p, s, torch.ones_like(s))
- if debug_percentile is not None:
- s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.saturation_std))
- C = (v.ger(v) + (I_4 - v.ger(v)) * s) @ C
-
- # ------------------------------
- # Execute color transformations.
- # ------------------------------
-
- # Execute if the transform is not identity.
- if C is not I_4:
- images = images.reshape([batch_size, num_channels, height * width])
- if num_channels == 3:
- images = C[:, :3, :3] @ images + C[:, :3, 3:]
- elif num_channels == 1:
- C = C[:, :3, :].mean(dim=1, keepdims=True)
- images = images * C[:, :, :3].sum(dim=2, keepdims=True) + C[:, :, 3:]
- else:
- raise ValueError('Image must be RGB (3 channels) or L (1 channel)')
- images = images.reshape([batch_size, num_channels, height, width])
-
- # ----------------------
- # Image-space filtering.
- # ----------------------
-
- if self.imgfilter > 0:
- num_bands = self.Hz_fbank.shape[0]
- assert len(self.imgfilter_bands) == num_bands
- expected_power = misc.constant(np.array([10, 1, 1, 1]) / 13, device=device) # Expected power spectrum (1/f).
-
- # Apply amplification for each band with probability (imgfilter * strength * band_strength).
- g = torch.ones([batch_size, num_bands], device=device) # Global gain vector (identity).
- for i, band_strength in enumerate(self.imgfilter_bands):
- t_i = torch.exp2(torch.randn([batch_size], device=device) * self.imgfilter_std)
- t_i = torch.where(torch.rand([batch_size], device=device) < self.imgfilter * self.p * band_strength, t_i, torch.ones_like(t_i))
- if debug_percentile is not None:
- t_i = torch.full_like(t_i, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.imgfilter_std)) if band_strength > 0 else torch.ones_like(t_i)
- t = torch.ones([batch_size, num_bands], device=device) # Temporary gain vector.
- t[:, i] = t_i # Replace i'th element.
- t = t / (expected_power * t.square()).sum(dim=-1, keepdims=True).sqrt() # Normalize power.
- g = g * t # Accumulate into global gain.
-
- # Construct combined amplification filter.
- Hz_prime = g @ self.Hz_fbank # [batch, tap]
- Hz_prime = Hz_prime.unsqueeze(1).repeat([1, num_channels, 1]) # [batch, channels, tap]
- Hz_prime = Hz_prime.reshape([batch_size * num_channels, 1, -1]) # [batch * channels, 1, tap]
-
- # Apply filter.
- p = self.Hz_fbank.shape[1] // 2
- images = images.reshape([1, batch_size * num_channels, height, width])
- images = torch.nn.functional.pad(input=images, pad=[p,p,p,p], mode='reflect')
- images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(2), groups=batch_size*num_channels)
- images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(3), groups=batch_size*num_channels)
- images = images.reshape([batch_size, num_channels, height, width])
-
- # ------------------------
- # Image-space corruptions.
- # ------------------------
-
- # Apply additive RGB noise with probability (noise * strength).
- if self.noise > 0:
- sigma = torch.randn([batch_size, 1, 1, 1], device=device).abs() * self.noise_std
- sigma = torch.where(torch.rand([batch_size, 1, 1, 1], device=device) < self.noise * self.p, sigma, torch.zeros_like(sigma))
- if debug_percentile is not None:
- sigma = torch.full_like(sigma, torch.erfinv(debug_percentile) * self.noise_std)
- images = images + torch.randn([batch_size, num_channels, height, width], device=device) * sigma
-
- # Apply cutout with probability (cutout * strength).
- if self.cutout > 0:
- size = torch.full([batch_size, 2, 1, 1, 1], self.cutout_size, device=device)
- size = torch.where(torch.rand([batch_size, 1, 1, 1, 1], device=device) < self.cutout * self.p, size, torch.zeros_like(size))
- center = torch.rand([batch_size, 2, 1, 1, 1], device=device)
- if debug_percentile is not None:
- size = torch.full_like(size, self.cutout_size)
- center = torch.full_like(center, debug_percentile)
- coord_x = torch.arange(width, device=device).reshape([1, 1, 1, -1])
- coord_y = torch.arange(height, device=device).reshape([1, 1, -1, 1])
- mask_x = (((coord_x + 0.5) / width - center[:, 0]).abs() >= size[:, 0] / 2)
- mask_y = (((coord_y + 0.5) / height - center[:, 1]).abs() >= size[:, 1] / 2)
- mask = torch.logical_or(mask_x, mask_y).to(torch.float32)
- images = images * mask
-
- return images
-
-#----------------------------------------------------------------------------
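
The pipeline above is driven entirely through the constructor's probability multipliers and the registered `p` buffer. The sketch below shows a plausible way to instantiate it and push a batch through; the chosen multipliers, the overall strength, and the [-1, 1] NCHW image convention are assumptions rather than values fixed by this file.

```python
# Minimal driving sketch for AugmentPipe; the multipliers, the strength p,
# and the [-1, 1] image range are assumptions (a CUDA device is assumed, and
# the custom torch_utils ops are compiled on first use).
import torch

from training.data.augment import AugmentPipe  # import path assumed from repo layout

augment_pipe = AugmentPipe(
    xflip=1, rotate90=1, xint=1,            # pixel blitting
    scale=1, rotate=1, aniso=1, xfrac=1,    # geometric transforms
    brightness=1, contrast=1, lumaflip=1,   # color transforms
    hue=1, saturation=1,
).to("cuda")
augment_pipe.p.copy_(torch.as_tensor(0.6))  # overall augmentation strength

images = torch.rand(8, 3, 256, 256, device="cuda") * 2 - 1  # NCHW in [-1, 1]
augmented = augment_pipe(images)
print(augmented.shape)  # same shape as the input batch
```

In ADA-style training the `p` buffer is not fixed but adjusted on the fly from an overfitting heuristic, which is why it is stored as a buffer rather than passed as a constructor argument.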
diff --git a/spaces/shikunl/prismer/prismer/dataset/caption_dataset.py b/spaces/shikunl/prismer/prismer/dataset/caption_dataset.py
deleted file mode 100644
index 4363cf6f241d2727ac13a8f346322d66d9a3900a..0000000000000000000000000000000000000000
--- a/spaces/shikunl/prismer/prismer/dataset/caption_dataset.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# Copyright (c) 2023, NVIDIA Corporation & Affiliates. All rights reserved.
-#
-# This work is made available under the Nvidia Source Code License-NC.
-# To view a copy of this license, visit
-# https://github.com/NVlabs/prismer/blob/main/LICENSE
-
-import glob
-
-from torch.utils.data import Dataset
-from dataset.utils import *
-from PIL import ImageFile
-ImageFile.LOAD_TRUNCATED_IMAGES = True
-
-
-class Caption(Dataset):
- def __init__(self, config, train=True):
- self.data_path = config['data_path']
- self.label_path = config['label_path']
- self.experts = config['experts']
- self.prefix = config['prefix']
- self.dataset = config['dataset']
- self.transform = Transform(resize_resolution=config['image_resolution'], scale_size=[0.5, 1.0], train=train)
- self.train = train
-
- if train:
- self.data_list = []
- if self.dataset in ['coco', 'nocaps']:
- self.data_list += json.load(open(os.path.join(self.data_path, 'coco_karpathy_train.json'), 'r'))
- else:
- if self.dataset == 'coco':
- self.data_list = json.load(open(os.path.join(self.data_path, 'coco_karpathy_test.json'), 'r'))
- elif self.dataset == 'nocaps':
- self.data_list = json.load(open(os.path.join(self.data_path, 'nocaps_val.json'), 'r'))
- elif self.dataset == 'demo':
- self.data_list = [{'image': f'helpers/images/{config["im_name"]}.jpg'}]
-
- def __len__(self):
- return len(self.data_list)
-
- def __getitem__(self, index):
- data = self.data_list[index]
-
- if self.dataset == 'coco':
- image, labels, labels_info = get_expert_labels(self.data_path, self.label_path, data['image'], 'vqav2', self.experts)
- elif self.dataset == 'nocaps':
- image, labels, labels_info = get_expert_labels(self.data_path, self.label_path, data['image'], 'nocaps', self.experts)
- elif self.dataset == 'demo':
- img_path_split = self.data_list[index]['image'].split('/')
- img_name = img_path_split[-2] + '/' + img_path_split[-1]
- image, labels, labels_info = get_expert_labels('prismer', self.label_path, img_name, 'helpers', self.experts)
-
- experts = self.transform(image, labels)
- experts = post_label_process(experts, labels_info)
- experts['rgb'] = experts['rgb'].half()
-
- if self.train:
- caption = pre_caption(self.prefix + ' ' + self.data_list[index]['caption'], max_words=30)
- return experts, caption
- else:
- return experts, index
-
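
The dataset reads everything it needs from a plain config dict; the sketch below lists the keys `__init__` consumes and wraps the result in a `DataLoader`. All paths, the expert list, and the caption prefix are placeholders, not values taken from this diff.

```python
# Sketch of instantiating the Caption dataset above; every path, the expert
# list, and the caption prefix are placeholders.
from torch.utils.data import DataLoader

from dataset.caption_dataset import Caption  # import path assumed from repo layout

config = {
    'data_path': 'dataset/coco',        # placeholder
    'label_path': 'dataset/labels',     # placeholder
    'experts': ['depth', 'seg_coco'],   # placeholder expert names
    'prefix': 'A picture of',           # placeholder caption prefix
    'dataset': 'coco',
    'image_resolution': 224,
}

train_set = Caption(config, train=True)
train_loader = DataLoader(train_set, batch_size=8, shuffle=True, num_workers=4)

experts, caption = train_set[0]  # training mode returns (experts dict, caption str)
print(caption)
```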
diff --git a/spaces/silencewing/server/youyou/demo/demo/math.html b/spaces/silencewing/server/youyou/demo/demo/math.html
deleted file mode 100644
index 8689433893a8cd567aee4a575ecf3952cfa62f50..0000000000000000000000000000000000000000
--- a/spaces/silencewing/server/youyou/demo/demo/math.html
+++ /dev/null
@@ -1,235 +0,0 @@
-    <title>Document</title>
-    <!-- Math quiz page: a score table with fields 题目 (Question), 答案 (Answer), 正误 (Correct / Incorrect), and 得分 (Score); the remaining markup is not recoverable from this dump. -->
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/Christina Aguilera - 2006 - Back To Basics [FLAC].md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/Christina Aguilera - 2006 - Back To Basics [FLAC].md
deleted file mode 100644
index 7b73f3abb5ecbc2a335ac00fe109e05252fafbd3..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/Christina Aguilera - 2006 - Back To Basics [FLAC].md
+++ /dev/null
@@ -1,74 +0,0 @@
-## Christina Aguilera - 2006 - Back To Basics [FLAC]
-
-
-
-
-
- ![Christina Aguilera - 2006 - Back To Basics \[FLAC\]](https://2baksa.ws/download/images/~off/images2019/753f149949875aea139155e9651a35ce89a3758c.jpg)
-
-
-
-
-
-**DOWNLOAD - [https://kneedacexbrew.blogspot.com/?d=2txiFV](https://kneedacexbrew.blogspot.com/?d=2txiFV)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# Christina Aguilera Goes Back To Basics With Her Fifth Studio Album
-
-
-
-Christina Aguilera is one of the most versatile and influential pop singers of her generation. She has sold over 75 million records worldwide and won five Grammy Awards. Her fifth studio album, *Back To Basics*, released in 2006, showcases her vocal prowess and artistic vision in a double-disc set that pays homage to the music of the 1920s, 1930s, and 1940s.
-
-
-
-*Back To Basics* features 22 tracks that blend elements of jazz, blues, soul, gospel, and rock. The first disc, titled *Back To Basics: Modern*, contains contemporary pop songs with retro influences, such as the lead single "Ain't No Other Man", which samples a horn riff from a 1968 song by The Soul Survivors. The second disc, titled *Back To Basics: Old School*, contains more traditional songs with orchestral arrangements and live instruments, such as the ballad "Hurt", which was co-written by Linda Perry and inspired by Aguilera's late grandfather.
-
-
-
-Aguilera collaborated with various producers and songwriters for *Back To Basics*, including DJ Premier, Mark Ronson, Kara DioGuardi, and Steve Winwood, who plays the organ on "Makes Me Wanna Pray". She also co-wrote and co-produced most of the songs on the album, expressing her personal and artistic growth. She said in an interview: "I wanted to make a record that was about me going back to my roots and my love for soul music."
-
-
-
-*Back To Basics* received critical acclaim from music critics, who praised Aguilera's vocals, songwriting, and stylistic diversity. The album debuted at number one on the Billboard 200 chart in the United States, selling over 346,000 copies in its first week. It also topped the charts in several other countries, including Canada, Australia, Germany, and the United Kingdom. It was certified platinum by the Recording Industry Association of America (RIAA) for selling over one million copies in the US.
-
-
-
-The album was also nominated for two Grammy Awards in 2007: Best Pop Vocal Album and Best Female Pop Vocal Performance for "Ain't No Other Man". Aguilera performed several songs from *Back To Basics* at the ceremony, including a tribute to James Brown with a rendition of "It's a Man's Man's Man's World". She also embarked on a worldwide tour to promote the album, which grossed over $90 million and was one of the highest-grossing tours of 2007.
-
-
-
-If you are a fan of Christina Aguilera or classic music genres, you should definitely check out *Back To Basics*. You can stream or download the album in high-quality FLAC format from various online platforms, such as Qobuz[^1^], Spotify[^2^], or Apple Music[^3^]. You will not regret listening to this masterpiece by one of the greatest voices of our time.
-
-
-
-*Back To Basics* is not only a musical tribute to the past, but also a personal reflection of Aguilera's life and experiences. She explores various themes and emotions on the album, such as empowerment, love, loss, sexuality, and spirituality. She also addresses some of the challenges and controversies she faced in her career, such as the media scrutiny, the comparisons with other pop stars, and the criticism of her image and style.
-
-
-
-One of the most notable songs on the album is "Still Dirrty", which is a sequel to her 2002 hit "Dirrty". In this song, Aguilera reclaims her sexuality and asserts her independence and confidence. She sings: "I still got the nasty in me / Still got that dirrty degree / And if you want some more sexy / Still got that freak in me". She also defends her artistic choices and challenges the double standards in the music industry. She sings: "The girl I used to be has a voice / Still got to make a choice / You can take it or leave it / But they're still gonna talk".
-
-
-
-Another highlight of the album is "Oh Mother", which is a heartfelt tribute to Aguilera's mother, who suffered domestic abuse from Aguilera's father when she was a child. In this song, Aguilera expresses her gratitude and admiration for her mother's strength and courage. She sings: "Oh mother, we're stronger / From all of the tears you have shed / Oh mother, don't look back / 'Cause he'll never hurt us again". She also reveals how her mother's love helped her heal from the trauma and become a better person. She sings: "You showed me how to love myself / Through all the good and bad / You're always there for me / Your love's unconditional".
-
-
-
-*Back To Basics* is a testament to Aguilera's talent and versatility as an artist. She proves that she can sing any genre and style with ease and grace. She also shows that she can write meaningful and honest lyrics that resonate with her fans and listeners. *Back To Basics* is a timeless and classic album that deserves to be celebrated and appreciated by music lovers of all ages.
-
-
-
-
-
-
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Counter Strike 1.6 The Ultimate Online Action Game.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Counter Strike 1.6 The Ultimate Online Action Game.md
deleted file mode 100644
index 0975b8ad0c41fc76b168bd8bbdb48b2f0b22b9d9..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Counter Strike 1.6 The Ultimate Online Action Game.md
+++ /dev/null
@@ -1,164 +0,0 @@
-
-
How to Download Counter Strike 1.6 and Play Online
-
If you are a fan of first-person shooter games, you have probably heard of Counter Strike 1.6, one of the most popular and influential online games of all time. But how can you download it and play it online in 2023? In this article, we will show you how to do that in a few simple steps.
Counter Strike 1.6 is a team-based game that pits two teams against each other: terrorists and counter-terrorists. The game was originally a mod for Half-Life, a game developed by Valve in 1998. The mod was created by two fans, Minh Le and Jess Cliffe, who released the first version in 1999. The mod became so popular that Valve hired them and acquired the rights to the game. Valve then released several updates and versions of the game, including Counter Strike 1.6 in 2003, which is considered the final and most stable version of the original game.
-
The main features of the game
-
Counter Strike 1.6 is a game that focuses on realism, teamwork, and strategy. The game has several modes, such as bomb defusal, hostage rescue, assassination, and deathmatch. The game also has dozens of maps, each with its own layout, objectives, and tactics. The game also has a variety of weapons, ranging from pistols, rifles, shotguns, submachine guns, sniper rifles, grenades, and knives. Each weapon has its own characteristics, such as accuracy, recoil, damage, and price. The game also has a money system, where players earn money by killing enemies, completing objectives, or winning rounds. The money can be used to buy weapons, armor, or equipment at the beginning of each round.
-
How to download Counter Strike 1.6 for free?
-
The official way: Steam
-
The easiest and safest way to download Counter Strike 1.6 is through Steam, the digital distribution platform created by Valve. Steam allows you to buy, download, install, update, and play games on your computer. To download Counter Strike 1.6 through Steam, you need to do the following:
-
-
Create a Steam account if you don't have one already.
-
Download and install the Steam client on your computer.
-
Open the Steam client and log in with your account.
-
Go to the Store tab and search for Counter Strike.
-
Select Counter Strike from the list of results and click on Add to Cart.
-
Proceed to checkout and pay $9.99 for the game.
-
Go to your Library tab and find Counter Strike in your list of games.
-
Click on Install and wait for the game to download and install on your computer.
-
Click on Play and enjoy the game.
-
-
The alternative way: CS 1.6 website
-
If you don't want to pay for the game or use Steam, you can also download Counter Strike 1.6 from other websites that offer free downloads of the game. However, you need to be careful when doing this, as some websites may contain viruses or malware that can harm your computer or steal your personal information. To download Counter Strike 1.6 from a website safely, you need to choose a trustworthy source and take a few precautions.
One of the websites that offers free downloads of Counter Strike 1.6 is the CS 1.6 website. This website claims to provide a clean and safe version of the game, with no viruses, malware, or ads. To download Counter Strike 1.6 from this website, you need to do the following:
-
-
-
Go to the CS 1.6 website and click on the Download button.
-
Choose a mirror from the list of available servers and click on it.
-
Wait for the download to start and save the file on your computer.
-
Open the file and follow the instructions to install the game on your computer.
-
Launch the game and enjoy it.
-
-
However, you should be aware that downloading Counter Strike 1.6 from unofficial websites may have some risks, such as compatibility issues, outdated versions, or unwanted modifications. Therefore, we recommend that you always scan the file with an antivirus program before opening it and check the reviews and ratings of the website before downloading it. Alternatively, you can also try playing Counter Strike 1.6 on your browser, without downloading anything, by visiting CS-ONLINE.CLUB. This website allows you to play Counter Strike 1.6 online with other players, using only your browser and an internet connection. You don't need to register or pay anything to play on this website, but you may need to install some plugins or extensions to make it work properly.
-
How to play Counter Strike 1.6 online?
-
Joining a server from the game menu
-
Once you have downloaded and installed Counter Strike 1.6 on your computer, you can start playing it online by joining a server from the game menu. To do this, you need to do the following:
-
-
Launch the game and click on Find Servers.
-
Select the Internet tab and wait for the list of servers to load.
-
Browse through the list of servers and find one that suits your preferences, such as game mode, map, ping, players, etc.
-
Double-click on the server name or click on Connect to join it.
-
If prompted, enter the password of the server or agree to the terms and conditions.
-
Wait for the game to load and choose your team and your weapon.
-
Start playing and have fun.
-
Finding a server from a website
-
Another way to play Counter Strike 1.6 online is to find a server from a website that lists them. There are many websites that offer this service, such as Gametracker, Game-monitor, or CS-Servers. To find a server from a website, you need to do the following:
-
-
Go to the website of your choice and search for Counter Strike 1.6 servers.
-
Filter the results by country, game mode, map, players, etc. to find the server that matches your preferences.
-
Copy the IP address and port number of the server you want to join.
-
Launch the game and open the console by pressing the ~ key on your keyboard.
-
Type connect followed by the IP address and port number of the server and press Enter. For example: connect 203.0.113.5:27015 (a placeholder address; use the one you copied).
-
Wait for the game to load and choose your team and your weapon.
-
Start playing and have fun.
-
-
Creating your own server
-
If you want to play Counter Strike 1.6 online with your friends or customize your own game settings, you can also create your own server. To do this, you need to do the following:
-
-
Launch the game and click on New Game.
-
Select the map, game mode, and other options you want for your server.
-
Click on Start to create your server.
-
If you want other players to join your server, you need to share your IP address with them. You can find your IP address by going to Google and typing "what is my IP".
-
Your friends can join your server by following the same steps as finding a server from a website, but using your IP address instead.
-
Enjoy playing with your friends on your own server.
-
-
Tips and tricks for playing Counter Strike 1.6 online
-
Learn the maps and the weapons
-
One of the most important skills for playing Counter Strike 1.6 online is to know the maps and the weapons well. Each map has its own layout, objectives, and strategies, so you need to learn them by playing them often or watching other players play them. You also need to know which weapons are best suited for each map, situation, and play style. For example, some weapons are more effective at long range, while others are better at close quarters. Some weapons are more accurate, while others have more recoil. Some weapons are more expensive, while others are more affordable. You need to experiment with different weapons and find the ones that work best for you.
-
Communicate with your teammates
-
Another essential skill for playing Counter Strike 1.6 online is to communicate with your teammates. Communication is key for teamwork, coordination, and strategy. You can communicate with your teammates by using voice chat or text chat in the game. You can also use commands or gestures to convey information or instructions. For example, you can use commands like "follow me", "cover me", "enemy spotted", or "need backup" to communicate with your teammates quickly and easily. You can also use gestures like pointing, nodding, or waving to communicate with your teammates without words. Communication can help you win more rounds and have more fun playing with your teammates.
Practice your aim and reflexes
-
The final skill for playing Counter Strike 1.6 online is to practice your aim and reflexes. Aim and reflexes are crucial for winning gunfights, eliminating enemies, and surviving. You can practice your aim and reflexes by playing the game regularly, training on aim maps, or using aim trainers. You can also improve your aim and reflexes by adjusting your mouse sensitivity, crosshair settings, and resolution to suit your preferences. You can also watch professional players or streamers play the game and learn from their techniques and tips. Practicing your aim and reflexes can help you become a better player and enjoy the game more.
-
Conclusion
-
Counter Strike 1.6 is a classic game that has been loved by millions of players for over two decades. It is a game that offers realism, teamwork, and strategy in a fast-paced and exciting way. If you want to download Counter Strike 1.6 and play it online in 2023, you can do so by following the steps we have outlined in this article. You can also improve your skills and have more fun by learning the maps and the weapons, communicating with your teammates, and practicing your aim and reflexes. We hope you found this article helpful and informative. Now go ahead and download Counter Strike 1.6 and play it online with your friends or other players around the world.
-
FAQs
-
Is Counter Strike 1.6 still popular in 2023?
-
Counter Strike 1.6 is still popular in 2023, despite being an old game. According to Steam Charts, Counter Strike 1.6 had an average of 16,000 players online in June 2023, making it one of the top 100 most played games on Steam. The game also has a loyal fan base and a vibrant community that creates and shares new content, such as maps, mods, skins, or servers.
-
What are the system requirements for Counter Strike 1.6?
-
The system requirements for Counter Strike 1.6 are very low compared to modern games. According to Steam, the minimum system requirements are:
-
-
OS: Windows XP/Vista/7/8/10
-
Processor: 500 MHz
-
Memory: 96 MB RAM
-
Graphics: 16 MB video card
-
DirectX: Version 7.0
-
Storage: 500 MB available space
-
-
The recommended system requirements are:
-
-
OS: Windows XP/Vista/7/8/10
-
Processor: 800 MHz
-
Memory: 128 MB RAM
-
Graphics: 32 MB video card
-
DirectX: Version 9.0
-
Storage: 500 MB available space
-
-
What are the differences between Counter Strike 1.6 and Counter Strike: Global Offensive?
-
Counter Strike: Global Offensive (CS:GO) is the latest version of Counter Strike, released by Valve in 2012. It is a modernized and improved version of the game, with new graphics, gameplay, features, modes, maps, weapons, skins, and more. Some of the main differences between Counter Strike 1.6 and CS:GO are:
-
Graphics: CS:GO has much better graphics than CS 1.6, with more realistic textures, lighting, shadows, and effects. CS 1.6 has more pixelated and outdated graphics, but some players prefer its simplicity and nostalgia.
-
Gameplay: CS:GO has more refined and balanced gameplay than CS 1.6, with smoother movement, recoil, and hitboxes. CS 1.6 has more raw and unpredictable gameplay, but some players enjoy its challenge and skill.
-
Features: CS:GO has more features than CS 1.6, such as matchmaking, ranking, skins, cases, stickers, achievements, stats, and more. CS 1.6 has fewer features, but some players appreciate its simplicity and focus on the core game.
-
Modes: CS:GO has more modes than CS 1.6, such as competitive, casual, deathmatch, arms race, demolition, wingman, danger zone, and more. CS 1.6 has fewer modes, but some players like its classic and original modes.
-
-Maps: CS:GO has more maps than CS 1.6, including new ones and remakes of old ones. Some maps, such as Overpass and Cache, first appeared in later entries of the series, while classics such as Dust II, Dust, Mirage, Inferno, Train, Nuke, Cobblestone, Vertigo, Aztec, Italy, Office, Assault, and Militia have been remade with updated visuals. CS 1.6 has fewer maps, but some players love its iconic and legendary maps.
-
-Weapons: CS:GO has more weapons than CS 1.6, including new ones and updated ones. Some of the new weapons are M4A1-S, USP-S, CZ75-Auto, R8 Revolver, MP7, MP9, PP-Bizon, UMP-45, Negev, Sawed-Off Shotgun, Nova Shotgun, and more. Some of the updated weapons are AK-47, M4A4, Glock-18, P250, FAMAS, Galil AR, AUG, SIG SG 553, AWP, and more. CS 1.6 has fewer weapons, but some players prefer its classic and familiar weapons.
-
-
How can I improve my performance and FPS in Counter Strike 1.6?
-
If you want to improve your performance and FPS (frames per second) in Counter Strike 1.6, you can try the following tips:
-
-
Update your drivers and software to the latest versions.
-
Close any unnecessary programs or processes running in the background.
-
Lower your graphics settings in the game options or use a custom config file.
-
Use a lower resolution or windowed mode for the game.
-
Disable vertical sync (VSync) and anti-aliasing in the game options or your graphics card settings.
-
Clean your computer from dust and dirt and make sure it is well ventilated.
-
-
Where can I find more information and resources about Counter Strike 1.6?
-
If you want to find more information and resources about Counter Strike 1.6, you can visit the following websites:
-
-
CS-ONLINE.CLUB: This website allows you to play Counter Strike 1.6 on your browser, without downloading anything. You can also rent a private server, chat with other players, and check your stats and rankings.
-
Hypebeast: This website provides a brief overview of how to play Counter Strike 1.6 on your browser, as well as some screenshots and a video of the game.
-
PC Gamer: This website gives a detailed review of how to play Counter Strike 1.6 on your browser, as well as some tips and tricks for improving your performance and FPS.
-
-
These websites can help you learn more about Counter Strike 1.6, its history, its features, its community, and its legacy. You can also find other websites that offer news, guides, forums, videos, and more about the game.
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Score Hero APK Hack and Become a Football Legend.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Score Hero APK Hack and Become a Football Legend.md
deleted file mode 100644
index 5fb0f389000aaa6fe51aa693f54ba363ccedc241..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Score Hero APK Hack and Become a Football Legend.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Score Hero APK Hack Download: How to Get Unlimited Money and Energy in Score Hero 2
-
Introduction
-
If you are a fan of soccer games, you might have heard of Score Hero 2, a unique and immersive game that lets you control the action and become a soccer superstar. Unlike other soccer games, Score Hero 2 does not require you to play an entire match, but instead puts you in various situations where you need to score a goal or assist a teammate. You can also customize your hero, choose from over 90 real clubs, and take part in regular events for medals and glory.
-
However, as fun as Score Hero 2 is, it also has some limitations that might frustrate you. For example, you need to spend money to buy items, upgrade your skills, or change your appearance. You also need energy to play each level, which can run out quickly if you fail or retry too often. And if you want to unlock all the levels and stories, you need to complete a lot of challenges and earn stars.
Fortunately, there is a way to overcome these limitations and enjoy Score Hero 2 without any restrictions. And that is by using the Score Hero APK hack, a modified version of the game that gives you unlimited money and energy, as well as unlocks all the levels and stories. With this hack, you can buy anything you want, play as long as you want, and explore the game at your own pace.
-
In this article, we will show you how to download and install the Score Hero APK hack, how to use its features, and some tips and tricks for playing Score Hero 2. So read on and get ready to score some amazing goals!
-
How to Download and Install the Score Hero APK Hack
-
The first step to use the Score Hero APK hack is to download it from a reliable source. There are many websites that claim to offer the hack, but not all of them are safe or trustworthy. Some of them might contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you need to be careful and do some research before downloading anything.
-
One of the websites that we recommend is AN1.com, which offers a variety of hacked games for Android devices. You can find the Score Hero APK hack by searching for "score hero mod" on their website. The latest version of the hack is 2.75, which was updated on February 15, 2023. The file size is 95.6 MB, which is not too large for most devices.
-
Before downloading the file, make sure that you have enough storage space on your device. You can also check the comments section of the website to see what other users have said about the hack. If there are any complaints or issues reported, you might want to look for another source.
-
Once you have downloaded the file, you need to install it on your device. But before that, you need to enable unknown sources on your device settings. This is because the Score Hero APK hack is not from the official Google Play Store, and your device might block it by default. To enable unknown sources, follow these steps:
-
- - Go to your device settings and look for security or privacy options.
- - Find the option that says unknown sources or allow installation of apps from unknown sources and turn it on.
- - You might see a warning message that says installing apps from unknown sources can harm your device. Ignore it and tap OK.
Now you are ready to install the Score Hero APK hack. To do that, follow these steps:
- - Locate the downloaded file on your device storage. You can use a file manager app to help you find it.
- - Tap on the file and you will see a pop-up window that asks you to install the app.
- - Tap on install and wait for the installation process to finish.
- - You might see another pop-up window that asks you to allow the app to access your device data. Tap on allow and proceed.
Congratulations, you have successfully installed the Score Hero APK hack on your device. Now you can launch the game and enjoy the hack features.
-
How to Use the Score Hero APK Hack Features
-
The Score Hero APK hack has three main features that will make your gaming experience more fun and easy. These are unlimited money, unlimited energy, and unlocked levels and stories. Let's see how to use each of them.
-
Unlimited Money
-
Money is the currency of Score Hero 2, which you can use to buy items, upgrade your skills, or change your appearance. You can earn money by playing levels, completing challenges, or watching ads. However, these methods are slow and tedious, and you might not have enough money to buy what you want.
-
With the Score Hero APK hack, you don't have to worry about money anymore. You will have unlimited money from the start of the game, which means you can buy anything you want without any restrictions. You can access the shop by tapping on the cart icon on the top right corner of the screen. There you will find various categories of items, such as balls, boots, shirts, pants, hair, accessories, and more. You can also upgrade your skills by tapping on the star icon on the top left corner of the screen. There you can improve your shooting, passing, dribbling, speed, stamina, and more.
-
To use unlimited money, just tap on any item or skill that you want to buy or upgrade and confirm your purchase. You will see that your money balance will not decrease at all, no matter how much you spend. This way, you can customize your hero and make him look and play like a pro.
-
Unlimited Energy
-
Energy is another resource that you need to play Score Hero 2. Each level requires a certain amount of energy to play, which varies depending on the difficulty and length of the level. You can see how much energy a level requires by looking at the lightning icon below it. You start with a maximum of 10 energy points, which regenerate over time or by watching ads.
-
However, energy can run out quickly if you play too many levels in a row or if you fail or retry too often. When that happens, you have to wait for your energy to refill or watch ads to get more energy. This can be annoying and interrupt your gaming flow.
-
With the Score Hero APK hack, you don't have to worry about energy anymore either. You will have unlimited energy from the start of the game, which means you can play as many levels as you want without any interruptions. You will see that your energy bar will always be full, no matter how much you play.
-
Unlocked Levels and Stories
-
The last feature of the Score Hero APK hack is unlocked levels and stories. Score Hero 2 has over 600 levels and 20 stories that follow your hero's career from a rookie to a legend. However, not all of them are available from the beginning. You need to complete previous levels and earn stars to unlock new ones.
-
This can be challenging and time-consuming, especially if some levels are too hard or require specific goals or conditions. You might get stuck on a level for a long time or miss out on some stories that interest you.
-
With the Score Hero APK hack, you don't have to worry about unlocking levels and stories anymore either. You will have all of them unlocked from the start of the game, which means you can play any level or story that you want without any restrictions. You can access them by tapping on the map icon on the bottom left corner of the screen. You can access them by tapping on the map icon on the bottom left corner of the screen. There you will see a list of stories, each with a number of levels. You can tap on any story or level that you want to play and start the action.
-
This way, you can explore the game at your own pace and enjoy the different scenarios and challenges that Score Hero 2 offers.
-
Tips and Tricks for Playing Score Hero 2
-
Now that you know how to use the Score Hero APK hack features, you might wonder how to play Score Hero 2 like a pro. Well, we have some tips and tricks for you that will help you score amazing goals and win awards. Here they are:
-
How to Score Amazing Goals and Win Awards
-
Scoring goals is the main objective of Score Hero 2, and it is also the most fun part. However, not all goals are equal. Some are more spectacular and rewarding than others. For example, you can score goals by using different techniques, such as curling, chipping, volleying, or heading. You can also score goals from different distances, angles, or positions. And you can score goals in different situations, such as free kicks, penalties, corners, or counterattacks.
-
The game will reward you for scoring amazing goals by giving you awards, such as gold balls, trophies, medals, or stars. These awards will also help you unlock new items, skills, or stories. Therefore, you should always try to score amazing goals and win awards whenever possible.
-
But how do you score amazing goals? Well, here are some tips:
- - Use the swipe gesture to control the direction and power of your shots. The longer and faster you swipe, the more power and curve your shots will have.
- - Aim for the corners of the goal or the spaces between the goalkeeper and the defenders. These are the hardest spots to save and will increase your chances of scoring.
- - Use different techniques depending on the situation. For example, use curling shots to bend the ball around the defenders or the wall, use chipping shots to lob the ball over the goalkeeper or the defenders, use volleying shots to hit the ball in mid-air before it touches the ground, or use heading shots to nod the ball into the goal from a cross or a corner.
- - Experiment with different angles and positions. For example, try to score from outside the box, from acute angles, from behind the defenders, or from unexpected places.
- - Take advantage of the situations that give you an edge. For example, use free kicks or penalties to score without any interference from the defenders, use corners to create chaos in the box and find an opening, or use counterattacks to catch the opponents off guard and outnumber them.
How to Pass Wisely and Avoid Defenders
-
Passing is another important aspect of Score Hero 2, as it allows you to create opportunities and assist your teammates. However, passing is not always easy, as you have to deal with defenders who will try to intercept your passes or tackle your hero. Therefore, you need to pass wisely and avoid defenders whenever possible.
-
But how do you pass wisely and avoid defenders? Well, here are some tips:
- - Use the tap gesture to control the direction and power of your passes. The longer and faster you tap, the more power and curve your passes will have.
- - Aim for the spaces between the defenders or behind them. These are the safest spots to pass and will increase your chances of reaching your teammates.
- - Use different techniques depending on the situation. For example, use through balls to send your teammates behind the defenders or into space, use cross balls to send your teammates into the box or near the goal, use back passes to retain possession or switch the play, or use one-touch passes to surprise the defenders or create quick combinations.
- - Experiment with different angles and positions. For example, try to pass from the wings, from the center, from the back, or from the front.
- - Take advantage of the situations that give you an edge. For example, use counterattacks to exploit the gaps in the defense, use set pieces to create chances from dead ball situations, or use dribbling to attract defenders and create space for your teammates.
How to Customize Your Hero and Improve Your Skills
-
Customizing your hero and improving your skills are two ways to make your hero stand out and perform better in Score Hero 2. You can change your hero's appearance, such as his hair, face, skin, clothes, and accessories. You can also upgrade your hero's skills, such as his shooting, passing, dribbling, speed, stamina, and more.
-
But how do you customize your hero and improve your skills? Well, here are some tips:
- - Use the shop to buy items or skills that suit your style and preferences. You can access the shop by tapping on the cart icon on the top right corner of the screen. There you will find various categories of items and skills that you can buy with money. You can also preview how they look on your hero before buying them.
- - Use the awards to unlock new items or skills that are exclusive or rare. You can access the awards by tapping on the trophy icon on the top right corner of the screen. There you will see a list of awards that you can earn by scoring amazing goals, completing challenges, or playing events. Some of these awards will give you gold balls, which are special currency that you can use to buy premium items or skills.
- - Use the hack to get unlimited money and buy anything you want without any restrictions. As we mentioned before, you can use the Score Hero APK hack to get unlimited money from the start of the game. This way, you can buy any item or skill that you want without worrying about your budget.
Conclusion
-
Score Hero 2 is a great game for soccer lovers who want to control the action and become a soccer superstar. However, it also has some limitations that might hinder your gaming experience. That's why we recommend using the Score Hero APK hack, a modified version of the game that gives you unlimited money and energy, as well as unlocks all levels and stories.
-
With this hack, you can enjoy Score Hero 2 without any restrictions and have more fun and freedom. You can buy anything you want, play as long as you want, and explore the game at your own pace. You can also score amazing goals, pass wisely, and customize your hero.
-
In this article, we showed you how to download and install the Score Hero APK hack, how to use its features, and some tips and tricks for playing Score Hero 2. We hope you found this article helpful and informative.
-
Now it's time for you to try the Score Hero APK hack and see for yourself how awesome it is. Download it now and start scoring some amazing goals!
-
FAQs
-
Q1: Is the Score Hero APK hack safe to use?
-
A1: Yes, the Score Hero APK hack is safe to use as long as you download it from a reliable source like [AN1.com]. However, you should always be careful when downloading anything from unknown sources and scan it with an antivirus app before installing it.
-
Q2: Do I need to root my device to use the Score Hero APK hack?
-
A2: No, you don't need to root your device to use the Score Hero APK hack. The hack works on any Android device without requiring any special permissions or modifications.
-
Q3: Can I play online with the Score Hero APK hack?
-
A3: Yes, you can play online with the Score Hero APK hack. The hack does not affect your online connectivity or compatibility with other players. However, you should be careful not to abuse the hack features or cheat in online matches, as this might ruin the fun for others or get you reported.
-
Q4: Will I get banned for using the Score Hero APK hack?
-
A4: No, you will not get banned for using the Score Hero APK hack. The hack is undetectable by the game servers and does not interfere with your account data or progress. However, you should always use the hack responsibly and not brag about it or share it with others.
-
Q5: How can I update the Score Hero APK hack?
-
A5: You can update the Score Hero APK hack by visiting the same website where you downloaded it and looking for the latest version. You can also check the comments section of the website to see if there are any updates or news about the hack. To install the update, you need to follow the same steps as before, but make sure to uninstall the previous version of the hack first.
-
-
\ No newline at end of file
diff --git a/spaces/sklearn-docs/Manifold-Learning-methods-on-a-severed-sphere/README.md b/spaces/sklearn-docs/Manifold-Learning-methods-on-a-severed-sphere/README.md
deleted file mode 100644
index 97c873b6f40647c2cc348251e0af1422662f6934..0000000000000000000000000000000000000000
--- a/spaces/sklearn-docs/Manifold-Learning-methods-on-a-severed-sphere/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Manifold Learning Methods On A Severed Sphere
-emoji: 🦀
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: bsd-3-clause
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/app.py b/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/app.py
deleted file mode 100644
index 26c63f03abdacef260b11a091048af5729b26b8c..0000000000000000000000000000000000000000
--- a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/app.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import os
-import sys
-import time
-import json
-import torch
-import base64
-from PIL import Image
-from io import BytesIO
-
-# set CUDA_MODULE_LOADING=LAZY to speed up the serverless function
-os.environ["CUDA_MODULE_LOADING"] = "LAZY"
-# set SAFETENSORS_FAST_GPU=1 to speed up the serverless function
-os.environ["SAFETENSORS_FAST_GPU"] = "1"
-
-sys.path.append(os.path.join(os.path.dirname(__file__), "seg2art"))
-from seg2art.sstan_models.pix2pix_model import Pix2PixModel
-from seg2art.options.test_options import TestOptions
-from seg2art.inference_util import get_artwork
-
-import uvicorn
-from fastapi import FastAPI, Form
-from fastapi.templating import Jinja2Templates
-from fastapi.responses import PlainTextResponse, HTMLResponse
-from fastapi.requests import Request
-from fastapi.staticfiles import StaticFiles
-
-
-# declare constants
-HOST = "0.0.0.0"
-PORT = 7860
-# FastAPI
-app = FastAPI(root_path=os.path.abspath(os.path.dirname(__file__)))
-app.mount("/static", StaticFiles(directory="static"), name="static")
-templates = Jinja2Templates(directory="templates")
-
-
-# initialize SEAN model.
-opt = TestOptions().parse()
-opt.status = "test"
-model = Pix2PixModel(opt)
-model = model.half() if torch.cuda.is_available() else model
-model.eval()
-
-
-from utils.umap_utils import get_code, load_boundries, modify_code
-
-boundaries = load_boundries()
-global current_codes
-current_codes = {}
-max_user_num = 5
-
-initial_code_path = os.path.join(os.path.dirname(__file__), "static/init_code")
-initial_code = torch.load(initial_code_path) if torch.cuda.is_available() else torch.load(initial_code_path, map_location=torch.device("cpu"))
-
-
-def EncodeImage(img_pil):
- with BytesIO() as buffer:
- img_pil.save(buffer, "jpeg")
- image_data = base64.b64encode(buffer.getvalue())
- return image_data
-
-
-def DecodeImage(img_pil):
- img_pil = BytesIO(base64.urlsafe_b64decode(img_pil))
- img_pil = Image.open(img_pil).convert("RGB")
- return img_pil
-
-
-def process_input(body, random=False):
- global current_codes
- json_body = json.loads(body.decode("utf-8"))
- user_id = json_body["user_id"]
- start_time = time.time()
-
- # save current code for different users
- if user_id not in current_codes:
- current_codes[user_id] = initial_code.clone()
-    # keep the per-user cache bounded: once more than max_user_num users are stored, evict the oldest entries
-    while len(current_codes) > max_user_num:
-        oldest_user = next(k for k in current_codes if k != user_id)
-        del current_codes[oldest_user]
-
- if random:
- # randomize code
- domain = json_body["model"]
- current_codes[user_id] = get_code(domain, boundaries)
-
- # get input
- input_img = DecodeImage(json_body["img"])
-
- try:
- move_range = float(json_body["move_range"])
- except:
- move_range = 0
-
- # set move range to 3 if random is True
- move_range = 3 if random else move_range
- # print("Input image was received")
- # get selected style
- domain = json_body["model"]
- if move_range != 0:
- modified_code = modify_code(current_codes[user_id], boundaries, domain, move_range)
- else:
-        # no movement requested: reuse this user's cached latent code
-        modified_code = current_codes[user_id].clone()
-
- # inference
- result = get_artwork(model, input_img, modified_code)
- print("Time Cost: ", time.time() - start_time)
- return EncodeImage(result)
-
-
-@app.get("/", response_class=HTMLResponse)
-def root(request: Request):
- return templates.TemplateResponse("index.html", {"request": request})
-
-@app.get("/check_gpu")
-async def check_gpu():
- return torch.cuda.is_available()
-
-@app.post("/predict")
-async def predict(request: Request):
- body = await request.body()
- result = process_input(body, random=False)
- return result
-
-
-@app.post("/predict_random")
-async def predict_random(request: Request):
- body = await request.body()
- result = process_input(body, random=True)
- return result
-
-
-if __name__ == "__main__":
- uvicorn.run(app, host=HOST, port=PORT, log_level="info")
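-
-
-# Minimal client sketch (illustration only): it shows the kind of JSON body that process_input()
-# above expects. The host/port, image path, and domain name below are assumptions.
-#
-# import base64, json, requests
-# with open("segmentation_map.png", "rb") as f:
-#     img_b64 = base64.urlsafe_b64encode(f.read()).decode()
-# payload = {"user_id": "demo", "img": img_b64, "model": "monet", "move_range": "1.0"}
-# resp = requests.post("http://localhost:7860/predict", data=json.dumps(payload))
-# # the response body contains the base64-encoded JPEG produced by EncodeImage()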
diff --git a/spaces/sneedium/dvatch_captcha_sneedium_old/modules/resnet.py b/spaces/sneedium/dvatch_captcha_sneedium_old/modules/resnet.py
deleted file mode 100644
index 3b53b137134f36b126fa5bdf68d619c2283d082f..0000000000000000000000000000000000000000
--- a/spaces/sneedium/dvatch_captcha_sneedium_old/modules/resnet.py
+++ /dev/null
@@ -1,104 +0,0 @@
-import math
-
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.model_zoo as model_zoo
-
-
-def conv1x1(in_planes, out_planes, stride=1):
- return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(BasicBlock, self).__init__()
- self.conv1 = conv1x1(inplanes, planes)
- self.bn1 = nn.BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes, stride)
- self.bn2 = nn.BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class ResNet(nn.Module):
-
- def __init__(self, block, layers):
- self.inplanes = 32
- super(ResNet, self).__init__()
- self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1,
- bias=False)
- self.bn1 = nn.BatchNorm2d(32)
- self.relu = nn.ReLU(inplace=True)
-
- self.layer1 = self._make_layer(block, 32, layers[0], stride=2)
- self.layer2 = self._make_layer(block, 64, layers[1], stride=1)
- self.layer3 = self._make_layer(block, 128, layers[2], stride=2)
- self.layer4 = self._make_layer(block, 256, layers[3], stride=1)
- self.layer5 = self._make_layer(block, 512, layers[4], stride=1)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- m.weight.data.normal_(0, math.sqrt(3. / n))
- elif isinstance(m, nn.BatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
-
- def _make_layer(self, block, planes, blocks, stride=1):
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(self.inplanes, planes * block.expansion,
- kernel_size=1, stride=stride, bias=False),
- nn.BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample))
- self.inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(block(self.inplanes, planes))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
- x = self.layer5(x)
- return x
-
-
-def resnet45():
- return ResNet(BasicBlock, [3, 4, 6, 6, 3])
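-
-
-# Quick shape check (illustrative example): conv1 keeps the input resolution and only layer1 and
-# layer3 use stride 2, so features are downsampled by 4 in height and width, with 512 output channels.
-#
-# import torch
-# model = resnet45()
-# feats = model(torch.randn(1, 3, 32, 128))
-# print(feats.shape)  # torch.Size([1, 512, 8, 32])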
diff --git a/spaces/society-ethics/model-card-regulatory-check/tests/cards/sentence-transformers___all-MiniLM-L6-v2.md b/spaces/society-ethics/model-card-regulatory-check/tests/cards/sentence-transformers___all-MiniLM-L6-v2.md
deleted file mode 100644
index c2cf8c94e238937a7385a9ca84ecf5114adb550f..0000000000000000000000000000000000000000
--- a/spaces/society-ethics/model-card-regulatory-check/tests/cards/sentence-transformers___all-MiniLM-L6-v2.md
+++ /dev/null
@@ -1,142 +0,0 @@
-# all-MiniLM-L6-v2
-This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
-## Usage (Sentence-Transformers)
-Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
-
-```
-pip install -U sentence-transformers
-```
-
-Then you can use the model like this:
-```python
-from sentence_transformers import SentenceTransformer
-sentences = ["This is an example sentence", "Each sentence is converted"]
-
-model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
-embeddings = model.encode(sentences)
-print(embeddings)
-```
-
-## Usage (HuggingFace Transformers)
-Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
-
-```python
-from transformers import AutoTokenizer, AutoModel
-import torch
-import torch.nn.functional as F
-
-#Mean Pooling - Take attention mask into account for correct averaging
-def mean_pooling(model_output, attention_mask):
- token_embeddings = model_output[0] #First element of model_output contains all token embeddings
- input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
- return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
-
-# Sentences we want sentence embeddings for
-sentences = ['This is an example sentence', 'Each sentence is converted']
-
-# Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
-model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
-
-# Tokenize sentences
-encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
-
-# Compute token embeddings
-with torch.no_grad():
- model_output = model(**encoded_input)
-
-# Perform pooling
-sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
-
-# Normalize embeddings
-sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
-
-print("Sentence embeddings:")
-print(sentence_embeddings)
-```
-
-## Evaluation Results
-
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
-
-------
-
-## Background
-
-The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
-contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
-1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
-
-We developed this model during the
-[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
-organized by Hugging Face. We developed this model as part of the project:
-[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 instances, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
-
-## Intended uses
-
-Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
-the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
-
-By default, input text longer than 256 word pieces is truncated.
-
-
-## Training procedure
-
-### Pre-training
-
-We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
-
-### Fine-tuning
-
-We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
-We then apply the cross-entropy loss by comparing with the true pairs.
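-
-A minimal sketch of this kind of in-batch contrastive loss is shown below; it is an illustration only, and the function name and similarity scaling factor are assumptions rather than values taken from `train_script.py`.
-
-```python
-import torch
-import torch.nn.functional as F
-
-def in_batch_contrastive_loss(emb_a, emb_b, scale=20.0):
-    # emb_a, emb_b: (batch_size, dim) embeddings of the two sides of each training pair
-    emb_a = F.normalize(emb_a, p=2, dim=1)
-    emb_b = F.normalize(emb_b, p=2, dim=1)
-    scores = scale * emb_a @ emb_b.t()  # cosine similarities between all combinations in the batch
-    labels = torch.arange(scores.size(0), device=scores.device)  # true pairs lie on the diagonal
-    return F.cross_entropy(scores, labels)
-```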
-
-#### Hyper parameters
-
-We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core).
-We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
-a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
-
-#### Training data
-
-We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
-We sampled each dataset with a weighted probability whose configuration is detailed in the `data_config.json` file.
-
-
-| Dataset | Paper | Number of training tuples |
-|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
-| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
-| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
-| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
-| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
-| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
-| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
-| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
-| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
-| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
-| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
-| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
-| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
-| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
-| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
-| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
-| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
-| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
-| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
-| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
-| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
-| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
-| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
-| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
-| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
-| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
-| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
-| **Total** | | **1,170,060,424** |
\ No newline at end of file
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/joint_alignment_translation/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/joint_alignment_translation/README.md
deleted file mode 100644
index cd9c0ea65f5292198296a8f427b42e01b584e2d9..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/joint_alignment_translation/README.md
+++ /dev/null
@@ -1,89 +0,0 @@
-# Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)
-
-This page includes instructions for training models described in [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](https://arxiv.org/abs/1909.02074).
-
-## Training a joint alignment-translation model on WMT'18 En-De
-
-##### 1. Extract and preprocess the WMT'18 En-De data
-```bash
-./prepare-wmt18en2de_no_norm_no_escape_no_agressive.sh
-```
-
-##### 2. Generate alignments from statistical alignment toolkits e.g. Giza++/FastAlign.
-In this example, we use FastAlign.
-```bash
-git clone git@github.com:clab/fast_align.git
-pushd fast_align
-mkdir build
-cd build
-cmake ..
-make
-popd
-ALIGN=fast_align/build/fast_align
-paste bpe.32k/train.en bpe.32k/train.de | awk -F '\t' '{print $1 " ||| " $2}' > bpe.32k/train.en-de
-$ALIGN -i bpe.32k/train.en-de -d -o -v > bpe.32k/train.align
-```
-
-##### 3. Preprocess the dataset with the above generated alignments.
-```bash
-fairseq-preprocess \
- --source-lang en --target-lang de \
- --trainpref bpe.32k/train \
- --validpref bpe.32k/valid \
- --testpref bpe.32k/test \
- --align-suffix align \
- --destdir binarized/ \
- --joined-dictionary \
- --workers 32
-```
-
-##### 4. Train a model
-```bash
-fairseq-train \
- binarized \
- --arch transformer_wmt_en_de_big_align --share-all-embeddings \
- --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --activation-fn relu\
- --lr 0.0002 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \
- --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \
- --max-tokens 3500 --label-smoothing 0.1 \
- --save-dir ./checkpoints --log-interval 1000 --max-update 60000 \
- --keep-interval-updates -1 --save-interval-updates 0 \
- --load-alignments --criterion label_smoothed_cross_entropy_with_alignment \
- --fp16
-```
-
-Note that the `--fp16` flag requires you have CUDA 9.1 or greater and a Volta GPU or newer.
-
-If you want to train the above model with big batches (assuming your machine has 8 GPUs):
-- add `--update-freq 8` to simulate training on 8x8=64 GPUs
-- increase the learning rate; 0.0007 works well for big batches
-
-##### 5. Evaluate and generate the alignments (BPE level)
-```bash
-fairseq-generate \
- binarized --gen-subset test --print-alignment \
- --source-lang en --target-lang de \
- --path checkpoints/checkpoint_best.pt --beam 5 --nbest 1
-```
-
-##### 6. Other resources.
-The code for:
-1. preparing alignment test sets
-2. converting BPE level alignments to token level alignments
-3. symmetrizing bidirectional alignments
-4. evaluating alignments using AER metric
-can be found [here](https://github.com/lilt/alignment-scripts)
-
-## Citation
-
-```bibtex
-@inproceedings{garg2019jointly,
- title = {Jointly Learning to Align and Translate with Transformer Models},
- author = {Garg, Sarthak and Peitz, Stephan and Nallasamy, Udhyakumar and Paulik, Matthias},
- booktitle = {Conference on Empirical Methods in Natural Language Processing (EMNLP)},
- address = {Hong Kong},
- month = {November},
- url = {https://arxiv.org/abs/1909.02074},
- year = {2019},
-}
-```
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_recognition/w2l_decoder.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_recognition/w2l_decoder.py
deleted file mode 100644
index fbf2d3524ee40bd0d08b6a9560047d96e49b6045..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_recognition/w2l_decoder.py
+++ /dev/null
@@ -1,486 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Flashlight decoders.
-"""
-
-import gc
-import itertools as it
-import os.path as osp
-from typing import List
-import warnings
-from collections import deque, namedtuple
-
-import numpy as np
-import torch
-from examples.speech_recognition.data.replabels import unpack_replabels
-from fairseq import tasks
-from fairseq.utils import apply_to_sample
-from omegaconf import open_dict
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-
-
-try:
- from flashlight.lib.text.dictionary import create_word_dict, load_words
- from flashlight.lib.sequence.criterion import CpuViterbiPath, get_data_ptr_as_bytes
- from flashlight.lib.text.decoder import (
- CriterionType,
- LexiconDecoderOptions,
- KenLM,
- LM,
- LMState,
- SmearingMode,
- Trie,
- LexiconDecoder,
- )
-except:
- warnings.warn(
- "flashlight python bindings are required to use this functionality. Please install from https://github.com/facebookresearch/flashlight/tree/master/bindings/python"
- )
- LM = object
- LMState = object
-
-
-class W2lDecoder(object):
- def __init__(self, args, tgt_dict):
- self.tgt_dict = tgt_dict
- self.vocab_size = len(tgt_dict)
- self.nbest = args.nbest
-
- # criterion-specific init
- self.criterion_type = CriterionType.CTC
- self.blank = (
-            tgt_dict.index("<ctc_blank>")
-            if "<ctc_blank>" in tgt_dict.indices
-            else tgt_dict.bos()
-        )
-        if "<sep>" in tgt_dict.indices:
-            self.silence = tgt_dict.index("<sep>")
- elif "|" in tgt_dict.indices:
- self.silence = tgt_dict.index("|")
- else:
- self.silence = tgt_dict.eos()
- self.asg_transitions = None
-
- def generate(self, models, sample, **unused):
- """Generate a batch of inferences."""
- # model.forward normally channels prev_output_tokens into the decoder
- # separately, but SequenceGenerator directly calls model.encoder
- encoder_input = {
- k: v for k, v in sample["net_input"].items() if k != "prev_output_tokens"
- }
- emissions = self.get_emissions(models, encoder_input)
- return self.decode(emissions)
-
- def get_emissions(self, models, encoder_input):
- """Run encoder and normalize emissions"""
- model = models[0]
- encoder_out = model(**encoder_input)
- if hasattr(model, "get_logits"):
- emissions = model.get_logits(encoder_out) # no need to normalize emissions
- else:
- emissions = model.get_normalized_probs(encoder_out, log_probs=True)
- return emissions.transpose(0, 1).float().cpu().contiguous()
-
- def get_tokens(self, idxs):
- """Normalize tokens by handling CTC blank, ASG replabels, etc."""
- idxs = (g[0] for g in it.groupby(idxs))
- idxs = filter(lambda x: x != self.blank, idxs)
- return torch.LongTensor(list(idxs))
-
-
-class W2lViterbiDecoder(W2lDecoder):
- def __init__(self, args, tgt_dict):
- super().__init__(args, tgt_dict)
-
- def decode(self, emissions):
- B, T, N = emissions.size()
- hypos = []
- if self.asg_transitions is None:
- transitions = torch.FloatTensor(N, N).zero_()
- else:
- transitions = torch.FloatTensor(self.asg_transitions).view(N, N)
- viterbi_path = torch.IntTensor(B, T)
- workspace = torch.ByteTensor(CpuViterbiPath.get_workspace_size(B, T, N))
- CpuViterbiPath.compute(
- B,
- T,
- N,
- get_data_ptr_as_bytes(emissions),
- get_data_ptr_as_bytes(transitions),
- get_data_ptr_as_bytes(viterbi_path),
- get_data_ptr_as_bytes(workspace),
- )
- return [
- [{"tokens": self.get_tokens(viterbi_path[b].tolist()), "score": 0}]
- for b in range(B)
- ]
-
-
-class W2lKenLMDecoder(W2lDecoder):
- def __init__(self, args, tgt_dict):
- super().__init__(args, tgt_dict)
-
- self.unit_lm = getattr(args, "unit_lm", False)
-
- if args.lexicon:
- self.lexicon = load_words(args.lexicon)
- self.word_dict = create_word_dict(self.lexicon)
- self.unk_word = self.word_dict.get_index("<unk>")
-
- self.lm = KenLM(args.kenlm_model, self.word_dict)
- self.trie = Trie(self.vocab_size, self.silence)
-
- start_state = self.lm.start(False)
- for i, (word, spellings) in enumerate(self.lexicon.items()):
- word_idx = self.word_dict.get_index(word)
- _, score = self.lm.score(start_state, word_idx)
- for spelling in spellings:
- spelling_idxs = [tgt_dict.index(token) for token in spelling]
- assert (
- tgt_dict.unk() not in spelling_idxs
- ), f"{spelling} {spelling_idxs}"
- self.trie.insert(spelling_idxs, word_idx, score)
- self.trie.smear(SmearingMode.MAX)
-
- self.decoder_opts = LexiconDecoderOptions(
- beam_size=args.beam,
- beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))),
- beam_threshold=args.beam_threshold,
- lm_weight=args.lm_weight,
- word_score=args.word_score,
- unk_score=args.unk_weight,
- sil_score=args.sil_weight,
- log_add=False,
- criterion_type=self.criterion_type,
- )
-
- if self.asg_transitions is None:
- N = 768
- # self.asg_transitions = torch.FloatTensor(N, N).zero_()
- self.asg_transitions = []
-
- self.decoder = LexiconDecoder(
- self.decoder_opts,
- self.trie,
- self.lm,
- self.silence,
- self.blank,
- self.unk_word,
- self.asg_transitions,
- self.unit_lm,
- )
- else:
- assert args.unit_lm, "lexicon free decoding can only be done with a unit language model"
- from flashlight.lib.text.decoder import LexiconFreeDecoder, LexiconFreeDecoderOptions
-
- d = {w: [[w]] for w in tgt_dict.symbols}
- self.word_dict = create_word_dict(d)
- self.lm = KenLM(args.kenlm_model, self.word_dict)
- self.decoder_opts = LexiconFreeDecoderOptions(
- beam_size=args.beam,
- beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))),
- beam_threshold=args.beam_threshold,
- lm_weight=args.lm_weight,
- sil_score=args.sil_weight,
- log_add=False,
- criterion_type=self.criterion_type,
- )
- self.decoder = LexiconFreeDecoder(
- self.decoder_opts, self.lm, self.silence, self.blank, []
- )
-
- def get_timesteps(self, token_idxs: List[int]) -> List[int]:
- """Returns frame numbers corresponding to every non-blank token.
-
- Parameters
- ----------
- token_idxs : List[int]
- IDs of decoded tokens.
-
- Returns
- -------
- List[int]
- Frame numbers corresponding to every non-blank token.
- """
- timesteps = []
- for i, token_idx in enumerate(token_idxs):
- if token_idx == self.blank:
- continue
- if i == 0 or token_idx != token_idxs[i-1]:
- timesteps.append(i)
- return timesteps
-
- def decode(self, emissions):
- B, T, N = emissions.size()
- hypos = []
- for b in range(B):
- emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0)
- results = self.decoder.decode(emissions_ptr, T, N)
-
- nbest_results = results[: self.nbest]
- hypos.append(
- [
- {
- "tokens": self.get_tokens(result.tokens),
- "score": result.score,
- "timesteps": self.get_timesteps(result.tokens),
- "words": [
- self.word_dict.get_entry(x) for x in result.words if x >= 0
- ],
- }
- for result in nbest_results
- ]
- )
- return hypos
-
-
-FairseqLMState = namedtuple("FairseqLMState", ["prefix", "incremental_state", "probs"])
-
-
-class FairseqLM(LM):
- def __init__(self, dictionary, model):
- LM.__init__(self)
- self.dictionary = dictionary
- self.model = model
- self.unk = self.dictionary.unk()
-
- self.save_incremental = False # this currently does not work properly
- self.max_cache = 20_000
-
- model.cuda()
- model.eval()
- model.make_generation_fast_()
-
- self.states = {}
- self.stateq = deque()
-
- def start(self, start_with_nothing):
- state = LMState()
- prefix = torch.LongTensor([[self.dictionary.eos()]])
- incremental_state = {} if self.save_incremental else None
- with torch.no_grad():
- res = self.model(prefix.cuda(), incremental_state=incremental_state)
- probs = self.model.get_normalized_probs(res, log_probs=True, sample=None)
-
- if incremental_state is not None:
- incremental_state = apply_to_sample(lambda x: x.cpu(), incremental_state)
- self.states[state] = FairseqLMState(
- prefix.numpy(), incremental_state, probs[0, -1].cpu().numpy()
- )
- self.stateq.append(state)
-
- return state
-
- def score(self, state: LMState, token_index: int, no_cache: bool = False):
- """
- Evaluate language model based on the current lm state and new word
- Parameters:
- -----------
- state: current lm state
- token_index: index of the word
- (can be lexicon index then you should store inside LM the
- mapping between indices of lexicon and lm, or lm index of a word)
-
- Returns:
- --------
- (LMState, float): pair of (new state, score for the current word)
- """
- curr_state = self.states[state]
-
- def trim_cache(targ_size):
- while len(self.stateq) > targ_size:
- rem_k = self.stateq.popleft()
- rem_st = self.states[rem_k]
- rem_st = FairseqLMState(rem_st.prefix, None, None)
- self.states[rem_k] = rem_st
-
- if curr_state.probs is None:
- new_incremental_state = (
- curr_state.incremental_state.copy()
- if curr_state.incremental_state is not None
- else None
- )
- with torch.no_grad():
- if new_incremental_state is not None:
- new_incremental_state = apply_to_sample(
- lambda x: x.cuda(), new_incremental_state
- )
- elif self.save_incremental:
- new_incremental_state = {}
-
- res = self.model(
- torch.from_numpy(curr_state.prefix).cuda(),
- incremental_state=new_incremental_state,
- )
- probs = self.model.get_normalized_probs(
- res, log_probs=True, sample=None
- )
-
- if new_incremental_state is not None:
- new_incremental_state = apply_to_sample(
- lambda x: x.cpu(), new_incremental_state
- )
-
- curr_state = FairseqLMState(
- curr_state.prefix, new_incremental_state, probs[0, -1].cpu().numpy()
- )
-
- if not no_cache:
- self.states[state] = curr_state
- self.stateq.append(state)
-
- score = curr_state.probs[token_index].item()
-
- trim_cache(self.max_cache)
-
- outstate = state.child(token_index)
- if outstate not in self.states and not no_cache:
- prefix = np.concatenate(
- [curr_state.prefix, torch.LongTensor([[token_index]])], -1
- )
- incr_state = curr_state.incremental_state
-
- self.states[outstate] = FairseqLMState(prefix, incr_state, None)
-
- if token_index == self.unk:
- score = float("-inf")
-
- return outstate, score
-
- def finish(self, state: LMState):
- """
- Evaluate eos for language model based on the current lm state
-
- Returns:
- --------
- (LMState, float): pair of (new state, score for the current word)
- """
- return self.score(state, self.dictionary.eos())
-
- def empty_cache(self):
- self.states = {}
- self.stateq = deque()
- gc.collect()
-
-
-class W2lFairseqLMDecoder(W2lDecoder):
- def __init__(self, args, tgt_dict):
- super().__init__(args, tgt_dict)
-
- self.unit_lm = getattr(args, "unit_lm", False)
-
- self.lexicon = load_words(args.lexicon) if args.lexicon else None
- self.idx_to_wrd = {}
-
- checkpoint = torch.load(args.kenlm_model, map_location="cpu")
-
- if "cfg" in checkpoint and checkpoint["cfg"] is not None:
- lm_args = checkpoint["cfg"]
- else:
- lm_args = convert_namespace_to_omegaconf(checkpoint["args"])
-
- with open_dict(lm_args.task):
- lm_args.task.data = osp.dirname(args.kenlm_model)
-
- task = tasks.setup_task(lm_args.task)
- model = task.build_model(lm_args.model)
- model.load_state_dict(checkpoint["model"], strict=False)
-
- self.trie = Trie(self.vocab_size, self.silence)
-
- self.word_dict = task.dictionary
- self.unk_word = self.word_dict.unk()
- self.lm = FairseqLM(self.word_dict, model)
-
- if self.lexicon:
- start_state = self.lm.start(False)
- for i, (word, spellings) in enumerate(self.lexicon.items()):
- if self.unit_lm:
- word_idx = i
- self.idx_to_wrd[i] = word
- score = 0
- else:
- word_idx = self.word_dict.index(word)
- _, score = self.lm.score(start_state, word_idx, no_cache=True)
-
- for spelling in spellings:
- spelling_idxs = [tgt_dict.index(token) for token in spelling]
- assert (
- tgt_dict.unk() not in spelling_idxs
- ), f"{spelling} {spelling_idxs}"
- self.trie.insert(spelling_idxs, word_idx, score)
- self.trie.smear(SmearingMode.MAX)
-
- self.decoder_opts = LexiconDecoderOptions(
- beam_size=args.beam,
- beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))),
- beam_threshold=args.beam_threshold,
- lm_weight=args.lm_weight,
- word_score=args.word_score,
- unk_score=args.unk_weight,
- sil_score=args.sil_weight,
- log_add=False,
- criterion_type=self.criterion_type,
- )
-
- self.decoder = LexiconDecoder(
- self.decoder_opts,
- self.trie,
- self.lm,
- self.silence,
- self.blank,
- self.unk_word,
- [],
- self.unit_lm,
- )
- else:
- assert args.unit_lm, "lexicon free decoding can only be done with a unit language model"
- from flashlight.lib.text.decoder import LexiconFreeDecoder, LexiconFreeDecoderOptions
-
- d = {w: [[w]] for w in tgt_dict.symbols}
- self.word_dict = create_word_dict(d)
- self.lm = KenLM(args.kenlm_model, self.word_dict)
- self.decoder_opts = LexiconFreeDecoderOptions(
- beam_size=args.beam,
- beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))),
- beam_threshold=args.beam_threshold,
- lm_weight=args.lm_weight,
- sil_score=args.sil_weight,
- log_add=False,
- criterion_type=self.criterion_type,
- )
- self.decoder = LexiconFreeDecoder(
- self.decoder_opts, self.lm, self.silence, self.blank, []
- )
-
- def decode(self, emissions):
- B, T, N = emissions.size()
- hypos = []
-
- def idx_to_word(idx):
- if self.unit_lm:
- return self.idx_to_wrd[idx]
- else:
- return self.word_dict[idx]
-
- def make_hypo(result):
- hypo = {"tokens": self.get_tokens(result.tokens), "score": result.score}
- if self.lexicon:
- hypo["words"] = [idx_to_word(x) for x in result.words if x >= 0]
- return hypo
-
- for b in range(B):
- emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0)
- results = self.decoder.decode(emissions_ptr, T, N)
-
- nbest_results = results[: self.nbest]
- hypos.append([make_hypo(result) for result in nbest_results])
- self.lm.empty_cache()
-
- return hypos
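As a side note on the decoders above: `W2lDecoder.get_tokens` post-processes a frame-level label sequence by merging consecutive duplicates and then dropping the CTC blank symbol. The snippet below is a small standalone sketch of that collapse step; the blank index of 0 is an assumption for illustration, whereas the class looks it up from the target dictionary.

```python
import itertools as it

import torch


def ctc_collapse(frame_labels, blank=0):
    """Merge consecutive duplicates, then drop blanks, mirroring W2lDecoder.get_tokens."""
    collapsed = (group[0] for group in it.groupby(frame_labels))  # e.g. 3 3 5 0 0 5 -> 3 5 0 5
    no_blanks = (idx for idx in collapsed if idx != blank)        # drop the CTC blank symbol
    return torch.LongTensor(list(no_blanks))


print(ctc_collapse([3, 3, 5, 0, 0, 5, 7, 7]).tolist())  # [3, 5, 5, 7]
```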
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Arturia Brass VSTi RTAS V2.0.5 I.md b/spaces/stomexserde/gpt4-ui/Examples/Arturia Brass VSTi RTAS V2.0.5 I.md
deleted file mode 100644
index c3a1988bca9e90bb3fd3c0c782756144fc7400a4..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Arturia Brass VSTi RTAS V2.0.5 I.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
Arturia Brass VSTi RTAS V2.0.5 I: A Powerful and Expressive Virtual Brass Instrument
-
-
Arturia Brass VSTi RTAS V2.0.5 I is a software instrument that uses physical and acoustic modelling techniques to create realistic and versatile sounds of trumpet, trombone and saxophone. It was developed in collaboration with the IRCAM institute in Paris, and offers a range of features and benefits that make it stand out from other sample-based or phrase-based brass libraries.
-
-
Some of the advantages of Arturia Brass VSTi RTAS V2.0.5 I are:
It has a small installation footprint and does not rely on streaming samples, which makes it friendly to laptop users and saves disk space.
-
It runs as a stand-alone application or as a plug-in (VST, RTAS or Audio Units) on both Mac and Windows platforms.
-
It allows for greater levels of expressivity and realism, especially for soloing, as it responds to various MIDI controllers and parameters such as breath, pitch bend, vibrato, mute, growl and more.
-
It has a flexible and intuitive user interface that lets you switch between four modes: Live, Riff, Loop and MIDI Preset. In Live mode, you can play up to four instruments at once, each with its own settings and automation. In Riff mode, you can create your own phrases or use the built-in library of over 500 riffs in different styles and genres. In Loop mode, you can record and edit your own loops or use the included ones. In MIDI Preset mode, you can assign different MIDI controllers to various instrument parameters.
-
It has a powerful sound engine that uses physical modelling to simulate the behaviour of the brass instruments, including their mouthpiece, bore, bell, valves, slides and keys. It also uses acoustic modelling to recreate the sound of the room and the microphone placement.
-
It has a rich and diverse sound library that covers a wide range of musical styles and genres, from classical to jazz to pop to funk to reggae and more. You can also customize the sound by adjusting the tuning, timbre, envelope, effects and more.
-
-
-
If you are looking for a virtual brass instrument that can deliver realistic, expressive and versatile sounds, Arturia Brass VSTi RTAS V2.0.5 I might be the perfect choice for you. You can download a free demo version from Arturia's website or buy the full version for $249.
-
-
Arturia Brass VSTi RTAS V2.0.5 I is not only a great tool for creating realistic brass sounds, but also a fun and creative instrument that can inspire you to make new music. You can use it to play melodies, harmonies, riffs, loops or even entire songs with the help of the built-in sequencer and mixer. You can also use it to add some spice and flavor to your existing tracks by layering or replacing the brass parts with Arturia's unique sounds.
-
-
One of the most impressive features of Arturia Brass VSTi RTAS V2.0.5 I is its ability to emulate the human expression and articulation of the brass players. You can control the dynamics, pitch, tone and effects of each instrument with your MIDI keyboard, breath controller, modulation wheel or other MIDI devices. You can also choose from different playing styles such as legato, staccato, portamento, glissando, vibrato, mute and more. You can even create your own custom playing styles by editing the parameters of each articulation.
-
-
Another feature that sets Arturia Brass VSTi RTAS V2.0.5 I apart from other virtual brass instruments is its realistic and flexible sound engine. It uses physical modelling to simulate the physics and acoustics of the brass instruments, such as the shape and size of the mouthpiece, bore, bell and valves. It also uses acoustic modelling to simulate the sound of the room and the microphone placement. You can adjust these parameters to create different sounds and effects, such as changing the material of the instrument, adding a mute or a wah-wah pedal, changing the position of the microphone or adding some reverb or delay.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Film Hindi Af Somali Mohabbat.md b/spaces/stomexserde/gpt4-ui/Examples/Film Hindi Af Somali Mohabbat.md
deleted file mode 100644
index 975144eb91ef70a8b04d88247d7f16c4f6ea9e45..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Film Hindi Af Somali Mohabbat.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
Mohabbat: A Romantic Hindi Film with Somali Subtitles
-
Mohabbat is a 1997 Hindi romantic film starring Sanjay Kapoor, Madhuri Dixit and Akshaye Khanna. The film tells the story of Gaurav, a wealthy businessman who falls in love with Shweta, a singer and dancer. However, their relationship faces many obstacles, such as Shweta's past, Gaurav's family and a rival named Rohit.
If you are looking for a film that combines romance, drama and music, Mohabbat is a great choice. The film has beautiful songs composed by Nadeem-Shravan and sung by Kumar Sanu, Alka Yagnik and Udit Narayan. The film also showcases Madhuri Dixit's stunning dance skills in songs like "Pyar Kiya Hai Chori Chori" and "O Baby Don't Break My Heart".
-
Mohabbat is also one of the few Hindi films that have Somali subtitles. You can watch the film on YouTube[^1^] with Af Somali subtitles and enjoy the story of love and passion. Mohabbat is a film that will touch your heart and make you feel the emotions of the characters.
-
-
The film has a twisty plot that keeps the audience engaged till the end. The film explores the themes of friendship, sacrifice, betrayal and destiny. The film also has some comic scenes that lighten the mood and provide relief from the intense drama. The film has a star-studded cast that delivers powerful performances. Madhuri Dixit shines as Shweta, the woman who is torn between two men who love her. Sanjay Kapoor plays Gaurav, the rich and generous man who is ready to do anything for his friend and his love. Akshaye Khanna plays Rohit, the loyal and brave man who faces many challenges in his life.
-
-
The film was directed by Reema Rakesh Nath, who also wrote the story, screenplay and dialogues. The film was produced by Rakesh Nath, who is Madhuri Dixit's manager and close friend. The film was released on 19 September 1997 and received mixed reviews from critics and audiences. The film was praised for its music, cinematography and performances, but criticized for its slow pace, weak direction and predictable plot. The film was a moderate success at the box office, but failed to live up to the expectations of Madhuri Dixit's fans.
-
Mohabbat is a film that will appeal to those who love romantic films with a touch of drama and suspense. The film has some memorable scenes and songs that will stay with you for a long time. The film is a tribute to the bond of friendship and the power of love. Mohabbat is a film that will make you cry, laugh and feel.
-
-
Mohabbat is not only a film for Hindi speakers, but also for Somali speakers. The film has Somali subtitles that make it easy to follow the dialogues and songs. The film also has some aspects that Somali viewers can relate to, such as the importance of family, friendship and faith. The film also shows the diversity and beauty of Indian culture, which has some similarities and differences with Somali culture.
-
Somali culture is rich and diverse, with influences from various regions and religions. Somalia is a country located in the Horn of Africa, bordered by Ethiopia, Djibouti, Kenya, the Gulf of Aden and the Indian Ocean. Somalia has a population of about 15 million people, most of whom are ethnic Somalis who speak Somali as their mother tongue. Somali is a Cushitic language that belongs to the Afro-Asiatic language family. Somali also has many loanwords from Arabic, Persian, English and Italian.
-
Somalis are predominantly Sunni Muslims who follow the Shafiʿi school of Islamic law. Islam plays a central role in Somali society and culture, influencing their values, norms and traditions. Somalis have a strong sense of community and hospitality, as well as a respect for elders and guests. Somalis also have a long history of poetry, oral literature and storytelling, which reflect their creativity and wisdom. Somalis are known for their love of music and dance, which are often performed at weddings, festivals and other social occasions.
-
-
\ No newline at end of file
diff --git a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/utils/autocast.py b/spaces/sub314xxl/MusicGen-Continuation/audiocraft/utils/autocast.py
deleted file mode 100644
index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/utils/autocast.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-
-class TorchAutocast:
- """TorchAutocast utility class.
- Allows you to enable and disable autocast. This is specially useful
- when dealing with different architectures and clusters with different
- levels of support.
-
- Args:
- enabled (bool): Whether to enable torch.autocast or not.
- args: Additional args for torch.autocast.
- kwargs: Additional kwargs for torch.autocast
- """
- def __init__(self, enabled: bool, *args, **kwargs):
- self.autocast = torch.autocast(*args, **kwargs) if enabled else None
-
- def __enter__(self):
- if self.autocast is None:
- return
- try:
- self.autocast.__enter__()
- except RuntimeError:
- device = self.autocast.device
- dtype = self.autocast.fast_dtype
- raise RuntimeError(
- f"There was an error autocasting with dtype={dtype} device={device}\n"
- "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16"
- )
-
- def __exit__(self, *args, **kwargs):
- if self.autocast is None:
- return
- self.autocast.__exit__(*args, **kwargs)
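For context, a typical use of the `TorchAutocast` wrapper above is to guard a forward pass so that mixed precision can be toggled from configuration. The snippet below is a rough usage sketch, not taken from the repository: the module path is inferred from the file path above, and it assumes a CUDA device is available.

```python
import torch

from audiocraft.utils.autocast import TorchAutocast  # module path inferred from the file above

model = torch.nn.Linear(16, 16).cuda()
x = torch.randn(4, 16, device="cuda")

# With enabled=False this becomes a no-op context manager, so the same code path
# also works on machines without autocast support.
with TorchAutocast(enabled=True, device_type="cuda", dtype=torch.float16):
    y = model(x)

print(y.dtype)  # torch.float16 inside the autocast region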
diff --git a/spaces/subhajitmaji/MusicGen/tests/data/__init__.py b/spaces/subhajitmaji/MusicGen/tests/data/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/subhajitmaji/MusicGen/tests/data/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/suchun/chatGPT_acdemic/crazy_functions/crazy_functions_test.py b/spaces/suchun/chatGPT_acdemic/crazy_functions/crazy_functions_test.py
deleted file mode 100644
index 2838e543977e94c13791a681a5a6b9bb8f4110dc..0000000000000000000000000000000000000000
--- a/spaces/suchun/chatGPT_acdemic/crazy_functions/crazy_functions_test.py
+++ /dev/null
@@ -1,92 +0,0 @@
-"""
-What is this?
- This file contains unit tests for the function plugins.
- How to run: python crazy_functions/crazy_functions_test.py
-"""
-
-def validate_path():
- import os, sys
- dir_name = os.path.dirname(__file__)
- root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
- os.chdir(root_dir_assume)
- sys.path.append(root_dir_assume)
-
-validate_path() # validate path so you can run from base directory
-
-from toolbox import get_conf, ChatBotWithCookies
-proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
- get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
-
-llm_kwargs = {
- 'api_key': API_KEY,
- 'llm_model': LLM_MODEL,
- 'top_p':1.0,
- 'max_length': None,
- 'temperature':1.0,
-}
-plugin_kwargs = { }
-chatbot = ChatBotWithCookies(llm_kwargs)
-history = []
-system_prompt = "Serve me as a writing and programming assistant."
-web_port = 1024
-
-
-def test_解析一个Python项目():
- from crazy_functions.解析项目源代码 import 解析一个Python项目
- txt = "crazy_functions/test_project/python/dqn"
- for cookies, cb, hist, msg in 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_解析一个Cpp项目():
- from crazy_functions.解析项目源代码 import 解析一个C项目
- txt = "crazy_functions/test_project/cpp/cppipc"
- for cookies, cb, hist, msg in 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_Latex英文润色():
- from crazy_functions.Latex全文润色 import Latex英文润色
- txt = "crazy_functions/test_project/latex/attention"
- for cookies, cb, hist, msg in Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_Markdown中译英():
- from crazy_functions.批量Markdown翻译 import Markdown中译英
- txt = "README.md"
- for cookies, cb, hist, msg in Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_批量翻译PDF文档():
- from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
- txt = "crazy_functions/test_project/pdf_and_word"
- for cookies, cb, hist, msg in 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_谷歌检索小助手():
- from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
- txt = "https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG="
- for cookies, cb, hist, msg in 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_总结word文档():
- from crazy_functions.总结word文档 import 总结word文档
- txt = "crazy_functions/test_project/pdf_and_word"
- for cookies, cb, hist, msg in 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_下载arxiv论文并翻译摘要():
- from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
- txt = "1812.10695"
- for cookies, cb, hist, msg in 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-test_解析一个Python项目()
-test_Latex英文润色()
-test_Markdown中译英()
-test_批量翻译PDF文档()
-test_谷歌检索小助手()
-test_总结word文档()
-test_下载arxiv论文并翻译摘要()
-test_解析一个Cpp项目()
-
-input("程序完成,回车退出。")
-print("退出。")
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Delcam Featurecam 2014 20 1 0 24 Torrent.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Delcam Featurecam 2014 20 1 0 24 Torrent.md
deleted file mode 100644
index 59e67dba8940ced80bbfb151a450ca271e53172f..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Delcam Featurecam 2014 20 1 0 24 Torrent.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
If you are looking for powerful and easy-to-use software for CNC machining, you might want to check out Delcam Featurecam 2014 20 1 0 24 Torrent. This software is designed to help you create high-quality parts with minimal programming time and effort. In this article, we review the features, benefits, and drawbacks of Delcam Featurecam 2014 20 1 0 24 Torrent, and provide a link to download it for free.
-
-
What is Delcam Featurecam 2014 20 1 0 24 Torrent?
-
Delcam Featurecam 2014 20 1 0 24 Torrent is a software package that allows you to create CNC programs for various types of machines, such as mills, lathes, turn-mills, wire EDMs, and multi-axis machines. It uses a feature-based approach that automatically recognizes the features of your CAD model and generates the optimal toolpaths for them. You can also customize the toolpaths using various options and parameters. Delcam Featurecam 2014 20 1 0 24 Torrent supports a wide range of file formats, such as IGES, STEP, DXF, DWG, STL, Parasolid, SolidWorks, Solid Edge, Inventor, and CATIA.
What are the benefits of Delcam Featurecam 2014 20 1 0 24 Torrent?
-
Some of the benefits of Delcam Featurecam 2014 20 1 0 24 Torrent are:
-
-
It reduces the programming time and errors by automatically generating toolpaths based on the features of your CAD model.
-
It improves the quality and efficiency of your machining process by optimizing the toolpaths for speed, accuracy, and surface finish.
-
It supports a variety of machines and operations, such as drilling, tapping, boring, contouring, pocketing, facing, grooving, threading, engraving, and more.
-
It allows you to simulate and verify your toolpaths before sending them to the machine.
-
It integrates with other Delcam products, such as PowerMILL, PowerSHAPE, PowerINSPECT, and ArtCAM.
-
-
What are the drawbacks of Delcam Featurecam 2014 20 1 0 24 Torrent?
-
Some of the drawbacks of Delcam Featurecam 2014 20 1 0 24 Torrent are:
-
-
It requires a high-end computer system to run smoothly and efficiently.
-
It may not be compatible with some newer versions of Windows or other operating systems.
-
It may not support some advanced or complex features or geometries that require manual programming or editing.
-
It may contain some bugs or errors that affect the performance or functionality of the software.
-
-
How to download Delcam Featurecam 2014 20 1 0 24 Torrent for free?
-
If you want to download Delcam Featurecam 2014 20 1 0 24 Torrent for free, you can follow the link below. However, please be aware that downloading and using pirated software is illegal and unethical, and may expose you to viruses, malware, or other security risks. We do not endorse or recommend downloading or using Delcam Featurecam 2014 20 1 0 24 Torrent without a valid license or permission from the developer. Use it at your own risk and responsibility.
-
-Download Delcam Featurecam 2014 20 1 0 24 Torrent
-
-
Conclusion
-
Delcam Featurecam 2014 20 1 0 24 Torrent is a software package that can help you create CNC programs for various types of machines and operations. It has many features and benefits that can improve your machining quality and efficiency. However, it also has some drawbacks and limitations that you should be aware of before using it. Moreover, downloading and using pirated software is illegal and unethical, and may harm your computer or data. Therefore, we suggest that you purchase a legitimate copy of Delcam Featurecam 2014 from the official website or a trusted reseller.
-
We hope that this article has given you some useful information and insights about Delcam Featurecam 2014 20 1 0 24 Torrent. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Electromagnetic Waves By R Shevgaonkar Pdf.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Electromagnetic Waves By R Shevgaonkar Pdf.md
deleted file mode 100644
index 3d66b3ca04d619bf94bec1331416088fb5c24406..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Electromagnetic Waves By R Shevgaonkar Pdf.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
r.k.shevgaonkar, c. reddy, a.n. chandorkar, r.r.reddy, s.sridhar, a. prabhakar, s. manjunathan, effect of input pulse length on delay line performance, proc. of the 25th conference on electromagnetic and electronic pulse-simulation, ultrasonics, sensing and electromagnetic materials, june 2011, penang, malaysia.
-
In this course, Professor R. K. Shevgaonkar, Department of Electrical Engineering, IIT Bombay (NPTEL), gives 42 video lectures on the concepts of transmission lines and electromagnetic waves.
The original name of this course is "Electronics - Transmission Lines and EM Waves". The course consists of 42 video lectures on transmission lines and electromagnetic waves by Professor R. Shevgaonkar.
b. k. mohanty, dr. r.k.shevgaonkar, r.shevgaonkar, m.f.kundalkar, & a.n.chandorkar, development of transceivers for a time division duplex (tdd) based ieee 802.15.3c system, proc. of the eleventh ieee international symposium on personal, indoor and mobile radio communications, pimrc 2008, munich, germany, sept. 2008.
-
b.k. mohanty, r.shevgaonkar, r.shevgaonkar, m.f.kundalkar, & a.n.chandorkar, development of transceivers for a time division duplex (tdd) based ieee 802.15.3c system, ieee microwave and wireless components letters, vol. 18, no. 12, august 2007.
-
b.k. mohanty, r.shevgaonkar, r.shevgaonkar, m.f.kundalkar, & a.n.chandorkar, development of transceivers for a time division duplex (tdd) based ieee 802.15.3c system, proc. of the tenth ieee pimrc, moscow, russia, september 2006.
-
b.k. mohanty, dr. r.shevgaonkar, r.shevgaonkar, m.f.kundalkar, & a.n.chandorkar, development of transceivers for a time division duplex (tdd) based ieee 802.15.3c system, proc. of the ieee international symposium on personal, indoor and mobile radio communications, pimrc 2006, kyoto, japan, sep. 2006.
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ford Transit Connect Workshop Manual Downloads Torrent.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ford Transit Connect Workshop Manual Downloads Torrent.md
deleted file mode 100644
index 13e44f11827bac06d1783cd1bf82f9b8a802d0ea..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ford Transit Connect Workshop Manual Downloads Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Ford Transit Connect Workshop Manual Downloads Torrent
-
-The CD is loaded with great professional samples. These samples were recorded in the Swedish E-MU Systems Studios, which were designed by Rick Chasen and Daniel 'Nelz' Nilsson in 1999. All the tracks were recorded at 24-bit resolution using an SSL 9000-series console, sampled at 48 kHz, and digitally remastered. The samples come with comprehensive tagging so they are ready for creative use immediately after download. Thanks to E-MU Systems and to Sweden's leading specialist sound supplier, Electro-Music Sweden, for their great service and assistance in providing samples. They can be contacted at info@electro-music.se.
-
-Third in the series is "Music for Trance and Psychedelic Trance". The set includes House Music, Techno, Progressive Trance, Trance and Psychedelic Trance. It is designed to be used as a series of playlists in a DJ set, or as an extra bonus when playing a set as a vinyl DJ. Again, this set breaks out with E-MU Systems Sound Library Vol 13 Dance 2000 [E-MU] 1CD, with the same recording and tagging details as above.
-
-Fourth in the series is "House Music Sampler". This set also breaks out with E-MU Systems Sound Library Vol 13 Dance 2000 [E-MU] 1CD.
-
-
-
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/hooks/momentum_updater.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/hooks/momentum_updater.py
deleted file mode 100644
index 60437756ceedf06055ec349df69a25465738d3f0..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/hooks/momentum_updater.py
+++ /dev/null
@@ -1,493 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import annotator.uniformer.mmcv as mmcv
-from .hook import HOOKS, Hook
-from .lr_updater import annealing_cos, annealing_linear, format_param
-
-
-class MomentumUpdaterHook(Hook):
-
- def __init__(self,
- by_epoch=True,
- warmup=None,
- warmup_iters=0,
- warmup_ratio=0.9):
- # validate the "warmup" argument
- if warmup is not None:
- if warmup not in ['constant', 'linear', 'exp']:
- raise ValueError(
- f'"{warmup}" is not a supported type for warming up, valid'
- ' types are "constant" and "linear"')
- if warmup is not None:
- assert warmup_iters > 0, \
- '"warmup_iters" must be a positive integer'
- assert 0 < warmup_ratio <= 1.0, \
- '"warmup_momentum" must be in range (0,1]'
-
- self.by_epoch = by_epoch
- self.warmup = warmup
- self.warmup_iters = warmup_iters
- self.warmup_ratio = warmup_ratio
-
- self.base_momentum = [] # initial momentum for all param groups
- self.regular_momentum = [
- ] # expected momentum if no warming up is performed
-
- def _set_momentum(self, runner, momentum_groups):
- if isinstance(runner.optimizer, dict):
- for k, optim in runner.optimizer.items():
- for param_group, mom in zip(optim.param_groups,
- momentum_groups[k]):
- if 'momentum' in param_group.keys():
- param_group['momentum'] = mom
- elif 'betas' in param_group.keys():
- param_group['betas'] = (mom, param_group['betas'][1])
- else:
- for param_group, mom in zip(runner.optimizer.param_groups,
- momentum_groups):
- if 'momentum' in param_group.keys():
- param_group['momentum'] = mom
- elif 'betas' in param_group.keys():
- param_group['betas'] = (mom, param_group['betas'][1])
-
- def get_momentum(self, runner, base_momentum):
- raise NotImplementedError
-
- def get_regular_momentum(self, runner):
- if isinstance(runner.optimizer, dict):
- momentum_groups = {}
- for k in runner.optimizer.keys():
- _momentum_group = [
- self.get_momentum(runner, _base_momentum)
- for _base_momentum in self.base_momentum[k]
- ]
- momentum_groups.update({k: _momentum_group})
- return momentum_groups
- else:
- return [
- self.get_momentum(runner, _base_momentum)
- for _base_momentum in self.base_momentum
- ]
-
- def get_warmup_momentum(self, cur_iters):
-
- def _get_warmup_momentum(cur_iters, regular_momentum):
- if self.warmup == 'constant':
- warmup_momentum = [
- _momentum / self.warmup_ratio
- for _momentum in regular_momentum
- ]
- elif self.warmup == 'linear':
- k = (1 - cur_iters / self.warmup_iters) * (1 -
- self.warmup_ratio)
- warmup_momentum = [
- _momentum / (1 - k) for _momentum in regular_momentum
- ]
- elif self.warmup == 'exp':
- k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters)
- warmup_momentum = [
- _momentum / k for _momentum in regular_momentum
- ]
- return warmup_momentum
-
- if isinstance(self.regular_mom, dict):
- momentum_groups = {}
- for key, regular_momentum in self.regular_mom.items():
- momentum_groups[key] = _get_warmup_momentum(
- cur_iters, regular_momentum)
- return momentum_groups
- else:
- return _get_warmup_momentum(cur_iters, self.regular_mom)
-
- def before_run(self, runner):
- # NOTE: when resuming from a checkpoint,
- # if 'initial_momentum' is not saved,
- # it will be set according to the optimizer params
- if isinstance(runner.optimizer, dict):
- self.base_momentum = {}
- for k, optim in runner.optimizer.items():
- for group in optim.param_groups:
- if 'momentum' in group.keys():
- group.setdefault('initial_momentum', group['momentum'])
- else:
- group.setdefault('initial_momentum', group['betas'][0])
- _base_momentum = [
- group['initial_momentum'] for group in optim.param_groups
- ]
- self.base_momentum.update({k: _base_momentum})
- else:
- for group in runner.optimizer.param_groups:
- if 'momentum' in group.keys():
- group.setdefault('initial_momentum', group['momentum'])
- else:
- group.setdefault('initial_momentum', group['betas'][0])
- self.base_momentum = [
- group['initial_momentum']
- for group in runner.optimizer.param_groups
- ]
-
- def before_train_epoch(self, runner):
- if not self.by_epoch:
- return
- self.regular_mom = self.get_regular_momentum(runner)
- self._set_momentum(runner, self.regular_mom)
-
- def before_train_iter(self, runner):
- cur_iter = runner.iter
- if not self.by_epoch:
- self.regular_mom = self.get_regular_momentum(runner)
- if self.warmup is None or cur_iter >= self.warmup_iters:
- self._set_momentum(runner, self.regular_mom)
- else:
- warmup_momentum = self.get_warmup_momentum(cur_iter)
- self._set_momentum(runner, warmup_momentum)
- elif self.by_epoch:
- if self.warmup is None or cur_iter > self.warmup_iters:
- return
- elif cur_iter == self.warmup_iters:
- self._set_momentum(runner, self.regular_mom)
- else:
- warmup_momentum = self.get_warmup_momentum(cur_iter)
- self._set_momentum(runner, warmup_momentum)
-
-
-@HOOKS.register_module()
-class StepMomentumUpdaterHook(MomentumUpdaterHook):
- """Step momentum scheduler with min value clipping.
-
- Args:
- step (int | list[int]): Step to decay the momentum. If an int value is
- given, regard it as the decay interval. If a list is given, decay
- momentum at these steps.
- gamma (float, optional): Decay momentum ratio. Default: 0.5.
- min_momentum (float, optional): Minimum momentum value to keep. If
- momentum after decay is lower than this value, it will be clipped
- accordingly. If None is given, we don't perform lr clipping.
- Default: None.
- """
-
- def __init__(self, step, gamma=0.5, min_momentum=None, **kwargs):
- if isinstance(step, list):
- assert mmcv.is_list_of(step, int)
- assert all([s > 0 for s in step])
- elif isinstance(step, int):
- assert step > 0
- else:
- raise TypeError('"step" must be a list or integer')
- self.step = step
- self.gamma = gamma
- self.min_momentum = min_momentum
- super(StepMomentumUpdaterHook, self).__init__(**kwargs)
-
- def get_momentum(self, runner, base_momentum):
- progress = runner.epoch if self.by_epoch else runner.iter
-
- # calculate exponential term
- if isinstance(self.step, int):
- exp = progress // self.step
- else:
- exp = len(self.step)
- for i, s in enumerate(self.step):
- if progress < s:
- exp = i
- break
-
- momentum = base_momentum * (self.gamma**exp)
- if self.min_momentum is not None:
- # clip to a minimum value
- momentum = max(momentum, self.min_momentum)
- return momentum
-
-
-@HOOKS.register_module()
-class CosineAnnealingMomentumUpdaterHook(MomentumUpdaterHook):
-
- def __init__(self, min_momentum=None, min_momentum_ratio=None, **kwargs):
- assert (min_momentum is None) ^ (min_momentum_ratio is None)
- self.min_momentum = min_momentum
- self.min_momentum_ratio = min_momentum_ratio
- super(CosineAnnealingMomentumUpdaterHook, self).__init__(**kwargs)
-
- def get_momentum(self, runner, base_momentum):
- if self.by_epoch:
- progress = runner.epoch
- max_progress = runner.max_epochs
- else:
- progress = runner.iter
- max_progress = runner.max_iters
- if self.min_momentum_ratio is not None:
- target_momentum = base_momentum * self.min_momentum_ratio
- else:
- target_momentum = self.min_momentum
- return annealing_cos(base_momentum, target_momentum,
- progress / max_progress)
-
-
-@HOOKS.register_module()
-class CyclicMomentumUpdaterHook(MomentumUpdaterHook):
- """Cyclic momentum Scheduler.
-
- Implement the cyclical momentum scheduler policy described in
- https://arxiv.org/pdf/1708.07120.pdf
-
- This momentum scheduler usually used together with the CyclicLRUpdater
- to improve the performance in the 3D detection area.
-
- Attributes:
- target_ratio (tuple[float]): Relative ratio of the lowest momentum and
- the highest momentum to the initial momentum.
- cyclic_times (int): Number of cycles during training
- step_ratio_up (float): The ratio of the increasing process of momentum
- in the total cycle.
- by_epoch (bool): Whether to update momentum by epoch.
- """
-
- def __init__(self,
- by_epoch=False,
- target_ratio=(0.85 / 0.95, 1),
- cyclic_times=1,
- step_ratio_up=0.4,
- **kwargs):
- if isinstance(target_ratio, float):
- target_ratio = (target_ratio, target_ratio / 1e5)
- elif isinstance(target_ratio, tuple):
- target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \
- if len(target_ratio) == 1 else target_ratio
- else:
- raise ValueError('target_ratio should be either float '
- f'or tuple, got {type(target_ratio)}')
-
- assert len(target_ratio) == 2, \
- '"target_ratio" must be list or tuple of two floats'
- assert 0 <= step_ratio_up < 1.0, \
- '"step_ratio_up" must be in range [0,1)'
-
- self.target_ratio = target_ratio
- self.cyclic_times = cyclic_times
- self.step_ratio_up = step_ratio_up
- self.momentum_phases = [] # init momentum_phases
- # currently only support by_epoch=False
- assert not by_epoch, \
- 'currently only support "by_epoch" = False'
- super(CyclicMomentumUpdaterHook, self).__init__(by_epoch, **kwargs)
-
- def before_run(self, runner):
- super(CyclicMomentumUpdaterHook, self).before_run(runner)
- # initiate momentum_phases
- # total momentum_phases are separated as up and down
- max_iter_per_phase = runner.max_iters // self.cyclic_times
- iter_up_phase = int(self.step_ratio_up * max_iter_per_phase)
- self.momentum_phases.append(
- [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]])
- self.momentum_phases.append([
- iter_up_phase, max_iter_per_phase, max_iter_per_phase,
- self.target_ratio[0], self.target_ratio[1]
- ])
-
- def get_momentum(self, runner, base_momentum):
- curr_iter = runner.iter
- for (start_iter, end_iter, max_iter_per_phase, start_ratio,
- end_ratio) in self.momentum_phases:
- curr_iter %= max_iter_per_phase
- if start_iter <= curr_iter < end_iter:
- progress = curr_iter - start_iter
- return annealing_cos(base_momentum * start_ratio,
- base_momentum * end_ratio,
- progress / (end_iter - start_iter))
-
-
-@HOOKS.register_module()
-class OneCycleMomentumUpdaterHook(MomentumUpdaterHook):
- """OneCycle momentum Scheduler.
-
- This momentum scheduler usually used together with the OneCycleLrUpdater
- to improve the performance.
-
- Args:
- base_momentum (float or list): Lower momentum boundaries in the cycle
- for each parameter group. Note that momentum is cycled inversely
- to learning rate; at the peak of a cycle, momentum is
- 'base_momentum' and learning rate is 'max_lr'.
- Default: 0.85
- max_momentum (float or list): Upper momentum boundaries in the cycle
- for each parameter group. Functionally,
- it defines the cycle amplitude (max_momentum - base_momentum).
- Note that momentum is cycled inversely
- to learning rate; at the start of a cycle, momentum is
- 'max_momentum' and learning rate is 'base_lr'
- Default: 0.95
- pct_start (float): The percentage of the cycle (in number of steps)
- spent increasing the learning rate.
- Default: 0.3
- anneal_strategy (str): {'cos', 'linear'}
- Specifies the annealing strategy: 'cos' for cosine annealing,
- 'linear' for linear annealing.
- Default: 'cos'
- three_phase (bool): If three_phase is True, use a third phase of the
- schedule to annihilate the learning rate according to
- final_div_factor instead of modifying the second phase (the first
- two phases will be symmetrical about the step indicated by
- pct_start).
- Default: False
- """
-
- def __init__(self,
- base_momentum=0.85,
- max_momentum=0.95,
- pct_start=0.3,
- anneal_strategy='cos',
- three_phase=False,
- **kwargs):
- # validate by_epoch, currently only support by_epoch=False
- if 'by_epoch' not in kwargs:
- kwargs['by_epoch'] = False
- else:
- assert not kwargs['by_epoch'], \
- 'currently only support "by_epoch" = False'
- if not isinstance(base_momentum, (float, list, dict)):
- raise ValueError('base_momentum must be the type among of float,'
- 'list or dict.')
- self._base_momentum = base_momentum
- if not isinstance(max_momentum, (float, list, dict)):
- raise ValueError('max_momentum must be the type among of float,'
- 'list or dict.')
- self._max_momentum = max_momentum
- # validate pct_start
- if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float):
- raise ValueError('Expected float between 0 and 1 pct_start, but '
- f'got {pct_start}')
- self.pct_start = pct_start
- # validate anneal_strategy
- if anneal_strategy not in ['cos', 'linear']:
- raise ValueError('anneal_strategy must by one of "cos" or '
- f'"linear", instead got {anneal_strategy}')
- elif anneal_strategy == 'cos':
- self.anneal_func = annealing_cos
- elif anneal_strategy == 'linear':
- self.anneal_func = annealing_linear
- self.three_phase = three_phase
- self.momentum_phases = [] # init momentum_phases
- super(OneCycleMomentumUpdaterHook, self).__init__(**kwargs)
-
- def before_run(self, runner):
- if isinstance(runner.optimizer, dict):
- for k, optim in runner.optimizer.items():
- if ('momentum' not in optim.defaults
- and 'betas' not in optim.defaults):
- raise ValueError('optimizer must support momentum with'
- 'option enabled')
- self.use_beta1 = 'betas' in optim.defaults
- _base_momentum = format_param(k, optim, self._base_momentum)
- _max_momentum = format_param(k, optim, self._max_momentum)
- for group, b_momentum, m_momentum in zip(
- optim.param_groups, _base_momentum, _max_momentum):
- if self.use_beta1:
- _, beta2 = group['betas']
- group['betas'] = (m_momentum, beta2)
- else:
- group['momentum'] = m_momentum
- group['base_momentum'] = b_momentum
- group['max_momentum'] = m_momentum
- else:
- optim = runner.optimizer
- if ('momentum' not in optim.defaults
- and 'betas' not in optim.defaults):
- raise ValueError('optimizer must support momentum with'
- 'option enabled')
- self.use_beta1 = 'betas' in optim.defaults
- k = type(optim).__name__
- _base_momentum = format_param(k, optim, self._base_momentum)
- _max_momentum = format_param(k, optim, self._max_momentum)
- for group, b_momentum, m_momentum in zip(optim.param_groups,
- _base_momentum,
- _max_momentum):
- if self.use_beta1:
- _, beta2 = group['betas']
- group['betas'] = (m_momentum, beta2)
- else:
- group['momentum'] = m_momentum
- group['base_momentum'] = b_momentum
- group['max_momentum'] = m_momentum
-
- if self.three_phase:
- self.momentum_phases.append({
- 'end_iter':
- float(self.pct_start * runner.max_iters) - 1,
- 'start_momentum':
- 'max_momentum',
- 'end_momentum':
- 'base_momentum'
- })
- self.momentum_phases.append({
- 'end_iter':
- float(2 * self.pct_start * runner.max_iters) - 2,
- 'start_momentum':
- 'base_momentum',
- 'end_momentum':
- 'max_momentum'
- })
- self.momentum_phases.append({
- 'end_iter': runner.max_iters - 1,
- 'start_momentum': 'max_momentum',
- 'end_momentum': 'max_momentum'
- })
- else:
- self.momentum_phases.append({
- 'end_iter':
- float(self.pct_start * runner.max_iters) - 1,
- 'start_momentum':
- 'max_momentum',
- 'end_momentum':
- 'base_momentum'
- })
- self.momentum_phases.append({
- 'end_iter': runner.max_iters - 1,
- 'start_momentum': 'base_momentum',
- 'end_momentum': 'max_momentum'
- })
-
- def _set_momentum(self, runner, momentum_groups):
- if isinstance(runner.optimizer, dict):
- for k, optim in runner.optimizer.items():
- for param_group, mom in zip(optim.param_groups,
- momentum_groups[k]):
- if 'momentum' in param_group.keys():
- param_group['momentum'] = mom
- elif 'betas' in param_group.keys():
- param_group['betas'] = (mom, param_group['betas'][1])
- else:
- for param_group, mom in zip(runner.optimizer.param_groups,
- momentum_groups):
- if 'momentum' in param_group.keys():
- param_group['momentum'] = mom
- elif 'betas' in param_group.keys():
- param_group['betas'] = (mom, param_group['betas'][1])
-
- def get_momentum(self, runner, param_group):
- curr_iter = runner.iter
- start_iter = 0
- for i, phase in enumerate(self.momentum_phases):
- end_iter = phase['end_iter']
- if curr_iter <= end_iter or i == len(self.momentum_phases) - 1:
- pct = (curr_iter - start_iter) / (end_iter - start_iter)
- momentum = self.anneal_func(
- param_group[phase['start_momentum']],
- param_group[phase['end_momentum']], pct)
- break
- start_iter = end_iter
- return momentum
-
- def get_regular_momentum(self, runner):
- if isinstance(runner.optimizer, dict):
- momentum_groups = {}
- for k, optim in runner.optimizer.items():
- _momentum_group = [
- self.get_momentum(runner, param_group)
- for param_group in optim.param_groups
- ]
- momentum_groups.update({k: _momentum_group})
- return momentum_groups
- else:
- momentum_groups = []
- for param_group in runner.optimizer.param_groups:
- momentum_groups.append(self.get_momentum(runner, param_group))
- return momentum_groups
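As a usage note, these hooks are normally not instantiated by hand: in mmcv-style training configs they are usually selected through the `policy` key of `momentum_config`, typically alongside a matching `lr_config`. The snippet below is a representative sketch; the momentum values mirror the cyclic defaults documented above, while the lr numbers are illustrative and not taken from this file.

```python
# Cyclic schedules as commonly paired in mmcv-based configs; momentum is cycled inversely to lr.
lr_config = dict(
    policy='cyclic',
    target_ratio=(10, 1e-4),   # illustrative highest/lowest lr ratios
    cyclic_times=1,
    step_ratio_up=0.4,
)
momentum_config = dict(
    policy='cyclic',
    target_ratio=(0.85 / 0.95, 1),  # defaults of CyclicMomentumUpdaterHook above
    cyclic_times=1,
    step_ratio_up=0.4,
)
```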
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/utils/parrots_wrapper.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/utils/parrots_wrapper.py
deleted file mode 100644
index 93c97640d4b9ed088ca82cfe03e6efebfcfa9dbf..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/utils/parrots_wrapper.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from functools import partial
-
-import torch
-
-TORCH_VERSION = torch.__version__
-
-
-def is_rocm_pytorch() -> bool:
- is_rocm = False
- if TORCH_VERSION != 'parrots':
- try:
- from torch.utils.cpp_extension import ROCM_HOME
- is_rocm = True if ((torch.version.hip is not None) and
- (ROCM_HOME is not None)) else False
- except ImportError:
- pass
- return is_rocm
-
-
-def _get_cuda_home():
- if TORCH_VERSION == 'parrots':
- from parrots.utils.build_extension import CUDA_HOME
- else:
- if is_rocm_pytorch():
- from torch.utils.cpp_extension import ROCM_HOME
- CUDA_HOME = ROCM_HOME
- else:
- from torch.utils.cpp_extension import CUDA_HOME
- return CUDA_HOME
-
-
-def get_build_config():
- if TORCH_VERSION == 'parrots':
- from parrots.config import get_build_info
- return get_build_info()
- else:
- return torch.__config__.show()
-
-
-def _get_conv():
- if TORCH_VERSION == 'parrots':
- from parrots.nn.modules.conv import _ConvNd, _ConvTransposeMixin
- else:
- from torch.nn.modules.conv import _ConvNd, _ConvTransposeMixin
- return _ConvNd, _ConvTransposeMixin
-
-
-def _get_dataloader():
- if TORCH_VERSION == 'parrots':
- from torch.utils.data import DataLoader, PoolDataLoader
- else:
- from torch.utils.data import DataLoader
- PoolDataLoader = DataLoader
- return DataLoader, PoolDataLoader
-
-
-def _get_extension():
- if TORCH_VERSION == 'parrots':
- from parrots.utils.build_extension import BuildExtension, Extension
- CppExtension = partial(Extension, cuda=False)
- CUDAExtension = partial(Extension, cuda=True)
- else:
- from torch.utils.cpp_extension import (BuildExtension, CppExtension,
- CUDAExtension)
- return BuildExtension, CppExtension, CUDAExtension
-
-
-def _get_pool():
- if TORCH_VERSION == 'parrots':
- from parrots.nn.modules.pool import (_AdaptiveAvgPoolNd,
- _AdaptiveMaxPoolNd, _AvgPoolNd,
- _MaxPoolNd)
- else:
- from torch.nn.modules.pooling import (_AdaptiveAvgPoolNd,
- _AdaptiveMaxPoolNd, _AvgPoolNd,
- _MaxPoolNd)
- return _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd
-
-
-def _get_norm():
- if TORCH_VERSION == 'parrots':
- from parrots.nn.modules.batchnorm import _BatchNorm, _InstanceNorm
- SyncBatchNorm_ = torch.nn.SyncBatchNorm2d
- else:
- from torch.nn.modules.instancenorm import _InstanceNorm
- from torch.nn.modules.batchnorm import _BatchNorm
- SyncBatchNorm_ = torch.nn.SyncBatchNorm
- return _BatchNorm, _InstanceNorm, SyncBatchNorm_
-
-
-_ConvNd, _ConvTransposeMixin = _get_conv()
-DataLoader, PoolDataLoader = _get_dataloader()
-BuildExtension, CppExtension, CUDAExtension = _get_extension()
-_BatchNorm, _InstanceNorm, SyncBatchNorm_ = _get_norm()
-_AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd = _get_pool()
-
-
-class SyncBatchNorm(SyncBatchNorm_):
-
- def _check_input_dim(self, input):
- if TORCH_VERSION == 'parrots':
- if input.dim() < 2:
- raise ValueError(
- f'expected at least 2D input (got {input.dim()}D input)')
- else:
- super()._check_input_dim(input)
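A brief note on why this wrapper exists: downstream code imports these aliases instead of reaching into `torch.nn.modules.*` (or parrots) directly, most often for `isinstance` checks on normalization and convolution layers. The sketch below illustrates that pattern; the import path mirrors the file above.

```python
import torch

from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm, _InstanceNorm

layer = torch.nn.BatchNorm2d(8)

# True for BatchNorm1d/2d/3d and SyncBatchNorm, so calling code does not need to
# enumerate every concrete norm class.
print(isinstance(layer, _BatchNorm))
print(isinstance(layer, _InstanceNorm))  # False
```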
diff --git a/spaces/systash/hashtag_and_named_entity_generator/theme.css b/spaces/systash/hashtag_and_named_entity_generator/theme.css
deleted file mode 100644
index 56d9cfc145c394b65b9f4cdc95482afb982f2360..0000000000000000000000000000000000000000
--- a/spaces/systash/hashtag_and_named_entity_generator/theme.css
+++ /dev/null
@@ -1,26 +0,0 @@
-/* Force scrollbar to always display */
-::-webkit-scrollbar {
- -webkit-appearance: none;
- width: 10px;
-}
-
-::-webkit-scrollbar-thumb {
- border-radius: 5px;
- background-color: rgba(0, 0, 0, .5);
- -webkit-box-shadow: 0 0 1px rgba(255, 255, 255, .5);
-}
-
-/* Add scrollbar to body */
-body::-webkit-scrollbar {
- width: 10px;
-}
-
-body::-webkit-scrollbar-track {
- background-color: #F5F5F5;
-}
-
-body::-webkit-scrollbar-thumb {
- background-color: #000000;
- border-radius: 10px;
- border: 2px solid #F5F5F5;
-}
\ No newline at end of file
diff --git a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/misc.py b/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/misc.py
deleted file mode 100644
index 46d7a61b4b605cb6409c3ae5b0ff9ceac5bac9ba..0000000000000000000000000000000000000000
--- a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/misc.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import torch
-import sys
-
-
-def count_lines(file_path):
- lines_num = 0
- with open(file_path, 'rb') as f:
- while True:
- data = f.read(2 ** 20)
- if not data:
- break
- lines_num += data.count(b'\n')
- return lines_num
-
-
-def flip(x, dim):
- indices = [slice(None)] * x.dim()
- indices[dim] = torch.arange(x.size(dim) - 1, -1, -1,
- dtype=torch.long, device=x.device)
- return x[tuple(indices)]
-
-
-def pooling(memory_bank, seg, pooling_type):
- seg = torch.unsqueeze(seg, dim=-1).type_as(memory_bank)
- memory_bank = memory_bank * seg
- if pooling_type == "mean":
- features = torch.sum(memory_bank, dim=1)
- features = torch.div(features, torch.sum(seg, dim=1))
- elif pooling_type == "last":
- features = memory_bank[torch.arange(memory_bank.shape[0]), torch.squeeze(torch.sum(seg, dim=1).type(torch.int64) - 1), :]
- elif pooling_type == "max":
- features = torch.max(memory_bank + (seg - 1) * sys.maxsize, dim=1)[0]
- else:
- features = memory_bank[:, 0, :]
- return features
-
-class ZeroOneNormalize(object):
- def __call__(self, img):
- return img.float().div(255)
\ No newline at end of file
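A quick usage sketch for the `pooling` helper above, showing how the `seg` mask picks which token positions contribute to the sentence feature. Shapes are illustrative; the import assumes `train/` is on `PYTHONPATH`, matching the file path in this diff:

```python
import torch
from tencentpretrain.utils.misc import pooling  # module path as in the file above

batch, seq_len, hidden = 2, 5, 8
memory_bank = torch.randn(batch, seq_len, hidden)   # encoder outputs
seg = torch.tensor([[1, 1, 1, 0, 0],                # 1 = real token, 0 = padding
                    [1, 1, 1, 1, 1]])

mean_feat = pooling(memory_bank, seg, "mean")    # masked average over real tokens
last_feat = pooling(memory_bank, seg, "last")    # hidden state of the last real token
cls_feat = pooling(memory_bank, seg, "first")    # any other value falls back to [:, 0, :]
print(mean_feat.shape, last_feat.shape, cls_feat.shape)  # each torch.Size([2, 8])
```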
diff --git a/spaces/tang155/bingo/src/lib/bots/bing/tts.ts b/spaces/tang155/bingo/src/lib/bots/bing/tts.ts
deleted file mode 100644
index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000
--- a/spaces/tang155/bingo/src/lib/bots/bing/tts.ts
+++ /dev/null
@@ -1,82 +0,0 @@
-import { sleep } from './utils'
-
-const synth = window.speechSynthesis
-
-export class TTS {
- currentText = ''
- speakText = ''
- private controller = new AbortController()
- speaking = false
- get isSpeaking() {
- return this.speaking
- }
- finished = false
- constructor() {}
- abort = () => {
- this.controller.abort()
- }
-
- reset = () => {
- this.speaking = false
- this.finished = true
- this.currentText = ''
- this.speakText = ''
- this.abort()
- }
-
- speak = (text: string) => {
- if (!synth || text?.trim()?.length < 2) {
- return
- }
- this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '')
- this.finished = false
- this.loop()
- }
-
- private async doSpeek() {
- return new Promise((resolve) => {
- const endIndex = this.finished ? this.currentText.length :
- Math.max(
- this.currentText.lastIndexOf('。'),
- this.currentText.lastIndexOf(';'),
- this.currentText.lastIndexOf('、'),
- this.currentText.lastIndexOf('?'),
- this.currentText.lastIndexOf('\n')
- )
- const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0
-
- if (startIndex >= endIndex) {
- return resolve(true)
- }
- const text = this.currentText.slice(startIndex, endIndex)
- this.speakText = text
- const utterThis = new SpeechSynthesisUtterance(text)
- this.controller.signal.onabort = () => {
- synth.cancel()
- this.finished = true
- resolve(false)
- }
-
- utterThis.onend = function (event) {
- resolve(true)
- }
-
- utterThis.onerror = function (event) {
- resolve(false)
- }
-
- const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null
- utterThis.voice = voice
- synth.speak(utterThis)
- })
- }
-
- private async loop() {
- if (this.speaking) return
- this.speaking = true
- while(!this.finished) {
- await Promise.all([sleep(1000), this.doSpeek()])
- }
- this.speaking = false
- }
-}
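The incremental logic in `doSpeek` above decides how much of the streamed reply to hand to the speech engine on each pass: cut at the last sentence-ending mark seen so far and skip whatever has already been spoken. A small Python sketch of that boundary computation, purely illustrative and not part of this repo:

```python
def next_chunk(current_text: str, spoken_text: str, finished: bool) -> str:
    """Return the next slice of streamed text to speak, mirroring doSpeek()."""
    if finished:
        end = len(current_text)
    else:
        # Cut at the last sentence-ending mark seen so far (-1 if none yet).
        end = max(current_text.rfind(ch) for ch in "。;、?\n")
    start = 0
    if spoken_text:
        start = max(0, current_text.rfind(spoken_text) + len(spoken_text))
    if start >= end:
        return ""  # nothing new and complete to speak yet
    return current_text[start:end]


print(next_chunk("你好。今天天气不错", "", finished=False))      # -> 你好
print(next_chunk("你好。今天天气不错", "你好", finished=False))  # -> "" (waits for more text)
```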
diff --git a/spaces/tang155/bingo/src/lib/isomorphic/index.ts b/spaces/tang155/bingo/src/lib/isomorphic/index.ts
deleted file mode 100644
index 738dc92f74079ab762d584fb7422a8c8c3b61547..0000000000000000000000000000000000000000
--- a/spaces/tang155/bingo/src/lib/isomorphic/index.ts
+++ /dev/null
@@ -1,17 +0,0 @@
-'use client'
-
-import Default from './browser'
-
-let exportsModel: any = {}
-
-if (process.browser) {
- Object.assign(exportsModel, require('./browser').default)
-} else {
- Object.assign(exportsModel, require('./node').default)
-}
-
-export default exportsModel! as typeof Default
-
-export const fetch: typeof Default.fetch = exportsModel!.fetch
-export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket
-export const debug: typeof Default.debug = exportsModel!.debug
diff --git a/spaces/tcfly/Flowise/README.md b/spaces/tcfly/Flowise/README.md
deleted file mode 100644
index 66793dba8fdd6d0b2427b6cbbbab90aba95a9e89..0000000000000000000000000000000000000000
--- a/spaces/tcfly/Flowise/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Flowise
-emoji: 🦀
-colorFrom: pink
-colorTo: yellow
-sdk: docker
-app_port: 3000
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/terfces0erbo/CollegeProjectV2/AutoCAD LT 2010 32 Bit Free Download [TOP].md b/spaces/terfces0erbo/CollegeProjectV2/AutoCAD LT 2010 32 Bit Free Download [TOP].md
deleted file mode 100644
index df57651b06ae0a61e03696167380bdba00b5613b..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/AutoCAD LT 2010 32 Bit Free Download [TOP].md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
How to Download AutoCAD LT 2010 32 Bit for Free
-
AutoCAD LT is a professional CAD software that allows you to create 2D and 3D designs with precision and detail. It is widely used by designers, engineers, architects, and other professionals who need to create technical drawings and plans.
If you are looking for a way to download AutoCAD LT 2010 32 bit for free, you may have some options depending on your situation. Here are some possible methods:
-
-
If you have a current subscription to a newer version of AutoCAD LT, you may be able to download previous versions as a benefit. You can sign in to your Autodesk Account at manage.autodesk.com and find your product under All Products and Services. In the product tile, click the current version and select a previous version. Then you can download your product[^1^].
-
If you are a student or an educator, you may be able to access previous versions of AutoCAD LT through the Education community. You can go to the Education community website and sign in with your education credentials. Then you can search for AutoCAD LT and select the version you want to download[^1^].
-
If you do not have a subscription or an education license, you may have to resort to other sources that offer free downloads of AutoCAD LT 2010 32 bit. However, these sources may not be reliable, safe, or legal. You may encounter malware, viruses, or other problems that could harm your computer or compromise your data. You may also violate the terms of service or the intellectual property rights of Autodesk by using unauthorized copies of their software.
-
-
Therefore, we recommend that you only download AutoCAD LT 2010 32 bit from official or trusted sources. If you need more information or assistance, you can contact Autodesk support or visit their forums[^3^]. Alternatively, you can consider upgrading to the latest version of AutoCAD LT that offers more features, compatibility, and security.
-
-
Why Choose AutoCAD LT 2010 32 Bit?
-
AutoCAD LT 2010 32 bit is a version of AutoCAD LT that was released in 2009. It is compatible with Windows XP, Vista, and 7 operating systems. It has some features that make it a good choice for 2D drafting and design, such as:
-
-
-
AutoLISP: This is a programming language that allows you to automate tasks and customize AutoCAD LT to suit your needs. You can use AutoLISP to create macros, scripts, functions, and commands that can streamline your workflows and enforce CAD standards[^1^].
-
Dynamic Blocks: These are blocks that can change their shape, size, or configuration depending on how you use them. You can create dynamic blocks with parameters and actions that control their behavior. Dynamic blocks can help you save time and reduce the number of blocks you need to create[^1^].
-
Enhanced DWG Compare: This is a feature that allows you to compare two versions of a drawing without leaving your current window. You can see the differences between the drawings highlighted in different colors. You can also copy changes from one drawing to another with a single click[^1^].
-
Cloud Storage Connectivity: This is a feature that allows you to access, preview, and open any DWG file in AutoCAD LT with Autodesk's cloud and other leading cloud storage providers. You can also save your drawings to the cloud and share them with others[^1^].
-
AutoCAD on Any Device: This is a feature that allows you to view, edit, and create drawings in AutoCAD LT on virtually any desktop, web, and mobile device. You can use the AutoCAD web app or the AutoCAD mobile app to access your drawings anytime, anywhere[^1^].
-
-
These are some of the reasons why you may want to download AutoCAD LT 2010 32 bit for free. However, you should also be aware of some of the limitations and drawbacks of this version, such as:
-
-
Lack of 3D Modeling and Rendering: AutoCAD LT 2010 32 bit does not support 3D modeling and rendering. If you need to create 3D designs or generate realistic images of your projects, you will need a more advanced version of AutoCAD or another software[^2^].
-
Lack of Toolsets: AutoCAD LT 2010 32 bit does not include any of the specialized toolsets that are available for AutoCAD. These toolsets provide industry-specific features and libraries for architecture, mechanical design, electrical design, map 3D, MEP, plant 3D, and raster design[^2^].
-
Lack of Support: AutoCAD LT 2010 32 bit is an outdated version that is no longer supported by Autodesk. This means that you will not receive any software updates, hotfixes, or technical support for this version. You may also encounter compatibility issues with newer versions of Windows or other software[^3^].
-
-
Therefore, you should weigh the pros and cons of downloading AutoCAD LT 2010 32 bit for free before making your decision. You may also want to consider upgrading to the latest version of AutoCAD LT that offers more features, compatibility, and security.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/CUR3D Maker Edition Torrent Download [Crack Serial Key.md b/spaces/terfces0erbo/CollegeProjectV2/CUR3D Maker Edition Torrent Download [Crack Serial Key.md
deleted file mode 100644
index 7b375a2c85d631ce6eeba648b2eff5cee40a418d..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/CUR3D Maker Edition Torrent Download [Crack Serial Key.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
CUR3D Maker Edition Torrent Download [Crack Serial Key
-
-Lil' Eddie City Of My Heart [iTunes Deluxe Edition] (2010) · gtachennaicitypcgamedownload. 1fdad05405
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Ih8sn0w Ireb V3 1.2 For Windows.md b/spaces/terfces0erbo/CollegeProjectV2/Ih8sn0w Ireb V3 1.2 For Windows.md
deleted file mode 100644
index faba1be20e9f34cadd5f4dee3dfc43fa97048239..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Ih8sn0w Ireb V3 1.2 For Windows.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
How to Use iH8sn0w's iREB v3.1.2 for Windows
-
iREB is a tool that allows you to bypass iTunes errors when restoring custom firmware on your iOS device. It uses the usb control msg exploit from 3.1.2 and the limera1n/steaks4uce exploit to put your device into a pwned DFU mode, which enables you to restore custom firmware without any errors.
-
iH8sn0w is a well-known hacker and developer who created iREB, as well as other tools such as iBoot32Patcher, sn0wbreeze, and f0recast. He is also one of the members of the evad3rs team that released the evasi0n jailbreak for iOS 6.x.
In this article, we will show you how to use iH8sn0w's iREB v3.1.2 for Windows to restore custom firmware on your iOS device.
-
Requirements
-
-
A Windows PC with iTunes installed.
-
A compatible iOS device that can be jailbroken with limera1n/steaks4uce exploit (iPhone 3GS, iPhone 4, iPod touch 3G, iPod touch 4G, iPad 1).
-
A custom firmware file (IPSW) for your device.
-
A USB cable to connect your device to your PC.
-
-
Steps
-
-
Download iREB v3.1.2 for Windows from here and extract the zip file.
-
Run iREB.exe as administrator and select your device from the list.
-
Follow the on-screen instructions to put your device into DFU mode.
-
Once your device is in pwned DFU mode, you will see a message saying "Waiting for custom firmware...".
-
Open iTunes and select your device from the sidebar.
-
Hold down the Shift key and click on the Restore button.
-
Browse to the custom firmware file (IPSW) that you want to restore and click Open.
-
Wait for iTunes to restore your device with the custom firmware.
-
Once the restore is done, your device will reboot and you will see a message saying "Done!".
-
Congratulations! You have successfully restored your device with custom firmware using iH8sn0w's iREB v3.1.2 for Windows.
-
Benefits of Custom Firmware
-
Custom firmware is not the same as the official firmware that comes with your device. Custom firmware is modified by third-party developers to add features, improve performance, and customize the user interface. There are many benefits of using custom firmware on your iOS device, such as:
-
-
You can access more apps and games that are not available on the official App Store, such as emulators, file managers, and tweaks.
-
You can customize the look and feel of your device with themes, icons, fonts, and widgets.
-
You can unlock your device to use it with any carrier or network.
-
You can improve the battery life and speed of your device by removing unwanted apps and services.
-
You can enhance the security and privacy of your device by blocking ads, trackers, and malware.
-
-
However, custom firmware also comes with some risks and drawbacks, such as:
-
-
You may void your warranty and lose your support from Apple or your carrier.
-
You may encounter bugs, errors, or compatibility issues with some apps and services.
-
You may lose your data or brick your device if you do not follow the instructions carefully or use incompatible files.
-
You may need to update your custom firmware manually or lose some features when a new official firmware is released.
-
-
Therefore, before you decide to install custom firmware on your device, you should weigh the pros and cons carefully and do some research on the best custom firmware for your device and needs. You should also backup your data and follow the steps in this article precisely to avoid any problems.
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/text-generation-inference/chat-ui/README.md b/spaces/text-generation-inference/chat-ui/README.md
deleted file mode 100644
index 9b6f929ebdfd6077806faf4ede5959963102b5b9..0000000000000000000000000000000000000000
--- a/spaces/text-generation-inference/chat-ui/README.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: chat-ui
-emoji: 🔥
-colorFrom: purple
-colorTo: purple
-sdk: docker
-pinned: false
-license: other
----
-
-# create-svelte
-
-Everything you need to build a Svelte project, powered by [`create-svelte`](https://github.com/sveltejs/kit/tree/master/packages/create-svelte).
-
-## Creating a project
-
-If you're seeing this, you've probably already done this step. Congrats!
-
-```bash
-# create a new project in the current directory
-npm create svelte@latest
-
-# create a new project in my-app
-npm create svelte@latest my-app
-```
-
-## Developing
-
-Once you've created a project and installed dependencies with `npm install` (or `pnpm install` or `yarn`), start a development server:
-
-```bash
-npm run dev
-
-# or start the server and open the app in a new browser tab
-npm run dev -- --open
-```
-
-## Building
-
-To create a production version of your app:
-
-```bash
-npm run build
-```
-
-You can preview the production build with `npm run preview`.
-
-> To deploy your app, you may need to install an [adapter](https://kit.svelte.dev/docs/adapters) for your target environment.
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Demon Hunter Shadow World Premium Mod APK and Become the Ultimate Hero.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Demon Hunter Shadow World Premium Mod APK and Become the Ultimate Hero.md
deleted file mode 100644
index 0c53e26b791d938e8fbe16ff9c4d2422ce8fbe16..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Demon Hunter Shadow World Premium Mod APK and Become the Ultimate Hero.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
Download Demon Hunter: Shadow World Premium Mod APK
-
If you are a fan of dark fantasy and action games, you might have heard of Demon Hunter: Shadow World, a popular mobile game developed by EA Publishing. This game lets you unleash your inner warrior in a world invaded by demons, undead, and other creatures of the night. You can choose from different classes of hunters, each with their own unique skills and weapons, and fight against hordes of enemies in various modes and locations. You can also customize your hunter with various outfits, accessories, and upgrades to suit your style and preferences.
-
However, as much as this game is fun and exciting, it also has some limitations that might hinder your enjoyment. For example, some features are locked behind a paywall, such as premium outfits, weapons, and skills. You also have to deal with ads, in-game purchases, and internet connection requirements. These factors can make the game less satisfying and more frustrating for some players.
-
Fortunately, there is a way to overcome these limitations and enjoy the game to its fullest potential. You can download Demon Hunter: Shadow World Premium Mod APK, a modified version of the original game that offers additional benefits and enhancements that are not present in the official release. In this article, we will tell you more about the features and benefits of this mod apk, as well as how to download and install it on your device. Read on to find out more!
-
Features of Demon Hunter: Shadow World
-
Before we dive into the benefits of the mod apk version, let us first review the features of the original game. Demon Hunter: Shadow World is an action-packed mobile game that offers a super satisfying combat system, diverse content, and thrilling PvP battles. Here are some of the main features of this game:
-
Action-packed gameplay
-
Demon Hunter: Shadow World features a super satisfying combat system that lets you slash, shoot, and cast skills with intuitive controls and responsive feedback. You can use combos, dodges, counters, and special moves to defeat your enemies and earn rewards. The game also has a dynamic camera that adjusts to your movements and actions, creating a cinematic experience.
-
Multiple classes and customization options
-
Demon Hunter: Shadow World allows you to choose from different classes of hunters, each with their own unique skills and weapons. You can play as a swordmaster, a gunslinger, a mage, or a priest, depending on your preference and playstyle. You can also customize your hunter with various outfits, accessories, and upgrades to suit your style and preferences. You can collect and craft different items and equipment to enhance your power and abilities.
-
Diverse modes and locations
-
Demon Hunter: Shadow World offers a variety of modes and locations to explore and enjoy. You can play solo or co-op missions, challenge bosses and dungeons, or join guilds and events. You can also travel to different locations, such as forests, deserts, cities, and castles, each with their own enemies and secrets. The game has a rich and immersive story that unfolds as you progress through the game.
-
-
Thrilling PvP battles
-
Demon Hunter: Shadow World also features a thrilling PvP mode that lets you test your skills and strategies against other players from around the world. You can join ranked matches, tournaments, or custom games, and compete for glory and rewards. You can also chat with other players, form alliances, or challenge rivals. The game has a fair and balanced matchmaking system that ensures you have a fun and competitive experience.
-
Benefits of Demon Hunter: Shadow World Premium Mod APK
-
Now that we have covered the features of the original game, let us move on to the benefits of the mod apk version. Demon Hunter: Shadow World Premium Mod APK is a modified version of the original game that offers additional benefits and enhancements that are not present in the official release. Here are some of the main benefits of this mod apk:
-
Access to premium features for free
-
One of the biggest benefits of Demon Hunter: Shadow World Premium Mod APK is that it gives you access to premium features for free. This means that you can unlock and use all the premium outfits, weapons, and skills that are otherwise locked behind a paywall. You can also enjoy unlimited VIP privileges, such as faster leveling, more rewards, and exclusive events. You can experience the game without any limitations or restrictions.
-
Removal of ads and other annoyances
-
Another benefit of Demon Hunter: Shadow World Premium Mod APK is that it removes all the ads and other annoyances that might interrupt your gameplay. This means that you can play the game without any pop-ups, banners, or videos that might distract you or slow down your device. You can also skip the loading screens, tutorials, and other unnecessary elements that might waste your time or resources. You can enjoy the game without any interruptions or frustrations.
-
Unlimited resources and in-app purchases
-
A third benefit of Demon Hunter: Shadow World Premium Mod APK is that it gives you unlimited resources and in-app purchases. This means that you can have unlimited coins, gems, energy, and other currencies that you can use to buy or upgrade anything in the game. You can also have unlimited access to all the items and equipment in the shop, as well as all the in-app purchases that might enhance your gameplay. You can enjoy the game without any worries or limitations.
-
Offline functionality and compatibility
-
A fourth benefit of Demon Hunter: Shadow World Premium Mod APK is that it allows you to play the game offline and on any device. This means that you can play the game without an internet connection, which is useful if you have a poor or unstable connection, or if you want to save your data or battery. You can also play the game on any device, regardless of its specifications or operating system. The mod apk is optimized to run smoothly and efficiently on any device.
-
How to Download and Install Demon Hunter: Shadow World Premium Mod APK
-
Now that we have discussed the benefits of Demon Hunter: Shadow World Premium Mod APK, let us show you how to download and install it on your device. The process is simple and easy, but you need to follow some steps carefully to avoid any errors or issues. Here are the steps to download and install Demon Hunter: Shadow World Premium Mod APK:
-
Step 1: Find a reliable source
-
The first step is to find a reliable source where you can download Demon Hunter: Shadow World Premium Mod APK. There are many websites that offer mod apk files for various games, but not all of them are trustworthy or safe. Some of them might contain viruses, malware, or spyware that might harm your device or steal your personal information. Therefore, you need to be careful and do some research before downloading anything from an unknown source.
-
One of the best sources where you can download Demon Hunter: Shadow World Premium Mod APK is [text], a website that provides high-quality mod apk files for various games. This website is trusted by millions of users worldwide and has a reputation for being safe and secure. You can download Demon Hunter: Shadow World Premium Mod APK from this website without any worries or risks.
-
Step 2: Download the mod apk file
-
The second step is to download the mod apk file from [text]. To do this, you need to visit the website and search for Demon Hunter: Shadow World Premium Mod APK. You will see a list of results that match your query. You need to select the one that has the latest version of the mod apk file, which is usually the first or second result. You can also check the details and reviews of the mod apk file to make sure it is what you are looking for.
-
Once you have selected the result, you will be redirected to a download page where you can see a download button. You need to click on the download button and wait for a few seconds until the download starts. The mod apk file will be saved in your device's download folder or any other location that you have specified.
-
Step 3: Enable unknown sources on your device
-
The third step is to enable unknown sources on your device. This is a security setting that prevents you from installing apps from sources other than the official app store. However, since you are installing a mod apk file from an external source, you need to enable this setting to allow the installation. To do this, you need to follow these steps:
-
-
Go to your device's settings and look for security or privacy options.
-
Find and tap on the option that says unknown sources or allow installation from unknown sources.
-
Toggle the switch or check the box to enable this option.
-
Confirm your choice by tapping on OK or Yes.
-
-
Once you have enabled unknown sources on your device, you can proceed to the next step.
-
Step 4: Install the mod apk file
-
The fourth step is to install the mod apk file on your device. To do this, you need to follow these steps:
-
-
Go to your device's file manager and locate the mod apk file that you have downloaded. It should be in your download folder or any other location that you have specified.
-
Tap on the mod apk file and select install.
-
Wait for a few seconds until the installation is complete.
-
If prompted, grant any permissions or access that the app might request.
-
-
Once you have installed the mod apk file on your device, you can proceed to the final step.
-
Step 5: Enjoy the game
-
The fifth and final step is to enjoy the game. To do this, you need to follow these steps:
-
-
Go to your device's app drawer and look for Demon Hunter: Shadow World icon.
-
Tap on the icon and launch the game.
-
Sign in with your account or create a new one if you don't have one.
-
Select your preferred language and region.
-
Choose your class and customize your hunter.
-
Start playing and enjoy the game!
-
-
Conclusion
-
Demon Hunter: Shadow World is an amazing mobile game that offers a super satisfying combat system, diverse content, and thrilling PvP battles. However, if you want to enjoy the game to its fullest potential, you should download Demon Hunter: Shadow World Premium Mod APK, a modified version of the original game that offers additional benefits and enhancements that are not present in the official release. You can access premium features for free, remove ads and other annoyances, have unlimited resources and in-app purchases, and play offline and on any device. All you need to do is follow the steps we have outlined above and install the mod apk file on your device. You will be able to enjoy the game without any limitations or restrictions.
-
If you are ready to unleash your inner warrior in a world invaded by demons, undead, and other creatures of the night, download Demon Hunter: Shadow World Premium Mod APK today and start playing!
-
FAQs
-
What is a mod apk?
-
A mod apk is a modified version of an original app or game that offers additional benefits and enhancements that are not present in the official release. A mod apk can unlock premium features, remove ads, provide unlimited resources, or add new content or functionality to an app or game.
-
Is Demon Hunter: Shadow World mod apk safe to use?
-
Demon Hunter: Shadow World mod apk is safe to use as long as you download it from a reliable source, such as [text]. This website provides high-quality mod apk files for various games that are tested and verified by millions of users worldwide. You can download Demon Hunter: Shadow World mod apk from this website without any worries or risks.
-
What are the minimum requirements to play Demon Hunter: Shadow World mod apk?
-
The minimum requirements to play Demon Hunter: Shadow World mod apk are as follows:
-
-
An Android device with a minimum of 4 GB of RAM and 2 GB of free storage space.
-
An internet connection to download and install the mod apk file, as well as to access some online features of the game.
-
A compatible operating system, such as Android 4.4 or higher.
-
-
How can I update Demon Hunter: Shadow World mod apk?
-
To update Demon Hunter: Shadow World mod apk, you need to follow the same steps as you did to download and install it. You need to visit [text] and look for the latest version of the mod apk file. You need to download and install it on your device, replacing the previous version. You might also need to enable unknown sources on your device again if you have disabled it after installing the mod apk file. You will be able to enjoy the updated features and content of the game.
-
How can I contact the developer of Demon Hunter: Shadow World mod apk?
-
If you have any questions, feedback, or issues regarding Demon Hunter: Shadow World mod apk, you can contact the developer of the mod apk file through their email address, which is [text]. You can also visit their website, which is [text], to find more information and support for the mod apk file. The developer of the mod apk file is not affiliated with EA Publishing, the developer of the original game, so you should not contact them for any matters related to the mod apk file.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download GTA San Andreas Multiplayer APK for Android 2021 - Play with Friends Online.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download GTA San Andreas Multiplayer APK for Android 2021 - Play with Friends Online.md
deleted file mode 100644
index 7fafa818a385de916e7aa9721911b5f2700b1b82..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download GTA San Andreas Multiplayer APK for Android 2021 - Play with Friends Online.md
+++ /dev/null
@@ -1,70 +0,0 @@
-
-
-
-
-
GTA San Andreas Multiplayer Android Download 2021 APK
-
Do you love playing GTA San Andreas on your PC or console? Do you wish you could play it online with other players from around the world? If yes, then you should try GTA San Andreas Multiplayer (SAMP) on your Android device.
-
GTA San Andreas Multiplayer (SAMP) is a mod for GTA San Andreas that allows you to play online with other players on dedicated servers. It was originally released for PC in 2006 and has since become one of the most popular multiplayer mods for GTA fans.
-
Why is GTA SAMP so popular among gamers? Because it gives you the freedom to play GTA San Andreas in any way you want. You can choose from hundreds of servers that offer different game modes, such as deathmatch, roleplay, racing, zombie survival, and more. You can also customize your character's appearance, clothes, weapons, vehicles, and even create your own server with your own rules and settings.
-
What are the benefits of playing GTA SAMP on Android devices? Well, for one thing, you can play it anywhere and anytime you want. You don't need a PC or a console to enjoy GTA SAMP online. You just need a compatible Android device, the original GTA San Andreas game installed, and an internet connection. You can also use touchscreen controls or external controllers to play GTA SAMP on your Android device. And don't worry about performance issues or battery drain, because GTA SAMP is optimized for low-end devices and battery saving mode.
-
-
How to download GTA SAMP Android APK 2021
-
If you are interested in playing GTA SAMP on your Android device, you will need to download the GTA SAMP Android APK file from a trusted source. This is a modified version of the original GTA San Andreas game that allows you to connect to GTA SAMP servers and play online with other players. Here are the requirements and steps to download and install GTA SAMP Android APK 2021.
-
Requirements
-
-
A compatible Android device running Android 4.0 or higher
-
The original GTA San Andreas game installed on your device (you can buy it from Google Play Store or download it from other sources)
-
An internet connection (preferably Wi-Fi or 4G)
-
-
Steps
-
-
Download the GTA SAMP Android APK file from a trusted source. You can search for it on Google or use this link: GTA SAMP Android APK 2021 Download
-
Install the GTA SAMP Android APK file on your device. You may need to enable unknown sources in your device settings to allow the installation of apps from outside the Google Play Store.
-
Launch the GTA SAMP Android app and select a server to join. You can browse through the list of servers by name, description, ping, players, and game mode. You can also filter the servers by language, country, or favorite.
-
Enjoy playing GTA SAMP online with other players. You can chat with them using text or voice messages, join gangs or alliances, participate in events and activities, and have fun in the vast open world of San Andreas.
-
Features of GTA SAMP Android APK 2021
-
Now that you know how to download and install GTA SAMP Android APK 2021, you might be wondering what are the features that make it so awesome. Well, here are some of the main features of GTA SAMP Android APK 2021 that you can enjoy on your Android device.
-
Multiplayer mode
-
The most obvious and exciting feature of GTA SAMP Android APK 2021 is the multiplayer mode. You can play with up to 1000 players on a single server, which is much more than the original GTA San Andreas game. You can also choose from different game modes, such as deathmatch, roleplay, racing, zombie survival, and more. Each game mode has its own rules, objectives, and challenges that you can try out. You can also chat with other players using text or voice messages, and make friends or enemies along the way.
-
Customization
-
Another feature of GTA SAMP Android APK 2021 is the customization. You can customize your character's appearance, clothes, weapons, vehicles, and even create your own server with your own rules and settings. You can change your character's skin, hair, face, tattoos, accessories, and more. You can also buy or steal different weapons, vehicles, and items from the game world or other players. You can also add mods and scripts to enhance your gameplay experience, such as new maps, vehicles, weapons, skins, missions, etc.
-
Compatibility
-
The last feature of GTA SAMP Android APK 2021 is the compatibility. You don't need a high-end device to play GTA SAMP on your Android device. GTA SAMP Android APK 2021 is compatible with most Android devices running Android 4.0 or higher. It also supports both touchscreen and external controllers, so you can choose the control scheme that suits you best. It is also optimized for low-end devices and battery saving mode, so you don't have to worry about performance issues or battery drain.
Q: Is GTA SAMP Android APK 2021 safe to download and install?
-
A: Yes, GTA SAMP Android APK 2021 is safe to download and install, as long as you download it from a trusted source. You should avoid downloading it from unknown or suspicious websites, as they may contain viruses or malware that could harm your device or steal your data. You should also scan the APK file with an antivirus app before installing it on your device.
-
Q: Do I need to root my Android device to play GTA SAMP Android APK 2021?
-
A: No, you don't need to root your Android device to play GTA SAMP Android APK 2021. You just need to enable unknown sources in your device settings to allow the installation of apps from outside the Google Play Store. However, some mods or scripts may require root access to work properly, so you should be careful when using them.
-
Q: Can I play GTA SAMP Android APK 2021 offline?
-
A: No, you can't play GTA SAMP Android APK 2021 offline. You need an internet connection to connect to GTA SAMP servers and play online with other players. You can use Wi-Fi or 4G to play GTA SAMP online, but you should avoid using 3G or 2G, as they may cause lag or disconnection issues.
-
Q: How can I update GTA SAMP Android APK 2021?
-
A: You can update GTA SAMP Android APK 2021 by downloading the latest version of the APK file from a trusted source and installing it on your device. You should also check the official website or social media pages of GTA SAMP for any news or announcements about updates or new features.
-
Q: How can I contact the developers or admins of GTA SAMP Android APK 2021?
-
A: You can contact the developers or admins of GTA SAMP Android APK 2021 by visiting their official website or social media pages. You can also join their forums or discord servers and ask them questions or give them feedback. You can also report any bugs or issues that you encounter while playing GTA SAMP Android APK 2021.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Cakewalk Sonar X2 Producer.md b/spaces/tioseFevbu/cartoon-converter/scripts/Cakewalk Sonar X2 Producer.md
deleted file mode 100644
index 6234ee9b93dacc313353ae6c512e31674cdd3e8f..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Cakewalk Sonar X2 Producer.md
+++ /dev/null
@@ -1,171 +0,0 @@
-
-
-
-
-
Cakewalk Sonar X2 Producer: A Comprehensive Review
-
If you are looking for a digital audio workstation (DAW) software that can handle every aspect of music production, from recording to mixing to mastering, you might want to check out Cakewalk Sonar X2 Producer. This software is designed for professional musicians, producers, engineers, and composers who want to create high-quality music in any genre. In this article, we will review the features and benefits of Cakewalk Sonar X2 Producer, as well as its pros and cons, comparison with other DAWs, pricing and availability. By the end of this article, you will have a better idea of whether or not Cakewalk Sonar X2 Producer is the right DAW for you.
-
Features and Benefits of Cakewalk Sonar X2 Producer
-
Cakewalk Sonar X2 Producer is packed with features and tools that can help you create amazing music. Here are some of the main features and benefits of this software:
-
Skylight Interface
-
The Skylight interface is one of the most distinctive features of Cakewalk Sonar X2 Producer. It is a user-friendly and flexible workspace that allows you to move seamlessly among different elements of music production, such as recording, editing, mixing, mastering, etc. You can customize the interface according to your preferences and workflow by docking, undocking, resizing, collapsing, or expanding any window or pane. You can also switch between different views, such as Track View, Piano Roll View, Staff View, Console View, etc., with a single click. The Skylight interface also features a Smart Grid that automatically adapts to your zoom level and snap settings, making it easier to align your clips and events.
-
ProChannel
-
The ProChannel is a modular and expandable channel strip that gives you complete control over your sound. It is available on every track and bus in Cakewalk Sonar X2 Producer, and it includes a
gate, a compressor, an equalizer, a tube saturation module, and a reverb module. You can also add more modules, such as the Console Emulator, the Softube Saturation Knob, the BREVERB SONAR, the R-MIX SONAR, etc., to expand your sonic possibilities. The ProChannel also allows you to change the order of the modules, save and load presets, and copy and paste settings across tracks and buses.
-
Virtual Instruments
-
Cakewalk Sonar X2 Producer comes with 20 virtual instruments that can help you create and edit sounds for any genre of music. These include:
-
-
Dimension Pro: A powerful and versatile synthesizer that offers over 1500 presets of acoustic and electric instruments, pads, atmospheres, basses, leads, etc.
-
Rapture: A wavetable synthesizer that combines analog warmth with digital precision, offering over 500 presets of rich and expressive sounds.
-
Z3TA+ 2: A classic synthesizer that has been updated with a modern interface and new features, offering over 1000 presets of vintage and futuristic sounds.
-
Addictive Drums: A realistic and flexible drum machine that features over 100 drum kits and 1300 grooves, as well as a built-in mixer and effects.
-
Session Drummer 3: A simple and easy-to-use drum machine that features over 20 drum kits and 700 patterns, as well as a drag-and-drop interface.
-
TruePianos Amber: A high-quality piano instrument that features a realistic sound and a low CPU usage.
-
Lounge Lizard SONAR: An electric piano instrument that emulates the classic Rhodes and Wurlitzer sounds.
-
Strum Acoustic Session: An acoustic guitar instrument that features realistic strumming patterns and chords.
-
Studio Instruments: A collection of four instruments that include a bass, a drum kit, an electric piano, and a string section.
-
TTS-1: A general MIDI synthesizer that features 256 sounds and 9 drum kits.
-
And more: Cakewalk Sonar X2 Producer also includes other virtual instruments, such as Cakewalk Sound Center, Cakewalk Tonespace 2, DropZone, Square I, Roland GrooveSynth, etc.
-
-
Audio and MIDI Effects
-
Cakewalk Sonar X2 Producer also comes with 59 audio and MIDI effects processors that can help you enhance your tracks. These include:
-
-
Sonitus FX Suite: A collection of 10 effects processors that cover the essential needs of any music production, such as delay, reverb, compressor, gate, equalizer, flanger, phaser, wahwah, modulator, and surround.
-
VX-64 Vocal Strip: A vocal processing tool that features a doubler, a de-esser, a compressor, an equalizer, a tube saturation module, and a delay module.
-
TL-64 Tube Leveler: A tube emulation tool that adds warmth and character to your tracks.
-
Perfect Space Convolution Reverb: A reverb processor that uses impulse responses to create realistic acoustic spaces.
-
PX-64 Percussion Strip: A percussion processing tool that features a transient shaper, a compressor, an equalizer, a tube saturation module, and a delay module.
-
Boost 11 Peak Limiter: A peak limiter that maximizes the loudness of your tracks without distorting them.
-
Guitar Rig 4 LE: A guitar amp and effects simulator that features 3 amps, 12 cabinets, 9 effects pedals, and a tuner.
-
Amp Simulators: Cakewalk Sonar X2 Producer also includes other amp simulators, such as TH2 Producer, TH2 SONAR, and Overloud Breverb 2, that can emulate various types of guitar amps, cabinets, and effects.
-
MIDI Effects: Cakewalk Sonar X2 Producer also comes with several MIDI effects, such as arpeggiator, chord analyzer, echo delay, MIDI filter, quantizer, velocity, etc., that can manipulate and transform your MIDI data.
-
-
Matrix View
-
The Matrix View is a feature that allows you to trigger and remix loops and clips in real-time. You can drag and drop audio or MIDI clips from the Track View or the Browser into the Matrix cells, and then trigger them individually or in groups using your mouse, keyboard, or MIDI controller. You can also record your performance and capture it as a new track in the Track View. The Matrix View is ideal for creating variations, transitions, mash-ups, or live performances of your music.
-
Smart Tool
-
The Smart Tool is a feature that allows you to perform multiple editing tasks with a single tool. Depending on where you position the cursor on the clip or event, the Smart Tool will change its function accordingly. For example, you can use the Smart Tool to select, move, split, trim, fade, slip-stretch, or draw envelopes on your clips or events. The Smart Tool also works with the Smart Grid to snap your edits to the grid lines.
-
Console Emulator
-
The Console Emulator is a feature that allows you to add analog warmth and character to your mix. It is a ProChannel module that emulates the sound of three legendary analog consoles: the SSL 4000 G+, the Neve 88RS, and the API 3124. You can choose between different console models for each track or bus, and adjust the drive and crosstalk parameters to achieve the desired sonic flavor. The Console Emulator also features a trim control that lets you adjust the input gain of each channel.
-
-
R-MIX SONAR
-
The R-MIX SONAR is a feature that allows you to manipulate the frequency and panning of any audio source. It is a ProChannel module that uses Roland's proprietary V-Remastering technology to analyze and isolate different elements of an audio file. You can use R-MIX SONAR to adjust the volume, pan, pitch, or reverb of any frequency band or region of an audio file. You can also use R-MIX SONAR to remove or extract vocals, instruments, or noises from an audio file.
-
FX Chains
-
The FX Chains are a feature that allows you to create and save complex effect routings. You can insert multiple effects processors into a single FX Chain, and then adjust their parameters using a custom interface. You can also assign knobs, sliders, buttons, or switches to control multiple parameters at once. You can save your FX Chains as presets and recall them later for different tracks or projects. You can also share your FX Chains with other users online.
-
Automation and Take Lanes
-
The Automation and Take Lanes are features that allow you to record and edit parameter changes and multiple takes. You can use automation to automate any parameter of any track or effect processor, such as volume, pan, mute, solo, send level, effect bypass, etc. You can record automation in real-time using your mouse, keyboard, or MIDI controller, or draw automation curves using the Smart Tool. You can also edit automation data using the Automation Lanes, which show the automation envelopes for each track or bus. You can use Take Lanes to record multiple takes of the same track, and then comp them together using the Smart Tool. You can also edit take data using the Take Lanes, which show the take clips for each track.
-
SoundCloud, MusicXML, and Export Options
-
Cakewalk Sonar X2 Producer also allows you to share your music with the world. You can upload your tracks directly to SoundCloud, a popular online platform for music distribution and collaboration. You can also export your tracks as MusicXML files, a standard format for exchanging musical notation data. You can also export your tracks as audio files in various formats, such as WAV, MP3, WMA, OGG, FLAC, etc., or as video files in various formats, such as AVI, WMV, MOV, etc.
-
Pros and Cons of Cakewalk Sonar X2 Producer
-
Cakewalk Sonar X2 Producer is a powerful and versatile DAW software that has many advantages, but also some disadvantages. Here are some of the pros and cons of this software:
-
Pros
-
-
User-friendly and flexible interface: The Skylight interface is one of the best features of Cakewalk Sonar X2 Producer. It allows you to customize your workspace according to your preferences and workflow, and switch between different views and modes with ease.
-
Modular and expandable channel strip: The ProChannel is another great feature of Cakewalk Sonar X2 Producer. It gives you complete control over your sound with a variety of modules that you can add, remove, reorder, or bypass. It also features a Console Emulator that adds analog warmth and character to your mix.
-
Rich and diverse virtual instruments and effects: Cakewalk Sonar X2 Producer comes with 20 virtual instruments and 59 audio and MIDI effects processors that cover every aspect of sound creation and enhancement. You can create and edit sounds for any genre of music with these tools.
-
Real-time loop and clip triggering: The Matrix View is another unique feature of Cakewalk Sonar X2 Producer. It allows you to trigger and remix loops and clips in real-time, creating variations, transitions, mash-ups, or live performances of your music.
-
Multiple editing tasks with a single tool: The Smart Tool is another handy feature of Cakewalk Sonar X2 Producer. It allows you to perform multiple editing tasks with a single tool, depending on where you position the cursor on the clip or event. It also works with the Smart Grid to snap your edits to the grid lines.
-
Frequency and panning manipulation: The R-MIX SONAR is another innovative feature of Cakewalk Sonar X2 Producer. It allows you to manipulate the frequency and panning of any audio source, using Roland's proprietary V-Remastering technology. You can use R-MIX SONAR to remove or extract vocals, instruments, or noises from an audio file.
-
Complex effect routings: The FX Chains are another useful feature of Cakewalk Sonar X2 Producer. They allow you to create and save complex effect routings with a custom interface. You can also assign knobs, sliders, buttons, or switches to control multiple parameters at once. You can also share your FX Chains with other users online.
-
Parameter changes and multiple takes: The Automation and Take Lanes are another essential feature of Cakewalk Sonar X2 Producer. They allow you to record and edit parameter changes and multiple takes with ease. You can use automation to automate any parameter of any track or effect processor, and use take lanes to record multiple takes of the same track and comp them together.
-
Easy music sharing: Cakewalk Sonar X2 Producer also allows you to share your music with the world. You can upload your tracks directly to SoundCloud, export your tracks as MusicXML files, or export your tracks as audio or video files in various formats.
-
-
Cons
-
-
Steep learning curve: Cakewalk Sonar X2 Producer is a powerful and versatile DAW software, but it also has a steep learning curve. It may take some time and effort to master all the features and tools of this software, especially if you are a beginner or coming from another DAW. You may need to watch tutorials, read manuals, or join forums to learn how to use this software effectively.
-
High system requirements: Cakewalk Sonar X2 Producer is a resource-intensive software that requires a high-performance computer system to run smoothly. You may need to upgrade your hardware, such as CPU, RAM, hard drive, sound card, etc., to meet the minimum or recommended system requirements of this software. You may also need to optimize your system settings, such as buffer size, sample rate, latency, etc., to avoid glitches, crashes, or errors.
-
Limited compatibility: Cakewalk Sonar X2 Producer is a Windows-only software that does not support Mac OS or Linux platforms. You may need to use a dual-boot system or a virtual machine to run this software on a non-Windows computer. You may also encounter some compatibility issues with some third-party plugins or hardware devices that are not supported by this software.
-
-
Comparison with Other DAWs
-
Cakewalk Sonar X2 Producer is one of the many DAWs available in the market today. Each DAW has its own strengths and weaknesses, and different users may prefer different DAWs depending on their needs and preferences. Here are some of the main differences between Cakewalk Sonar X2 Producer and other popular DAWs:
-
-
-
DAW
-
Differences
-
-
-
Pro Tools
-
Pro Tools is one of the most widely used DAWs in the professional music industry. It is known for its high-quality audio recording and editing capabilities, as well as its industry-standard format and compatibility. However, Pro Tools is also one of the most expensive DAWs in the market, and it requires a proprietary hardware device called an iLok to run. Pro Tools also has fewer virtual instruments and effects than Cakewalk Sonar X2 Producer, and it does not have features such as the Matrix View, the R-MIX SONAR, or the FX Chains.
-
-
-
Logic Pro
-
Logic Pro is one of the most popular DAWs for Mac users. It is known for its intuitive and elegant interface, as well as its rich and diverse collection of virtual instruments and effects. However, Logic Pro is only available for Mac OS platforms, and it does not support Windows or Linux platforms. Logic Pro also has fewer audio editing features than Cakewalk Sonar X2 Producer, and it does not have features such as the ProChannel, the Console Emulator, or the R-MIX SONAR.
-
-
-
Ableton Live
-
Ableton Live is one of the most innovative DAWs for live performance and electronic music production. It is known for its unique Session View that allows you to trigger and remix clips in real-time, as well as its powerful warping and time-stretching features that allow you to manipulate audio in creative ways. However, Ableton Live is also one of the most expensive DAWs in the market, and it has fewer audio recording and editing features than Cakewalk Sonar X2 Producer. Ableton Live also has fewer virtual instruments and effects than Cakewalk Sonar X2 Producer, and it does not have features such as the ProChannel, the Console Emulator, or the FX Chains.
-
-
FL Studio
-
FL Studio is one of the most user-friendly and affordable DAWs for beginners and hobbyists. It is known for its easy-to-use and colorful interface, as well as its step sequencer and piano roll that allow you to create and edit beats and melodies quickly. However, FL Studio is also one of the most limited DAWs in terms of audio recording and editing features, and it does not have features such as the ProChannel, the Console Emulator, the R-MIX SONAR, or the FX Chains. FL Studio also has fewer virtual instruments and effects than Cakewalk Sonar X2 Producer, and it does not have features such as the Matrix View, the Smart Tool, or the Automation and Take Lanes.
-
-
-
Pricing and Availability of Cakewalk Sonar X2 Producer
-
Cakewalk Sonar X2 Producer is available for purchase online from the official Cakewalk website or from authorized dealers. It is priced at $499 USD, which is reasonable for a professional DAW with this many features and tools, and occasional discounts and promotions can lower that price. You can also download a free trial from the official Cakewalk website that lets you use the software for 30 days with no limitations.
-
Conclusion
-
Cakewalk Sonar X2 Producer is a powerful and versatile DAW that can handle every stage of music production, from recording to mixing to mastering. It offers a user-friendly and flexible interface, a modular and expandable channel strip, a rich and diverse collection of virtual instruments and effects, real-time loop and clip triggering, multi-purpose editing with a single tool, frequency and panning manipulation, complex effect routing, lanes for automating parameters and managing multiple takes, and easy music sharing. However, it also has a steep learning curve, high system requirements, and limited platform compatibility, and it differs from other popular DAWs in its recording and editing capabilities, bundled instruments and effects, and overall interface and workflow.
-
In conclusion, Cakewalk Sonar X2 Producer is a great DAW software for professional musicians, producers, engineers, and composers who want to create high-quality music in any genre. It is especially suitable for those who value sound quality, flexibility, diversity, creativity, and performance. However, Cakewalk Sonar X2 Producer may not be the best DAW software for beginners or hobbyists who prefer simplicity, affordability, compatibility, or familiarity. Ultimately, the choice of DAW software depends on your personal needs and preferences.
-
If you are interested in buying Cakewalk Sonar X2 Producer or trying it out free for 30 days, you can visit the official Cakewalk website.
We hope you enjoyed this article and learned something new about Cakewalk Sonar X2 Producer. If you have any questions or comments, feel free to leave them below. Thank you for reading!
-
FAQs
-
Here are some of the frequently asked questions about Cakewalk Sonar X2 Producer:
-
-
What are the system requirements for Cakewalk Sonar X2 Producer?
-
The minimum system requirements for Cakewalk Sonar X2 Producer are:
-
-
Windows 7 or Windows 8 (32-bit or 64-bit)
-
Intel Core 2 Duo E8200 2.67 GHz / AMD Phenom Quad Core 9750 2.4 GHz or higher
-
2 GB RAM
-
1280x800 minimum screen resolution
-
4.5 GB hard disk space for installation
-
A DVD-ROM drive for installation from disc
-
A compatible audio interface with ASIO or WDM drivers
-
A MIDI interface and a MIDI keyboard (recommended)
-
An internet connection for registration and updates
-
-
The recommended system requirements for Cakewalk Sonar X2 Producer are:
-
-
Windows 7 or Windows 8 (64-bit)
-
Intel Core i5 / i7 / Xeon or AMD Phenom II / Athlon II / Opteron or higher
20 GB hard disk space for installation and content
-
A DVD-ROM drive for installation from disc
-
A compatible audio interface with ASIO or WDM drivers
-
A MIDI interface and a MIDI keyboard (recommended)
-
An internet connection for registration and updates
-
-
How can I upgrade from a previous version of Cakewalk Sonar to Cakewalk Sonar X2 Producer?
-
If you own a previous version of Cakewalk Sonar, such as Sonar X1, Sonar 8.5, or Sonar 8, you can upgrade to Cakewalk Sonar X2 Producer at a discounted price. You can check the upgrade options and prices on the official Cakewalk website.
-
How can I learn how to use Cakewalk Sonar X2 Producer?
-
Cakewalk Sonar X2 Producer comes with a comprehensive user manual that explains all the features and functions of the software. You can access the user manual from the Help menu in the software or download it from the official Cakewalk website.
Cakewalk Sonar X2 Producer also comes with a series of video tutorials that show you how to use the software in various scenarios. You can access the video tutorials from the Help menu in the software or watch them online on the official Cakewalk website.
Cakewalk Sonar X2 Producer also has a large and active online community of users who can help with any questions or problems you may have. On the official Cakewalk forums you can ask questions, share tips, exchange ideas, and get feedback from other users, or browse the existing topics and posts to find answers. The forums are reachable from the official Cakewalk website.
-
-
What are the alternatives to Cakewalk Sonar X2 Producer?
-
If you are not satisfied with Cakewalk Sonar X2 Producer, or you want to try other DAWs, there are many alternatives available in the market. Some of the most popular alternatives are:
-
-
Reaper: Reaper is a powerful and affordable DAW that offers a customizable and flexible interface, fast and stable performance, a rich and diverse collection of plugins and effects, comprehensive audio and MIDI recording and editing features, and wide compatibility with various formats and platforms.
-
Cubase: Cubase is a professional and versatile DAW that offers a sleek and intuitive interface, a high-quality audio engine, a large and diverse collection of virtual instruments and effects, comprehensive audio and MIDI recording and editing features, and wide compatibility with various formats and platforms.
-
Studio One: Studio One is a modern and innovative DAW that offers a streamlined and user-friendly interface, fast and smooth performance, a rich and diverse collection of virtual instruments and effects, comprehensive audio and MIDI recording and editing features, and wide compatibility with various formats and platforms.
-
Reason: Reason is a creative and fun DAW that offers a unique and colorful interface, a modular and expandable rack of virtual instruments and effects, comprehensive audio and MIDI recording and editing features, and wide compatibility with various formats and platforms.
-
-
These are just some of the alternatives to Cakewalk Sonar X2 Producer. You can find more alternatives online or by asking other users for recommendations.
-
-
b2dd77e56b
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Download 720p Maidan-E-Jung Movi TOP.md b/spaces/tioseFevbu/cartoon-converter/scripts/Download 720p Maidan-E-Jung Movi TOP.md
deleted file mode 100644
index 9ce02e3c7ab74f6db1dc037da65a33da14fec633..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Download 720p Maidan-E-Jung Movi TOP.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
Download 720p Maidan-E-Jung Movie
-
If you are a fan of Bollywood action movies, you might want to download 720p Maidan-E-Jung movie, a 1995 film directed by K.C. Bokadia and starring Dharmendra, Akshay Kumar, Karisma Kapoor and Jaya Prada. The movie is about a village head named Daata Guru (Amrish Puri) who rules over his people with an iron fist and exploits them for his own benefit. He faces opposition from a young man named Karan (Akshay Kumar) who falls in love with his daughter Tulsi (Karisma Kapoor) and vows to free the villagers from his tyranny.
Download 720p Maidan-E-Jung movie to watch how Karan fights against Daata Guru and his henchmen with the help of his father Shankar (Dharmendra) and his mother Parvati (Jaya Prada). The movie is full of thrilling action scenes, melodious songs and emotional drama. You can download 720p Maidan-E-Jung movie from various online platforms that offer high-quality video and audio. However, you should be careful of the legal and ethical issues involved in downloading pirated content and respect the rights of the original creators.
One of the highlights of the movie is its music composed by Bappi Lahiri and sung by various artists like Vinod Rathod, Udit Narayan, Kumar Sanu, Sadhana Sargam, Ila Arun and Gurdas Maan. The movie has six songs that range from romantic to folk to qawwali. Some of the popular songs are Aayo Phaganiyo[^1^], Shaam Dhal Rahi Hai[^2^] and Teetar Bole…Kiti Kiti[^3^]. The songs are well-choreographed and picturized on the lead actors and supporting cast.
-
Download 720p Maidan-E-Jung movie to enjoy the action-packed story of Karan and Tulsi's love and their struggle against Daata Guru's oppression. The movie has some memorable dialogues and performances by the veteran actors like Dharmendra, Jaya Prada and Amrish Puri. The movie also has some comic relief provided by Shakti Kapoor and Kader Khan who play Daata Guru's loyal but foolish servants. The movie is a typical masala entertainer that will keep you hooked till the end.
-
Maidan-E-Jung was released on 14th April 1995 and faced stiff competition from other movies like Karan Arjun, Dilwale Dulhania Le Jayenge and Raja. It was made on a budget of ₹32.5 million and collected ₹72.3 million at the box office[^1^]. The film was an average grosser and did not live up to the expectations of the audience and critics, who faulted its outdated plot, poor direction, weak screenplay and excessive violence, and felt it wasted the star power of its cast.
-
Download 720p Maidan-E-Jung movie only if you are a die-hard fan of the actors or the genre. The movie is not a masterpiece of cinema but a typical 90s masala flick that has some moments of entertainment and nostalgia. The movie is not for everyone and may not appeal to the modern sensibilities of the viewers. The movie is best enjoyed with a pinch of salt and a lot of suspension of disbelief.
7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/svg.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/svg.py
deleted file mode 100644
index 075150a4b586d668c1666513fbf90463cdbb11ab..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/svg.py
+++ /dev/null
@@ -1,188 +0,0 @@
-"""
- pygments.formatters.svg
- ~~~~~~~~~~~~~~~~~~~~~~~
-
- Formatter for SVG output.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pip._vendor.pygments.formatter import Formatter
-from pip._vendor.pygments.token import Comment
-from pip._vendor.pygments.util import get_bool_opt, get_int_opt
-
-__all__ = ['SvgFormatter']
-
-
-def escape_html(text):
- """Escape &, <, > as well as single and double quotes for HTML."""
-    return text.replace('&', '&amp;'). \
-                replace('<', '&lt;'). \
-                replace('>', '&gt;'). \
-                replace('"', '&quot;'). \
-                replace("'", '&#39;')
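-
-# A minimal usage sketch, added here for illustration only and not part of the
-# original pygments module: escape_html is a plain module-level helper, so it
-# can be called directly, e.g.
-#
-#     >>> escape_html('<span class="kw">if</span> a & b')
-#     '&lt;span class=&quot;kw&quot;&gt;if&lt;/span&gt; a &amp; b'
-#
-# SvgFormatter (below) relies on this escaping so that raw token text cannot
-# break the generated SVG markup.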
-
-
-class2style = {}
-
-class SvgFormatter(Formatter):
- """
- Format tokens as an SVG graphics file. This formatter is still experimental.
-    Each line of code is a ``<text>`` element with explicit ``x`` and ``y``
-    coordinates containing ``<tspan>`` elements with the individual token styles.
-
- By default, this formatter outputs a full SVG document including doctype
- declaration and the ``