-
-Download Arc2Earth for free. Arc2Earth is the premier ArcGIS extension for exporting and importing your data into the leading GeoWeb formats. 1fdad05405
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cultural History Of India By Om Prakash Pdf 22.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cultural History Of India By Om Prakash Pdf 22.md
deleted file mode 100644
index 9a474b9bf5af06828bea5f15c724e65cf48496e6..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Cultural History Of India By Om Prakash Pdf 22.md
+++ /dev/null
@@ -1,70 +0,0 @@
-
-
Cultural History Of India By Om Prakash Pdf 22: A Book Review
-
-
Cultural History Of India By Om Prakash Pdf 22 is a book that explores the various aspects of the development of Indian culture from ancient times to the present day. It is written by Om Prakash, a renowned historian and professor of history at Delhi University. The book is divided into three parts, each dealing with a different theme: religion, art, and social institutions.
The book is based on extensive research and analysis of primary and secondary sources, such as literary texts, inscriptions, coins, sculptures, paintings, monuments, etc. It also draws on the works of other eminent scholars and experts in the field of Indian history and culture. The book is written in a clear and lucid style, with ample illustrations and examples to support the arguments and facts. The book also provides a bibliography and an index for further reference.
-
-
Cultural History Of India By Om Prakash Pdf 22 is an extremely useful and informative book for anyone who is interested in learning about the rich and diverse cultural heritage of India. It covers a wide range of topics and issues, such as the Vedic religion, Buddhism, Jainism, Saivism, Vaisnavism, Islam, Sikhism, Christianity, composite culture, art and architecture, social institutions, education, economy, food and drinks, etc. It also traces the historical evolution and transformation of these aspects over time and space.
-
-
The book is not only a scholarly work but also a fascinating and engaging read that captures the essence and spirit of Indian culture. It shows how Indian culture has been shaped by various influences and factors, such as geography, environment, ethnicity, language, politics, trade, etc. It also shows how Indian culture has contributed to the world civilization and culture in various ways.
-
-
Cultural History Of India By Om Prakash Pdf 22 is therefore a must-read for all students, teachers, researchers, and enthusiasts of Indian history and culture. It is also a valuable resource for anyone who wants to understand the roots and identity of India as a nation and a civilization.
-
-
How to Download Cultural History Of India By Om Prakash Pdf 22?
-
-
If you want to download Cultural History Of India By Om Prakash Pdf 22 on your device, you can use our guide to find the best sources and methods for doing so. Remember to always use a VPN when downloading books online and check the reviews and ratings of the files before using them.
-
-
One of the best ways to download Cultural History Of India By Om Prakash Pdf 22 is to use Google Books. Google Books is a service that allows you to search and preview millions of books from libraries and publishers worldwide. You can also download some books for free or buy them online.
-
-
-
To download Cultural History Of India By Om Prakash Pdf 22 from Google Books, follow these steps:

1. Click on the "EBOOK - FREE" button on the top right corner of the page.
2. Select your preferred format from the list (PDF or EPUB).
3. Click on the "Download" button and wait for the file to download on your device.
4. Enjoy reading Cultural History Of India By Om Prakash Pdf 22 on your device.
-
-
-
How to Read Cultural History Of India By Om Prakash Pdf 22?
-
-
If you don't want to download Cultural History Of India By Om Prakash Pdf 22 on your device, you can also read it online using Google Books. Google Books allows you to read books online without downloading them. You can also access them using your browser or an app on your device.
-
-
To read Cultural History Of India By Om Prakash Pdf 22 online from Google Books, follow these steps:

1. Click on the "READ" button on the top right corner of the page.
2. Wait for the book to load in your browser, or choose "Open with" if you have an app that can read PDF or EPUB files on your device.
3. Enjoy reading Cultural History Of India By Om Prakash Pdf 22 online from Google Books.
-
-
-
Conclusion
-
-
Cultural History Of India By Om Prakash Pdf 22 is a book that explores the various aspects of the development of Indian culture from ancient times to the present day. It is written by Om Prakash, a renowned historian and professor of history at Delhi University. The book is divided into three parts, each dealing with a different theme: religion, art, and social institutions.
-
-
The book is based on extensive research and analysis of primary and secondary sources, such as literary texts, inscriptions, coins, sculptures, paintings, monuments, etc. It also draws on the works of other eminent scholars and experts in the field of Indian history and culture. The book is written in a clear and lucid style, with ample illustrations and examples to support the arguments and facts. The book also provides a bibliography and an index for further reference.
-
-
Cultural History Of India By Om Prakash Pdf 22 is an extremely useful and informative book for anyone who is interested in learning about the rich and diverse cultural heritage of India. It covers a wide range of topics and issues, such as the Vedic religion, Buddhism, Jainism, Saivism, Vaisnavism, Islam, Sikhism, Christianity, composite culture, art and architecture, social institutions, education, economy, food and drinks, etc. It also traces the historical evolution and transformation of these aspects over time and space.
-
-
The book is not only a scholarly work but also a fascinating and engaging read that captures the essence and spirit of Indian culture. It shows how Indian culture has been shaped by various influences and factors, such as geography, environment, ethnicity, language, politics, trade, etc. It also shows how Indian culture has contributed to the world civilization and culture in various ways.
-
-
Cultural History Of India By Om Prakash Pdf 22 is therefore a must-read for all students, teachers, researchers, and enthusiasts of Indian history and culture. It is also a valuable resource for anyone who wants to understand the roots and identity of India as a nation and a civilization.
-
-
We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!
-
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Challenge Your Friends and Rivals with 8 Ball Pool APK.md b/spaces/1phancelerku/anime-remove-background/Challenge Your Friends and Rivals with 8 Ball Pool APK.md
deleted file mode 100644
index a86969901bdd62299b15d815a0716edde8353f49..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Challenge Your Friends and Rivals with 8 Ball Pool APK.md
+++ /dev/null
@@ -1,131 +0,0 @@
-
-
8 Ball Pool Apkpure: A Guide to Download and Play the World's Best Pool Game
-
If you are a fan of pool games, you might have heard of 8 Ball Pool, the world's most popular online multiplayer pool game. But did you know that you can also play it on your Windows PC with Apkpure, a website that provides free and safe Android apps and games? In this article, we will show you how to download and install 8 Ball Pool Apkpure on your PC, what the features of this amazing game are, what the rules of playing it are, and some tips and tricks to help you become a master of the pool.
-
What is 8 Ball Pool Apkpure?
-
A brief introduction to 8 Ball Pool, a popular online multiplayer pool game
-
8 Ball Pool is a game developed by Miniclip that allows you to play pool with players from all over the world. You can choose from different game modes, such as 1-on-1, Tournaments, 9-Ball, or Practice, and compete for coins, cash, trophies, and exclusive items. You can also customize your cue and pool table with various designs and colors. The game has a level system that matches you with players of similar skill level, and a ranking system that shows your progress in the global leaderboard.
A brief introduction to Apkpure, a website that provides free and safe Android apps and games
-
Apkpure is a website that offers a large collection of Android apps and games that you can download for free. You can find apps and games for various categories, such as Action, Puzzle, Sports, Casual, Educational, Music, Lifestyle, Social, etc. You can also search for specific apps or games by name or keyword. All the apps and games on Apkpure are verified by their team to ensure they are safe and virus-free.
-
How to download and install 8 Ball Pool Apkpure on your Windows PC
-
To play 8 Ball Pool Apkpure on your Windows PC, you need to use an Android emulator, which is software that allows you to run Android apps and games on your PC. There are many Android emulators available, but we recommend using Gameloop, which is the official emulator of Tencent Games, the publisher of 8 Ball Pool. Here are the steps to download and install 8 Ball Pool Apkpure on your PC with Gameloop:

1. Download and install the Gameloop emulator on your PC and open it.
2. Search for 8 Ball Pool in the search bar and click on the Install button.
3. Wait for the game to download and install on your PC.
4. Click on the My Games tab and launch 8 Ball Pool from there.
5. Enjoy playing 8 Ball Pool Apkpure on your PC with a larger screen, better graphics, and smoother controls.
-
-
What are the features of 8 Ball Pool Apkpure?
-
The benefits of playing 8 Ball Pool on your PC with Gameloop emulator
-
Playing 8 Ball Pool Apkpure on your PC with Gameloop emulator has many advantages over playing it on your mobile device. Here are some of them:
-
-
You can enjoy a bigger and clearer view of the pool table, the balls, and the cues on your PC screen.
-
You can use your mouse and keyboard to control the game, which gives you more accuracy and precision than using your fingers on a touch screen.
-
You can avoid battery drain, overheating, and lag issues that may affect your mobile device while playing 8 Ball Pool.
-
You can access more features and settings in Gameloop, such as recording, streaming, screenshotting, and customizing your keyboard layout.
-
You can play 8 Ball Pool Apkpure with other players who are using Gameloop emulator, which creates a fair and balanced gaming environment.
-
-
The different game modes, tables, cues, and balls available in 8 Ball Pool
-
8 Ball Pool Apkpure offers a variety of game modes, tables, cues, and balls to suit your preferences and skill level. Here are some of them:
-
-
| Game Mode | Description |
|-----------|-------------|
| 1-on-1 | The classic mode where you play against another player in a single match. You can choose from different locations, such as London, Sydney, Moscow, Tokyo, Las Vegas, etc., each with a different entry fee and prize pool. You can also play in the No Guidelines mode where there are no aiming lines to help you. |
| Tournaments | The mode where you compete with up to 7 other players in a knockout format. You can choose from different tournaments, such as Cairo, Shanghai, Toronto, Berlin, etc., each with a different entry fee and prize pool. You can also play in the No Guidelines mode where there are no aiming lines to help you. |
| 9-Ball | The mode where you play with 9 balls instead of 15. The rules are different from 8 Ball: you have to hit the lowest numbered ball first, and the first player to pocket the 9 ball wins. You can also play in the No Guidelines mode where there are no aiming lines to help you. |
| Practice | The mode where you can practice your skills without any pressure or opponents. You can choose from different tables and cues to practice with. You can also adjust the difficulty level of the game from Easy to Expert. |
-
-
In addition to the game modes, you can also choose from different tables and cues to play with. Each table has a different design and color scheme, such as Wood Grain, Marble, Ice Blue, etc. Each cue has different attributes and effects, such as Aim, Force, Time, Spin, etc. You can also unlock special cues with unique features, such as Legendary Cues, VIP Cues, Country Cues, etc.
-
You can also choose from different balls to play with. Some of the balls have different colors and patterns, such as Stripes, Solids, Stars, etc. Some of the balls have special effects, such as Fireworks, Snowflakes, Lightning, etc. You can also unlock exclusive balls with unique features, such as Golden Shot Balls, Surprise Boxes Balls, Scratch and Win Balls, etc.
-
The customization options, rewards, and challenges in 8 Ball Pool Apkpure
-
8 Ball Pool Apkpure also allows you to customize your profile and avatar with various options. You can choose from different avatars, such as Animals, Sports, Celebrities, etc. You can also upload your own photo or use your Facebook profile picture. You can also edit your name, country, and status message.
-
-
As you play 8 Ball Pool Apkpure, you can also earn various rewards and complete various challenges. You can earn coins and cash by winning matches, tournaments, and mini-games. You can also earn trophies by ranking up in the leaderboard. You can also earn pool passes by completing daily missions and seasonal events. You can also earn free gifts by logging in daily, watching videos, inviting friends, etc.
-
You can also take on different challenges in 8 Ball Pool Apkpure to test your skills and win more rewards. You can play in the Spin and Win mini-game to win coins, cash, cues, balls, and other prizes. You can play in the Hi-Lo mini-game to guess the outcome of a coin toss and win coins. You can play in the Golden Shot mini-game to hit the golden ball and win coins, cash, cues, balls, and other prizes. You can also join the Clubs feature to create or join a club with other players and compete for club points and rewards.
-
What are the rules of 8 Ball Pool Apkpure?
-
The basic rules of 8 Ball Pool, such as legal break, object balls, pocketing the 8 ball, and fouls
-
The basic rules of 8 Ball Pool are simple and easy to learn. Here are some of them:
-
-
The game is played with 15 object balls (numbered 1 to 15) and a cue ball (white).
-
The object balls are divided into two groups: solids (numbered 1 to 7) and stripes (numbered 9 to 15). The 8 ball (black) is the most important ball in the game.
-
The game starts with a break shot, where the player hits the cue ball into the rack of object balls. The break shot must be legal, which means that at least four object balls must hit a cushion or a ball must be pocketed.
-
After the break shot, the player who pockets a ball or has a legal break shot gets to choose which group of balls they want to play: solids or stripes. The player must then try to pocket all their group of balls before their opponent.
-
The player who pockets all their group of balls first gets to shoot for the 8 ball. The player must call the pocket where they intend to pocket the 8 ball before shooting. The player who pockets the 8 ball legally wins the game.
-
If a player commits a foul during the game, their turn ends and their opponent gets ball in hand, which means they can place the cue ball anywhere on the table for their next shot. Some common fouls are: hitting the wrong group of balls first; not hitting any ball; hitting the cue ball off the table; pocketing the cue ball; pocketing the 8 ball before clearing their group of balls; pocketing the 8 ball in the wrong pocket; or pocketing the 8 ball when it is not their turn.
-
-
The different variations of 8 Ball Pool rules, such as WPA, APA, VNEA, and BCAPL
-
While the basic rules of 8 Ball Pool are generally the same, there are some variations of the rules that are used by different organizations and tournaments. Here are some of them:
-
-
WPA: The World Pool-Billiard Association is the international governing body of pool. The WPA rules are the official rules of 8 Ball Pool for international competitions. Some of the WPA rules are: the break shot must be taken from behind the head string; if no ball is pocketed on the break shot, the incoming player can choose to play from where the cue ball lies or ask for a re-rack; if a player pockets a ball on the break shot, they can either accept that group of balls or continue to shoot until they miss or foul; if a player pockets both a solid and a stripe on the break shot, they can choose which group of balls they want to play; if a player pockets the 8 ball on the break shot, they can either win the game or ask for a re-rack.
-
APA: The American Poolplayers Association is the largest amateur pool league in the world. The APA rules are the most common rules of 8 Ball Pool for recreational and league play in the United States. Some of the APA rules are: the break shot can be taken from anywhere behind the head string; if no ball is pocketed on the break shot, the table is open for both players; if a player pockets a ball on the break shot, they must shoot at that group of balls until they miss or foul; if a player pockets both a solid and a stripe on the break shot, they must shoot at either group of balls until they miss or foul; if a player pockets the 8 ball on the break shot, they win the game.
-
VNEA: The Valley National Eight-Ball Association is one of the largest pool organizations in North America. The VNEA rules are similar to the APA rules, but with some differences. Some of the VNEA rules are: if no ball is pocketed on the break shot, the table is open for both players; if a player pockets a ball on the break shot, they must shoot at that group of balls until they miss or foul; if a player pockets both a solid and a stripe on the break shot, they must shoot at either group of balls until they miss or foul; if a player pockets the 8 ball on the break shot, they win the game; however, if a player scratches (pockets the cue ball) on the break shot, they lose the game.
-
BCAPL: The Billiard Congress of America Pool League is another large pool organization in North America. The BCAPL rules are similar to the WPA rules, but with some differences. Some of the BCAPL rules are: the break shot must be taken from behind the head string; if no ball is pocketed on the break shot, the incoming player can choose to play from where the cue ball lies or push out to a new position; if a player pockets a ball on the break shot, they can either accept that group of balls or continue to shoot until they miss or foul; if a player pockets both a solid and a stripe on the break shot, they can choose which group of balls they want to play; if a player pockets the 8 ball on the break shot, they can either win the game or ask for a re-rack.
-
-
The tips and tricks to improve your skills and win more matches in 8 Ball Pool Apkpure
-
8 Ball Pool Apkpure is a game that requires both skill and strategy to win. Here are some tips and tricks to help you improve your game and beat your opponents:
-
-
Practice your aim and power. You can use the aiming lines to help you align your shots, but you also need to adjust your power according to the distance and angle of the shot. You can practice your aim and power in the Practice mode or by using the Guideline in All Rooms option in the settings.
-
Use spin wisely. You can use spin to change the direction and speed of the cue ball after it hits an object ball. You can use spin to avoid scratches, position the cue ball for your next shot, or make tricky shots. You can apply spin by using the spin wheel on the bottom right corner of the screen.
-
Plan ahead. You should always think ahead and plan your shots before you shoot. You should consider which balls are easy or hard to pocket, which pockets are open or blocked, and which shots will leave you with a good or bad position for your next shot. You should also try to clear any clusters or obstacles as soon as possible.
-
Play smart. You should always play according to your skill level and your opponent's skill level. You should not take unnecessary risks or try to show off. You should also know when to play defensively or offensively, depending on the situation. You should also use the chat and emoji features to communicate with your opponent and show respect or sportsmanship.
-
-
Conclusion
-
8 Ball Pool Apkpure is a fun and exciting game that lets you play pool with players from all over the world. You can download and install it on your Windows PC with Gameloop emulator, and enjoy its features, rules, and challenges. You can also improve your skills and win more matches with some tips and tricks. So what are you waiting for? Download 8 Ball Pool Apkpure today and join the millions of pool lovers who play this game every day!
-
FAQs
-
Q1: Is 8 Ball Pool Apkpure safe and legal?
-
A1: Yes, 8 Ball Pool Apkpure is safe and legal to download and play. Apkpure is a reputable website that verifies all its apps and games for safety and quality. Gameloop is also a trusted emulator that does not contain any malware or viruses. However, you should always download 8 Ball Pool Apkpure from its official website or Gameloop app store, and not from any third-party sources.
-
Q2: How can I play 8 Ball Pool Apkpure with my friends?
-
A2: You can play 8 Ball Pool Apkpure with your friends by using the Play with Friends feature in the game. You can invite your friends by using their unique ID, Facebook account, or Miniclip account. You can also join or create a club with your friends and chat with them in the club chat room.
-
Q3: How can I earn more coins and cash in 8 Ball Pool Apkpure?
-
A3: You can earn more coins and cash in 8 Ball Pool Apkpure by winning matches, tournaments, and mini-games. You can also earn coins and cash by completing daily missions, seasonal events, pool passes, and achievements. You can also earn free coins and cash by logging in daily, watching videos, inviting friends, etc. You can also buy coins and cash with real money if you want to.
-
Q4: How can I upgrade my cue and pool table in 8 Ball Pool Apkpure?
-
A4: You can upgrade your cue and pool table in 8 Ball Pool Apkpure by using coins or cash. You can buy new cues and tables from the shop, or unlock them from surprise boxes, golden shots, scratch and win, etc. You can also upgrade your cues by using cash or cue pieces. You can improve the attributes and effects of your cues by leveling them up. You can also change the design and color of your cues and tables by using coins or cash.
-
Q5: How can I contact the support team of 8 Ball Pool Apkpure?
-
A5: You can contact the support team of 8 Ball Pool Apkpure by using the Help and Support feature in the game. You can access it by clicking on the Settings icon on the top right corner of the screen, and then clicking on the Help and Support button. You can then browse through the frequently asked questions, or submit a ticket to the support team. You can also contact the support team by sending an email to support@miniclip.com.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Darah Tak Terbatas and Unlimited Money in Hungry Shark World Get Mod Apk Here.md b/spaces/1phancelerku/anime-remove-background/Darah Tak Terbatas and Unlimited Money in Hungry Shark World Get Mod Apk Here.md
deleted file mode 100644
index a7208e24b81ad40fcffa492ab8be32763f76acea..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Darah Tak Terbatas and Unlimited Money in Hungry Shark World Get Mod Apk Here.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-
Download Hungry Shark World Mod Apk Unlimited Blood and Enjoy the Ultimate Shark Experience
-
Do you love sharks? Do you love eating everything in your path? Do you love action-packed games with stunning graphics and sound effects? If you answered yes to any of these questions, then you will love Hungry Shark World, a thrilling game where you control a hungry shark and eat everything that gets in your way. But wait, there's more! You can also download Hungry Shark World Mod Apk Unlimited Blood, a modified version of the game that gives you unlimited blood, coins, gems, and all sharks unlocked. Sounds awesome, right? Read on to find out more about this amazing game and how to download it.
-
What is Hungry Shark World?
-
A thrilling action game where you control a hungry shark
-
Hungry Shark World is an addictive action game where you control a shark and eat everything in your path. The objective is to survive as long as you can and devour as much prey as possible to score tons of points before you eventually die. True to its name, your hungry shark is constantly losing HP due to its insatiable hunger and must continue to eat in order to stay alive. However, the sea is riddled with plenty of hazards and hostile fish that don't just let themselves get eaten, so you have to play smart in order to score high.
-
One of the best features of Hungry Shark World is that it offers a wide variety of sharks to choose from. You can start with a small porbeagle shark and work your way up to bigger and badder sharks like the great white, hammerhead, megalodon, or even a prehistoric mosasaurus. Each shark has its own stats, abilities, appearance, and personality. You can also upgrade your sharks by spending coins and gems on their bite, speed, boost, or health.
-
Four stunning locations to explore and devour
-
Hungry Shark World features four different locations that you can travel to as you progress in the game. Each location has its own theme, scenery, challenges, and secrets. You can explore the Pacific Islands, the Arabian Sea, the South China Sea, and the Arctic Ocean. Each location has its own unique creatures, landmarks, and events that you can discover and enjoy. You can also switch between locations at any time by using the map.
-
Hundreds of enemies and prey to eat and collect
-
As a hungry shark, you have a lot of options when it comes to your diet. You can eat fish, crabs, turtles, squid, octopus, dolphins, whales, seals, penguins, birds, humans, and more. Each prey has its own value and effect on your shark. Some prey will give you more points, some will heal you more, some will boost your speed or power, and some will even unlock new items or achievements. However, not all prey are easy to catch or harmless. Some prey will fight back, some will poison you, some will explode, and some will even damage your shark. You have to be careful and choose wisely what you eat.
-
Daily chests, missions, and achievements to earn rewards
-
Hungry Shark World also offers plenty of ways to earn rewards and bonuses in the game. You can find daily chests that contain coins, gems, or items. You can complete missions that challenge you to perform certain tasks or feats. You can also unlock achievements that reward you for reaching milestones or doing something extraordinary. These rewards will help you buy and upgrade sharks and accessories, as well as unlock new features and content in the game.
-
Why Download Hungry Shark World Mod Apk Unlimited Blood?
-
Benefits of Hungry Shark World Mod Apk Unlimited Blood
-
Unlimited blood to survive longer and score higher
-
The main benefit of downloading Hungry Shark World Mod Apk Unlimited Blood is that you get unlimited blood for your shark. This means that you don't have to worry about losing HP due to hunger or damage. You can survive longer and eat more without dying. This will allow you to score higher and reach new levels of fun and excitement in the game.
-
Unlimited coins and gems to buy and upgrade sharks and accessories
-
Another benefit of downloading Hungry Shark World Mod Apk Unlimited Blood is that you get unlimited coins and gems for your shark. This means that you don't have to grind or spend real money to buy and upgrade sharks and accessories. You can buy any shark you want from the start and upgrade it to the max without any limitations. You can also buy any accessory you want from the shop and equip it to your shark for extra benefits. This will make your shark more powerful and stylish in the game.
-
All sharks unlocked and available from the start
-
A third benefit of downloading Hungry Shark World Mod Apk Unlimited Blood is that you get all sharks unlocked and available from the start. This means that you don't have to play for hours or complete certain requirements to unlock new sharks in the game. You can choose any shark you want from the start and switch between them at any time by using the map. This will give you more variety and freedom in the game.
-
No ads or in-app purchases to interrupt your gameplay
-
A fourth benefit of downloading Hungry Shark World Mod Apk Unlimited Blood is that you get no ads or in-app purchases to interrupt your gameplay. This means that you don't have to watch annoying ads or pay real money to enjoy the game fully. You can play without any distractions or interruptions in the game.
-
-
How to Download Hungry Shark World Mod Apk Unlimited Blood
-
Step 1: Click on the link below to download the mod apk file
-
The first step to download Hungry Shark World Mod Apk Unlimited Blood is to click on the link below to download the mod apk file. The link will take you to a secure site where you can download the file safely and easily.
-
Step 2: Allow unknown sources on your device settings
-
The second step to download Hungry Shark World Mod Apk Unlimited Blood is to allow unknown sources on your device settings. This will enable you to install apps from sources other than the Google Play Store. To do this, go to your device settings > security > unknown sources > enable.
-
Step 3: Install the mod apk file and launch the game
-
The third step to download Hungry Shark World Mod Apk Unlimited Blood is to install the mod apk file and launch the game. To do this, go to your file manager > downloads > hungry-shark-world-mod-apk-unlimited-blood.apk > install > open.
-
Step 4: Enjoy the ultimate shark experience with unlimited blood and resources
-
The fourth and final step to download Hungry Shark World Mod Apk Unlimited Blood is to enjoy the ultimate shark experience with unlimited blood and resources. You can now play the game with no limitations or interruptions and have fun as a hungry shark. You can eat everything in your path, unlock and upgrade all sharks and accessories, explore all locations, and score higher than ever before.
-
Tips and Tricks for Playing Hungry Shark World
-
Use your boost wisely to catch prey and avoid enemies
-
One of the tips and tricks for playing Hungry Shark World is to use your boost wisely to catch prey and avoid enemies. Your boost is a powerful tool that can help you speed up, jump out of the water, or perform special attacks. However, your boost also consumes your stamina, which regenerates slowly over time. Therefore, you should use your boost sparingly and strategically, depending on the situation. For example, you can use your boost to catch fast or fleeing prey, to escape from dangerous enemies or obstacles, or to reach hidden areas or items.
-
Buy the map to find hidden items and locations
-
Another tip and trick for playing Hungry Shark World is to buy the map to find hidden items and locations. The map is an accessory that you can buy from the shop for 500 coins. The map will show you the layout of the location you are in, as well as the locations of chests, missions, pets, enemies, and more. The map will also show you the boundaries of the location and the portals to other locations. The map is very useful for finding secrets and completing objectives in the game.
-
Recruit pets to help you in your journey
-
A third tip and trick for playing Hungry Shark World is to recruit pets to help you in your journey. Pets are small creatures that you can find and collect in the game. Each pet has its own ability and effect that can benefit your shark. For example, some pets can heal you, some can attack enemies, some can collect coins or gems, some can boost your stats, and some can even unlock new features or content. You can equip up to three pets at a time and switch between them at any time by using the map.
-
Avoid dangerous creatures like jellyfish, pufferfish, lionfish, giant squids, etc.
-
A fourth tip and trick for playing Hungry Shark World is to avoid dangerous creatures like jellyfish, pufferfish, lionfish, giant squids, etc. These creatures are not only hard to eat but also harmful to your shark. They can poison you, stun you, damage you, or even kill you instantly. You should steer clear of these creatures unless you have a pet or an accessory that can protect you from them.
-
Eat humans on the surface and complete missions for extra points
-
A fifth tip and trick for playing Hungry Shark World is to eat humans on the surface and complete missions for extra points. Humans are one of the most valuable prey in the game as they give you a lot of points and sometimes coins or gems. You can find humans on beaches, boats, jet skis, helicopters, balloons, etc. You can also jump out of the water and grab them in mid-air. However, be careful as some humans will fight back with weapons or call for help from other humans or military forces. You should also complete missions that involve eating humans as they will give you bonus points and rewards.
-
Conclusion
-
Hungry Shark World is an addictive and fun game that lets you experience life as a shark
-
In conclusion, Hungry Shark World is an addictive and fun game that lets you experience life as a shark. You can control a hungry shark and eat everything in your path while avoiding dangers and obstacles. You can also unlock and upgrade over 40 different sharks and explore four stunning locations in the game.
-
Download Hungry Shark World Mod Apk Unlimited Blood to enjoy the game without any limitations or interruptions
-
If you want to enjoy the game without any limitations or interruptions, you should download Hungry Shark World Mod Apk Unlimited Blood. This mod apk will give you unlimited blood, coins, gems, and all sharks unlocked in the game. You can play the game with no worries about dying or running out of resources. You can also buy and upgrade any shark or accessory you want from the start.
-
Follow the tips and tricks above to maximize your score and become the king of the ocean
-
If you want to maximize your score and become the king of the ocean, you should follow the tips and tricks above. These tips and tricks will help you play smarter and better in the game. You will be able to catch more prey and avoid enemies, find hidden items and locations, recruit pets and accessories, eat humans and complete missions, and use your boost wisely in the game.
-
FAQs
-
Q: What is the difference between Hungry Shark World and Hungry Shark Evolution?
-
A: Hungry Shark World is the sequel to Hungry Shark Evolution, a popular game that was released in 2012. Hungry Shark World has improved graphics, sound effects, gameplay, and features compared to Hungry Shark Evolution. Hungry Shark World also has more sharks, locations, enemies, prey, items, and content than Hungry Shark Evolution.
-
Q: Is Hungry Shark World Mod Apk Unlimited Blood safe to download and install?
-
A: Yes, Hungry Shark World Mod Apk Unlimited Blood is safe to download and install. The mod apk file is scanned and tested for viruses and malware before being uploaded to the site. The mod apk file also does not require any root or jailbreak to work on your device.
-
Q: How can I update Hungry Shark World Mod Apk Unlimited Blood?
-
A: To update Hungry Shark World Mod Apk Unlimited Blood, you have to download and install the latest version of the mod apk file from the same site. You can also check the site regularly for any updates or new features added to the mod apk file.
-
Q: Can I play Hungry Shark World Mod Apk Unlimited Blood online or offline?
-
A: You can play Hungry Shark World Mod Apk Unlimited Blood both online and offline. However, some features and content may require an internet connection to work properly. For example, you may need an internet connection to access the daily chests, missions, achievements, leaderboards, or events in the game.
-
Q: Can I play Hungry Shark World Mod Apk Unlimited Blood with my friends or other players?
-
A: Yes, you can play Hungry Shark World Mod Apk Unlimited Blood with your friends or other players. You can connect your game to Facebook or Google Play Games and invite your friends or other players to join you in the game. You can also compete with them on the leaderboards or cooperate with them on the events in the game.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/232labs/VToonify/README.md b/spaces/232labs/VToonify/README.md
deleted file mode 100644
index 40ded2ce4dc80cc8764c8fee50476d44bee383b6..0000000000000000000000000000000000000000
--- a/spaces/232labs/VToonify/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: VToonify
-emoji: 👨🎨
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.4
-app_file: app.py
-pinned: false
-license: other
-duplicated_from: PKUWilliamYang/VToonify
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/2gauravc/search_summary_chatgpt/app.py b/spaces/2gauravc/search_summary_chatgpt/app.py
deleted file mode 100644
index 289c7a0e64ceeae6759b866b17b4ae4ba11e36c7..0000000000000000000000000000000000000000
--- a/spaces/2gauravc/search_summary_chatgpt/app.py
+++ /dev/null
@@ -1,104 +0,0 @@
-import streamlit as st
-import openai
-import sys, getopt
-from datetime import datetime
-from streamlit.components.v1 import html
-import boto3
-
-from main import chatgpt_prompt, get_chatgpt_resp, generate_kyc_output, gsearch, save_to_s3
-
-# Function to perform the search
-# This is a placeholder function, replace it with your actual search implementation
-def perform_search(pname, keywords, num_results):
- # record current timestamp
- start_time = datetime.now()
-
- # Google search for the person name and get the first 20 query links
- query = pname + " " + keywords
- search_links = gsearch(query, num_results)
-
- # Construct the prompt
- prompt_text = chatgpt_prompt(pname, search_links)
- #get ChatGPT response
- resp = get_chatgpt_resp(prompt_text)
- # Create PDF with links and summary
- rep_txt= generate_kyc_output(query, search_links, resp, start_time)
- return (rep_txt)
-
-main_tab, help_tab, rel_tab = st.tabs(["Run the Bot", "FAQ", "Release Plan"])
-
-with main_tab:
- # Streamlit app
- st.title("Adverse News Detection Assistant")
-
- # Input fields
- names_txt = st.text_input("Enter party name (or multiple names separated by ,)")
- plc_text = "laundering OR terrorist OR fraud OR corrupt OR criminal OR investigation OR prosecute OR evasion OR bribe OR sanction"
- keywords = st.text_input("Enter other search words:", value=plc_text)
-
- st.sidebar.markdown("## Controls")
- st.sidebar.markdown("Choose your **search** *parameters*")
- num_results = st.sidebar.slider("Choose the number of search results:", 5, 30, 20, 5)
- st.sidebar.markdown("## Model")
- st.sidebar.markdown("GPT v3.5")
- st.sidebar.markdown("## App")
- st.sidebar.markdown("v0.4")
-
- col1, col2 = st.columns(2)
- with col1:
- adv_nw = st.radio(
- "Did you find adverse news when you performed this search manually",
- ('Yes', 'No', 'Dont Know'), index=2)
- with col2:
- #st.markdown("Touch time (manual) in mins")
- man_tt = st.number_input('Touch time (manual) in mins', value=0, step=1)
- #st.markdown("Touch time (with bot) in mins")
- bot_tt = st.number_input('Touch time (with bot) in mins', value=0, step=1)
-
- # Search button
- if st.button("Search"):
- names = names_txt.split(",")
- #print(len(names))
- metrics_ent = (adv_nw != "Dont Know") and (man_tt > 0) and (bot_tt > 0)
- # Perform the search and display the results
- if names and metrics_ent:
- search_results = ""
- for name in names:
- #print("trying for name {} \n".format(name))
- search_results += perform_search(name, keywords, num_results)
-
-            html(f"<div>{search_results}</div>", height=200, scrolling=True)
- st.download_button('Download Report',search_results)
- try:
- date_time = datetime.now()
- save_to_s3(search_results,date_time )
- print ("Completed processing for {} names: {} at {} \n".format(len(names), names_txt, str(date_time)))
- except:
- print ("Completed processing with S3 write error for {} names: {} at {} \n".format(len(names),names_txt, str(date_time)))
- else:
- st.error("Please enter party name, adverse news selection (Yes or No) and Touch Time before searching.")
-
-with help_tab:
- st.title("FAQ")
-
- st.markdown("Q. How do I get a count of number of adverse news?")
- st.markdown("A. This functionality isnt implemented yet. A workaround is to manually count the number of links with adverse news")
-
- st.markdown("Q. How do I summarise all the adverse news?")
- st.markdown("A. This functionality isnt implemented yet. A workaround is to aggregate the summary of all adverse news items manually, and get a sumary from ChatGPT (chat.openai.com")
-
- st.markdown("Q. Can I search in other lauguages?")
- st.markdown("A. This functionality isnt implemented yet. We are planning to test this feature out with Chinese first")
-
- st.markdown("Q. Can I search without the other search words?")
- st.markdown("A. Just enter a blank space in the text space and search")
-
-with rel_tab:
- st.markdown(f"""
- | NO. | Issue / Enhancement | Rel | Status |
- |-----|--------------------------------------------------------------------------------------------------------------------------------------------|-----|-----------|
- | 1 | Capture productivity and adverse news metrics from the user | 0.4 | Completed |
- | 2 | Save productivity and adverse news metrics in a DB | 0.4 | TBD |
- | 3 | Convert bot output to structured JSON - Count of adverse news - Summary of all adverse news - Identification of links with adverse news | 0.6 | TBD |
- | 4 | Offer alternate solution path with web text scraping and | 0.6 | TBD |
- | 5 | Create a page on metrics report | 0.5 | TBD |""")
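The `save_to_s3` helper called above is imported from `main.py`, which is not part of this diff. As a rough, hedged sketch of what such a helper might look like, assuming a boto3 client with default credentials and a placeholder bucket name and key layout (not the actual implementation):

```python
import boto3
from datetime import datetime

def save_to_s3(report_text: str, date_time: datetime,
               bucket: str = "adverse-news-reports") -> None:
    """Persist a generated report to S3, keyed by timestamp.

    The bucket name and key format are placeholders; the real values
    live in main.py, which this diff does not show.
    """
    s3 = boto3.client("s3")  # credentials resolved from the environment / IAM role
    key = f"reports/{date_time.strftime('%Y-%m-%d_%H-%M-%S')}.txt"
    s3.put_object(Bucket=bucket, Key=key, Body=report_text.encode("utf-8"))
```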
diff --git a/spaces/AIML-TUDA/semantic-diffusion/README.md b/spaces/AIML-TUDA/semantic-diffusion/README.md
deleted file mode 100644
index 9a565ba9bbdf1a267b51f89519bc48c7ccd6b8a0..0000000000000000000000000000000000000000
--- a/spaces/AIML-TUDA/semantic-diffusion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Semantic Diffusion
-emoji: ⚡
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AP123/dreamgaussian/readme.md b/spaces/AP123/dreamgaussian/readme.md
deleted file mode 100644
index dbeab5bf5bc995b6343769ff2389a138da6e5356..0000000000000000000000000000000000000000
--- a/spaces/AP123/dreamgaussian/readme.md
+++ /dev/null
@@ -1,120 +0,0 @@
-# DreamGaussian
-
-This repository contains the official implementation for [DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation]().
-
-### [Project Page](https://dreamgaussian.github.io) | [Arxiv]()
-
-
-https://github.com/dreamgaussian/dreamgaussian/assets/25863658/db860801-7b9c-4b30-9eb9-87330175f5c8
-
-
-## Install
-```bash
-pip install -r requirements.txt
-
-# a modified gaussian splatting (+ depth, alpha rendering)
-git clone --recursive https://github.com/ashawkey/diff-gaussian-rasterization
-pip install ./diff-gaussian-rasterization
-
-# simple-knn
-pip install ./simple-knn
-
-# nvdiffrast
-pip install git+https://github.com/NVlabs/nvdiffrast/
-
-# kiuikit
-pip install git+https://github.com/ashawkey/kiuikit
-```
-
-Tested on:
-* Ubuntu 22 with torch 1.12 & CUDA 11.6 on a V100.
-* Windows 10 with torch 2.1 & CUDA 12.1 on a 3070.
-
-## Usage
-
-Image-to-3D:
-```bash
-### preprocess
-# background removal and recenter, save rgba at 256x256
-python process.py data/name.jpg
-
-# save at a larger resolution
-python process.py data/name.jpg --size 512
-
-# process all jpg images under a dir
-python process.py data
-
-### training gaussian stage
-# train 500 iters (~1min) and export ckpt & coarse_mesh to logs
-python main.py --config configs/image.yaml input=data/name_rgba.png save_path=name
-
-# gui mode (supports visualizing training)
-python main.py --config configs/image.yaml input=data/name_rgba.png save_path=name gui=True
-
-# load and visualize a saved ckpt
-python main.py --config configs/image.yaml load=logs/name_model.ply gui=True
-
-# use an estimated elevation angle if the image is not front-view (e.g., a typical looking-down image can use -30)
-python main.py --config configs/image.yaml input=data/name_rgba.png save_path=name elevation=-30
-
-### training mesh stage
-# auto load coarse_mesh.obj and refine 50 iters (~1min), export fine_mesh to logs
-python main2.py --config configs/image.yaml input=data/name_rgba.png save_path=name
-
-# specify the coarse mesh path explicitly
-python main2.py --config configs/image.yaml input=data/name_rgba.png save_path=name mesh=logs/name_mesh.obj
-
-# gui mode
-python main2.py --config configs/image.yaml input=data/name_rgba.png save_path=name gui=True
-
-### visualization
-# gui for visualizing mesh
-python -m kiui.render logs/name.obj
-
-# save 360 degree video of mesh (can run without gui)
-python -m kiui.render logs/name.obj --save_video name.mp4 --wogui
-
-# save 8 view images of mesh (can run without gui)
-python -m kiui.render logs/name.obj --save images/name/ --wogui
-
-### evaluation of CLIP-similarity
-python -m kiui.cli.clip_sim data/name_rgba.png logs/name.obj
-```
-Please check `./configs/image.yaml` for more options.
-
-Text-to-3D:
-```bash
-### training gaussian stage
-python main.py --config configs/text.yaml prompt="a photo of an icecream" save_path=icecream
-
-### training mesh stage
-python main2.py --config configs/text.yaml prompt="a photo of an icecream" save_path=icecream
-```
-Please check `./configs/text.yaml` for more options.
-
-Helper scripts:
-```bash
-# run all image samples (*_rgba.png) in ./data
-python scripts/runall.py --dir ./data --gpu 0
-
-# run all text samples (hardcoded in runall_sd.py)
-python scripts/runall_sd.py --gpu 0
-
-# export all ./logs/*.obj to mp4 in ./videos
-python scripts/convert_obj_to_video.py --dir ./logs
-```
-
-## Acknowledgement
-
-This work is built on many amazing research works and open-source projects, thanks a lot to all the authors for sharing!
-
-* [gaussian-splatting](https://github.com/graphdeco-inria/gaussian-splatting) and [diff-gaussian-rasterization](https://github.com/graphdeco-inria/diff-gaussian-rasterization)
-* [threestudio](https://github.com/threestudio-project/threestudio)
-* [nvdiffrast](https://github.com/NVlabs/nvdiffrast)
-* [dearpygui](https://github.com/hoffstadt/DearPyGui)
-
-## Citation
-
-```
-
-```
diff --git a/spaces/ASJMO/freegpt/g4f/Provider/Providers/Xiaor.py b/spaces/ASJMO/freegpt/g4f/Provider/Providers/Xiaor.py
deleted file mode 100644
index 5757f9971157116cbbfabbe5420e3b7e88fed4e7..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/g4f/Provider/Providers/Xiaor.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import requests
-import os
-import json
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://xiaor.eu.org'
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k',
- 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
-supports_stream = True
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
-    headers = {
-        'Content-Type': 'application/json',
-    }
-    data = {
-        'model': model,
-        'temperature': temperature,
-        'presence_penalty': 0,
-        'messages': messages,
-    }
-    # POST to the OpenAI-compatible endpoint; the body is read as a stream below.
-    response = requests.post(url + '/p1/v1/chat/completions',
-                             headers=headers, json=data, stream=True)
-
- if stream:
- for chunk in response.iter_content(chunk_size=None):
- chunk = chunk.decode('utf-8')
- if chunk.strip():
- message = json.loads(chunk)['choices'][0]['message']['content']
- yield message
- else:
- message = response.json()['choices'][0]['message']['content']
- yield message
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Phind.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Phind.py
deleted file mode 100644
index 0db4e3c2662e6ec3b4a4231b9c55bf0744085da6..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Phind.py
+++ /dev/null
@@ -1,76 +0,0 @@
-from __future__ import annotations
-
-import random
-from datetime import datetime
-
-from ..typing import AsyncGenerator
-from ..requests import StreamSession
-from .base_provider import AsyncGeneratorProvider, format_prompt
-
-
-class Phind(AsyncGeneratorProvider):
- url = "https://www.phind.com"
- working = True
- supports_gpt_4 = True
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- proxy: str = None,
- **kwargs
- ) -> AsyncGenerator:
- chars = 'abcdefghijklmnopqrstuvwxyz0123456789'
- user_id = ''.join(random.choice(chars) for _ in range(24))
- data = {
- "question": format_prompt(messages),
- "webResults": [],
- "options": {
- "date": datetime.now().strftime("%d.%m.%Y"),
- "language": "en",
- "detailed": True,
- "anonUserId": user_id,
- "answerModel": "GPT-4",
- "creativeMode": False,
- "customLinks": []
- },
- "context":""
- }
- headers = {
- "Authority": cls.url,
- "Accept": "application/json, text/plain, */*",
- "Origin": cls.url,
- "Referer": f"{cls.url}/"
- }
- async with StreamSession(headers=headers, timeout=(5, 180), proxies={"https": proxy}, impersonate="chrome107") as session:
- async with session.post(f"{cls.url}/api/infer/answer", json=data) as response:
- response.raise_for_status()
- new_lines = 0
- async for line in response.iter_lines():
- if not line:
- continue
- if line.startswith(b"data: "):
- line = line[6:]
- if line.startswith(b"<PHIND_METADATA>"):
- continue
- if line:
- if new_lines:
- yield "".join(["\n" for _ in range(int(new_lines / 2))])
- new_lines = 0
- yield line.decode()
- else:
- new_lines += 1
-
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ("proxy", "str"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/base/Base.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/base/Base.d.ts
deleted file mode 100644
index 816e8c28c78951670544fe253a5432672031d948..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/base/Base.d.ts
+++ /dev/null
@@ -1,48 +0,0 @@
-// import * as Phaser from 'phaser';
-import BaseShape from '../../../plugins/gameobjects/shape/shapes/BaseShapes';
-
-export default Base;
-
-declare namespace Base {
-
- interface IConfig {
- x?: number, y?: number,
- width?: number, height?: number,
- color?: number,
-
- duration?: number,
- start?: boolean,
-
- ease?: string,
- }
-
-}
-
-declare class Base extends BaseShape {
- constructor(
- scene: Phaser.Scene,
- config?: Base.IConfig
- )
-
- start(duration?: number): this;
- pause(): this;
- resume(): this;
- stop(): this;
- readonly isRunning: boolean;
-
- setValue(t: number): this;
- value: number;
-
- setColor(color: number): this;
- color: number;
-
- setDuration(duration: number): this;
- duration: number;
-
- setEase(ease: string): this;
- ease: string;
-
- readonly centerX: number;
- readonly centerY: number;
- readonly radius: number;
-}
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/box/Box.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/box/Box.d.ts
deleted file mode 100644
index 878fd1b453568adc62244c84d3992011c97dd574..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/box/Box.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import Base from '../base/Base';
-export default class Box extends Base { }
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/anchor/Anchor.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/anchor/Anchor.d.ts
deleted file mode 100644
index cffd99e192cb929c71d7fc3dfb54a69774b2f2ab..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/anchor/Anchor.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import Anchor from '../../../plugins/behaviors/anchor/Anchor';
-export default Anchor;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/EaseDataMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/EaseDataMethods.js
deleted file mode 100644
index b71567abae8a4e4a0a455b43a59265ee55f9e618..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/EaseDataMethods.js
+++ /dev/null
@@ -1,44 +0,0 @@
-import { EaseData } from '../../../plugins/easedata.js';
-import { WaitEvent } from '../utils/WaitEvent.js';
-
-var OnInitEaseData = function (gameObject, easeData) {
- // Route 'complete' of easeData to gameObject
- easeData.on('complete', function (key) {
- gameObject.emit(`easedata.${key}.complete`, gameObject);
- gameObject.emit('easedata.complete', key, gameObject);
- })
-}
-
-export default {
- easeDataTo(key, value, duration, ease) {
- if (!this._easeData) {
- this._easeData = new EaseData(this);
- OnInitEaseData(this, this._easeData);
- }
- this._easeData.easeTo(key, value, duration, ease);
- return this;
- },
-
- easeDataToPromise(key, value, duration, ease) {
- this.easeDataTo(key, value, duration, ease);
- return WaitEvent(this._easeData, `complete-${key}`);
- },
-
- stopEaseData(key, toEnd) {
- if (!this._easeData) {
- return this;
- }
-
- this._easeData.stopEase(key, toEnd);
- return this;
- },
-
- stopAllEaseData(toEnd) {
- if (!this._easeData) {
- return this;
- }
-
- this._easeData.stopAll(toEnd);
- return this;
- }
-}
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/Layout.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/Layout.js
deleted file mode 100644
index 67bf7d7c76c58b604f6e2495f59b4fa27d41510c..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/Layout.js
+++ /dev/null
@@ -1,19 +0,0 @@
-var Layout = function () {
- // Save scale
- var scaleXSave = this.scaleX;
- var scaleYSave = this.scaleY;
- var scale1 = (scaleXSave === 1) && (scaleYSave === 1);
- if (!scale1) {
- this.setScale(1);
- }
-
- // Run layout with scale = 1
- this.runLayout();
-
- // Restore scale
- if (!scale1) {
- this.setScale(scaleXSave, scaleYSave);
- }
- return this;
-}
-export default Layout;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch/NinePatch.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch/NinePatch.js
deleted file mode 100644
index 664a6cad4e76d2dc9f64d4d312a72f964128e24c..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch/NinePatch.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import NinePatch from '../../../plugins/ninepatch.js'
-export default NinePatch;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch2/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch2/Factory.js
deleted file mode 100644
index f03930a91233bbff5678efd4ed7ed17c4417a884..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch2/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import NinePatch from './NinePatch.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('ninePatch2', function (x, y, width, height, key, columns, rows, config) {
- var gameObject = new NinePatch(this.scene, x, y, width, height, key, columns, rows, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
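-
-// Assumed usage note, not in the original file: once registered, the factory is typically
-// reached through the UI plugin, e.g. scene.rexUI.add.ninePatch2(x, y, width, height, key, columns, rows, config).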
-
-SetValue(window, 'RexPlugins.UI.NinePatch2', NinePatch);
-
-export default NinePatch;
\ No newline at end of file
diff --git a/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/data/dataset.py b/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/data/dataset.py
deleted file mode 100644
index 605aa877f7031a5cd2b98c0f831410aa80fddefa..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/data/dataset.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import bisect
-import warnings
-
-from torch._utils import _accumulate
-from torch import randperm
-
-
-class Dataset(object):
- """An abstract class representing a Dataset.
-
- All other datasets should subclass it. All subclasses should override
- ``__len__``, that provides the size of the dataset, and ``__getitem__``,
- supporting integer indexing in range from 0 to len(self) exclusive.
- """
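- # Illustrative sketch, not part of the original file: a minimal subclass that
- # fulfils the contract described above by overriding __len__ and __getitem__.
- #
- #     class SquaresDataset(Dataset):
- #         def __init__(self, n):
- #             self.n = n
- #         def __len__(self):
- #             return self.n
- #         def __getitem__(self, index):
- #             return index ** 2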
-
- def __getitem__(self, index):
- raise NotImplementedError
-
- def __len__(self):
- raise NotImplementedError
-
- def __add__(self, other):
- return ConcatDataset([self, other])
-
-
-class TensorDataset(Dataset):
- """Dataset wrapping data and target tensors.
-
- Each sample will be retrieved by indexing both tensors along the first
- dimension.
-
- Arguments:
- data_tensor (Tensor): contains sample data.
- target_tensor (Tensor): contains sample targets (labels).
- """
-
- def __init__(self, data_tensor, target_tensor):
- assert data_tensor.size(0) == target_tensor.size(0)
- self.data_tensor = data_tensor
- self.target_tensor = target_tensor
-
- def __getitem__(self, index):
- return self.data_tensor[index], self.target_tensor[index]
-
- def __len__(self):
- return self.data_tensor.size(0)
-
-
-class ConcatDataset(Dataset):
- """
- Dataset to concatenate multiple datasets.
- Purpose: useful to assemble different existing datasets, possibly
- large-scale datasets as the concatenation operation is done in an
- on-the-fly manner.
-
- Arguments:
- datasets (iterable): List of datasets to be concatenated
- """
-
- @staticmethod
- def cumsum(sequence):
- r, s = [], 0
- for e in sequence:
- l = len(e)
- r.append(l + s)
- s += l
- return r
-
- def __init__(self, datasets):
- super(ConcatDataset, self).__init__()
- assert len(datasets) > 0, 'datasets should not be an empty iterable'
- self.datasets = list(datasets)
- self.cumulative_sizes = self.cumsum(self.datasets)
-
- def __len__(self):
- return self.cumulative_sizes[-1]
-
- def __getitem__(self, idx):
- dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)
- if dataset_idx == 0:
- sample_idx = idx
- else:
- sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]
- return self.datasets[dataset_idx][sample_idx]
-
- @property
- def cummulative_sizes(self):
- warnings.warn("cummulative_sizes attribute is renamed to "
- "cumulative_sizes", DeprecationWarning, stacklevel=2)
- return self.cumulative_sizes
-
-
-class Subset(Dataset):
- def __init__(self, dataset, indices):
- self.dataset = dataset
- self.indices = indices
-
- def __getitem__(self, idx):
- return self.dataset[self.indices[idx]]
-
- def __len__(self):
- return len(self.indices)
-
-
-def random_split(dataset, lengths):
- """
- Randomly split a dataset into non-overlapping new datasets of given lengths.
-
- Arguments:
- dataset (Dataset): Dataset to be split
- lengths (iterable): lengths of splits to be produced
- """
- if sum(lengths) != len(dataset):
- raise ValueError("Sum of input lengths does not equal the length of the input dataset!")
-
- indices = randperm(sum(lengths))
- return [Subset(dataset, indices[offset - length:offset]) for offset, length in zip(_accumulate(lengths), lengths)]
diff --git a/spaces/Alican/pixera/README.md b/spaces/Alican/pixera/README.md
deleted file mode 100644
index 19804d6664f64ba557e5991ebd7908a71e647b7d..0000000000000000000000000000000000000000
--- a/spaces/Alican/pixera/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Pixera
-emoji: 💻
-colorFrom: gray
-colorTo: blue
-sdk: gradio
-sdk_version: 3.1.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/discriminator.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/discriminator.py
deleted file mode 100644
index 16bf3722c7f2e35cdc9bd177a33ed0975e67200d..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/models/e4e/discriminator.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from torch import nn
-
-
-class LatentCodesDiscriminator(nn.Module):
- def __init__(self, style_dim, n_mlp):
- super().__init__()
-
- self.style_dim = style_dim
-
- layers = []
- for i in range(n_mlp-1):
- layers.append(
- nn.Linear(style_dim, style_dim)
- )
- layers.append(nn.LeakyReLU(0.2))
- layers.append(nn.Linear(style_dim, 1))
- self.mlp = nn.Sequential(*layers)
-
- def forward(self, w):
- return self.mlp(w)
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 989928ab7f98da86e291451040ff85669a9fbddb..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './ccnet_r50-d8_512x1024_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_160k_ade20k.py
deleted file mode 100644
index b4a9d4e1b9123b3c965cd430237ce9fcc7018a11..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/resnest/deeplabv3plus_s101-d8_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/resnest/deeplabv3plus_s101-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 69bef7238345cf6aabb126012af992602f910287..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/resnest/deeplabv3plus_s101-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = '../deeplabv3plus/deeplabv3plus_r101-d8_512x1024_80k_cityscapes.py'
-model = dict(
- pretrained='open-mmlab://resnest101',
- backbone=dict(
- type='ResNeSt',
- stem_channels=128,
- radix=2,
- reduction_factor=4,
- avg_down_stride=True))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/evaluate.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/evaluate.py
deleted file mode 100644
index 8044e203151157a6473fa11c98414a27d45a32af..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/evaluate.py
+++ /dev/null
@@ -1,151 +0,0 @@
-import datetime
-from pathlib import Path
-
-import pandas as pd
-import torch
-from datasets import load_dataset
-from tqdm import tqdm
-
-from modules import shared
-from modules.models import load_model, unload_model
-from modules.models_settings import get_model_metadata, update_model_parameters
-from modules.text_generation import encode
-
-
-def load_past_evaluations():
- if Path('logs/evaluations.csv').exists():
- df = pd.read_csv(Path('logs/evaluations.csv'), dtype=str)
- df['Perplexity'] = pd.to_numeric(df['Perplexity'])
- return df
- else:
- return pd.DataFrame(columns=['Model', 'LoRAs', 'Dataset', 'Perplexity', 'stride', 'max_length', 'Date', 'Comment'])
-
-
-past_evaluations = load_past_evaluations()
-
-
-def save_past_evaluations(df):
- global past_evaluations
- past_evaluations = df
- filepath = Path('logs/evaluations.csv')
- filepath.parent.mkdir(parents=True, exist_ok=True)
- df.to_csv(filepath, index=False)
-
-
-def calculate_perplexity(models, input_dataset, stride, _max_length):
- '''
- Based on:
- https://huggingface.co/docs/transformers/perplexity#calculating-ppl-with-fixedlength-models
- '''
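- # Descriptive note, not in the original file: the text is scored in overlapping windows
- # of up to `max_length` tokens advanced by `stride`, and the reported perplexity is
- # exp(mean of the per-window negative log-likelihoods), as computed below.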
-
- global past_evaluations
- cumulative_log = ''
- cumulative_log += "Loading the input dataset...\n\n"
- yield cumulative_log
-
- # Copied from https://github.com/qwopqwop200/GPTQ-for-LLaMa/blob/triton/utils/datautils.py
- if input_dataset == 'wikitext':
- data = load_dataset('wikitext', 'wikitext-2-raw-v1', split='test')
- text = "\n\n".join(data['text'])
- elif input_dataset == 'ptb':
- data = load_dataset('ptb_text_only', 'penn_treebank', split='validation')
- text = "\n\n".join(data['sentence'])
- elif input_dataset == 'ptb_new':
- data = load_dataset('ptb_text_only', 'penn_treebank', split='test')
- text = " ".join(data['sentence'])
- else:
- with open(Path(f'training/datasets/{input_dataset}.txt'), 'r', encoding='utf-8') as f:
- text = f.read()
-
- for model in models:
- if is_in_past_evaluations(model, input_dataset, stride, _max_length):
- cumulative_log += f"{model} has already been tested. Ignoring.\n\n"
- yield cumulative_log
- continue
-
- if model != 'current model':
- try:
- yield cumulative_log + f"Loading {model}...\n\n"
- model_settings = get_model_metadata(model)
- shared.settings.update({k: v for k, v in model_settings.items() if k in shared.settings}) # hijacking the interface defaults
- update_model_parameters(model_settings) # hijacking the command-line arguments
- shared.model_name = model
- unload_model()
- shared.model, shared.tokenizer = load_model(shared.model_name)
- except:
- cumulative_log += f"Failed to load {model}. Moving on.\n\n"
- yield cumulative_log
- continue
-
- cumulative_log += f"Processing {shared.model_name}...\n\n"
- yield cumulative_log + "Tokenizing the input dataset...\n\n"
- encodings = encode(text, add_special_tokens=False)
- seq_len = encodings.shape[1]
- if _max_length:
- max_length = _max_length
- elif hasattr(shared.model.config, 'max_position_embeddings'):
- max_length = shared.model.config.max_position_embeddings
- else:
- max_length = 2048
-
- nlls = []
- prev_end_loc = 0
- for begin_loc in tqdm(range(0, seq_len, stride)):
- yield cumulative_log + f"Evaluating... {100*begin_loc/seq_len:.2f}%"
- end_loc = min(begin_loc + max_length, seq_len)
- trg_len = end_loc - prev_end_loc # may be different from stride on last loop
- input_ids = encodings[:, begin_loc:end_loc]
- target_ids = input_ids.clone()
- target_ids[:, :-trg_len] = -100
-
- with torch.no_grad():
- outputs = shared.model(input_ids=input_ids, labels=target_ids)
-
- # loss is calculated using CrossEntropyLoss which averages over valid labels
- # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels
- # to the left by 1.
- neg_log_likelihood = outputs.loss
-
- nlls.append(neg_log_likelihood)
-
- prev_end_loc = end_loc
- if end_loc == seq_len:
- break
-
- ppl = torch.exp(torch.stack(nlls).mean())
- add_entry_to_past_evaluations(float(ppl), shared.model_name, input_dataset, stride, _max_length)
- save_past_evaluations(past_evaluations)
- cumulative_log += f"The perplexity for {shared.model_name} is: {float(ppl)}\n\n"
- yield cumulative_log
-
-
-def add_entry_to_past_evaluations(perplexity, model, dataset, stride, max_length):
- global past_evaluations
- entry = {
- 'Model': model,
- 'LoRAs': ', '.join(shared.lora_names) or '-',
- 'Dataset': dataset,
- 'Perplexity': perplexity,
- 'stride': str(stride),
- 'max_length': str(max_length),
- 'Date': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
- 'Comment': ''
- }
- past_evaluations = pd.concat([past_evaluations, pd.DataFrame([entry])], ignore_index=True)
-
-
-def is_in_past_evaluations(model, dataset, stride, max_length):
- entries = past_evaluations[(past_evaluations['Model'] == model) &
- (past_evaluations['Dataset'] == dataset) &
- (past_evaluations['max_length'] == str(max_length)) &
- (past_evaluations['stride'] == str(stride))]
-
- if entries.shape[0] > 0:
- return True
- else:
- return False
-
-
-def generate_markdown_table():
- sorted_df = past_evaluations.sort_values(by=['Dataset', 'stride', 'Perplexity', 'Date'])
- return sorted_df
diff --git a/spaces/AnnonSubmission/xai-cl/data_transforms.py b/spaces/AnnonSubmission/xai-cl/data_transforms.py
deleted file mode 100644
index da17ef24463c190e5e5a49708a48c28ace8c04e0..0000000000000000000000000000000000000000
--- a/spaces/AnnonSubmission/xai-cl/data_transforms.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import torch
-import torchvision
-import torchvision.transforms as transforms
-import torch.nn as nn
-from PIL import Image, ImageOps, ImageFilter
-import random
-
-def add_normalization_to_transform(unnormalized_transforms):
- """Adds ImageNet normalization to all transforms"""
- normalized_transform = {}
- for key, value in unnormalized_transforms.items():
- normalized_transform[key] = transforms.Compose([value,
- transforms.Normalize(mean=[0.485, 0.456, 0.406],
- std=[0.229, 0.224, 0.225])])
- return normalized_transform
-
-def modify_transforms(normal_transforms, no_shift_transforms, ig_transforms):
- normal_transforms = add_normalization_to_transform(normal_transforms)
- no_shift_transforms = add_normalization_to_transform(no_shift_transforms)
- ig_transforms = add_normalization_to_transform(ig_transforms)
- return normal_transforms, no_shift_transforms, ig_transforms
-
-class Solarization(object):
- def __init__(self, p):
- self.p = p
-
- def __call__(self, img):
- if random.random() < self.p:
- return ImageOps.solarize(img)
- else:
- return img
-
-# no ImageNet normalization for simclrv2
-pure_transform = transforms.Compose([transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.ToTensor()])
-
-aug_transform = transforms.Compose([transforms.RandomResizedCrop(224),
- transforms.RandomHorizontalFlip(p=0.5),
- transforms.RandomApply([transforms.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
- transforms.RandomGrayscale(p=0.2),
- transforms.RandomApply([transforms.GaussianBlur(kernel_size=(21,21), sigma=(0.1,2.0))], p=0.5),
- transforms.ToTensor()])
-
-ig_pure_transform = transforms.Compose([transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.ToTensor()])
-
-ig_transform_colorjitter = transforms.Compose([transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.RandomApply([transforms.ColorJitter(0.8, 0.8, 0.8, 0.4)], p=1),
- transforms.ToTensor()])
-
-ig_transform_blur = transforms.Compose([transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.RandomApply([transforms.GaussianBlur(kernel_size=(11,11), sigma=(5,5))], p=1),
- transforms.ToTensor()])
-
-ig_transform_solarize = transforms.Compose([transforms.Resize(256),
- transforms.CenterCrop(224),
- Solarization(p=1.0),
- transforms.ToTensor()])
-
-ig_transform_grayscale = transforms.Compose([transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.RandomGrayscale(p=1),
- transforms.ToTensor()])
-
-
-ig_transform_combine = transforms.Compose([transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.RandomApply([transforms.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
- transforms.RandomGrayscale(p=0.2),
- transforms.RandomApply([transforms.GaussianBlur(kernel_size=(21,21), sigma=(0.1, 2.0))], p=0.5),
- transforms.ToTensor()])
-
-pure_transform_no_shift = transforms.Compose([transforms.Resize((224, 224)),
- transforms.ToTensor()])
-
-aug_transform_no_shift = transforms.Compose([transforms.Resize((224, 224)),
- transforms.RandomApply([transforms.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
- transforms.RandomGrayscale(p=0.2),
- transforms.ToTensor()])
-
-normal_transforms = {'pure': pure_transform,
- 'aug': aug_transform}
-
-no_shift_transforms = {'pure': pure_transform_no_shift,
- 'aug': aug_transform_no_shift}
-
-ig_transforms = {'pure': ig_pure_transform,
- 'color_jitter': ig_transform_colorjitter,
- 'blur': ig_transform_blur,
- 'grayscale': ig_transform_grayscale,
- 'solarize': ig_transform_solarize,
- 'combine': ig_transform_combine}
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Counter Strike 1.6 Original.md b/spaces/Benson/text-generation/Examples/Descargar Counter Strike 1.6 Original.md
deleted file mode 100644
index 50c7e6ce8ec72a2abef7eacdfea66799cf142045..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Counter Strike 1.6 Original.md
+++ /dev/null
@@ -1,149 +0,0 @@
-
-
Cómo descargar y jugar Counter-Strike 1.6 Original
Counter-Strike 1.6 original, también conocido como Half-Life: Counter-Strike o CS 1.6, es un juego de disparos táctico en primera persona que fue lanzado en 2000 por Valve. Inicialmente fue desarrollado y lanzado como una modificación de Half-Life por Minh "Gooseman" Le y Jess Cliffe en 1999, antes de ser contratados por Valve y la propiedad intelectual del juego fue adquirida.
-
El juego se desarrolla en varios lugares alrededor del mundo, donde los jugadores asumen el papel de fuerzas antiterroristas o militantes terroristas. Durante cada ronda de juego, los dos equipos tienen la tarea de derrotarse entre sí mediante el logro de los objetivos del mapa o la eliminación de todos los combatientes enemigos. Cada jugador puede personalizar su arsenal de armas y accesorios al comienzo de cada partido, con la moneda que se gana después del final de cada ronda.
-
Counter-Strike 1.6 original es uno de los juegos FPS multijugador más populares e influyentes de todos los tiempos, con millones de jugadores y fans en todo el mundo. Ha generado varias secuelas, remakes, ports, spin-offs, mods y torneos a lo largo de los años. También es uno de los títulos de esports más grandes, con equipos profesionales y jugadores compitiendo por la fama y la fortuna en varias ligas y eventos.
-
-
Algunas de las principales características y modos de juego de Counter-Strike 1.6 original son:
-
-
Diseño original: El juego tiene un aspecto clásico y auténtico, con armas originales, modelos, sonidos, mapas y menús.
-
Bots originales: El juego incluye bots incorporados (zbots) que se pueden controlar presionando el botón H. Se pueden usar para practicar sin conexión o llenar espacios vacíos en servidores en línea.
-
-
Rescate de rehenes: Este es otro modo de juego popular en Counter-Strike 1.6 original. Los antiterroristas deben rescatar a un grupo de rehenes retenidos por los terroristas en su base y escoltarlos a una zona segura en el mapa. Los terroristas ganan si impiden que los rehenes sean rescatados o eliminan a todos los antiterroristas.
-
Asesinato: Este es un modo de juego raro en Counter-Strike 1.6 original. Uno de los antiterroristas es elegido para actuar como un VIP y debe ser escoltado por sus compañeros de equipo a un lugar designado en el mapa. Los terroristas ganan si matan la cuenta si no tienes una ya. Puedes registrarte gratis en el sitio web de Steam o descargar el cliente de Steam y registrarte desde allí.
-
Inicia sesión en tu cuenta de Steam y busca el original de Counter-Strike 1.6 en la tienda de Steam. También puedes usar este enlace para acceder directamente a la página del juego: [Counter-Strike on Steam].
-
Haga clic en el "Añadir al carrito" botón y proceder a la caja. El juego cuesta $9.99 USD a partir de junio de 2023, pero puede encontrar descuentos o paquetes durante las ventas o promociones.
-
Después de completar tu pago, el juego se agregará a tu biblioteca de Steam. A continuación, puede descargarlo e instalarlo haciendo clic en el botón "Instalar" en la página del juego o en su biblioteca.
-
Una vez que el juego está instalado, puede iniciarlo haciendo clic en el botón "Jugar" en la página del juego o en su biblioteca. También puede crear un acceso directo de escritorio para facilitar el acceso.
-
-
Cómo descargar el juego de otras fuentes
-
Si no quieres comprar el juego en Steam, también puedes descargarlo de otras fuentes que ofrecen versiones gratuitas o no oficiales de Counter-Strike 1.6 original. Sin embargo, debe tener cuidado y precaución al hacerlo, ya que algunas de estas fuentes pueden contener virus, malware u otros archivos dañinos que podrían dañar su computadora o comprometer su seguridad. Estos son algunos consejos a seguir:
-
-
-
Escanee el archivo descargado con un programa antivirus o anti-malware antes de abrirlo o instalarlo. Elimine cualquier archivo que se detecte como infectado o sospechoso.
-
Asegúrese de que el archivo descargado es compatible con su sistema operativo y cumple con los requisitos mínimos del sistema para el juego. Es posible que necesite instalar software o controladores adicionales para ejecutar el juego correctamente.
-
Siga las instrucciones proporcionadas por el sitio web o el archivo para instalar y ejecutar el juego. Algunos archivos pueden requerir que los extraigas usando un programa como WinRAR o 7-Zip antes de instalarlos.
-
Ten en cuenta los riesgos y limitaciones de descargar el juego desde otras fuentes. Es posible que no puedas jugar en línea con otros jugadores que tengan la versión oficial del juego, o que puedas encontrar errores, errores o fallos que afecten tu experiencia de juego.
-
-
Cómo comprobar los requisitos del sistema y la compatibilidad
-
Antes de descargar y jugar Counter-Strike 1.6 original, usted debe asegurarse de que su computadora cumple con los requisitos mínimos del sistema para el juego y es compatible con él. Estos son algunos de los requisitos del sistema y problemas de compatibilidad que debe comprobar:
-
-
Requisito del sistema
Mínimo
Recomendado
-
Sistema operativo
Windows XP/Vista/7/8/10
Windows XP/Vista/7/8/10
-
CPU
Pentium III 500 MHz o equivalente
Pentium 4 1 GHz o equivalente
-
RAM
96 MB
256 MB
-
Tarjeta gráfica
Tarjeta de video de 16 MB con soporte para DirectX 8
Tarjeta de video de 32 MB con soporte para DirectX 9
-
Espacio en disco duro
500 MB
1 GB
-
Conexión a Internet
Banda ancha (para jugar en línea)
Banda ancha (para jugar en línea)
-
Tarjeta de sonido
Tarjeta de sonido compatible con DirectX
Tarjeta de sonido compatible con DirectX
-
-
DVD-ROM Drive (para copia física)
N/A
-
Problemas de compatibilidad
-
Algunos usuarios pueden experimentar problemas al ejecutar Counter-Strike 1.6 original en versiones más recientes de Windows, como Windows 10. Algunos de estos problemas incluyen bajo FPS, pantalla negra, retraso del ratón, problemas de sonido, etc. Para solucionar estos problemas, es posible que tenga que ajustar algunos ajustes en las propiedades del juego, como el modo de compatibilidad, modo administrador, resolución, etc. Puede encontrar más información y soluciones en varios foros y sitios web dedicados a Counter Strike 1.6 original.
-
-
Cómo jugar Counter-Strike 1.6 original
-
Ahora que has descargado e instalado Counter-Strike 1.6 original, estás listo para jugar y divertirte. Estos son algunos de los pasos y consejos básicos para ayudarte a empezar:
-
Cómo unirse a un servidor y seleccionar un equipo
-
Para jugar Counter-Strike 1.6 original en línea, es necesario unirse a un servidor que alberga el juego. Puede unirse a un servidor existente o crear su propio servidor. Estos son los pasos para unirse a un servidor existente:
-
-
Inicie el juego y haga clic en el botón "Buscar servidores" en el menú principal.
-
Aparecerá una lista de servidores disponibles, mostrando su nombre, mapa, reproductores, ping, etc. Puede filtrar la lista usando las pestañas en la parte superior o el cuadro de búsqueda en la parte inferior.
-
Seleccione un servidor al que desea unirse y haga clic en el botón "Conectar" en la parte inferior derecha. También puede hacer doble clic en el nombre del servidor o hacer clic derecho y elegir "Conectar" en el menú.
-
El juego cargará el mapa y te conectará al servidor. Es posible que deba esperar unos segundos o minutos dependiendo de la velocidad de Internet y la configuración del servidor.
-
-
A continuación, verá una pantalla que muestra los miembros de su equipo y sus resultados. También puede chatear con sus compañeros de equipo o todos los jugadores utilizando las teclas Y o U respectivamente. Pulse OK para continuar.
-
-
Para crear tu propio servidor, debes seguir estos pasos:
-
-
Inicie el juego y haga clic en el botón "Crear servidor" en el menú principal.
-
Aparecerá una ventana que le permite configurar la configuración de su servidor, como nombre, contraseña, mapa, modo de juego, jugadores máximos, etc. También puede habilitar o desactivar bots, trucos, fuego amigo, etc.
-
Después de configurar la configuración de su servidor, haga clic en el botón "Inicio" en la parte inferior derecha. El juego cargará el mapa y creará su servidor.
-
A continuación, verá una pantalla que le pide que elija un equipo: Terroristas o Antiterroristas. También puede elegir ser un espectador y ver jugar a otros jugadores. Haga clic en el equipo al que desea unirse y pulse OK.
-
A continuación, verá una pantalla que muestra los miembros de su equipo y sus resultados. También puede chatear con sus compañeros de equipo o todos los jugadores utilizando las teclas Y o U respectivamente. Pulse OK para continuar.
-
-
Cómo comprar armas y equipos
-
Al comienzo de cada ronda, usted tiene una cantidad limitada de tiempo y dinero para comprar armas y equipos para usted y sus compañeros de equipo. Puede comprar artículos desde el menú de compra presionando la tecla B o usando la rueda del ratón. Estos son algunos de los artículos que puede comprar:
-
-
Pistolas: Estas son armas secundarias que son baratas y fáciles de usar, pero tienen poco daño y precisión. Algunos ejemplos son Glock, USP, Desert Eagle, etc.
-
Rifles: Estas son armas primarias que son caras y poderosas, pero tienen un alto retroceso y peso. Algunos ejemplos son AK-47, M4A1, AWP, etc.
-
Escopetas: Estas son armas primarias que son baratas y eficaces a corta distancia, pero tienen baja precisión y capacidad de munición. Algunos ejemplos son XM1014, M3 Super 90, etc.
-
-
Ametralladoras: Estas son armas primarias que son muy caras y de fuego pesado, pero tienen un alto retroceso y ruido. Algunos ejemplos son M249 Para, Negev, etc.
-
Granadas: Estos son elementos desechables que pueden causar daño o efectos a enemigos o aliados. Algunos ejemplos son la granada HE (explosiva), flashbang (cegadora), granada de humo (oscurecimiento), etc.
-
Armadura: Este es un artículo que puede protegerte de balas y granadas. Puedes comprar kevlar (armadura corporal) o kevlar + casco (armadura para la cabeza).
-
Gafas de visión nocturna: Este es un elemento que puede ayudarte a ver en áreas oscuras. Puedes activarlo o desactivarlo presionando la tecla N.
-
Desactivar kit: Este es un elemento que puede ayudarle a desactivar la bomba más rápido. Solo puede comprarla si es un antiterrorista.
-
-
También puedes comprar artículos para tus compañeros de equipo usando el menú de compra o tirándolos al suelo. Puede soltar elementos pulsando la tecla G o utilizando la rueda del ratón. También puedes solicitar elementos a tus compañeros de equipo usando los comandos de radio o el chat.
-
Cómo comunicarse con compañeros de equipo y usar comandos de radio
-
La comunicación es muy importante en el original de Counter-Strike 1.6, ya que puede ayudarte a coordinar con tus compañeros de equipo y compartir información sobre la ubicación, el estado y las acciones del enemigo. Puede comunicarse con sus compañeros de equipo mediante el chat de voz, el chat de texto o los comandos de radio.
-
Para usar el chat de voz, necesitas tener un micrófono y activarlo en las opciones del juego. A continuación, puede presionar y mantener pulsada la tecla K para hablar con sus compañeros de equipo. También puede ajustar el volumen y silenciar a otros jugadores en las opciones del juego.
-
Para usar el chat de texto, debe presionar la tecla Y para chatear con todos los jugadores o la tecla U para chatear solo con sus compañeros de equipo. A continuación, puede escribir su mensaje y pulse Enter para enviarlo. También puedes usar algunos comandos y atajos en el chat, como /me, /quit, /timeleft, etc.
-
-
Cómo completar objetivos y ganar rondas
-
El objetivo principal de Counter-Strike 1.6 original es completar los objetivos de tu equipo y ganar rondas contra el equipo enemigo. Los objetivos varían según el modo de juego y el mapa, pero por lo general implican plantar o desactivar una bomba, rescatar o proteger a los rehenes, asesinar o proteger a un VIP, o escapar o evitar una fuga.
-
Para ganar una ronda, necesitas completar el objetivo de tu equipo o eliminar a todos los jugadores enemigos antes de que se acabe el tiempo. El límite de tiempo para cada ronda suele ser de 2 minutos y 30 segundos, pero puede variar dependiendo de la configuración del servidor. Si ningún equipo completa su objetivo o elimina a todos los jugadores enemigos antes de que acabe el tiempo, la ronda terminará en un empate.
-
Para ganar un partido, necesitas ganar más rondas que el equipo enemigo. El número de rondas para cada partido suele ser de 30, pero puede variar dependiendo de la configuración del servidor. Si ambos equipos ganan un número igual de rondas después de 30 rondas, el partido terminará en un empate.
-
Consejos y trucos para Counter-Strike 1.6 original
-
Counter-Strike 1.6 original es un juego que requiere habilidad, estrategia, trabajo en equipo y práctica para dominar. Estos son algunos consejos y trucos que pueden ayudarte a mejorar tu jugabilidad y rendimiento:
-
Cómo mejorar tu puntería y control de retroceso
-
Apuntar y controlar el retroceso son dos de las habilidades más importantes en Counter-Strike 1.6 original, ya que determinan con qué precisión y eficacia puedes disparar a tus enemigos. Estos son algunos consejos para mejorar tu puntería y control de retroceso:
-
-
Práctica: La mejor manera de mejorar tu puntería y control de retroceso es practicar de forma regular y consistente. Puedes practicar offline con bots o online con otros jugadores en diferentes mapas y modos. También puede utilizar mapas de entrenamiento o servidores dedicados que ofrecen varios ejercicios y desafíos para apuntar y controlar el retroceso.
-
-
Punto de mira: El punto de mira es el símbolo que muestra dónde irán tus balas cuando dispares. Usted debe elegir un punto de mira que sea cómodo y visible para usted, pero no demasiado grande o distracción. Puedes personalizar tu punto de mira en las opciones del juego o mediante comandos de consola.
-
Posicionamiento: La posición de tu punto de mira en la pantalla afecta la rapidez y precisión con la que puedes apuntar a tus enemigos. Siempre debes mantener tu punto de mira a la altura de la cabeza y cerca de las esquinas o bordes de paredes, puertas, ventanas, etc. De esta manera, puedes reducir la distancia y el tiempo que necesitas para mover tu punto de mira para disparar a tus enemigos.
-
Movimiento: El movimiento de tu personaje afecta la precisión y estabilidad de tus disparos. Siempre debes dejar de moverte antes de disparar, ya que moverte mientras disparas hará que tus balas se extiendan y se desvíen de tu punto de mira. Puedes usar la tecla shift para caminar lenta y silenciosamente, o la tecla ctrl para agacharte y bajar tu perfil.
-
Estallido: La técnica de estallido consiste en disparar algunas balas a la vez, en lugar de rociar o tocar. De esta manera, puede controlar mejor el retroceso y la propagación de sus disparos, y conservar su munición y precisión. Debes disparar de 2 a 4 balas por disparo, dependiendo del arma y la distancia. También debe hacer una pausa breve entre cada ráfaga para dejar que el retroceso se restablezca.
-
Pulverización: La técnica de pulverización consiste en disparar muchas balas a la vez, sin detenerse o detenerse. De esta manera, puedes infligir más daño y suprimir a tus enemigos más rápido, pero a costa de precisión y munición. Solo debes rociar cuando estás muy cerca de tus enemigos, o cuando no tienes otra opción. También debe aprender los patrones de pulverización de diferentes armas y compensarlos moviendo el ratón en la dirección opuesta.
-
-
-
Cómo usar los auriculares y ajustar el volumen
-
El sonido es otro aspecto importante del original de Counter-Strike 1.6, ya que puede ayudarte a escuchar y localizar a tus enemigos y aliados, así como otros sonidos como pasos, disparos, granadas, etc. Estos son algunos consejos para usar auriculares y ajustar tu volumen:
-
-
Auriculares: Siempre debe usar auriculares en lugar de altavoces al reproducir Counter-Strike 1.6 original, ya que los auriculares pueden proporcionar una mejor calidad de sonido y direccionalidad que los altavoces. Los auriculares también pueden bloquear los ruidos externos y las distracciones que pueden interferir con su juego. Usted debe elegir los auriculares que son cómodos y caben bien en sus oídos, y que tienen buen balance de graves y agudos.
-
Volumen: Debes ajustar tu volumen a un nivel lo suficientemente alto para que puedas escuchar todos los sonidos del juego con claridad, pero no demasiado fuerte para que te duela los oídos o cause daño auditivo. También debes evitar usar cualquier potenciador de sonido o ecualizador que pueda distorsionar o alterar los sonidos originales del juego.
-
Configuración: Deberías revisar y ajustar tu configuración de sonido en las opciones del juego o usando comandos de consola. Debes habilitar el sonido 3D o HRTF (función de transferencia relacionada con la cabeza) si está disponible, ya que pueden mejorar la conciencia espacial y el realismo de los sonidos en el juego. También debes desactivar cualquier música o sonido ambiental que pueda distraerte o molestarte durante el juego.
-
-
Cómo permanecer quieto cuando se dispara y moverse constantemente cuando no
-
El movimiento es otra habilidad importante en Counter-Strike 1.6 original, ya que afecta a lo rápido y ágil que estás en el campo de batalla. Aquí hay algunos consejos para quedarse quieto al disparar y moverse constantemente cuando no:
-
-
-
Muévete constantemente cuando no dispares: Cuando no estés disparando, siempre debes seguir moviéndote alrededor del mapa, ya que quedarte quieto te hará un objetivo fácil para tus enemigos. Puede usar las teclas W, A, S, D para avanzar, izquierda, hacia atrás, derecha respectivamente. También puede utilizar las teclas Q y E para inclinarse hacia la izquierda o hacia la derecha, respectivamente. También puede utilizar el ratón para mirar a su alrededor y apuntar. Debes moverte de forma impredecible y aleatoria, y evitar correr en líneas rectas o permanecer en áreas abiertas.
-
Strafe: Strafing es una técnica que consiste en moverse de lado mientras se mira hacia adelante. De esta manera, puedes esquivar el fuego enemigo y mantener tu puntería al mismo tiempo. Puede pulsar las teclas A o D mientras mueve el ratón en la dirección opuesta. También puede usar las teclas Q y E para inclinarse hacia la izquierda o hacia la derecha mientras se desvía.
-
Bunny hop: Bunny hopping es una técnica que consiste en saltar repetidamente mientras se avanza. De esta manera, puede aumentar su velocidad y movilidad, y hacerse más difícil de golpear. Puede saltar saltando presionando las teclas W y barra espaciadora alternativamente mientras mueve el ratón ligeramente hacia la izquierda o la derecha.
-
-
Cómo disparar a través de las paredes y los puntos comunes de pre-fuego
-
Disparar a través de paredes y puntos comunes previos al disparo son dos técnicas avanzadas que pueden darte una ventaja sobre tus enemigos en Counter-Strike 1.6 original. Aquí hay algunos consejos para disparar a través de las paredes y los puntos comunes pre-fuego:
-
-
Dispara a través de las paredes: Algunas paredes y objetos en Counter-Strike 1.6 original son penetrables, lo que significa que puedes disparar a través de ellos y golpear a tus enemigos detrás de ellos. Puedes usar esto para sorprender o dañar a tus enemigos sin exponerte. Puedes saber si una pared u objeto es penetrable disparándole y viendo si deja un agujero de bala o una marca. También puede utilizar el comando de consola r_decals 0 para eliminar todas las calcomanías del mapa, lo que facilita ver las paredes y objetos penetrables.
-
-
-
Cómo mantener la calma y tomar el tiro
-
Mantener la calma y tomar el disparo son dos habilidades esenciales en Counter-Strike 1.6 original, ya que afectan a lo bien que se realiza bajo presión y lo seguro que está en sus habilidades. Aquí hay algunos consejos para mantener la calma y tomar la foto:
-
-
Respirar: La respiración es una forma simple pero efectiva de calmarse y relajar la mente y el cuerpo. Debe respirar profunda y lentamente, inhalando por la nariz y exhalando por la boca. También debes enfocarte en tu respiración e ignorar cualquier distracción o pensamientos negativos.
-
Piensa: Pensar es una forma crucial pero a menudo pasada por alto para mejorar la toma de decisiones y las habilidades para resolver problemas. Debes pensar de forma lógica y estratégica, analizando la situación y sopesando los pros y los contras de cada opción. También debes pensar positiva y optimistamente, creyendo en ti mismo y en tus compañeros de equipo.
-
Act: Actuar es la forma final pero más importante de ejecutar tu plan y lograr tu objetivo. Debes actuar con rapidez y confianza, confiando en tus instintos y habilidades. También debe actuar con calma y paciencia, esperando el momento adecuado y la oportunidad de tomar la foto.
-
-
Conclusión
-
En conclusión, Counter-Strike 1.6 original es un juego que ofrece mucha diversión, desafío y emoción para los fans de FPS de todas las edades y niveles. Es un juego que requiere habilidad, estrategia, trabajo en equipo y práctica para dominar, pero también te recompensa con satisfacción, disfrute y mejora. Es un juego que tiene una rica historia, una comunidad leal y un futuro brillante.
-
-
Espero que haya encontrado este artículo útil e informativo. Si tiene alguna pregunta, comentario o retroalimentación, no dude en compartirlos conmigo. Me encantaría saber de ti y ayudarte. ¡Gracias por leer y jugar feliz!
-
Preguntas frecuentes
-
Aquí están algunas de las preguntas más frecuentes sobre Counter-Strike 1.6 original:
-
P: ¿Cuál es la diferencia entre Counter-Strike 1.6 original y Counter-Strike: Fuente?
-
A: Counter-Strike: Source es un remake de Counter-Strike 1.6 original que fue lanzado en 2004 por Valve. Utiliza el motor Source, que ofrece gráficos, física, sonido y jugabilidad mejorados. Sin embargo, algunos jugadores prefieren Counter-Strike 1.6 original por su diseño clásico y auténtico, jugabilidad y sensación.
-
P: ¿Cuál es la diferencia entre Counter-Strike 1.6 original y Counter-Strike: Ofensiva Global?
-
A: Counter-Strike: Global Offensive es la última entrega de la serie Counter-Strike que fue lanzada en 2012 por Valve. Cuenta con gráficos actualizados, modos, mapas, armas y personajes, así como nuevas características como pieles, rangos, matchmaking, etc. Sin embargo, algunos jugadores prefieren Counter-Strike 1.6 original por su simplicidad y nostalgia.
-
Q: ¿Cómo puedo jugar Counter-Strike 1.6 original en un ordenador Mac o Linux?
-
A: Desafortunadamente, Counter-Strike 1.6 original no está soportado oficialmente en computadoras Mac o Linux. Sin embargo, es posible que pueda jugarlo utilizando un programa como Wine o CrossOver que le permite ejecutar aplicaciones de Windows en computadoras Mac o Linux. Puede encontrar más información e instrucciones sobre cómo hacer esto en varios sitios web y foros en línea.
-
Q: ¿Cómo puedo jugar Counter-Strike 1.6 original en un dispositivo móvil?
-
-
Q: ¿Cómo puedo jugar Counter-Strike 1.6 original con mods o mapas personalizados?
-
A: Hay muchos mods y mapas personalizados que han sido creados por fans y desarrolladores para Counter-Strike 1.6 original a lo largo de los años. Estos mods y mapas pueden añadir nuevas características, modos, armas, personajes, etc. al juego, o cambiar su apariencia, jugabilidad o dificultad. Puede encontrar y descargar estos mods y mapas de varios sitios web y servidores en línea. También puedes crear tus propios mods y mapas usando herramientas como Hammer Editor o AMX Mod X.
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/adapter.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/adapter.py
deleted file mode 100644
index 94c75e1a05b47922945c5233e90e9f936b108b66..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/adapter.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# SPDX-FileCopyrightText: 2015 Eric Larson
-#
-# SPDX-License-Identifier: Apache-2.0
-
-import types
-import functools
-import zlib
-
-from pip._vendor.requests.adapters import HTTPAdapter
-
-from .controller import CacheController, PERMANENT_REDIRECT_STATUSES
-from .cache import DictCache
-from .filewrapper import CallbackFileWrapper
-
-
-class CacheControlAdapter(HTTPAdapter):
- invalidating_methods = {"PUT", "PATCH", "DELETE"}
-
- def __init__(
- self,
- cache=None,
- cache_etags=True,
- controller_class=None,
- serializer=None,
- heuristic=None,
- cacheable_methods=None,
- *args,
- **kw
- ):
- super(CacheControlAdapter, self).__init__(*args, **kw)
- self.cache = DictCache() if cache is None else cache
- self.heuristic = heuristic
- self.cacheable_methods = cacheable_methods or ("GET",)
-
- controller_factory = controller_class or CacheController
- self.controller = controller_factory(
- self.cache, cache_etags=cache_etags, serializer=serializer
- )
-
- def send(self, request, cacheable_methods=None, **kw):
- """
- Send a request. Use the request information to see if it
- exists in the cache and cache the response if we need to and can.
- """
- cacheable = cacheable_methods or self.cacheable_methods
- if request.method in cacheable:
- try:
- cached_response = self.controller.cached_request(request)
- except zlib.error:
- cached_response = None
- if cached_response:
- return self.build_response(request, cached_response, from_cache=True)
-
- # check for etags and add headers if appropriate
- request.headers.update(self.controller.conditional_headers(request))
-
- resp = super(CacheControlAdapter, self).send(request, **kw)
-
- return resp
-
- def build_response(
- self, request, response, from_cache=False, cacheable_methods=None
- ):
- """
- Build a response by making a request or using the cache.
-
- This will end up calling send and returning a potentially
- cached response
- """
- cacheable = cacheable_methods or self.cacheable_methods
- if not from_cache and request.method in cacheable:
- # Check for any heuristics that might update headers
- # before trying to cache.
- if self.heuristic:
- response = self.heuristic.apply(response)
-
- # apply any expiration heuristics
- if response.status == 304:
- # We must have sent an ETag request. This could mean
- # that we've been expired already or that we simply
- # have an etag. In either case, we want to try and
- # update the cache if that is the case.
- cached_response = self.controller.update_cached_response(
- request, response
- )
-
- if cached_response is not response:
- from_cache = True
-
- # We are done with the server response, read a
- # possible response body (compliant servers will
- # not return one, but we cannot be 100% sure) and
- # release the connection back to the pool.
- response.read(decode_content=False)
- response.release_conn()
-
- response = cached_response
-
- # We always cache the 301 responses
- elif int(response.status) in PERMANENT_REDIRECT_STATUSES:
- self.controller.cache_response(request, response)
- else:
- # Wrap the response file with a wrapper that will cache the
- # response when the stream has been consumed.
- response._fp = CallbackFileWrapper(
- response._fp,
- functools.partial(
- self.controller.cache_response, request, response
- ),
- )
- if response.chunked:
- super_update_chunk_length = response._update_chunk_length
-
- def _update_chunk_length(self):
- super_update_chunk_length()
- if self.chunk_left == 0:
- self._fp._close()
-
- response._update_chunk_length = types.MethodType(
- _update_chunk_length, response
- )
-
- resp = super(CacheControlAdapter, self).build_response(request, response)
-
- # See if we should invalidate the cache.
- if request.method in self.invalidating_methods and resp.ok:
- cache_url = self.controller.cache_url(request.url)
- self.cache.delete(cache_url)
-
- # Give the request a from_cache attr to let people use it
- resp.from_cache = from_cache
-
- return resp
-
- def close(self):
- self.cache.close()
- super(CacheControlAdapter, self).close()
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/codingstatemachine.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/codingstatemachine.py
deleted file mode 100644
index 8ed4a8773b8404c2705aa8728e5fd692362ba168..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/codingstatemachine.py
+++ /dev/null
@@ -1,90 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is mozilla.org code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-import logging
-
-from .codingstatemachinedict import CodingStateMachineDict
-from .enums import MachineState
-
-
-class CodingStateMachine:
- """
- A state machine to verify a byte sequence for a particular encoding. For
- each byte the detector receives, it will feed that byte to every active
- state machine available, one byte at a time. The state machine changes its
- state based on its previous state and the byte it receives. There are 3
- states in a state machine that are of interest to an auto-detector:
-
- START state: This is the state to start with, or a legal byte sequence
- (i.e. a valid code point) for a character has been identified.
-
- ME state: This indicates that the state machine identified a byte sequence
- that is specific to the charset it is designed for and that
- there is no other possible encoding which can contain this byte
- sequence. This will lead to an immediate positive answer for
- the detector.
-
- ERROR state: This indicates the state machine identified an illegal byte
- sequence for that encoding. This will lead to an immediate
- negative answer for this encoding. Detector will exclude this
- encoding from consideration from here on.
- """
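-
- # Assumed usage sketch, not part of the original module:
- #
- #     sm = CodingStateMachine(some_sm_model)   # any CodingStateMachineDict
- #     for byte in buf:
- #         if sm.next_state(byte) == MachineState.ERROR:
- #             break  # this encoding is ruled out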
-
- def __init__(self, sm: CodingStateMachineDict) -> None:
- self._model = sm
- self._curr_byte_pos = 0
- self._curr_char_len = 0
- self._curr_state = MachineState.START
- self.active = True
- self.logger = logging.getLogger(__name__)
- self.reset()
-
- def reset(self) -> None:
- self._curr_state = MachineState.START
-
- def next_state(self, c: int) -> int:
- # for each byte we get its class
- # if it is first byte, we also get byte length
- byte_class = self._model["class_table"][c]
- if self._curr_state == MachineState.START:
- self._curr_byte_pos = 0
- self._curr_char_len = self._model["char_len_table"][byte_class]
- # from byte's class and state_table, we get its next state
- curr_state = self._curr_state * self._model["class_factor"] + byte_class
- self._curr_state = self._model["state_table"][curr_state]
- self._curr_byte_pos += 1
- return self._curr_state
-
- def get_current_charlen(self) -> int:
- return self._curr_char_len
-
- def get_coding_state_machine(self) -> str:
- return self._model["name"]
-
- @property
- def language(self) -> str:
- return self._model["language"]
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/repr.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/repr.py
deleted file mode 100644
index f284bcafa6ab2e1c9ae51be54107836e68cfb0d3..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/repr.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import inspect
-from functools import partial
-from typing import (
- Any,
- Callable,
- Iterable,
- List,
- Optional,
- Tuple,
- Type,
- TypeVar,
- Union,
- overload,
-)
-
-T = TypeVar("T")
-
-
-Result = Iterable[Union[Any, Tuple[Any], Tuple[str, Any], Tuple[str, Any, Any]]]
-RichReprResult = Result
-
-
-class ReprError(Exception):
- """An error occurred when attempting to build a repr."""
-
-
-@overload
-def auto(cls: Optional[Type[T]]) -> Type[T]:
- ...
-
-
-@overload
-def auto(*, angular: bool = False) -> Callable[[Type[T]], Type[T]]:
- ...
-
-
-def auto(
- cls: Optional[Type[T]] = None, *, angular: Optional[bool] = None
-) -> Union[Type[T], Callable[[Type[T]], Type[T]]]:
- """Class decorator to create __repr__ from __rich_repr__"""
-
- def do_replace(cls: Type[T], angular: Optional[bool] = None) -> Type[T]:
- def auto_repr(self: T) -> str:
- """Create repr string from __rich_repr__"""
- repr_str: List[str] = []
- append = repr_str.append
-
- angular: bool = getattr(self.__rich_repr__, "angular", False) # type: ignore[attr-defined]
- for arg in self.__rich_repr__(): # type: ignore[attr-defined]
- if isinstance(arg, tuple):
- if len(arg) == 1:
- append(repr(arg[0]))
- else:
- key, value, *default = arg
- if key is None:
- append(repr(value))
- else:
- if default and default[0] == value:
- continue
- append(f"{key}={value!r}")
- else:
- append(repr(arg))
- if angular:
- return f"<{self.__class__.__name__} {' '.join(repr_str)}>"
- else:
- return f"{self.__class__.__name__}({', '.join(repr_str)})"
-
- def auto_rich_repr(self: Type[T]) -> Result:
- """Auto generate __rich_repr__ from the signature of __init__"""
- try:
- signature = inspect.signature(self.__init__)
- for name, param in signature.parameters.items():
- if param.kind == param.POSITIONAL_ONLY:
- yield getattr(self, name)
- elif param.kind in (
- param.POSITIONAL_OR_KEYWORD,
- param.KEYWORD_ONLY,
- ):
- if param.default == param.empty:
- yield getattr(self, param.name)
- else:
- yield param.name, getattr(self, param.name), param.default
- except Exception as error:
- raise ReprError(
- f"Failed to auto generate __rich_repr__; {error}"
- ) from None
-
- if not hasattr(cls, "__rich_repr__"):
- auto_rich_repr.__doc__ = "Build a rich repr"
- cls.__rich_repr__ = auto_rich_repr # type: ignore[attr-defined]
-
- auto_repr.__doc__ = "Return repr(self)"
- cls.__repr__ = auto_repr # type: ignore[assignment]
- if angular is not None:
- cls.__rich_repr__.angular = angular # type: ignore[attr-defined]
- return cls
-
- if cls is None:
- return partial(do_replace, angular=angular)
- else:
- return do_replace(cls, angular=angular)
-
-
-@overload
-def rich_repr(cls: Optional[Type[T]]) -> Type[T]:
- ...
-
-
-@overload
-def rich_repr(*, angular: bool = False) -> Callable[[Type[T]], Type[T]]:
- ...
-
-
-def rich_repr(
- cls: Optional[Type[T]] = None, *, angular: bool = False
-) -> Union[Type[T], Callable[[Type[T]], Type[T]]]:
- if cls is None:
- return auto(angular=angular)
- else:
- return auto(cls)
-
-
-if __name__ == "__main__":
-
- @auto
- class Foo:
- def __rich_repr__(self) -> Result:
- yield "foo"
- yield "bar", {"shopping": ["eggs", "ham", "pineapple"]}
- yield "buy", "hand sanitizer"
-
- foo = Foo()
- from pip._vendor.rich.console import Console
-
- console = Console()
-
- console.rule("Standard repr")
- console.print(foo)
-
- console.print(foo, width=60)
- console.print(foo, width=30)
-
- console.rule("Angular repr")
- Foo.__rich_repr__.angular = True # type: ignore[attr-defined]
-
- console.print(foo)
-
- console.print(foo, width=60)
- console.print(foo, width=30)
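
The `__main__` demo above renders a `Foo` instance through Rich's console. As a smaller sketch of the `auto` decorator alone (assuming the public `rich` package, which behaves the same as this vendored copy), note how keyword arguments that still hold their default value are omitted from the generated repr:

```python
# Minimal sketch of rich.repr.auto: __repr__ is derived from __init__'s
# signature, and defaults reported by __rich_repr__ are skipped.
import rich.repr

@rich.repr.auto
class Request:
    def __init__(self, url: str, timeout: float = 30.0) -> None:
        self.url = url
        self.timeout = timeout

print(repr(Request("https://example.com")))               # Request('https://example.com')
print(repr(Request("https://example.com", timeout=5.0)))  # Request('https://example.com', timeout=5.0)
```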
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/__init__.py
deleted file mode 100644
index 5acd7687d642f06de84b38f5842c41ae14d5f24a..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-from distutils.command.bdist import bdist
-import sys
-
-if 'egg' not in bdist.format_commands:
- try:
- bdist.format_commands['egg'] = ('bdist_egg', "Python .egg file")
- except TypeError:
- # For backward compatibility with older distutils (stdlib)
- bdist.format_command['egg'] = ('bdist_egg', "Python .egg file")
- bdist.format_commands.append('egg')
-
-del bdist, sys
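
For reference, a hypothetical check of the registration this module performs (assuming setuptools is installed; the exact distutils internals vary between versions):

```python
# Hypothetical check: importing setuptools.command runs the registration
# above, so the 'egg' format should then be known to the bdist command.
import setuptools.command  # noqa: F401  (imported for its side effect)
from distutils.command.bdist import bdist

print("egg" in bdist.format_commands)  # expected: True
```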
diff --git a/spaces/C6AI/HDRL/Dockerfile b/spaces/C6AI/HDRL/Dockerfile
deleted file mode 100644
index 68d00607e5bc2aaed0942c5eab512753fcbff559..0000000000000000000000000000000000000000
--- a/spaces/C6AI/HDRL/Dockerfile
+++ /dev/null
@@ -1,14 +0,0 @@
-FROM ghcr.io/livebook-dev/livebook:latest-cuda11.8
-
-ENV LIVEBOOK_APP_SERVICE_NAME "🐳 Hugging Face - $SPACE_TITLE"
-ENV LIVEBOOK_APP_SERVICE_URL "https://huggingface.co/spaces/$SPACE_AUTHOR_NAME/$SPACE_REPO_NAME"
-ENV LIVEBOOK_UPDATE_INSTRUCTIONS_URL "https://livebook.dev"
-ENV LIVEBOOK_WITHIN_IFRAME "true"
-ENV LIVEBOOK_DATA_PATH "/data"
-ENV LIVEBOOK_PORT 7860
-
-EXPOSE 7860
-
-USER root
-RUN mkdir -p /data
-RUN chmod 777 /data
diff --git a/spaces/CVPR/LIVE/thrust/thrust/memory/detail/host_system_resource.h b/spaces/CVPR/LIVE/thrust/thrust/memory/detail/host_system_resource.h
deleted file mode 100644
index ded1c4d0bfac5efed867743b5e1a1ad70e736cb3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/memory/detail/host_system_resource.h
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- * Copyright 2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// #include the host system's memory_resource header
-#define __THRUST_HOST_SYSTEM_MEMORY_HEADER <__THRUST_HOST_SYSTEM_ROOT/memory_resource.h>
-#include __THRUST_HOST_SYSTEM_MEMORY_HEADER
-#undef __THRUST_HOST_SYSTEM_MEMORY_HEADER
-
-namespace thrust
-{
-
-typedef thrust::system::__THRUST_HOST_SYSTEM_NAMESPACE::memory_resource
- host_memory_resource;
-
-} // end thrust
-
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py
deleted file mode 100644
index c8340c723fad8e07e2fc62daaa3912487498814b..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py
+++ /dev/null
@@ -1,221 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Conditional DETR
-# Copyright (c) 2021 Microsoft. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Copied from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-# ------------------------------------------------------------------------
-
-"""
-Backbone modules.
-"""
-
-from typing import Dict, List
-
-import torch
-import torch.nn.functional as F
-import torchvision
-from torch import nn
-from torchvision.models._utils import IntermediateLayerGetter
-
-from groundingdino.util.misc import NestedTensor, clean_state_dict, is_main_process
-
-from .position_encoding import build_position_encoding
-from .swin_transformer import build_swin_transformer
-
-
-class FrozenBatchNorm2d(torch.nn.Module):
- """
- BatchNorm2d where the batch statistics and the affine parameters are fixed.
-
- Copy-paste from torchvision.misc.ops with added eps before rsqrt,
- without which any other models than torchvision.models.resnet[18,34,50,101]
- produce nans.
- """
-
- def __init__(self, n):
- super(FrozenBatchNorm2d, self).__init__()
- self.register_buffer("weight", torch.ones(n))
- self.register_buffer("bias", torch.zeros(n))
- self.register_buffer("running_mean", torch.zeros(n))
- self.register_buffer("running_var", torch.ones(n))
-
- def _load_from_state_dict(
- self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- ):
- num_batches_tracked_key = prefix + "num_batches_tracked"
- if num_batches_tracked_key in state_dict:
- del state_dict[num_batches_tracked_key]
-
- super(FrozenBatchNorm2d, self)._load_from_state_dict(
- state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- )
-
- def forward(self, x):
- # move reshapes to the beginning
- # to make it fuser-friendly
- w = self.weight.reshape(1, -1, 1, 1)
- b = self.bias.reshape(1, -1, 1, 1)
- rv = self.running_var.reshape(1, -1, 1, 1)
- rm = self.running_mean.reshape(1, -1, 1, 1)
- eps = 1e-5
- scale = w * (rv + eps).rsqrt()
- bias = b - rm * scale
- return x * scale + bias
-
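`FrozenBatchNorm2d` above is described as a copy of the torchvision implementation with an eps added before the rsqrt. A minimal sketch (assuming torch and torchvision are installed) showing that, with freshly initialised statistics, the frozen variant matches a regular `BatchNorm2d` in eval mode:

```python
# Minimal sketch: torchvision.ops.FrozenBatchNorm2d keeps weight/bias and the
# running statistics as fixed buffers; with default values it reproduces
# nn.BatchNorm2d in eval mode (both use eps=1e-5).
import torch
from torch import nn
from torchvision.ops import FrozenBatchNorm2d

x = torch.randn(2, 8, 4, 4)
frozen = FrozenBatchNorm2d(8)        # buffers only: weight=1, bias=0, mean=0, var=1
regular = nn.BatchNorm2d(8).eval()   # same defaults, but a trainable module

print(torch.allclose(frozen(x), regular(x), atol=1e-6))  # True
```
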
-
-class BackboneBase(nn.Module):
- def __init__(
- self,
- backbone: nn.Module,
- train_backbone: bool,
- num_channels: int,
- return_interm_indices: list,
- ):
- super().__init__()
- for name, parameter in backbone.named_parameters():
- if (
- not train_backbone
- or "layer2" not in name
- and "layer3" not in name
- and "layer4" not in name
- ):
- parameter.requires_grad_(False)
-
- return_layers = {}
- for idx, layer_index in enumerate(return_interm_indices):
- return_layers.update(
- {"layer{}".format(5 - len(return_interm_indices) + idx): "{}".format(layer_index)}
- )
-
- # if len:
- # if use_stage1_feature:
- # return_layers = {"layer1": "0", "layer2": "1", "layer3": "2", "layer4": "3"}
- # else:
- # return_layers = {"layer2": "0", "layer3": "1", "layer4": "2"}
- # else:
- # return_layers = {'layer4': "0"}
- self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)
- self.num_channels = num_channels
-
- def forward(self, tensor_list: NestedTensor):
- xs = self.body(tensor_list.tensors)
- out: Dict[str, NestedTensor] = {}
- for name, x in xs.items():
- m = tensor_list.mask
- assert m is not None
- mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0]
- out[name] = NestedTensor(x, mask)
- # import ipdb; ipdb.set_trace()
- return out
-
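`BackboneBase` taps intermediate ResNet stages through the `return_layers` mapping and torchvision's `IntermediateLayerGetter`. A standalone sketch of that mapping (torchvision >= 0.13 only, mimicking `return_interm_indices=[1, 2, 3]`; the shape comments assume a 224x224 input):

```python
# Minimal sketch of the return_layers mapping built above: tap the last three
# ResNet-50 stages and rename them "1"/"2"/"3".
import torch
import torchvision
from torchvision.models._utils import IntermediateLayerGetter

resnet = torchvision.models.resnet50(weights=None)
return_layers = {"layer2": "1", "layer3": "2", "layer4": "3"}
body = IntermediateLayerGetter(resnet, return_layers=return_layers)

feats = body(torch.randn(1, 3, 224, 224))
for name, feat in feats.items():
    print(name, tuple(feat.shape))
# 1 (1, 512, 28, 28)
# 2 (1, 1024, 14, 14)
# 3 (1, 2048, 7, 7)
```
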
-
-class Backbone(BackboneBase):
- """ResNet backbone with frozen BatchNorm."""
-
- def __init__(
- self,
- name: str,
- train_backbone: bool,
- dilation: bool,
- return_interm_indices: list,
- batch_norm=FrozenBatchNorm2d,
- ):
- if name in ["resnet18", "resnet34", "resnet50", "resnet101"]:
- backbone = getattr(torchvision.models, name)(
- replace_stride_with_dilation=[False, False, dilation],
- pretrained=is_main_process(),
- norm_layer=batch_norm,
- )
- else:
- raise NotImplementedError("Unknown backbone name: {}".format(name))
- # num_channels = 512 if name in ('resnet18', 'resnet34') else 2048
- assert name not in ("resnet18", "resnet34"), "Only resnet50 and resnet101 are available."
- assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]]
- num_channels_all = [256, 512, 1024, 2048]
- num_channels = num_channels_all[4 - len(return_interm_indices) :]
- super().__init__(backbone, train_backbone, num_channels, return_interm_indices)
-
-
-class Joiner(nn.Sequential):
- def __init__(self, backbone, position_embedding):
- super().__init__(backbone, position_embedding)
-
- def forward(self, tensor_list: NestedTensor):
- xs = self[0](tensor_list)
- out: List[NestedTensor] = []
- pos = []
- for name, x in xs.items():
- out.append(x)
- # position encoding
- pos.append(self[1](x).to(x.tensors.dtype))
-
- return out, pos
-
-
-def build_backbone(args):
- """
- Useful args:
- - backbone: backbone name
- - lr_backbone:
- - dilation
- - return_interm_indices: available: [0,1,2,3], [1,2,3], [3]
- - backbone_freeze_keywords:
- - use_checkpoint: for swin only for now
-
- """
- position_embedding = build_position_encoding(args)
- train_backbone = True
- if not train_backbone:
- raise ValueError("Please set lr_backbone > 0")
- return_interm_indices = args.return_interm_indices
- assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]]
- args.backbone_freeze_keywords
- use_checkpoint = getattr(args, "use_checkpoint", False)
-
- if args.backbone in ["resnet50", "resnet101"]:
- backbone = Backbone(
- args.backbone,
- train_backbone,
- args.dilation,
- return_interm_indices,
- batch_norm=FrozenBatchNorm2d,
- )
- bb_num_channels = backbone.num_channels
- elif args.backbone in [
- "swin_T_224_1k",
- "swin_B_224_22k",
- "swin_B_384_22k",
- "swin_L_224_22k",
- "swin_L_384_22k",
- ]:
- pretrain_img_size = int(args.backbone.split("_")[-2])
- backbone = build_swin_transformer(
- args.backbone,
- pretrain_img_size=pretrain_img_size,
- out_indices=tuple(return_interm_indices),
- dilation=False,
- use_checkpoint=use_checkpoint,
- )
-
- bb_num_channels = backbone.num_features[4 - len(return_interm_indices) :]
- else:
- raise NotImplementedError("Unknown backbone {}".format(args.backbone))
-
- assert len(bb_num_channels) == len(
- return_interm_indices
- ), f"len(bb_num_channels) {len(bb_num_channels)} != len(return_interm_indices) {len(return_interm_indices)}"
-
- model = Joiner(backbone, position_embedding)
- model.num_channels = bb_num_channels
- assert isinstance(
- bb_num_channels, List
- ), "bb_num_channels is expected to be a List but {}".format(type(bb_num_channels))
- # import ipdb; ipdb.set_trace()
- return model
diff --git a/spaces/DCandE/rvc-models/infer_pack/models.py b/spaces/DCandE/rvc-models/infer_pack/models.py
deleted file mode 100644
index 96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000
--- a/spaces/DCandE/rvc-models/infer_pack/models.py
+++ /dev/null
@@ -1,982 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1 # the %1 means the product over n_har cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1) # % 1 # the %1 would prevent the following cumsum from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that the amplitude of noise in unvoiced frames is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if type(sr) == type("strr"):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ): # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is t, broadcast over time
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds): # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is t, broadcast over time
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y_lengths, ds
- ): # y (the spectrogram) is no longer needed here
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is t, broadcast over time
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- z_slice, ids_slice = commons.rand_slice_segments(
- x, y_lengths, self.segment_size
- )
-
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice
-
- def infer(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
- ): # y (the spectrogram) is no longer needed here
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is t, broadcast over time
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o, o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
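
The "1d to 2d" block in `DiscriminatorP.forward` pads the waveform to a multiple of the period and folds it into a 2-D grid with one column per period. A minimal torch-only sketch of that fold:

```python
# Minimal sketch of DiscriminatorP's 1D-to-2D fold: reflect-pad the time axis
# to a multiple of the period, then view the signal as (b, c, t // period, period).
import torch
import torch.nn.functional as F

period = 3
x = torch.randn(1, 1, 10)                # (batch, channels, time)

b, c, t = x.shape
if t % period != 0:                      # pad first
    n_pad = period - (t % period)
    x = F.pad(x, (0, n_pad), "reflect")
    t = t + n_pad
x = x.view(b, c, t // period, period)

print(x.shape)  # torch.Size([1, 1, 4, 3])
```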
diff --git a/spaces/DHEIVER/AnimeGANv2/app.py b/spaces/DHEIVER/AnimeGANv2/app.py
deleted file mode 100644
index c4a0d3562d40be0fae8e5441b76d812419a1eae0..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/AnimeGANv2/app.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from PIL import Image
-import torch
-import gradio as gr
-
-model2 = torch.hub.load(
- "AK391/animegan2-pytorch:main",
- "generator",
- pretrained=True,
- device="cpu",
- progress=False
-)
-
-model1 = torch.hub.load(
- "AK391/animegan2-pytorch:main",
- "generator",
- pretrained="face_paint_512_v1",
- device="cpu"
-)
-
-face2paint = torch.hub.load(
- 'AK391/animegan2-pytorch:main',
- 'face2paint',
- size=512,
- device="cpu",
- side_by_side=False
-)
-
-def inference(img, ver):
- if ver == 'version 2 (🔺 robustness, 🔻 stylization)':
- out = face2paint(model2, img)
- else:
- out = face2paint(model1, img)
- return out
-
-title = "AnimeGANv2"
-description = "Demo of AnimeGANv2 for face portraits. To use it, simply upload your image."
-article = ""
-
-gr.Interface(
- fn=inference,
- inputs=[
- gr.inputs.Image(type="pil"),
- gr.inputs.Radio(
- ['version 1 (🔺 stylization, 🔻 robustness)', 'version 2 (🔺 robustness, 🔻 stylization)'],
- type="value",
- default='version 2 (🔺 robustness, 🔻 stylization)',
- label='version'
- )
- ],
- outputs=gr.outputs.Image(type="pil"),
- title=title,
- description=description,
- article=article,
- allow_flagging=False,
- allow_screenshot=False
-).launch()
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/web_middlewares.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/web_middlewares.py
deleted file mode 100644
index fabcc449a2107211fd99cd59f576a2d855d0e042..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/web_middlewares.py
+++ /dev/null
@@ -1,119 +0,0 @@
-import re
-from typing import TYPE_CHECKING, Awaitable, Callable, Tuple, Type, TypeVar
-
-from .typedefs import Handler
-from .web_exceptions import HTTPPermanentRedirect, _HTTPMove
-from .web_request import Request
-from .web_response import StreamResponse
-from .web_urldispatcher import SystemRoute
-
-__all__ = (
- "middleware",
- "normalize_path_middleware",
-)
-
-if TYPE_CHECKING: # pragma: no cover
- from .web_app import Application
-
-_Func = TypeVar("_Func")
-
-
-async def _check_request_resolves(request: Request, path: str) -> Tuple[bool, Request]:
- alt_request = request.clone(rel_url=path)
-
- match_info = await request.app.router.resolve(alt_request)
- alt_request._match_info = match_info
-
- if match_info.http_exception is None:
- return True, alt_request
-
- return False, request
-
-
-def middleware(f: _Func) -> _Func:
- f.__middleware_version__ = 1 # type: ignore[attr-defined]
- return f
-
-
-_Middleware = Callable[[Request, Handler], Awaitable[StreamResponse]]
-
-
-def normalize_path_middleware(
- *,
- append_slash: bool = True,
- remove_slash: bool = False,
- merge_slashes: bool = True,
- redirect_class: Type[_HTTPMove] = HTTPPermanentRedirect,
-) -> _Middleware:
- """Factory for producing a middleware that normalizes the path of a request.
-
- Normalizing means:
- - Add or remove a trailing slash to the path.
- - Double slashes are replaced by one.
-
- The middleware returns as soon as it finds a path that resolves
- correctly. The order if both merge and append/remove are enabled is
- 1) merge slashes
- 2) append/remove slash
- 3) both merge slashes and append/remove slash.
- If the path resolves with at least one of those conditions, it will
- redirect to the new path.
-
- Only one of `append_slash` and `remove_slash` can be enabled. If both
- are `True` the factory will raise an `AssertionError`.
-
- If `append_slash` is `True` the middleware will append a slash when
- needed. If a resource is defined with trailing slash and the request
- comes without it, it will append it automatically.
-
- If `remove_slash` is `True`, `append_slash` must be `False`. When enabled
- the middleware will remove trailing slashes and redirect if the resource
- is defined
-
- If merge_slashes is True, merge multiple consecutive slashes in the
- path into one.
- """
- correct_configuration = not (append_slash and remove_slash)
- assert correct_configuration, "Cannot both remove and append slash"
-
- @middleware
- async def impl(request: Request, handler: Handler) -> StreamResponse:
- if isinstance(request.match_info.route, SystemRoute):
- paths_to_check = []
- if "?" in request.raw_path:
- path, query = request.raw_path.split("?", 1)
- query = "?" + query
- else:
- query = ""
- path = request.raw_path
-
- if merge_slashes:
- paths_to_check.append(re.sub("//+", "/", path))
- if append_slash and not request.path.endswith("/"):
- paths_to_check.append(path + "/")
- if remove_slash and request.path.endswith("/"):
- paths_to_check.append(path[:-1])
- if merge_slashes and append_slash:
- paths_to_check.append(re.sub("//+", "/", path + "/"))
- if merge_slashes and remove_slash:
- merged_slashes = re.sub("//+", "/", path)
- paths_to_check.append(merged_slashes[:-1])
-
- for path in paths_to_check:
- path = re.sub("^//+", "/", path) # SECURITY: GHSA-v6wp-4m6f-gcjg
- resolves, request = await _check_request_resolves(request, path)
- if resolves:
- raise redirect_class(request.raw_path + query)
-
- return await handler(request)
-
- return impl
-
-
-def _fix_request_current_app(app: "Application") -> _Middleware:
- @middleware
- async def impl(request: Request, handler: Handler) -> StreamResponse:
- with request.match_info.set_current_app(app):
- return await handler(request)
-
- return impl
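
The factory above is exposed publicly as `aiohttp.web.normalize_path_middleware`. A minimal usage sketch (assuming aiohttp is installed) in which `/items` and `/items//` both redirect to the registered `/items/` route:

```python
# Minimal sketch: wire the path-normalizing middleware into an aiohttp app.
from aiohttp import web

async def list_items(request: web.Request) -> web.Response:
    return web.json_response({"items": []})

app = web.Application(
    middlewares=[web.normalize_path_middleware(append_slash=True, merge_slashes=True)]
)
app.add_routes([web.get("/items/", list_items)])

if __name__ == "__main__":
    web.run_app(app, port=8080)
```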
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/chunk.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/chunk.py
deleted file mode 100644
index 076cbc4370b4471c2074cade279250a3ebec9041..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/chunk.py
+++ /dev/null
@@ -1,89 +0,0 @@
-from __future__ import annotations
-
-import math
-
-
-def calc_chunk_sizes(
- chunk_size: int | tuple[int, int] | None,
- chunk_count: int | tuple[int, int] | None,
- total_chunk_count: int | None,
- ny: int,
- nx: int,
-) -> tuple[int, int]:
- """Calculate chunk sizes.
-
- Args:
- chunk_size (int or tuple(int, int), optional): Chunk size in (y, x) directions, or the same
- size in both directions if only one is specified.
- chunk_count (int or tuple(int, int), optional): Chunk count in (y, x) directions, or the
- same count in both directions if only one is specified.
- total_chunk_count (int, optional): Total number of chunks.
- ny (int): Number of grid points in y-direction.
- nx (int): Number of grid points in x-direction.
-
- Return:
- tuple(int, int): Chunk sizes (y_chunk_size, x_chunk_size).
-
- Note:
- A maximum of one of ``chunk_size``, ``chunk_count`` and ``total_chunk_count`` may be
- specified.
- """
- if sum([chunk_size is not None, chunk_count is not None, total_chunk_count is not None]) > 1:
- raise ValueError("Only one of chunk_size, chunk_count and total_chunk_count should be set")
-
- if total_chunk_count is not None:
- max_chunk_count = (nx-1)*(ny-1)
- total_chunk_count = min(max(total_chunk_count, 1), max_chunk_count)
- if total_chunk_count == 1:
- chunk_size = 0
- elif total_chunk_count == max_chunk_count:
- chunk_size = (1, 1)
- else:
- factors = two_factors(total_chunk_count)
- if ny > nx:
- chunk_count = factors
- else:
- chunk_count = (factors[1], factors[0])
-
- if chunk_count is not None:
- if isinstance(chunk_count, tuple):
- y_chunk_count, x_chunk_count = chunk_count
- else:
- y_chunk_count = x_chunk_count = chunk_count
- x_chunk_count = min(max(x_chunk_count, 1), nx-1)
- y_chunk_count = min(max(y_chunk_count, 1), ny-1)
- chunk_size = (math.ceil((ny-1) / y_chunk_count), math.ceil((nx-1) / x_chunk_count))
-
- if chunk_size is None:
- y_chunk_size = x_chunk_size = 0
- elif isinstance(chunk_size, tuple):
- y_chunk_size, x_chunk_size = chunk_size
- else:
- y_chunk_size = x_chunk_size = chunk_size
-
- if x_chunk_size < 0 or y_chunk_size < 0:
- raise ValueError("chunk_size cannot be negative")
-
- return y_chunk_size, x_chunk_size
-
-
-def two_factors(n: int) -> tuple[int, int]:
- """Split an integer into two integer factors.
-
- The two factors will be as close as possible to the sqrt of n, and are returned in decreasing
- order. Worst case returns (n, 1).
-
- Args:
- n (int): The integer to factorize.
-
- Return:
- tuple(int, int): The two factors of n, in decreasing order.
- """
- i = math.ceil(math.sqrt(n))
- while n % i != 0:
- i -= 1
- j = n // i
- if i > j:
- return i, j
- else:
- return j, i
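
A small sketch of the chunking logic above (assuming the `contourpy` package, which ships this module): a total chunk count is factorised and the larger factor is placed along the longer grid axis.

```python
# Minimal sketch: split a 100 x 50 grid into 6 chunks. 6 factorises as 3 x 2,
# and since ny > nx the count of 3 goes to the y direction.
from contourpy.chunk import calc_chunk_sizes, two_factors

print(two_factors(6))                                  # (3, 2)
print(calc_chunk_sizes(None, None, 6, ny=100, nx=50))  # (33, 25)
```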
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/encodings/codecs.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/encodings/codecs.py
deleted file mode 100644
index 3ac0268d6a11a1be99bb2cf7fde5979da2853d4a..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/encodings/codecs.py
+++ /dev/null
@@ -1,135 +0,0 @@
-"""Extend the Python codecs module with a few encodings that are used in OpenType (name table)
-but missing from Python. See https://github.com/fonttools/fonttools/issues/236 for details."""
-
-import codecs
-import encodings
-
-
-class ExtendCodec(codecs.Codec):
- def __init__(self, name, base_encoding, mapping):
- self.name = name
- self.base_encoding = base_encoding
- self.mapping = mapping
- self.reverse = {v: k for k, v in mapping.items()}
- self.max_len = max(len(v) for v in mapping.values())
- self.info = codecs.CodecInfo(
- name=self.name, encode=self.encode, decode=self.decode
- )
- codecs.register_error(name, self.error)
-
- def _map(self, mapper, output_type, exc_type, input, errors):
- base_error_handler = codecs.lookup_error(errors)
- length = len(input)
- out = output_type()
- while input:
- # first try to use self.error as the error handler
- try:
- part = mapper(input, self.base_encoding, errors=self.name)
- out += part
- break # All converted
- except exc_type as e:
- # else convert the correct part, handle error as requested and continue
- out += mapper(input[: e.start], self.base_encoding, self.name)
- replacement, pos = base_error_handler(e)
- out += replacement
- input = input[pos:]
- return out, length
-
- def encode(self, input, errors="strict"):
- return self._map(codecs.encode, bytes, UnicodeEncodeError, input, errors)
-
- def decode(self, input, errors="strict"):
- return self._map(codecs.decode, str, UnicodeDecodeError, input, errors)
-
- def error(self, e):
- if isinstance(e, UnicodeDecodeError):
- for end in range(e.start + 1, e.end + 1):
- s = e.object[e.start : end]
- if s in self.mapping:
- return self.mapping[s], end
- elif isinstance(e, UnicodeEncodeError):
- for end in range(e.start + 1, e.start + self.max_len + 1):
- s = e.object[e.start : end]
- if s in self.reverse:
- return self.reverse[s], end
- e.encoding = self.name
- raise e
-
-
-_extended_encodings = {
- "x_mac_japanese_ttx": (
- "shift_jis",
- {
- b"\xFC": chr(0x007C),
- b"\x7E": chr(0x007E),
- b"\x80": chr(0x005C),
- b"\xA0": chr(0x00A0),
- b"\xFD": chr(0x00A9),
- b"\xFE": chr(0x2122),
- b"\xFF": chr(0x2026),
- },
- ),
- "x_mac_trad_chinese_ttx": (
- "big5",
- {
- b"\x80": chr(0x005C),
- b"\xA0": chr(0x00A0),
- b"\xFD": chr(0x00A9),
- b"\xFE": chr(0x2122),
- b"\xFF": chr(0x2026),
- },
- ),
- "x_mac_korean_ttx": (
- "euc_kr",
- {
- b"\x80": chr(0x00A0),
- b"\x81": chr(0x20A9),
- b"\x82": chr(0x2014),
- b"\x83": chr(0x00A9),
- b"\xFE": chr(0x2122),
- b"\xFF": chr(0x2026),
- },
- ),
- "x_mac_simp_chinese_ttx": (
- "gb2312",
- {
- b"\x80": chr(0x00FC),
- b"\xA0": chr(0x00A0),
- b"\xFD": chr(0x00A9),
- b"\xFE": chr(0x2122),
- b"\xFF": chr(0x2026),
- },
- ),
-}
-
-_cache = {}
-
-
-def search_function(name):
- name = encodings.normalize_encoding(name) # Rather undocumented...
- if name in _extended_encodings:
- if name not in _cache:
- base_encoding, mapping = _extended_encodings[name]
- assert name[-4:] == "_ttx"
- # Python 2 didn't have any of the encodings that we are implementing
- # in this file. Python 3 added aliases for the East Asian ones, mapping
- # them "temporarily" to the same base encoding as us, with a comment
- # suggesting that full implementation will appear some time later.
- # As such, try the Python version of the x_mac_... first, if that is found,
- # use *that* as our base encoding. This would make our encoding upgrade
- # to the full encoding when and if Python finally implements that.
- # http://bugs.python.org/issue24041
- base_encodings = [name[:-4], base_encoding]
- for base_encoding in base_encodings:
- try:
- codecs.lookup(base_encoding)
- except LookupError:
- continue
- _cache[name] = ExtendCodec(name, base_encoding, mapping)
- break
- return _cache[name].info
-
- return None
-
-
-codecs.register(search_function)
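
A minimal usage sketch (assuming fontTools is installed): importing this module registers the `x_mac_..._ttx` codecs, after which the extra byte mappings apply on top of the base encoding.

```python
# Minimal sketch: 0xFE is not valid shift_jis, so the custom error handler
# maps it to U+2122 (and back) for the x_mac_japanese_ttx codec.
import fontTools.encodings.codecs  # noqa: F401  (imported for its side effect)

print(b"Meow\xfe".decode("x_mac_japanese_ttx"))  # 'Meow™'  (0xFE -> U+2122)
print("™".encode("x_mac_japanese_ttx"))          # b'\xfe'  (U+2122 -> 0xFE)
```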
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Form-3812b7f1.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Form-3812b7f1.css
deleted file mode 100644
index 772d43d65ae1a3157ab24e69b7ecb88a3649b4fe..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Form-3812b7f1.css
+++ /dev/null
@@ -1 +0,0 @@
-div.svelte-sfqy0y{display:flex;flex-direction:inherit;flex-wrap:wrap;gap:var(--form-gap-width);box-shadow:var(--block-shadow);border:var(--block-border-width) solid var(--border-color-primary);border-radius:var(--block-radius);background:var(--border-color-primary);overflow-y:hidden}div.svelte-sfqy0y .block{box-shadow:none!important;border-width:0px!important;border-radius:0!important}.hidden.svelte-sfqy0y{display:none}
diff --git a/spaces/DUOMO-Lab/TransGPT/README.md b/spaces/DUOMO-Lab/TransGPT/README.md
deleted file mode 100644
index 433341850c56ac0ba53cf55b55b9f98b056eafe7..0000000000000000000000000000000000000000
--- a/spaces/DUOMO-Lab/TransGPT/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: TransGPT
-emoji: 🔥
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Datasculptor/DescriptionGPT/detic/custom_solver.py b/spaces/Datasculptor/DescriptionGPT/detic/custom_solver.py
deleted file mode 100644
index 0284ae14ed2e93b2664ef52ad938061f78363516..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/DescriptionGPT/detic/custom_solver.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from enum import Enum
-import itertools
-from typing import Any, Callable, Dict, Iterable, List, Set, Type, Union
-import torch
-
-from detectron2.config import CfgNode
-
-from detectron2.solver.build import maybe_add_gradient_clipping
-
-def match_name_keywords(n, name_keywords):
- out = False
- for b in name_keywords:
- if b in n:
- out = True
- break
- return out
-
-def build_custom_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim.Optimizer:
- """
- Build an optimizer from config.
- """
- params: List[Dict[str, Any]] = []
- memo: Set[torch.nn.parameter.Parameter] = set()
- custom_multiplier_name = cfg.SOLVER.CUSTOM_MULTIPLIER_NAME
- optimizer_type = cfg.SOLVER.OPTIMIZER
- for key, value in model.named_parameters(recurse=True):
- if not value.requires_grad:
- continue
- # Avoid duplicating parameters
- if value in memo:
- continue
- memo.add(value)
- lr = cfg.SOLVER.BASE_LR
- weight_decay = cfg.SOLVER.WEIGHT_DECAY
- if "backbone" in key:
- lr = lr * cfg.SOLVER.BACKBONE_MULTIPLIER
- if match_name_keywords(key, custom_multiplier_name):
- lr = lr * cfg.SOLVER.CUSTOM_MULTIPLIER
- print('Custom LR', key, lr)
- param = {"params": [value], "lr": lr}
- if optimizer_type != 'ADAMW':
- param['weight_decay'] = weight_decay
- params += [param]
-
- def maybe_add_full_model_gradient_clipping(optim): # optim: the optimizer class
- # detectron2 doesn't have full model gradient clipping now
- clip_norm_val = cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE
- enable = (
- cfg.SOLVER.CLIP_GRADIENTS.ENABLED
- and cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model"
- and clip_norm_val > 0.0
- )
-
- class FullModelGradientClippingOptimizer(optim):
- def step(self, closure=None):
- all_params = itertools.chain(*[x["params"] for x in self.param_groups])
- torch.nn.utils.clip_grad_norm_(all_params, clip_norm_val)
- super().step(closure=closure)
-
- return FullModelGradientClippingOptimizer if enable else optim
-
-
- if optimizer_type == 'SGD':
- optimizer = maybe_add_full_model_gradient_clipping(torch.optim.SGD)(
- params, cfg.SOLVER.BASE_LR, momentum=cfg.SOLVER.MOMENTUM,
- nesterov=cfg.SOLVER.NESTEROV
- )
- elif optimizer_type == 'ADAMW':
- optimizer = maybe_add_full_model_gradient_clipping(torch.optim.AdamW)(
- params, cfg.SOLVER.BASE_LR,
- weight_decay=cfg.SOLVER.WEIGHT_DECAY
- )
- else:
- raise NotImplementedError(f"no optimizer type {optimizer_type}")
- if not cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model":
- optimizer = maybe_add_gradient_clipping(cfg, optimizer)
- return optimizer
\ No newline at end of file
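-
-# Illustrative sketch of the solver config fields read above (values are placeholders,
-# not defaults from this repo):
-#
-#   cfg.SOLVER.OPTIMIZER = 'ADAMW'                # or 'SGD'
-#   cfg.SOLVER.BASE_LR = 2e-4
-#   cfg.SOLVER.WEIGHT_DECAY = 1e-4
-#   cfg.SOLVER.BACKBONE_MULTIPLIER = 0.1          # scale LR for backbone parameters
-#   cfg.SOLVER.CUSTOM_MULTIPLIER = 1.0
-#   cfg.SOLVER.CUSTOM_MULTIPLIER_NAME = []        # name keywords that get CUSTOM_MULTIPLIER
-#   cfg.SOLVER.CLIP_GRADIENTS.ENABLED = True
-#   cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE = 'full_model'
-#   cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE = 35.0
-#   optimizer = build_custom_optimizer(cfg, model)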
diff --git a/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/app_training.py b/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/app_training.py
deleted file mode 100644
index 09660a26b4d99f8ff8457a454fdddcc57d7f3756..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/app_training.py
+++ /dev/null
@@ -1,144 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-
-import gradio as gr
-
-from constants import UploadTarget
-from inference import InferencePipeline
-from trainer import Trainer
-
-
-def create_training_demo(trainer: Trainer,
- pipe: InferencePipeline | None = None) -> gr.Blocks:
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- with gr.Box():
- gr.Markdown('Training Data')
- instance_images = gr.Files(label='Instance images')
- instance_prompt = gr.Textbox(label='Instance prompt',
- max_lines=1)
- gr.Markdown('''
- - Upload images of the style you are planning on training on.
- - For an instance prompt, use a unique, made up word to avoid collisions.
- ''')
- with gr.Box():
- gr.Markdown('Output Model')
- output_model_name = gr.Text(label='Name of your model',
- max_lines=1)
- delete_existing_model = gr.Checkbox(
- label='Delete existing model of the same name',
- value=False)
- validation_prompt = gr.Text(label='Validation Prompt')
- with gr.Box():
- gr.Markdown('Upload Settings')
- with gr.Row():
- upload_to_hub = gr.Checkbox(
- label='Upload model to Hub', value=True)
- use_private_repo = gr.Checkbox(label='Private',
- value=True)
- delete_existing_repo = gr.Checkbox(
- label='Delete existing repo of the same name',
- value=False)
- upload_to = gr.Radio(
- label='Upload to',
- choices=[_.value for _ in UploadTarget],
- value=UploadTarget.LORA_LIBRARY.value)
- gr.Markdown('''
- - By default, trained models will be uploaded to [LoRA Library](https://huggingface.co/lora-library) (see [this example model](https://huggingface.co/lora-library/lora-dreambooth-sample-dog)).
- - You can also choose "Personal Profile", in which case, the model will be uploaded to https://huggingface.co/{your_username}/{model_name}.
- ''')
-
- with gr.Box():
- gr.Markdown('Training Parameters')
- with gr.Row():
- base_model = gr.Text(
- label='Base Model',
- value='stabilityai/stable-diffusion-2-1-base',
- max_lines=1)
- resolution = gr.Dropdown(choices=['512', '768'],
- value='512',
- label='Resolution')
- num_training_steps = gr.Number(
- label='Number of Training Steps', value=1000, precision=0)
- learning_rate = gr.Number(label='Learning Rate', value=0.0001)
- gradient_accumulation = gr.Number(
- label='Number of Gradient Accumulation',
- value=1,
- precision=0)
- seed = gr.Slider(label='Seed',
- minimum=0,
- maximum=100000,
- step=1,
- value=0)
- fp16 = gr.Checkbox(label='FP16', value=True)
- use_8bit_adam = gr.Checkbox(label='Use 8bit Adam', value=True)
- checkpointing_steps = gr.Number(label='Checkpointing Steps',
- value=100,
- precision=0)
- use_wandb = gr.Checkbox(label='Use W&B',
- value=False,
- interactive=bool(
- os.getenv('WANDB_API_KEY')))
- validation_epochs = gr.Number(label='Validation Epochs',
- value=100,
- precision=0)
- gr.Markdown('''
- - The base model must be a model that is compatible with [diffusers](https://github.com/huggingface/diffusers) library.
- - It takes a few minutes to download the base model first.
- - It will take about 8 minutes to train for 1000 steps with a T4 GPU.
- - You may want to try a small number of steps first, like 1, to see if everything works fine in your environment.
- - You can check the training status by pressing the "Open logs" button if you are running this on your Space.
- - You need to set the environment variable `WANDB_API_KEY` if you'd like to use [W&B](https://wandb.ai/site). See [W&B documentation](https://docs.wandb.ai/guides/track/advanced/environment-variables).
- - **Note:** Due to [this issue](https://github.com/huggingface/accelerate/issues/944), currently, training will not terminate properly if you use W&B.
- ''')
-
- remove_gpu_after_training = gr.Checkbox(
- label='Remove GPU after training',
- value=False,
- interactive=bool(os.getenv('SPACE_ID')),
- visible=False)
- run_button = gr.Button('Start Training')
-
- with gr.Box():
- gr.Markdown('Output message')
- output_message = gr.Markdown()
-
- if pipe is not None:
- run_button.click(fn=pipe.clear)
- run_button.click(fn=trainer.run,
- inputs=[
- instance_images,
- instance_prompt,
- output_model_name,
- delete_existing_model,
- validation_prompt,
- base_model,
- resolution,
- num_training_steps,
- learning_rate,
- gradient_accumulation,
- seed,
- fp16,
- use_8bit_adam,
- checkpointing_steps,
- use_wandb,
- validation_epochs,
- upload_to_hub,
- use_private_repo,
- delete_existing_repo,
- upload_to,
- remove_gpu_after_training,
- ],
- outputs=output_message)
- return demo
-
-
-if __name__ == '__main__':
- hf_token = os.getenv('HF_TOKEN')
- trainer = Trainer(hf_token)
- demo = create_training_demo(trainer)
- demo.queue(max_size=1).launch(share=False)
diff --git a/spaces/Datasculptor/MusicGen/audiocraft/modules/streaming.py b/spaces/Datasculptor/MusicGen/audiocraft/modules/streaming.py
deleted file mode 100644
index fdbdf5e90fc0c6560873d66bf273460b38e5ed7e..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/audiocraft/modules/streaming.py
+++ /dev/null
@@ -1,135 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Streaming module API that should be implemented by all Streaming components.
-"""
-
-from contextlib import contextmanager
-import typing as tp
-from torch import nn
-import torch
-
-
-State = tp.Dict[str, torch.Tensor]
-
-
-class StreamingModule(nn.Module):
- """Common API for streaming components.
-
- Each streaming component has a streaming state, which is just a dict[str, Tensor].
- By convention, the first dim of each tensor must be the batch size.
- Don't use dots in the key names, as this would clash with submodules
- (like in state_dict).
-
- If `self._is_streaming` is True, the component should use and remember
- the proper state inside `self._streaming_state`.
-
- To set a streaming component in streaming state, use
-
- with module.streaming():
- ...
-
- This will automatically reset the streaming state when exiting the context manager.
- This also automatically propagates to all streaming children module.
-
-    Some modules might also implement the `StreamingModule.flush` method, although
-    this one is trickier, as all parent modules must be StreamingModule and implement
-    it as well for it to work properly. See `StreamingSequential` below.
- """
- def __init__(self) -> None:
- super().__init__()
- self._streaming_state: State = {}
- self._is_streaming = False
-
- def _apply_named_streaming(self, fn: tp.Any):
- for name, module in self.named_modules():
- if isinstance(module, StreamingModule):
- fn(name, module)
-
- def _set_streaming(self, streaming: bool):
- def _set_streaming(name, module):
- module._is_streaming = streaming
- self._apply_named_streaming(_set_streaming)
-
- @contextmanager
- def streaming(self):
- """Context manager to enter streaming mode. Reset streaming state on exit.
- """
- self._set_streaming(True)
- try:
- yield
- finally:
- self._set_streaming(False)
- self.reset_streaming()
-
- def reset_streaming(self):
- """Reset the streaming state.
- """
- def _reset(name: str, module: StreamingModule):
- module._streaming_state.clear()
-
- self._apply_named_streaming(_reset)
-
- def get_streaming_state(self) -> State:
- """Return the streaming state, including that of sub-modules.
- """
- state: State = {}
-
- def _add(name: str, module: StreamingModule):
- if name:
- name += "."
- for key, value in module._streaming_state.items():
- state[name + key] = value
-
- self._apply_named_streaming(_add)
- return state
-
- def set_streaming_state(self, state: State):
- """Set the streaming state, including that of sub-modules.
- """
- state = dict(state)
-
- def _set(name: str, module: StreamingModule):
- if name:
- name += "."
- module._streaming_state.clear()
- for key, value in list(state.items()):
- # complexity is not ideal here, but probably fine.
- if key.startswith(name):
- local_key = key[len(name):]
- if '.' not in local_key:
- module._streaming_state[local_key] = value
- del state[key]
-
- self._apply_named_streaming(_set)
- assert len(state) == 0, list(state.keys())
-
- def flush(self, x: tp.Optional[torch.Tensor] = None):
- """Flush any remaining outputs that were waiting for completion.
- Typically, for convolutions, this will add the final padding
- and process the last buffer.
-
- This should take an optional argument `x`, which will be provided
- if a module before this one in the streaming pipeline has already
-        spit out a flushed buffer.
- """
- if x is None:
- return None
- else:
- return self(x)
-
-
-class StreamingSequential(StreamingModule, nn.Sequential):
- """A streaming compatible alternative of `nn.Sequential`.
- """
- def flush(self, x: tp.Optional[torch.Tensor] = None):
- for module in self:
- if isinstance(module, StreamingModule):
- x = module.flush(x)
- elif x is not None:
- x = module(x)
- return x
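-
-
-# Minimal usage sketch of the streaming API above (illustrative only; `MyStreamingLayer`
-# is a hypothetical StreamingModule that keeps a small left context between calls):
-#
-#   model = StreamingSequential(MyStreamingLayer(), MyStreamingLayer())
-#   with model.streaming():              # enter streaming mode; state is reset on exit
-#       for chunk in audio_chunks:       # feed the signal chunk by chunk
-#           out = model(chunk)
-#       tail = model.flush()             # flush any output still buffered by the layers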
diff --git a/spaces/DeepLearning101/Speech-Quality-Inspection_Meta-Denoiser/denoiser/augment.py b/spaces/DeepLearning101/Speech-Quality-Inspection_Meta-Denoiser/denoiser/augment.py
deleted file mode 100644
index ad4059c233efd87b20eea71c86ac7cbad93b6e77..0000000000000000000000000000000000000000
--- a/spaces/DeepLearning101/Speech-Quality-Inspection_Meta-Denoiser/denoiser/augment.py
+++ /dev/null
@@ -1,191 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# author: adefossez
-
-import random
-import torch as th
-from torch import nn
-from torch.nn import functional as F
-
-from . import dsp
-
-
-class Remix(nn.Module):
- """Remix.
- Mixes different noises with clean speech within a given batch
- """
-
- def forward(self, sources):
- noise, clean = sources
- bs, *other = noise.shape
- device = noise.device
- perm = th.argsort(th.rand(bs, device=device), dim=0)
- return th.stack([noise[perm], clean])
-
-
-class RevEcho(nn.Module):
- """
- Hacky Reverb but runs on GPU without slowing down training.
- This reverb adds a succession of attenuated echos of the input
- signal to itself. Intuitively, the delay of the first echo will happen
- after roughly 2x the radius of the room and is controlled by `first_delay`.
- Then RevEcho keeps adding echos with the same delay and further attenuation
- until the amplitude ratio between the last and first echo is 1e-3.
- The attenuation factor and the number of echos to adds is controlled
- by RT60 (measured in seconds). RT60 is the average time to get to -60dB
- (remember volume is measured over the squared amplitude so this matches
- the 1e-3 ratio).
-
- At each call to RevEcho, `first_delay`, `initial` and `RT60` are
- sampled from their range. Then, to prevent this reverb from being too regular,
- the delay time is resampled uniformly within `first_delay +- 10%`,
- as controlled by the `jitter` parameter. Finally, for a denser reverb,
- multiple trains of echos are added with different jitter noises.
-
- Args:
- - initial: amplitude of the first echo as a fraction
- of the input signal. For each sample, actually sampled from
- `[0, initial]`. Larger values means louder reverb. Physically,
- this would depend on the absorption of the room walls.
- - rt60: range of values to sample the RT60 in seconds, i.e.
- after RT60 seconds, the echo amplitude is 1e-3 of the first echo.
- The default values follow the recommendations of
- https://arxiv.org/ftp/arxiv/papers/2001/2001.08662.pdf, Section 2.4.
- Physically this would also be related to the absorption of the
- room walls and there is likely a relation between `RT60` and
- `initial`, which we ignore here.
- - first_delay: range of values to sample the first echo delay in seconds.
- The default values are equivalent to sampling a room of 3 to 10 meters.
-        - repeat: how many trains of echos with different jitters to add.
-            Higher values mean a denser reverb.
- - jitter: jitter used to make each repetition of the reverb echo train
- slightly different. For instance a jitter of 0.1 means
- the delay between two echos will be in the range `first_delay +- 10%`,
- with the jittering noise being resampled after each single echo.
- - keep_clean: fraction of the reverb of the clean speech to add back
- to the ground truth. 0 = dereverberation, 1 = no dereverberation.
- - sample_rate: sample rate of the input signals.
- """
-
- def __init__(self, proba=0.5, initial=0.3, rt60=(0.3, 1.3), first_delay=(0.01, 0.03),
- repeat=3, jitter=0.1, keep_clean=0.1, sample_rate=16000):
- super().__init__()
- self.proba = proba
- self.initial = initial
- self.rt60 = rt60
- self.first_delay = first_delay
- self.repeat = repeat
- self.jitter = jitter
- self.keep_clean = keep_clean
- self.sample_rate = sample_rate
-
- def _reverb(self, source, initial, first_delay, rt60):
- """
- Return the reverb for a single source.
- """
- length = source.shape[-1]
- reverb = th.zeros_like(source)
- for _ in range(self.repeat):
- frac = 1 # what fraction of the first echo amplitude is still here
- echo = initial * source
- while frac > 1e-3:
- # First jitter noise for the delay
- jitter = 1 + self.jitter * random.uniform(-1, 1)
- delay = min(
- 1 + int(jitter * first_delay * self.sample_rate),
- length)
- # Delay the echo in time by padding with zero on the left
- echo = F.pad(echo[:, :, :-delay], (delay, 0))
- reverb += echo
-
- # Second jitter noise for the attenuation
- jitter = 1 + self.jitter * random.uniform(-1, 1)
- # we want, with `d` the attenuation, d**(rt60 / first_ms) = 1e-3
- # i.e. log10(d) = -3 * first_ms / rt60, so that
- attenuation = 10**(-3 * jitter * first_delay / rt60)
- echo *= attenuation
- frac *= attenuation
- return reverb
-
- def forward(self, wav):
- if random.random() >= self.proba:
- return wav
- noise, clean = wav
- # Sample characteristics for the reverb
- initial = random.random() * self.initial
- first_delay = random.uniform(*self.first_delay)
- rt60 = random.uniform(*self.rt60)
-
- reverb_noise = self._reverb(noise, initial, first_delay, rt60)
- # Reverb for the noise is always added back to the noise
- noise += reverb_noise
- reverb_clean = self._reverb(clean, initial, first_delay, rt60)
- # Split clean reverb among the clean speech and noise
- clean += self.keep_clean * reverb_clean
- noise += (1 - self.keep_clean) * reverb_clean
-
- return th.stack([noise, clean])
-
-
-class BandMask(nn.Module):
- """BandMask.
- Maskes bands of frequencies. Similar to Park, Daniel S., et al.
- "Specaugment: A simple data augmentation method for automatic speech recognition."
- (https://arxiv.org/pdf/1904.08779.pdf) but over the waveform.
- """
-
- def __init__(self, maxwidth=0.2, bands=120, sample_rate=16_000):
- """__init__.
-
- :param maxwidth: the maximum width to remove
- :param bands: number of bands
- :param sample_rate: signal sample rate
- """
- super().__init__()
- self.maxwidth = maxwidth
- self.bands = bands
- self.sample_rate = sample_rate
-
- def forward(self, wav):
- bands = self.bands
- bandwidth = int(abs(self.maxwidth) * bands)
- mels = dsp.mel_frequencies(bands, 40, self.sample_rate/2) / self.sample_rate
- low = random.randrange(bands)
- high = random.randrange(low, min(bands, low + bandwidth))
- filters = dsp.LowPassFilters([mels[low], mels[high]]).to(wav.device)
- low, midlow = filters(wav)
-        # band-reject filtering: remove the [low, high] frequency band from the signal
- out = wav - midlow + low
- return out
-
-
-class Shift(nn.Module):
- """Shift."""
-
- def __init__(self, shift=8192, same=False):
- """__init__.
-
- :param shift: randomly shifts the signals up to a given factor
- :param same: shifts both clean and noisy files by the same factor
- """
- super().__init__()
- self.shift = shift
- self.same = same
-
- def forward(self, wav):
- sources, batch, channels, length = wav.shape
- length = length - self.shift
- if self.shift > 0:
- if not self.training:
- wav = wav[..., :length]
- else:
- offsets = th.randint(
- self.shift,
- [1 if self.same else sources, batch, 1, 1], device=wav.device)
- offsets = offsets.expand(sources, -1, channels, -1)
- indexes = th.arange(length, device=wav.device)
- wav = wav.gather(3, indexes + offsets)
- return wav
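-
-
-# Sketch of how these modules are typically chained in the training loop (an assumption
-# about the caller, not code from this file): clean and noise are stacked along a leading
-# "sources" dimension of shape (2, batch, channels, time), augmented, then mixed.
-#
-#   augment = nn.Sequential(Shift(8192), Remix(), BandMask(), RevEcho())
-#   sources = th.stack([noise, clean])   # (2, B, C, T)
-#   noise, clean = augment(sources)
-#   noisy = noise + clean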
diff --git a/spaces/Detomo/ai-avatar-frontend/src/setupTests.js b/spaces/Detomo/ai-avatar-frontend/src/setupTests.js
deleted file mode 100644
index 8f2609b7b3e0e3897ab3bcaad13caf6876e48699..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-avatar-frontend/src/setupTests.js
+++ /dev/null
@@ -1,5 +0,0 @@
-// jest-dom adds custom jest matchers for asserting on DOM nodes.
-// allows you to do things like:
-// expect(element).toHaveTextContent(/react/i)
-// learn more: https://github.com/testing-library/jest-dom
-import '@testing-library/jest-dom';
diff --git a/spaces/DragGan/DragGan/viz/capture_widget.py b/spaces/DragGan/DragGan/viz/capture_widget.py
deleted file mode 100644
index 72bf3cfd2d361e85f9a24e1d0c5f3abc9bba0b39..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/viz/capture_widget.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import os
-import re
-import numpy as np
-import imgui
-import PIL.Image
-from gui_utils import imgui_utils
-from . import renderer
-import torch
-import torchvision
-
-#----------------------------------------------------------------------------
-
-class CaptureWidget:
- def __init__(self, viz):
- self.viz = viz
- self.path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '_screenshots'))
- self.dump_image = False
- self.dump_gui = False
- self.defer_frames = 0
- self.disabled_time = 0
-
- def dump_png(self, image):
- viz = self.viz
- try:
- _height, _width, channels = image.shape
- print(viz.result)
- assert image.dtype == np.uint8
- os.makedirs(self.path, exist_ok=True)
- file_id = 0
- for entry in os.scandir(self.path):
- if entry.is_file():
- match = re.fullmatch(r'(\d+).*', entry.name)
- if match:
- file_id = max(file_id, int(match.group(1)) + 1)
- if channels == 1:
- pil_image = PIL.Image.fromarray(image[:, :, 0], 'L')
- else:
- pil_image = PIL.Image.fromarray(image[:, :, :3], 'RGB')
- pil_image.save(os.path.join(self.path, f'{file_id:05d}.png'))
- np.save(os.path.join(self.path, f'{file_id:05d}.npy'), viz.result.w)
- except:
- viz.result.error = renderer.CapturedException()
-
- @imgui_utils.scoped_by_object_id
- def __call__(self, show=True):
- viz = self.viz
- if show:
- with imgui_utils.grayed_out(self.disabled_time != 0):
- imgui.text('Capture')
- imgui.same_line(viz.label_w)
-
- _changed, self.path = imgui_utils.input_text('##path', self.path, 1024,
- flags=(imgui.INPUT_TEXT_AUTO_SELECT_ALL | imgui.INPUT_TEXT_ENTER_RETURNS_TRUE),
- width=(-1),
- help_text='PATH')
- if imgui.is_item_hovered() and not imgui.is_item_active() and self.path != '':
- imgui.set_tooltip(self.path)
- imgui.text(' ')
- imgui.same_line(viz.label_w)
- if imgui_utils.button('Save image', width=viz.button_w, enabled=(self.disabled_time == 0 and 'image' in viz.result)):
- self.dump_image = True
- self.defer_frames = 2
- self.disabled_time = 0.5
- imgui.same_line()
- if imgui_utils.button('Save GUI', width=viz.button_w, enabled=(self.disabled_time == 0)):
- self.dump_gui = True
- self.defer_frames = 2
- self.disabled_time = 0.5
-
- self.disabled_time = max(self.disabled_time - viz.frame_delta, 0)
- if self.defer_frames > 0:
- self.defer_frames -= 1
- elif self.dump_image:
- if 'image' in viz.result:
- self.dump_png(viz.result.image)
- self.dump_image = False
- elif self.dump_gui:
- viz.capture_next_frame()
- self.dump_gui = False
- captured_frame = viz.pop_captured_frame()
- if captured_frame is not None:
- self.dump_png(captured_frame)
-
-#----------------------------------------------------------------------------
diff --git a/spaces/ECCV2022/storydalle/dalle/models/stage2/layers.py b/spaces/ECCV2022/storydalle/dalle/models/stage2/layers.py
deleted file mode 100644
index e576ab08ef2c3706d03e99a6b2902744806023d7..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/storydalle/dalle/models/stage2/layers.py
+++ /dev/null
@@ -1,221 +0,0 @@
-# ------------------------------------------------------------------------------------
-# Minimal DALL-E
-# Copyright (c) 2021 KakaoBrain. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------------------
-# Modified from minGPT (https://github.com/karpathy/minGPT)
-# Copyright (c) 2020 Andrej Karpathy. All Rights Reserved.
-# ------------------------------------------------------------------------------------
-
-import math
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-
-
-class GELU(nn.Module):
- def __init__(self, use_approx=False):
- super().__init__()
- self.use_approx = use_approx
-
- def forward(self, x):
- if self.use_approx:
- return x * torch.sigmoid(1.702 * x)
- else:
- return F.gelu(x)
-
-
-class MultiHeadSelfAttention(nn.Module):
-
- def __init__(self,
- ctx_len: int,
- embed_dim: int,
- n_heads: int,
- resid_pdrop: float,
- attn_pdrop: float,
- attn_bias: bool,
- use_mask: bool = True):
- super().__init__()
- assert embed_dim % n_heads == 0
-
- # key, query, value projections for all heads
- self.key = nn.Linear(embed_dim, embed_dim, bias=attn_bias)
- self.query = nn.Linear(embed_dim, embed_dim, bias=attn_bias)
- self.value = nn.Linear(embed_dim, embed_dim, bias=attn_bias)
-
- # regularization
- self.attn_drop = nn.Dropout(attn_pdrop)
- self.resid_drop = nn.Dropout(resid_pdrop)
-
- # output projection
- self.proj = nn.Linear(embed_dim, embed_dim, attn_bias)
-
- self.n_heads = n_heads
- self.ctx_len = ctx_len
- self.use_mask = use_mask
- if self.use_mask:
- self.register_buffer("mask", torch.ones(ctx_len, ctx_len), persistent=False)
- self.mask = torch.tril(self.mask).view(1, ctx_len, ctx_len)
-
- def forward(self, x, use_cache=False, layer_past=None):
- B, T, C = x.shape
- x = x.transpose(0, 1).contiguous() # (B, T, C) -> (T, B, C)
-
- # calculate query, key, values for all heads in batch and move head forward to be the batch dim
- k = self.key(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs)
- q = self.query(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs)
- v = self.value(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs)
-
- if use_cache:
- present = torch.stack([k, v])
-
- if layer_past is not None:
- # print(layer_past.shape, k.shape, v.shape, q.shape)
- # print("LayerPast shape", layer_past.shape)
- past_key, past_value = layer_past
-
- if len(past_key.shape) == 4:
- _, _, seq_len, dim = past_key.shape
- k = torch.cat([past_key.reshape(-1, seq_len, dim), k], dim=-2)
- v = torch.cat([past_value.reshape(-1, seq_len, dim), v], dim=-2)
- elif len(past_key.shape) == 3:
- past_key, past_value = layer_past
- k = torch.cat([past_key, k], dim=-2)
- v = torch.cat([past_value, v], dim=-2)
- else:
- raise ValueError
-
- if use_cache and layer_past is not None:
- # Tensor shape below: (B * nh, 1, hs) X (B * nh, hs, K) -> (B * nh, 1, K)
- att = torch.bmm(q, (k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1))))
- att = F.softmax(att, dim=-1)
- att = self.attn_drop(att)
- y = torch.bmm(att, v) # (B*nh, 1, K) X (B*nh, K, hs) -> (B*nh, 1, hs)
- else:
- # Tensor shape below: (B * nh, T, hs) X (B * nh, hs, T) -> (B * nh, T, T)
- att = torch.bmm(q, (k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1))))
- if self.use_mask:
-                # TODO : Flip when not prompt tuning
- # mask = self.mask if T == self.ctx_len else self.mask[:, :T, :T]
- if T == self.ctx_len:
- mask = self.mask
- else:
- mask = torch.tril(torch.ones(T, T)).view(1, T, T).to(att.device)
- att = att.masked_fill(mask == 0, float('-inf'))
- att = F.softmax(att, dim=-1)
- att = self.attn_drop(att)
- y = torch.bmm(att, v) # (B*nh, T, T) X (B*nh, T, hs) -> (B*nh, T, hs)
- y = y.transpose(0, 1).contiguous().view(T, B, C) # re-assemble all head outputs side by side
-
- # output projection
- y = self.resid_drop(self.proj(y))
- if use_cache:
- return y.transpose(0, 1).contiguous(), present # (T, B, C) -> (B, T, C)
- else:
- return y.transpose(0, 1).contiguous() # (T, B, C) -> (B, T, C)
-
- def forward_with_context(self, x, context, mask=None):
- B, T, C = x.shape
- x = x.transpose(0, 1).contiguous() # (B, T, C) -> (T, B, C)
-
- # calculate query, key, values for all heads in batch and move head forward to be the batch dim
- q = self.query(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs)
-
- B, T_c, C = context.shape
- k = self.key(context).view(T_c, B * self.n_heads, C // self.n_heads).transpose(0, 1) # (B*nh, T, hs)
- v = self.value(context).view(T_c, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs)
-
- # Tensor shape below: (B * nh, T, hs) X (B * nh, hs, Tc) -> (B * nh, T, Tc)
- att = torch.bmm(q, (k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1))))
- att = F.softmax(att, dim=-1)
- att = self.attn_drop(att)
-        y = torch.bmm(att, v) # (B*nh, T, Tc) X (B*nh, Tc, hs) -> (B*nh, T, hs)
- y = y.transpose(0, 1).contiguous().view(T, B, C) # re-assemble all head outputs side by side
-
- # output projection
- y = self.resid_drop(self.proj(y)).transpose(0, 1).contiguous()
- if mask is not None:
- y = y.masked_fill(mask == 0, float('0.0'))
- return y # (T, B, C) -> (B, T, C)
-
-
-class Block(nn.Module):
-
- def __init__(self,
- ctx_len: int,
- embed_dim: int,
- n_heads: int,
- mlp_bias: bool,
- attn_bias: bool,
-                 resid_pdrop: float,
-                 attn_pdrop: float,
- gelu_use_approx: bool):
- super().__init__()
- self.ln1 = nn.LayerNorm(embed_dim)
- self.ln2 = nn.LayerNorm(embed_dim)
-
- self.attn = MultiHeadSelfAttention(ctx_len=ctx_len,
- embed_dim=embed_dim,
- n_heads=n_heads,
- attn_pdrop=attn_pdrop,
- resid_pdrop=resid_pdrop,
- attn_bias=attn_bias,
- use_mask=True)
- self.mlp = nn.Sequential(
- nn.Linear(embed_dim, 4 * embed_dim, bias=mlp_bias),
- GELU(gelu_use_approx),
- nn.Linear(4 * embed_dim, embed_dim, bias=mlp_bias),
- nn.Dropout(resid_pdrop),
- )
-
- def forward(self, x, layer_past=None):
- x = x + self.attn(self.ln1(x), layer_past=layer_past)
- x = x + self.mlp(self.ln2(x))
- return x
-
- def sample(self, x, layer_past=None):
- attn, present = self.attn(self.ln1(x), use_cache=True, layer_past=layer_past)
- x = x + attn
- x = x + self.mlp(self.ln2(x))
- return x, present
-
- def sample_with_context(self, x, context, context_mask, cross_attn_layer, layer_past=None):
- attn, present = self.attn(self.ln1(x), use_cache=True, layer_past=layer_past)
- x = x + attn
-
- c_attn = cross_attn_layer(x.to(device=context.device),
- context,
- context_mask.to(device=context.device))
-
- x = x + c_attn.to(device=x.device)
-
- x = x + self.mlp(self.ln2(x))
- return x, present
-
-
-class CrossAttentionLayer(nn.Module):
-
- def __init__(self,
- ctx_len: int,
- embed_dim: int,
- n_heads: int,
- attn_bias: bool,
-                 resid_pdrop: float,
-                 attn_pdrop: float):
- super().__init__()
-
- self.ln1 = nn.LayerNorm(embed_dim)
- self.ln2 = nn.LayerNorm(embed_dim)
- self.attn = MultiHeadSelfAttention(ctx_len=ctx_len,
- embed_dim=embed_dim,
- n_heads=n_heads,
- attn_pdrop=attn_pdrop,
- resid_pdrop=resid_pdrop,
- attn_bias=attn_bias,
- use_mask=False)
-
- def forward(self, x, context, context_mask=None):
- attn = self.attn.forward_with_context(self.ln1(x), self.ln2(context), context_mask)
- # x = x + attn
- # return x
- return attn
\ No newline at end of file
diff --git a/spaces/Eddycrack864/Applio-Inference/colab_for_mdx.py b/spaces/Eddycrack864/Applio-Inference/colab_for_mdx.py
deleted file mode 100644
index 274846d0b5395865a05fce0da86b96d26ac06999..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/colab_for_mdx.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import json
-import os
-import gc
-import psutil
-import requests
-import subprocess
-import time
-import logging
-import sys
-import shutil
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-first_cell_executed = False
-file_folder = "Colab-for-MDX_B"
-def first_cell_ran():
- global first_cell_executed
- if first_cell_executed:
- #print("The 'first_cell_ran' function has already been executed.")
- return
-
-
-
- first_cell_executed = True
- os.makedirs("tmp_models", exist_ok=True)
-
-
-
- class hide_opt: # hide outputs
- def __enter__(self):
- self._original_stdout = sys.stdout
- sys.stdout = open(os.devnull, "w")
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- sys.stdout.close()
- sys.stdout = self._original_stdout
-
- def get_size(bytes, suffix="B"): # read ram
- global svmem
- factor = 1024
- for unit in ["", "K", "M", "G", "T", "P"]:
- if bytes < factor:
- return f"{bytes:.2f}{unit}{suffix}"
- bytes /= factor
- svmem = psutil.virtual_memory()
-
-
- def use_uvr_without_saving():
- print("Notice: files won't be saved to personal drive.")
- print(f"Downloading {file_folder}...", end=" ")
- with hide_opt():
- #os.chdir(mounting_path)
- items_to_move = ["demucs", "diffq","julius","model","separated","tracks","mdx.py","MDX-Net_Colab.ipynb"]
- subprocess.run(["git", "clone", "https://github.com/NaJeongMo/Colab-for-MDX_B.git"])
- for item_name in items_to_move:
- item_path = os.path.join(file_folder, item_name)
- if os.path.exists(item_path):
- if os.path.isfile(item_path):
- shutil.move(item_path, now_dir)
- elif os.path.isdir(item_path):
- shutil.move(item_path, now_dir)
- try:
- shutil.rmtree(file_folder)
- except PermissionError:
- print(f"No se pudo eliminar la carpeta {file_folder}. Puede estar relacionada con Git.")
-
-
- use_uvr_without_saving()
- print("done!")
- if not os.path.exists("tracks"):
- os.mkdir("tracks")
-first_cell_ran()
\ No newline at end of file
diff --git a/spaces/Ekimetrics/climate-question-answering/climateqa/vectorstore.py b/spaces/Ekimetrics/climate-question-answering/climateqa/vectorstore.py
deleted file mode 100644
index 2f68ed4e507e6ea688e9cbf206ce2b4a2f41ec65..0000000000000000000000000000000000000000
--- a/spaces/Ekimetrics/climate-question-answering/climateqa/vectorstore.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Pinecone
-# More info at https://docs.pinecone.io/docs/langchain
-# And https://python.langchain.com/docs/integrations/vectorstores/pinecone
-import os
-import pinecone
-from langchain.vectorstores import Pinecone
-
-# LOAD ENVIRONMENT VARIABLES
-try:
- from dotenv import load_dotenv
- load_dotenv()
-except:
- pass
-
-
-def get_pinecone_vectorstore(embeddings,text_key = "content"):
-
- # initialize pinecone
- pinecone.init(
- api_key=os.getenv("PINECONE_API_KEY"), # find at app.pinecone.io
- environment=os.getenv("PINECONE_API_ENVIRONMENT"), # next to api key in console
- )
-
- index_name = os.getenv("PINECONE_API_INDEX")
- vectorstore = Pinecone.from_existing_index(index_name, embeddings,text_key = text_key)
- return vectorstore
-
-
-# def get_pinecone_retriever(vectorstore,k = 10,namespace = "vectors",sources = ["IPBES","IPCC"]):
-
-# assert isinstance(sources,list)
-
-# # Check if all elements in the list are either IPCC or IPBES
-# filter = {
-# "source": { "$in":sources},
-# }
-
-# retriever = vectorstore.as_retriever(search_kwargs={
-# "k": k,
-# "namespace":"vectors",
-# "filter":filter
-# })
-
-# return retriever
\ No newline at end of file
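-
-# Illustrative usage (the embedding model below is an example choice, not necessarily
-# the one used by this project; the PINECONE_* environment variables must be set):
-#
-#   from langchain.embeddings import HuggingFaceEmbeddings
-#   embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
-#   vectorstore = get_pinecone_vectorstore(embeddings)
-#   docs = vectorstore.similarity_search("sea level rise projections", k=10)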
diff --git a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_33966KB.py b/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_33966KB.py
deleted file mode 100644
index 73a5b836177b706c306e27875f8391c1aed4b948..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer/lib/uvr5_pack/lib_v5/nets_33966KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_33966KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16, 32)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 16)
- self.stg1_high_band_net = BaseASPPNet(2, 16)
-
- self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(8, 16)
-
- self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(16, 32)
-
- self.out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(16, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(16, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_datasets/ST_MJ_alphanumeric_train.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_datasets/ST_MJ_alphanumeric_train.py
deleted file mode 100644
index 5fc1abac0a48b9deef3ac41353dc24d3748d2426..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_datasets/ST_MJ_alphanumeric_train.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# Text Recognition Training set, including:
-# Synthetic Datasets: SynthText, Syn90k
-# Both annotations are filtered so that
-# only alphanumeric terms are left
-
-train_root = 'data/mixture'
-
-train_img_prefix1 = f'{train_root}/Syn90k/mnt/ramdisk/max/90kDICT32px'
-train_ann_file1 = f'{train_root}/Syn90k/label.lmdb'
-
-train1 = dict(
- type='OCRDataset',
- img_prefix=train_img_prefix1,
- ann_file=train_ann_file1,
- loader=dict(
- type='AnnFileLoader',
- repeat=1,
- file_format='lmdb',
- parser=dict(type='LineJsonParser', keys=['filename', 'text'])),
- pipeline=None,
- test_mode=False)
-
-train_img_prefix2 = f'{train_root}/SynthText/' + \
- 'synthtext/SynthText_patch_horizontal'
-train_ann_file2 = f'{train_root}/SynthText/alphanumeric_label.lmdb'
-
-train2 = {key: value for key, value in train1.items()}
-train2['img_prefix'] = train_img_prefix2
-train2['ann_file'] = train_ann_file2
-
-train_list = [train1, train2]
diff --git a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/model_card.md b/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/model_card.md
deleted file mode 100644
index c7bb26500b6590b64ffa6350f37be80dc88612d8..0000000000000000000000000000000000000000
--- a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/model_card.md
+++ /dev/null
@@ -1,94 +0,0 @@
-# Model Card for ImageBind
-
-Multimodal joint embedding model for image/video, text, audio, depth, IMU, and thermal images.
-Input any of the six modalities and get the same sized embedding that can be used for cross-modal and multimodal tasks.
-
-# Model Details
-
-## Model Description
-
-
-Multimodal joint embedding model for image/video, text, audio, depth, IMU, and thermal images
-
-- **Developed by:** Meta AI
-- **Model type:** Multimodal model
-- **Language(s) (NLP):** en
-- **License:** CC BY-NC-SA 4.0
-- **Resources for more information:**
- - [GitHub Repo](https://github.com/facebookresearch/ImageBind)
-
-
-# Uses
-
-
-This model is intended only for research purposes. It provides a joint embedding space for different modalities -- image/video, text, audio, depth, IMU and thermal images.
-We hope that these joint embeddings can be used for a variety of different cross-modal research, e.g., cross-modal retrieval and combining embeddings from different modalities.
-
-## Out-of-Scope Use
-
-
-
-
-This model is *NOT* intended to be used in any real world application -- commercial or otherwise.
-It may produce harmful associations with different inputs.
-The model needs to be investigated and likely re-trained on specific data for any such application.
-The model is expected to work better on web-based visual data since it was trained on such data.
-The text encoder is likely to work only on English language text because of the underlying training datasets.
-
-# Bias, Risks, and Limitations
-
-
-Open-domain joint embedding models are prone to producing specific biases, e.g., see the study from [CLIP](https://github.com/openai/CLIP/blob/main/model-card.md#bias-and-fairness).
-Since our model uses such models as initialization, it will exhibit such biases too.
-Moreover, for learning joint embeddings for other modalities such as audio, thermal, depth, and IMU we leverage datasets that are relatively small. These joint embeddings are thus limited to the concepts present in the datasets. For example, the thermal datasets we used are limited to outdoor street scenes, while the depth datasets are limited to indoor scenes.
-
-
-
-# Training Details
-
-## Training Data
-
-
-
-ImageBind uses image-paired data for training -- (image, X) where X is one of text, audio, depth, IMU or thermal data.
-In particular, we initialize and freeze the image and text encoders using an OpenCLIP ViT-H encoder.
-We train audio embeddings using Audioset, depth embeddings using the SUN RGB-D dataset, IMU using the Ego4D dataset and thermal embeddings using the LLVIP dataset.
-We provide the exact training data details in the paper.
-
-
-## Training Procedure
-
-
-Please refer to the research paper and github repo for exact details on this.
-
-# Evaluation
-
-## Testing Data, Factors & Metrics
-
-We evaluate the model on a variety of different classification benchmarks for each modality.
-The evaluation details are presented in the paper.
-The model's performance is measured using standard classification metrics such as accuracy and mAP.
-
-# Citation
-
-
-
-**BibTeX:**
-```
-@inproceedings{girdhar2023imagebind,
- title={ImageBind: One Embedding Space To Bind Them All},
- author={Girdhar, Rohit and El-Nouby, Alaaeldin and Liu, Zhuang
-and Singh, Mannat and Alwala, Kalyan Vasudev and Joulin, Armand and Misra, Ishan},
- booktitle={CVPR},
- year={2023}
-}
-```
-
-
-# Model Card Contact
-
-Please reach out to the authors at: rgirdhar@meta.com imisra@meta.com alaaelnouby@gmail.com
-
-# How to Get Started with the Model
-
-Our github repo provides a simple example to extract embeddings from images, audio etc.
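-
-Below is a rough sketch of what such an embedding-extraction call looks like. It follows the
-pattern of the upstream repo's example; the exact import paths, file names, and the availability
-of a pretrained checkpoint are assumptions and may differ in this copy of the code.
-
-```python
-import torch
-from imagebind import data
-from imagebind.models import imagebind_model
-from imagebind.models.imagebind_model import ModalityType
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model = imagebind_model.imagebind_huge(pretrained=True)  # downloads the checkpoint
-model.eval().to(device)
-
-inputs = {
-    ModalityType.TEXT: data.load_and_transform_text(["a dog", "a car"], device),
-    ModalityType.VISION: data.load_and_transform_vision_data(["dog.jpg"], device),
-}
-with torch.no_grad():
-    embeddings = model(inputs)  # dict of same-sized embeddings, one per input modality
-```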
diff --git a/spaces/FlippFuzz/whisper-webui/src/modelCache.py b/spaces/FlippFuzz/whisper-webui/src/modelCache.py
deleted file mode 100644
index 680a4b386fc37e17ed2353e72d04a646ece2c4a6..0000000000000000000000000000000000000000
--- a/spaces/FlippFuzz/whisper-webui/src/modelCache.py
+++ /dev/null
@@ -1,17 +0,0 @@
-class ModelCache:
- def __init__(self):
- self._cache = dict()
-
- def get(self, model_key: str, model_factory):
- result = self._cache.get(model_key)
-
- if result is None:
- result = model_factory()
- self._cache[model_key] = result
- return result
-
- def clear(self):
- self._cache.clear()
-
-# A global cache of models. This is mainly used by the daemon processes to avoid loading the same model multiple times.
-GLOBAL_MODEL_CACHE = ModelCache()
\ No newline at end of file
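-
-# Illustrative usage (names are placeholders): the factory is only called on a cache miss,
-# so repeated lookups with the same key reuse the already-loaded model.
-#
-#   model = GLOBAL_MODEL_CACHE.get("whisper-large", lambda: load_whisper_model("large"))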
diff --git a/spaces/GAIR/Factool/version.py b/spaces/GAIR/Factool/version.py
deleted file mode 100644
index 3dc1f76bc69e3f559bee6253b24fc93acee9e1f9..0000000000000000000000000000000000000000
--- a/spaces/GAIR/Factool/version.py
+++ /dev/null
@@ -1 +0,0 @@
-__version__ = "0.1.0"
diff --git a/spaces/GXSA/bingo/src/components/chat-scroll-anchor.tsx b/spaces/GXSA/bingo/src/components/chat-scroll-anchor.tsx
deleted file mode 100644
index ac809f4486a48e134cb69314c3d0dae5e68d614e..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/src/components/chat-scroll-anchor.tsx
+++ /dev/null
@@ -1,29 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { useInView } from 'react-intersection-observer'
-
-import { useAtBottom } from '@/lib/hooks/use-at-bottom'
-
-interface ChatScrollAnchorProps {
- trackVisibility?: boolean
-}
-
-export function ChatScrollAnchor({ trackVisibility }: ChatScrollAnchorProps) {
- const isAtBottom = useAtBottom()
- const { ref, entry, inView } = useInView({
- trackVisibility,
- delay: 100,
- rootMargin: '0px 0px -150px 0px'
- })
-
- React.useEffect(() => {
- if (isAtBottom && trackVisibility && !inView) {
- entry?.target.scrollIntoView({
- block: 'start'
- })
- }
- }, [inView, entry, isAtBottom, trackVisibility])
-
-  // Reconstructed: the original element markup was stripped; the anchor renders a node carrying `ref`.
-  return <div ref={ref} className="h-px w-full" />
-}
diff --git a/spaces/GeekTony/Gradio-Ontology/app (1).py b/spaces/GeekTony/Gradio-Ontology/app (1).py
deleted file mode 100644
index 706bd9475c92d36d9016e1f271dbb4ea003c39ea..0000000000000000000000000000000000000000
--- a/spaces/GeekTony/Gradio-Ontology/app (1).py
+++ /dev/null
@@ -1,269 +0,0 @@
-import gradio as gr
-import pandas as pd
-import json
-from collections import defaultdict
-from traceback import format_tb
-
-# Create tokenizer for biomed model
-from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
-tokenizer = AutoTokenizer.from_pretrained("d4data/biomedical-ner-all") # https://huggingface.co/d4data/biomedical-ner-all?text=asthma
-model = AutoModelForTokenClassification.from_pretrained("d4data/biomedical-ner-all")
-pipe = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
-
-# Matplotlib for entity graph
-import matplotlib.pyplot as plt
-plt.switch_backend("Agg")
-
-# Load examples from JSON
-import os
-
-# Load terminology datasets:
-basedir = os.path.dirname(__file__)
-#dataLOINC = pd.read_csv(basedir + "\\" + f'LoincTableCore.csv')
-#dataPanels = pd.read_csv(basedir + "\\" + f'PanelsAndForms-ACW1208Labeled.csv')
-#dataSNOMED = pd.read_csv(basedir + "\\" + f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t')
-#dataOMS = pd.read_csv(basedir + "\\" + f'SnomedOMS.csv')
-#dataICD10 = pd.read_csv(basedir + "\\" + f'ICD10Diagnosis.csv')
-
-dataLOINC = pd.read_csv(f'LoincTableCore.csv')
-dataPanels = pd.read_csv(f'PanelsAndForms-ACW1208Labeled.csv')
-dataSNOMED = pd.read_csv(f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t')
-dataOMS = pd.read_csv(f'SnomedOMS.csv')
-dataICD10 = pd.read_csv(f'ICD10Diagnosis.csv')
-
-dir_path = os.path.dirname(os.path.realpath(__file__))
-EXAMPLES = {}
-#with open(dir_path + "\\" + "examples.json", "r") as f:
-with open("examples.json", "r") as f:
- example_json = json.load(f)
- EXAMPLES = {x["text"]: x["label"] for x in example_json}
-
-def MatchLOINC(name):
- #basedir = os.path.dirname(__file__)
- pd.set_option("display.max_rows", None)
- #data = pd.read_csv(basedir + "\\" + f'LoincTableCore.csv')
- data = dataLOINC
- swith=data.loc[data['COMPONENT'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchLOINCPanelsandForms(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'PanelsAndForms-ACW1208Labeled.csv')
- data = dataPanels
- # Assessment Name:
- #swith=data.loc[data['ParentName'].str.contains(name, case=False, na=False)]
- # Assessment Question:
- swith=data.loc[data['LoincName'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchSNOMED(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t')
- data = dataSNOMED
- swith=data.loc[data['term'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchOMS(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'SnomedOMS.csv')
- data = dataOMS
- swith=data.loc[data['SNOMED CT'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchICD10(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'ICD10Diagnosis.csv')
- data = dataICD10
- swith=data.loc[data['Description'].str.contains(name, case=False, na=False)]
- return swith
-
-def SaveResult(text, outputfileName):
- #try:
- basedir = os.path.dirname(__file__)
- savePath = outputfileName
- print("Saving: " + text + " to " + savePath)
- from os.path import exists
- file_exists = exists(savePath)
- if file_exists:
- with open(outputfileName, "a") as f: #append
- #for line in text:
- f.write(str(text.replace("\n"," ")))
- f.write('\n')
- else:
- with open(outputfileName, "w") as f: #write
- #for line in text:
- f.write(str(text.replace("\n"," ")))
- f.write('\n')
- #except ValueError as err:
- # raise ValueError("File Save Error in SaveResult \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None
-
- return
-
-def loadFile(filename):
- try:
- basedir = os.path.dirname(__file__)
- loadPath = basedir + "\\" + filename
-
- print("Loading: " + loadPath)
-
- from os.path import exists
- file_exists = exists(loadPath)
-
- if file_exists:
- with open(loadPath, "r") as f: #read
- contents = f.read()
- print(contents)
- return contents
-
- except ValueError as err:
- raise ValueError("File Save Error in SaveResult \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None
-
- return ""
-
-def get_today_filename():
- from datetime import datetime
- date = datetime.now().strftime("%Y_%m_%d-%I.%M.%S.%p")
- #print(f"filename_{date}") 'filename_2023_01_12-03-29-22_AM'
- return f"MedNER_{date}.csv"
-
-def get_base(filename):
- basedir = os.path.dirname(__file__)
- loadPath = basedir + "\\" + filename
- #print("Loading: " + loadPath)
- return loadPath
-
-def group_by_entity(raw):
- outputFile = get_base(get_today_filename())
- out = defaultdict(int)
-
- for ent in raw:
- out[ent["entity_group"]] += 1
- myEntityGroup = ent["entity_group"]
- print("Found entity group type: " + myEntityGroup)
-
-# if (myEntityGroup in ['Sign_symptom', 'Detailed_description', 'History', 'Activity', 'Medication', 'DISEASE_DISORDER' ]):
- if (myEntityGroup not in ['Match All']):
- eterm = ent["word"].replace('#','')
- minlength = 3
- if len(eterm) > minlength:
- print("Found eterm: " + eterm)
- eterm.replace("#","")
- g1=MatchLOINC(eterm)
- g2=MatchLOINCPanelsandForms(eterm)
- g3=MatchSNOMED(eterm)
- g4=MatchOMS(eterm)
- g5=MatchICD10(eterm)
- sAll = ""
-
- print("Saving to output file " + outputFile)
- # Create harmonisation output format of input to output code, name, Text
-
- try: # 18 fields, output to labeled CSV dataset for results teaching on scored regret changes to action plan with data inputs
- col = " 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19"
-
- #LOINC
- g11 = g1['LOINC_NUM'].to_string().replace(","," ").replace("\n"," ")
- g12 = g1['COMPONENT'].to_string().replace(","," ").replace("\n"," ")
- s1 = ("LOINC," + myEntityGroup + "," + eterm + ",questions of ," + g12 + "," + g11 + ", Label,Value, Label,Value, Label,Value ")
- if g11 != 'Series([] )': SaveResult(s1, outputFile)
-
- #LOINC Panels
- g21 = g2['Loinc'].to_string().replace(","," ").replace("\n"," ")
- g22 = g2['LoincName'].to_string().replace(","," ").replace("\n"," ")
- g23 = g2['ParentLoinc'].to_string().replace(","," ").replace("\n"," ")
- g24 = g2['ParentName'].to_string().replace(","," ").replace("\n"," ")
- # s2 = ("LOINC Panel," + myEntityGroup + "," + eterm + ",name of ," + g22 + "," + g21 + ", and Parent codes of ," + g23 + ", with Parent names of ," + g24 + ", Label,Value ")
- s2 = ("LOINC Panel," + myEntityGroup + "," + eterm + ",name of ," + g22 + "," + g21 + "," + g24 + ", and Parent codes of ," + g23 + "," + ", Label,Value ")
- if g21 != 'Series([] )': SaveResult(s2, outputFile)
-
- #SNOMED
- g31 = g3['conceptId'].to_string().replace(","," ").replace("\n"," ").replace("\l"," ").replace("\r"," ")
- g32 = g3['term'].to_string().replace(","," ").replace("\n"," ").replace("\l"," ").replace("\r"," ")
- s3 = ("SNOMED Concept," + myEntityGroup + "," + eterm + ",terms of ," + g32 + "," + g31 + ", Label,Value, Label,Value, Label,Value ")
- if g31 != 'Series([] )': SaveResult(s3, outputFile)
-
- #OMS
- g41 = g4['Omaha Code'].to_string().replace(","," ").replace("\n"," ")
- g42 = g4['SNOMED CT concept ID'].to_string().replace(","," ").replace("\n"," ")
- g43 = g4['SNOMED CT'].to_string().replace(","," ").replace("\n"," ")
- g44 = g4['PR'].to_string().replace(","," ").replace("\n"," ")
- g45 = g4['S&S'].to_string().replace(","," ").replace("\n"," ")
- s4 = ("OMS," + myEntityGroup + "," + eterm + ",concepts of ," + g44 + "," + g45 + ", and SNOMED codes of ," + g43 + ", and OMS problem of ," + g42 + ", and OMS Sign Symptom of ," + g41)
- if g41 != 'Series([] )': SaveResult(s4, outputFile)
-
- #ICD10
- g51 = g5['Code'].to_string().replace(","," ").replace("\n"," ")
- g52 = g5['Description'].to_string().replace(","," ").replace("\n"," ")
- s5 = ("ICD10," + myEntityGroup + "," + eterm + ",descriptions of ," + g52 + "," + g51 + ", Label,Value, Label,Value, Label,Value ")
- if g51 != 'Series([] )': SaveResult(s5, outputFile)
-
- except ValueError as err:
- raise ValueError("Error in group by entity \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None
-
- return outputFile
-
-
-def plot_to_figure(grouped):
- fig = plt.figure()
- plt.bar(x=list(grouped.keys()), height=list(grouped.values()))
- plt.margins(0.2)
- plt.subplots_adjust(bottom=0.4)
- plt.xticks(rotation=90)
- return fig
-
-
-def ner(text):
- raw = pipe(text)
- ner_content = {
- "text": text,
- "entities": [
- {
- "entity": x["entity_group"],
- "word": x["word"],
- "score": x["score"],
- "start": x["start"],
- "end": x["end"],
- }
- for x in raw
- ],
- }
-
- outputFile = group_by_entity(raw)
- label = EXAMPLES.get(text, "Unknown")
- outputDataframe = pd.read_csv(outputFile)
- return (ner_content, outputDataframe, outputFile)
-
-demo = gr.Blocks()
-with demo:
- gr.Markdown(
- """
- # 🩺⚕️NLP Clinical Ontology Biomedical NER
- """
- )
- input = gr.Textbox(label="Note text", value="")
-
- with gr.Tab("Biomedical Entity Recognition"):
- output=[
- gr.HighlightedText(label="NER", combine_adjacent=True),
- #gr.JSON(label="Entity Counts"),
- #gr.Label(label="Rating"),
- #gr.Plot(label="Bar"),
- gr.Dataframe(label="Dataframe"),
- gr.File(label="File"),
- ]
- examples=list(EXAMPLES.keys())
- gr.Examples(examples, inputs=input)
- input.change(fn=ner, inputs=input, outputs=output)
-
- with gr.Tab("Clinical Terminology Resolution"):
- with gr.Row(variant="compact"):
- btnLOINC = gr.Button("LOINC")
- btnPanels = gr.Button("Panels")
- btnSNOMED = gr.Button("SNOMED")
- btnOMS = gr.Button("OMS")
- btnICD10 = gr.Button("ICD10")
-
- examples=list(EXAMPLES.keys())
- gr.Examples(examples, inputs=input)
- input.change(fn=ner, inputs=input, outputs=output)
-#layout="vertical"
-demo.launch(debug=True)
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/agents/transporter_image_goal.py b/spaces/Gen-Sim/Gen-Sim/cliport/agents/transporter_image_goal.py
deleted file mode 100644
index ea0cd9d993c9546abc9f51f635ac05a464c6bf48..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/agents/transporter_image_goal.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import numpy as np
-
-from cliport.utils import utils
-from cliport.agents.transporter import OriginalTransporterAgent
-from cliport.models.core.attention import Attention
-from cliport.models.core.attention_image_goal import AttentionImageGoal
-from cliport.models.core.transport_image_goal import TransportImageGoal
-
-
-class ImageGoalTransporterAgent(OriginalTransporterAgent):
- def __init__(self, name, cfg, train_ds, test_ds):
- super().__init__(name, cfg, train_ds, test_ds)
-
- def _build_model(self):
- stream_fcn = 'plain_resnet'
- self.attention = AttentionImageGoal(
- stream_fcn=(stream_fcn, None),
- in_shape=self.in_shape,
- n_rotations=1,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
- self.transport = TransportImageGoal(
- stream_fcn=(stream_fcn, None),
- in_shape=self.in_shape,
- n_rotations=self.n_rotations,
- crop_size=self.crop_size,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
-
- def attn_forward(self, inp, softmax=True):
- inp_img = inp['inp_img']
- goal_img = inp['goal_img']
-
- out = self.attention.forward(inp_img, goal_img, softmax=softmax)
- return out
-
- def attn_training_step(self, frame, goal, backprop=True, compute_err=False):
- inp_img = frame['img']
- goal_img = goal['img']
- p0, p0_theta = frame['p0'], frame['p0_theta']
-
- inp = {'inp_img': inp_img, 'goal_img': goal_img}
- out = self.attn_forward(inp, softmax=False)
- return self.attn_criterion(backprop, compute_err, inp, out, p0, p0_theta)
-
- def trans_forward(self, inp, softmax=True):
- inp_img = inp['inp_img']
- goal_img = inp['goal_img']
- p0 = inp['p0']
-
- out = self.transport.forward(inp_img, goal_img, p0, softmax=softmax)
- return out
-
- def transport_training_step(self, frame, goal, backprop=True, compute_err=False):
- inp_img = frame['img']
- goal_img = goal['img']
- p0 = frame['p0']
- p1, p1_theta = frame['p1'], frame['p1_theta']
-
- inp = {'inp_img': inp_img, 'goal_img': goal_img, 'p0': p0}
- out = self.trans_forward(inp, softmax=False)
- err, loss = self.transport_criterion(backprop, compute_err, inp, out, p0, p1, p1_theta)
- return loss, err
-
- def training_step(self, batch, batch_idx):
- self.attention.train()
- self.transport.train()
- frame, goal = batch
-
- # Get training losses.
- step = self.total_steps + 1
- loss0, err0 = self.attn_training_step(frame, goal)
- if isinstance(self.transport, Attention):
- loss1, err1 = self.attn_training_step(frame, goal)
- else:
- loss1, err1 = self.transport_training_step(frame, goal)
- total_loss = loss0 + loss1
- self.log('tr/attn/loss', loss0)
- self.log('tr/trans/loss', loss1)
- self.log('tr/loss', total_loss)
- self.total_steps = step
-
- self.trainer.train_loop.running_loss.append(total_loss)
-
- self.check_save_iteration()
-
- return dict(
- loss=total_loss,
- )
-
- def validation_step(self, batch, batch_idx):
- self.attention.eval()
- self.transport.eval()
-
- loss0, loss1 = 0, 0
- for i in range(self.val_repeats):
- frame, goal = batch
- l0, err0 = self.attn_training_step(frame, goal, backprop=False, compute_err=True)
- loss0 += l0
- if isinstance(self.transport, Attention):
- l1, err1 = self.attn_training_step(frame, goal, backprop=False, compute_err=True)
- loss1 += l1
- else:
- l1, err1 = self.transport_training_step(frame, goal, backprop=False, compute_err=True)
- loss1 += l1
- loss0 /= self.val_repeats
- loss1 /= self.val_repeats
- val_total_loss = loss0 + loss1
-
- self.trainer.evaluation_loop.trainer.train_loop.running_loss.append(val_total_loss)
-
- return dict(
- val_loss=val_total_loss,
- val_loss0=loss0,
- val_loss1=loss1,
- val_attn_dist_err=err0['dist'],
- val_attn_theta_err=err0['theta'],
- val_trans_dist_err=err1['dist'],
- val_trans_theta_err=err1['theta'],
- )
-
- def act(self, obs, info=None, goal=None): # pylint: disable=unused-argument
- """Run inference and return best action given visual observations."""
- # Get heightmap from RGB-D images.
- img = self.test_ds.get_image(obs)
- goal_img = self.test_ds.get_image(goal[0])
-
- # Attention model forward pass.
- pick_conf = self.attention.forward(img, goal_img)
- pick_conf = pick_conf.detach().cpu().numpy()
- argmax = np.argmax(pick_conf)
- argmax = np.unravel_index(argmax, shape=pick_conf.shape)
- p0_pix = argmax[:2]
- p0_theta = argmax[2] * (2 * np.pi / pick_conf.shape[2])
-
- # Transport model forward pass.
- place_conf = self.transport.forward(img, goal_img, p0_pix)
- place_conf = place_conf.permute(1, 2, 0)
- place_conf = place_conf.detach().cpu().numpy()
- argmax = np.argmax(place_conf)
- argmax = np.unravel_index(argmax, shape=place_conf.shape)
- p1_pix = argmax[:2]
- p1_theta = argmax[2] * (2 * np.pi / place_conf.shape[2])
-
- # Pixels to end effector poses.
- hmap = img[:, :, 3]
- p0_xyz = utils.pix_to_xyz(p0_pix, hmap, self.bounds, self.pix_size)
- p1_xyz = utils.pix_to_xyz(p1_pix, hmap, self.bounds, self.pix_size)
- p0_xyzw = utils.eulerXYZ_to_quatXYZW((0, 0, -p0_theta))
- p1_xyzw = utils.eulerXYZ_to_quatXYZW((0, 0, -p1_theta))
-
- return {
- 'pose0': (np.asarray(p0_xyz), np.asarray(p0_xyzw)),
- 'pose1': (np.asarray(p1_xyz), np.asarray(p1_xyzw)),
- 'pick': p0_pix,
- 'place': p1_pix,
- }
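
The `act` method above reduces each confidence volume to an action by taking a flat argmax over a (height, width, n_rotations) map and turning the winning rotation bin into an angle. A minimal NumPy-only sketch of that decoding step, using a random stand-in map rather than real attention/transport outputs:

import numpy as np

# Stand-in confidence map (height x width x n_rotations); real values would come
# from the attention/transport models above.
conf = np.random.rand(320, 160, 36)

# Flat argmax, then recover (row, col, rotation bin).
flat_idx = np.argmax(conf)
row, col, rot_bin = np.unravel_index(flat_idx, conf.shape)

pix = (row, col)                               # pixel location of the best action
theta = rot_bin * (2 * np.pi / conf.shape[2])  # rotation bin -> angle in radians
print(pix, theta)
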
diff --git a/spaces/GeorgeOrville/bingo/src/components/welcome-screen.tsx b/spaces/GeorgeOrville/bingo/src/components/welcome-screen.tsx
deleted file mode 100644
index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/src/components/welcome-screen.tsx
+++ /dev/null
@@ -1,34 +0,0 @@
-import { useBing } from '@/lib/hooks/use-bing'
-
-const exampleMessages = [
- {
- heading: '🧐 提出复杂问题',
- message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?`
- },
- {
- heading: '🙌 获取更好的答案',
- message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?'
- },
- {
- heading: '🎨 获得创意灵感',
- message: `以海盗的口吻写一首关于外太空鳄鱼的俳句`
- }
-]
-
-export function WelcomeScreen({ setInput }: Pick<ReturnType<typeof useBing>, 'setInput'>) {
-  return (
-    <div className="welcome-screen">
-      {exampleMessages.map(example => (
-        <button key={example.heading} onClick={() => setInput(example.message)}>
-          <div>{example.heading}</div>
-          <div>{example.message}</div>
-        </button>
-      ))}
-    </div>
-  )
-}
diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/config.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/config.py
deleted file mode 100644
index 1c21312f3de971bfa008254c6035cebc09f05e4c..0000000000000000000000000000000000000000
--- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/config.py
+++ /dev/null
@@ -1,45 +0,0 @@
-librispeech_datasets = {
- "train": {
- "clean": ["LibriSpeech/train-clean-100", "LibriSpeech/train-clean-360"],
- "other": ["LibriSpeech/train-other-500"]
- },
- "test": {
- "clean": ["LibriSpeech/test-clean"],
- "other": ["LibriSpeech/test-other"]
- },
- "dev": {
- "clean": ["LibriSpeech/dev-clean"],
- "other": ["LibriSpeech/dev-other"]
- },
-}
-libritts_datasets = {
- "train": {
- "clean": ["LibriTTS/train-clean-100", "LibriTTS/train-clean-360"],
- "other": ["LibriTTS/train-other-500"]
- },
- "test": {
- "clean": ["LibriTTS/test-clean"],
- "other": ["LibriTTS/test-other"]
- },
- "dev": {
- "clean": ["LibriTTS/dev-clean"],
- "other": ["LibriTTS/dev-other"]
- },
-}
-voxceleb_datasets = {
- "voxceleb1" : {
- "train": ["VoxCeleb1/wav"],
- "test": ["VoxCeleb1/test_wav"]
- },
- "voxceleb2" : {
- "train": ["VoxCeleb2/dev/aac"],
- "test": ["VoxCeleb2/test_wav"]
- }
-}
-
-other_datasets = [
- "LJSpeech-1.1",
- "VCTK-Corpus/wav48",
-]
-
-anglophone_nationalites = ["australia", "canada", "ireland", "uk", "usa"]
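
These dictionaries are plain nested mappings from split names to dataset subdirectories, so consumers typically just flatten them into a list of paths to walk. A small illustrative snippet, assuming the dictionaries above are in scope; `datasets_root` is a placeholder and not part of the original config:

from pathlib import Path

datasets_root = Path("/data")  # placeholder root directory, adjust to your layout

# Collect every LibriSpeech training directory ("clean" and "other" subsets).
train_dirs = [datasets_root / d
              for subsets in librispeech_datasets["train"].values()
              for d in subsets]
print(train_dirs)  # [PosixPath('/data/LibriSpeech/train-clean-100'), ...]
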
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r18b-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r18b-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 92accfc703fc398d2845d7dc2f1d5336f24738e8..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r18b-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = './fcn_r50-d8_512x1024_80k_cityscapes.py'
-model = dict(
- pretrained='torchvision://resnet18',
- backbone=dict(type='ResNet', depth=18),
- decode_head=dict(
- in_channels=512,
- channels=128,
- ),
- auxiliary_head=dict(in_channels=256, channels=64))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_769x769_80k_cityscapes.py
deleted file mode 100644
index 5efb61339cdbdde585f7814e9650be2e2df654ac..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_769x769_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './gcnet_r50-d8_769x769_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/exp/upernet_global_base/test_config_w32.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/exp/upernet_global_base/test_config_w32.py
deleted file mode 100644
index 56291ed20571a5d5ebcdf797c49372856a04276a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/exp/upernet_global_base/test_config_w32.py
+++ /dev/null
@@ -1,39 +0,0 @@
-_base_ = [
- '../../configs/_base_/models/upernet_uniformer.py',
- '../../configs/_base_/datasets/ade20k.py',
- '../../configs/_base_/default_runtime.py',
- '../../configs/_base_/schedules/schedule_160k.py'
-]
-model = dict(
- backbone=dict(
- type='UniFormer',
- embed_dim=[64, 128, 320, 512],
- layers=[5, 8, 20, 7],
- head_dim=64,
- drop_path_rate=0.4,
- windows=True,
- hybrid=False,
- window_size=32,
- ),
- decode_head=dict(
- in_channels=[64, 128, 320, 512],
- num_classes=150
- ),
- auxiliary_head=dict(
- in_channels=320,
- num_classes=150
- ))
-
-# AdamW optimizer, no weight decay for position embedding & layer norm in backbone
-optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-
-lr_config = dict(_delete_=True, policy='poly',
- warmup='linear',
- warmup_iters=1500,
- warmup_ratio=1e-6,
- power=1.0, min_lr=0.0, by_epoch=False)
-
-data=dict(samples_per_gpu=2)
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/__init__.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/__init__.py
deleted file mode 100644
index 3d3bdd349b9f2ae499a2fcb2ac1d2e3c77befebe..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from .drop import DropPath
-from .inverted_residual import InvertedResidual, InvertedResidualV3
-from .make_divisible import make_divisible
-from .res_layer import ResLayer
-from .se_layer import SELayer
-from .self_attention_block import SelfAttentionBlock
-from .up_conv_block import UpConvBlock
-from .weight_init import trunc_normal_
-
-__all__ = [
- 'ResLayer', 'SelfAttentionBlock', 'make_divisible', 'InvertedResidual',
- 'UpConvBlock', 'InvertedResidualV3', 'SELayer', 'DropPath', 'trunc_normal_'
-]
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/data/test_audio_dataset.py b/spaces/GrandaddyShmax/AudioCraft_Plus/tests/data/test_audio_dataset.py
deleted file mode 100644
index b591ea6137f48d0d97fcd1243c5f5d258670a474..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/data/test_audio_dataset.py
+++ /dev/null
@@ -1,352 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from functools import partial
-from itertools import product
-import json
-import math
-import os
-import random
-import typing as tp
-
-import pytest
-import torch
-from torch.utils.data import DataLoader
-
-from audiocraft.data.audio_dataset import (
- AudioDataset,
- AudioMeta,
- _get_audio_meta,
- load_audio_meta,
- save_audio_meta
-)
-from audiocraft.data.zip import PathInZip
-
-from ..common_utils import TempDirMixin, get_white_noise, save_wav
-
-
-class TestAudioMeta(TempDirMixin):
-
- def test_get_audio_meta(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(duration * sample_rate)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path('sample.wav')
- save_wav(path, wav, sample_rate)
- m = _get_audio_meta(path, minimal=True)
- assert m.path == path, 'path does not match'
- assert m.sample_rate == sample_rate, 'sample rate does not match'
- assert m.duration == duration, 'duration does not match'
- assert m.amplitude is None
- assert m.info_path is None
-
- def test_save_audio_meta(self):
- audio_meta = [
- AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')),
- AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json'))
- ]
- empty_audio_meta = []
- for idx, meta in enumerate([audio_meta, empty_audio_meta]):
- path = self.get_temp_path(f'data_{idx}_save.jsonl')
- save_audio_meta(path, meta)
- with open(path, 'r') as f:
- lines = f.readlines()
- read_meta = [AudioMeta.from_dict(json.loads(line)) for line in lines]
- assert len(read_meta) == len(meta)
- for m, read_m in zip(meta, read_meta):
- assert m == read_m
-
- def test_load_audio_meta(self):
- try:
- import dora
- except ImportError:
- dora = None # type: ignore
-
- audio_meta = [
- AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')),
- AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json'))
- ]
- empty_meta = []
- for idx, meta in enumerate([audio_meta, empty_meta]):
- path = self.get_temp_path(f'data_{idx}_load.jsonl')
- with open(path, 'w') as f:
- for m in meta:
- json_str = json.dumps(m.to_dict()) + '\n'
- f.write(json_str)
- read_meta = load_audio_meta(path)
- assert len(read_meta) == len(meta)
- for m, read_m in zip(meta, read_meta):
- if dora:
- m.path = dora.git_save.to_absolute_path(m.path)
- assert m == read_m, f'original={m}, read={read_m}'
-
-
-class TestAudioDataset(TempDirMixin):
-
- def _create_audio_files(self,
- root_name: str,
- num_examples: int,
- durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.),
- sample_rate: int = 16_000,
- channels: int = 1):
- root_dir = self.get_temp_dir(root_name)
- for i in range(num_examples):
- if isinstance(durations, float):
- duration = durations
- elif isinstance(durations, tuple) and len(durations) == 1:
- duration = durations[0]
- elif isinstance(durations, tuple) and len(durations) == 2:
- duration = random.uniform(durations[0], durations[1])
- else:
- assert False
- n_frames = int(duration * sample_rate)
- wav = get_white_noise(channels, n_frames)
- path = os.path.join(root_dir, f'example_{i}.wav')
- save_wav(path, wav, sample_rate)
- return root_dir
-
- def _create_audio_dataset(self,
- root_name: str,
- total_num_examples: int,
- durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.),
- sample_rate: int = 16_000,
- channels: int = 1,
- segment_duration: tp.Optional[float] = None,
- num_examples: int = 10,
- shuffle: bool = True,
- return_info: bool = False):
- root_dir = self._create_audio_files(root_name, total_num_examples, durations, sample_rate, channels)
- dataset = AudioDataset.from_path(root_dir,
- minimal_meta=True,
- segment_duration=segment_duration,
- num_samples=num_examples,
- sample_rate=sample_rate,
- channels=channels,
- shuffle=shuffle,
- return_info=return_info)
- return dataset
-
- def test_dataset_full(self):
- total_examples = 10
- min_duration, max_duration = 1., 4.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration),
- sample_rate=sample_rate, channels=channels, segment_duration=None)
- assert len(dataset) == total_examples
- assert dataset.sample_rate == sample_rate
- assert dataset.channels == channels
- for idx in range(len(dataset)):
- sample = dataset[idx]
- assert sample.shape[0] == channels
- assert sample.shape[1] <= int(max_duration * sample_rate)
- assert sample.shape[1] >= int(min_duration * sample_rate)
-
- def test_dataset_segment(self):
- total_examples = 10
- num_samples = 20
- min_duration, max_duration = 1., 4.
- segment_duration = 1.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples)
- assert len(dataset) == num_samples
- assert dataset.sample_rate == sample_rate
- assert dataset.channels == channels
- for idx in range(len(dataset)):
- sample = dataset[idx]
- assert sample.shape[0] == channels
- assert sample.shape[1] == int(segment_duration * sample_rate)
-
- def test_dataset_equal_audio_and_segment_durations(self):
- total_examples = 1
- num_samples = 2
- audio_duration = 1.
- segment_duration = 1.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples)
- assert len(dataset) == num_samples
- assert dataset.sample_rate == sample_rate
- assert dataset.channels == channels
- for idx in range(len(dataset)):
- sample = dataset[idx]
- assert sample.shape[0] == channels
- assert sample.shape[1] == int(segment_duration * sample_rate)
- # the random seek_time adds variability on audio read
- sample_1 = dataset[0]
- sample_2 = dataset[1]
- assert not torch.allclose(sample_1, sample_2)
-
- def test_dataset_samples(self):
- total_examples = 1
- num_samples = 2
- audio_duration = 1.
- segment_duration = 1.
- sample_rate = 16_000
- channels = 1
-
- create_dataset = partial(
- self._create_audio_dataset,
- 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples,
- )
-
- dataset = create_dataset(shuffle=True)
- # when shuffle = True, we have different inputs for the same index across epoch
- sample_1 = dataset[0]
- sample_2 = dataset[0]
- assert not torch.allclose(sample_1, sample_2)
-
- dataset_noshuffle = create_dataset(shuffle=False)
- # when shuffle = False, we have same inputs for the same index across epoch
- sample_1 = dataset_noshuffle[0]
- sample_2 = dataset_noshuffle[0]
- assert torch.allclose(sample_1, sample_2)
-
- def test_dataset_return_info(self):
- total_examples = 10
- num_samples = 20
- min_duration, max_duration = 1., 4.
- segment_duration = 1.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True)
- assert len(dataset) == num_samples
- assert dataset.sample_rate == sample_rate
- assert dataset.channels == channels
- for idx in range(len(dataset)):
- sample, segment_info = dataset[idx]
- assert sample.shape[0] == channels
- assert sample.shape[1] == int(segment_duration * sample_rate)
- assert segment_info.sample_rate == sample_rate
- assert segment_info.total_frames == int(segment_duration * sample_rate)
- assert segment_info.n_frames <= int(segment_duration * sample_rate)
- assert segment_info.seek_time >= 0
-
- def test_dataset_return_info_no_segment_duration(self):
- total_examples = 10
- num_samples = 20
- min_duration, max_duration = 1., 4.
- segment_duration = None
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True)
- assert len(dataset) == total_examples
- assert dataset.sample_rate == sample_rate
- assert dataset.channels == channels
- for idx in range(len(dataset)):
- sample, segment_info = dataset[idx]
- assert sample.shape[0] == channels
- assert sample.shape[1] == segment_info.total_frames
- assert segment_info.sample_rate == sample_rate
- assert segment_info.n_frames <= segment_info.total_frames
-
- def test_dataset_collate_fn(self):
- total_examples = 10
- num_samples = 20
- min_duration, max_duration = 1., 4.
- segment_duration = 1.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=False)
- batch_size = 4
- dataloader = DataLoader(
- dataset,
- batch_size=batch_size,
- num_workers=0
- )
- for idx, batch in enumerate(dataloader):
- assert batch.shape[0] == batch_size
-
- @pytest.mark.parametrize("segment_duration", [1.0, None])
- def test_dataset_with_meta_collate_fn(self, segment_duration):
- total_examples = 10
- num_samples = 20
- min_duration, max_duration = 1., 4.
- sample_rate = 16_000
- channels = 1
- dataset = self._create_audio_dataset(
- 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate,
- channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True)
- batch_size = 4
- dataloader = DataLoader(
- dataset,
- batch_size=batch_size,
- collate_fn=dataset.collater,
- num_workers=0
- )
- for idx, batch in enumerate(dataloader):
- wav, infos = batch
- assert wav.shape[0] == batch_size
- assert len(infos) == batch_size
-
- @pytest.mark.parametrize("segment_duration,sample_on_weight,sample_on_duration,a_hist,b_hist,c_hist", [
- [1, True, True, 0.5, 0.5, 0.0],
- [1, False, True, 0.25, 0.5, 0.25],
- [1, True, False, 0.666, 0.333, 0.0],
- [1, False, False, 0.333, 0.333, 0.333],
- [None, False, False, 0.333, 0.333, 0.333]])
- def test_sample_with_weight(self, segment_duration, sample_on_weight, sample_on_duration, a_hist, b_hist, c_hist):
- random.seed(1234)
- rng = torch.Generator()
- rng.manual_seed(1234)
-
- def _get_histogram(dataset, repetitions=20_000):
- counts = {file_meta.path: 0. for file_meta in meta}
- for _ in range(repetitions):
- file_meta = dataset.sample_file(0, rng)
- counts[file_meta.path] += 1
- return {name: count / repetitions for name, count in counts.items()}
-
- meta = [
- AudioMeta(path='a', duration=5, sample_rate=1, weight=2),
- AudioMeta(path='b', duration=10, sample_rate=1, weight=None),
- AudioMeta(path='c', duration=5, sample_rate=1, weight=0),
- ]
- dataset = AudioDataset(
- meta, segment_duration=segment_duration, sample_on_weight=sample_on_weight,
- sample_on_duration=sample_on_duration)
- hist = _get_histogram(dataset)
- assert math.isclose(hist['a'], a_hist, abs_tol=0.01)
- assert math.isclose(hist['b'], b_hist, abs_tol=0.01)
- assert math.isclose(hist['c'], c_hist, abs_tol=0.01)
-
- def test_meta_duration_filter_all(self):
- meta = [
- AudioMeta(path='a', duration=5, sample_rate=1, weight=2),
- AudioMeta(path='b', duration=10, sample_rate=1, weight=None),
- AudioMeta(path='c', duration=5, sample_rate=1, weight=0),
- ]
- try:
- AudioDataset(meta, segment_duration=11, min_segment_ratio=1)
- assert False
- except AssertionError:
- assert True
-
- def test_meta_duration_filter_long(self):
- meta = [
- AudioMeta(path='a', duration=5, sample_rate=1, weight=2),
- AudioMeta(path='b', duration=10, sample_rate=1, weight=None),
- AudioMeta(path='c', duration=5, sample_rate=1, weight=0),
- ]
- dataset = AudioDataset(meta, segment_duration=None, min_segment_ratio=1, max_audio_duration=7)
- assert len(dataset) == 2
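
The parametrized histograms in `test_sample_with_weight` above follow from a simple rule: a file's sampling probability is proportional to its weight (when `sample_on_weight` is set, with a missing weight counted as 1) times its duration (when `sample_on_duration` is set). A short sketch reproducing the expected values under that reading:

metas = [("a", 5, 2.0), ("b", 10, None), ("c", 5, 0.0)]  # (path, duration, weight)

def expected_probs(sample_on_weight, sample_on_duration):
    scores = []
    for _, duration, weight in metas:
        score = 1.0
        if sample_on_weight:
            score *= weight if weight is not None else 1.0
        if sample_on_duration:
            score *= duration
        scores.append(score)
    total = sum(scores)
    return [score / total for score in scores]

print(expected_probs(True, True))    # [0.5, 0.5, 0.0]
print(expected_probs(False, True))   # [0.25, 0.5, 0.25]
print(expected_probs(True, False))   # [0.666..., 0.333..., 0.0]
print(expected_probs(False, False))  # [0.333..., 0.333..., 0.333...]
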
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/test_fn.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/test_fn.py
deleted file mode 100644
index 039a73bf99b03ebcc827a79c4e3e1dc5736306ce..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/test_fn.py
+++ /dev/null
@@ -1,68 +0,0 @@
-from typing import List, Optional
-from torch.utils.data import DataLoader
-from torch import Tensor
-import random
-import torch, os
-
-from .utils import plot_batch_of_pairs
-from .configs.base_config import base_cfg
-from .dataset_fn import TrainDataset
-
-def test_rgbd_dataset(cfg: base_cfg) -> None:
- train_dataset = TrainDataset(
- cfg.train_dataset_working_dir_path, cfg.image_size,
- inputs=cfg.inputs, outputs=cfg.outputs,
- )
- train_dataloader = DataLoader(
- train_dataset, batch_size=cfg.batch_size,
- shuffle=True, num_workers=cfg.num_workers
- )
-
- for i_batch, (images, depths, gts, indices) in enumerate(train_dataloader):
- print(f'{i_batch}, {images.shape}, {depths.shape}, {indices.shape}')
- plot_batch_of_pairs(images, depths, gts)
- break
-
-def test_data_augmentation(
- cfg: base_cfg,
- data_augmentation_version: int,
- index: Optional[int] = None,
-) -> None:
- cfg.data_augmentation_version = data_augmentation_version
- dataset = TrainDataset(cfg)
-
- images: List[Tensor] = []
- depths: List[Tensor] = []
- gts: List[Tensor] = []
-
- if index is None:
- index = random.randrange(len(dataset))
- print(index)
-
- # No transformation
- image, depth, gt, i = dataset.__getitem__(index, False)
- print(torch.max(depth), torch.min(depth))
- images.append(torch.unsqueeze(image, 0))
- depths.append(torch.unsqueeze(depth, 0))
- gts.append(torch.unsqueeze(gt, 0))
-
-    # Apply the random transformation 5 times
- for _ in range(5):
- image, depth, gt, i = dataset[index]
- print(torch.max(depth), torch.min(depth))
- images.append(torch.unsqueeze(image, 0))
- depths.append(torch.unsqueeze(depth, 0))
- gts.append(torch.unsqueeze(gt, 0))
-
- os.makedirs(cfg.latex_dir_path, exist_ok=True)
- plot_batch_of_pairs(
- torch.cat(images),
- torch.cat(depths),
- torch.cat(gts),
- os.path.join(
- cfg.latex_dir_path,
- f'data_augmentation_v{cfg.data_augmentation_version}.png'
- )
- )
-
-# test_data_augmentation(cfg, index=10)
diff --git a/spaces/Haokko/AronaTTS/transforms.py b/spaces/Haokko/AronaTTS/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/Haokko/AronaTTS/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
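
`piecewise_rational_quadratic_transform` expects per-element unnormalized bin widths and heights with a trailing `num_bins` dimension and, in the tailed case, derivatives with `num_bins - 1` entries (padded to `num_bins + 1` internally). A shape-level smoke test with random parameters, assuming the functions above are importable; the inputs are kept inside the tail bound so they all go through the spline:

import torch

batch, length, num_bins = 4, 8, 10
inputs = torch.rand(batch, length) * 2 - 1  # values in [-1, 1), inside the tail bound

unnormalized_widths = torch.randn(batch, length, num_bins)
unnormalized_heights = torch.randn(batch, length, num_bins)
unnormalized_derivatives = torch.randn(batch, length, num_bins - 1)

outputs, logabsdet = piecewise_rational_quadratic_transform(
    inputs,
    unnormalized_widths,
    unnormalized_heights,
    unnormalized_derivatives,
    inverse=False,
    tails='linear',
    tail_bound=1.0,
)
print(outputs.shape, logabsdet.shape)  # both torch.Size([4, 8])
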
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/language_pair_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/language_pair_dataset.py
deleted file mode 100644
index ff3e14bf14770638524ef6067b558e455dbe5f2b..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/language_pair_dataset.py
+++ /dev/null
@@ -1,471 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import numpy as np
-import torch
-from fairseq.data import FairseqDataset, data_utils
-
-
-logger = logging.getLogger(__name__)
-
-
-def collate(
- samples,
- pad_idx,
- eos_idx,
- left_pad_source=True,
- left_pad_target=False,
- input_feeding=True,
- pad_to_length=None,
- pad_to_multiple=1,
-):
- if len(samples) == 0:
- return {}
-
- def merge(key, left_pad, move_eos_to_beginning=False, pad_to_length=None):
- return data_utils.collate_tokens(
- [s[key] for s in samples],
- pad_idx,
- eos_idx,
- left_pad,
- move_eos_to_beginning,
- pad_to_length=pad_to_length,
- pad_to_multiple=pad_to_multiple,
- )
-
- def check_alignment(alignment, src_len, tgt_len):
- if alignment is None or len(alignment) == 0:
- return False
- if (
- alignment[:, 0].max().item() >= src_len - 1
- or alignment[:, 1].max().item() >= tgt_len - 1
- ):
- logger.warning("alignment size mismatch found, skipping alignment!")
- return False
- return True
-
- def compute_alignment_weights(alignments):
- """
- Given a tensor of shape [:, 2] containing the source-target indices
- corresponding to the alignments, a weight vector containing the
- inverse frequency of each target index is computed.
- For e.g. if alignments = [[5, 7], [2, 3], [1, 3], [4, 2]], then
- a tensor containing [1., 0.5, 0.5, 1] should be returned (since target
- index 3 is repeated twice)
- """
- align_tgt = alignments[:, 1]
- _, align_tgt_i, align_tgt_c = torch.unique(
- align_tgt, return_inverse=True, return_counts=True
- )
- align_weights = align_tgt_c[align_tgt_i[np.arange(len(align_tgt))]]
- return 1.0 / align_weights.float()
-
- id = torch.LongTensor([s["id"] for s in samples])
- src_tokens = merge(
- "source",
- left_pad=left_pad_source,
- pad_to_length=pad_to_length["source"] if pad_to_length is not None else None,
- )
- # sort by descending source length
- src_lengths = torch.LongTensor(
- [s["source"].ne(pad_idx).long().sum() for s in samples]
- )
- src_lengths, sort_order = src_lengths.sort(descending=True)
- id = id.index_select(0, sort_order)
- src_tokens = src_tokens.index_select(0, sort_order)
-
- prev_output_tokens = None
- target = None
- if samples[0].get("target", None) is not None:
- target = merge(
- "target",
- left_pad=left_pad_target,
- pad_to_length=pad_to_length["target"]
- if pad_to_length is not None
- else None,
- )
- target = target.index_select(0, sort_order)
- tgt_lengths = torch.LongTensor(
- [s["target"].ne(pad_idx).long().sum() for s in samples]
- ).index_select(0, sort_order)
- ntokens = tgt_lengths.sum().item()
-
- if samples[0].get("prev_output_tokens", None) is not None:
- prev_output_tokens = merge("prev_output_tokens", left_pad=left_pad_target)
- elif input_feeding:
- # we create a shifted version of targets for feeding the
- # previous output token(s) into the next decoder step
- prev_output_tokens = merge(
- "target",
- left_pad=left_pad_target,
- move_eos_to_beginning=True,
- pad_to_length=pad_to_length["target"]
- if pad_to_length is not None
- else None,
- )
- else:
- ntokens = src_lengths.sum().item()
-
- batch = {
- "id": id,
- "nsentences": len(samples),
- "ntokens": ntokens,
- "net_input": {"src_tokens": src_tokens, "src_lengths": src_lengths,},
- "target": target,
- }
- if prev_output_tokens is not None:
- batch["net_input"]["prev_output_tokens"] = prev_output_tokens.index_select(
- 0, sort_order
- )
-
- if samples[0].get("alignment", None) is not None:
- bsz, tgt_sz = batch["target"].shape
- src_sz = batch["net_input"]["src_tokens"].shape[1]
-
- offsets = torch.zeros((len(sort_order), 2), dtype=torch.long)
- offsets[:, 1] += torch.arange(len(sort_order), dtype=torch.long) * tgt_sz
- if left_pad_source:
- offsets[:, 0] += src_sz - src_lengths
- if left_pad_target:
- offsets[:, 1] += tgt_sz - tgt_lengths
-
- alignments = [
- alignment + offset
- for align_idx, offset, src_len, tgt_len in zip(
- sort_order, offsets, src_lengths, tgt_lengths
- )
- for alignment in [samples[align_idx]["alignment"].view(-1, 2)]
- if check_alignment(alignment, src_len, tgt_len)
- ]
-
- if len(alignments) > 0:
- alignments = torch.cat(alignments, dim=0)
- align_weights = compute_alignment_weights(alignments)
-
- batch["alignments"] = alignments
- batch["align_weights"] = align_weights
-
- if samples[0].get("constraints", None) is not None:
- # Collate the packed constraints across the samples, padding to
- # the length of the longest sample.
- lens = [sample.get("constraints").size(0) for sample in samples]
- max_len = max(lens)
- constraints = torch.zeros((len(samples), max(lens))).long()
- for i, sample in enumerate(samples):
- constraints[i, 0 : lens[i]] = samples[i].get("constraints")
- batch["constraints"] = constraints.index_select(0, sort_order)
-
- return batch
-
-
-class LanguagePairDataset(FairseqDataset):
- """
- A pair of torch.utils.data.Datasets.
-
- Args:
- src (torch.utils.data.Dataset): source dataset to wrap
- src_sizes (List[int]): source sentence lengths
- src_dict (~fairseq.data.Dictionary): source vocabulary
- tgt (torch.utils.data.Dataset, optional): target dataset to wrap
- tgt_sizes (List[int], optional): target sentence lengths
- tgt_dict (~fairseq.data.Dictionary, optional): target vocabulary
- left_pad_source (bool, optional): pad source tensors on the left side
- (default: True).
- left_pad_target (bool, optional): pad target tensors on the left side
- (default: False).
- shuffle (bool, optional): shuffle dataset elements before batching
- (default: True).
- input_feeding (bool, optional): create a shifted version of the targets
- to be passed into the model for teacher forcing (default: True).
- remove_eos_from_source (bool, optional): if set, removes eos from end
- of source if it's present (default: False).
- append_eos_to_target (bool, optional): if set, appends eos to end of
- target if it's absent (default: False).
- align_dataset (torch.utils.data.Dataset, optional): dataset
- containing alignments.
- constraints (Tensor, optional): 2d tensor with a concatenated, zero-
- delimited list of constraints for each sentence.
- append_bos (bool, optional): if set, appends bos to the beginning of
- source/target sentence.
- num_buckets (int, optional): if set to a value greater than 0, then
- batches will be bucketed into the given number of batch shapes.
- src_lang_id (int, optional): source language ID, if set, the collated batch
- will contain a field 'src_lang_id' in 'net_input' which indicates the
- source language of the samples.
- tgt_lang_id (int, optional): target language ID, if set, the collated batch
- will contain a field 'tgt_lang_id' which indicates the target language
- of the samples.
- """
-
- def __init__(
- self,
- src,
- src_sizes,
- src_dict,
- tgt=None,
- tgt_sizes=None,
- tgt_dict=None,
- left_pad_source=True,
- left_pad_target=False,
- shuffle=True,
- input_feeding=True,
- remove_eos_from_source=False,
- append_eos_to_target=False,
- align_dataset=None,
- constraints=None,
- append_bos=False,
- eos=None,
- num_buckets=0,
- src_lang_id=None,
- tgt_lang_id=None,
- pad_to_multiple=1,
- ):
- if tgt_dict is not None:
- assert src_dict.pad() == tgt_dict.pad()
- assert src_dict.eos() == tgt_dict.eos()
- assert src_dict.unk() == tgt_dict.unk()
- if tgt is not None:
- assert len(src) == len(
- tgt
- ), "Source and target must contain the same number of examples"
- self.src = src
- self.tgt = tgt
- self.src_sizes = np.array(src_sizes)
- self.tgt_sizes = np.array(tgt_sizes) if tgt_sizes is not None else None
- self.sizes = (
- np.vstack((self.src_sizes, self.tgt_sizes)).T
- if self.tgt_sizes is not None
- else self.src_sizes
- )
- self.src_dict = src_dict
- self.tgt_dict = tgt_dict
- self.left_pad_source = left_pad_source
- self.left_pad_target = left_pad_target
- self.shuffle = shuffle
- self.input_feeding = input_feeding
- self.remove_eos_from_source = remove_eos_from_source
- self.append_eos_to_target = append_eos_to_target
- self.align_dataset = align_dataset
- if self.align_dataset is not None:
- assert (
- self.tgt_sizes is not None
- ), "Both source and target needed when alignments are provided"
- self.constraints = constraints
- self.append_bos = append_bos
- self.eos = eos if eos is not None else src_dict.eos()
- self.src_lang_id = src_lang_id
- self.tgt_lang_id = tgt_lang_id
- if num_buckets > 0:
- from fairseq.data import BucketPadLengthDataset
-
- self.src = BucketPadLengthDataset(
- self.src,
- sizes=self.src_sizes,
- num_buckets=num_buckets,
- pad_idx=self.src_dict.pad(),
- left_pad=self.left_pad_source,
- )
- self.src_sizes = self.src.sizes
- logger.info("bucketing source lengths: {}".format(list(self.src.buckets)))
- if self.tgt is not None:
- self.tgt = BucketPadLengthDataset(
- self.tgt,
- sizes=self.tgt_sizes,
- num_buckets=num_buckets,
- pad_idx=self.tgt_dict.pad(),
- left_pad=self.left_pad_target,
- )
- self.tgt_sizes = self.tgt.sizes
- logger.info(
- "bucketing target lengths: {}".format(list(self.tgt.buckets))
- )
-
- # determine bucket sizes using self.num_tokens, which will return
- # the padded lengths (thanks to BucketPadLengthDataset)
- num_tokens = np.vectorize(self.num_tokens, otypes=[np.compat.long])
- self.bucketed_num_tokens = num_tokens(np.arange(len(self.src)))
- self.buckets = [
- (None, num_tokens) for num_tokens in np.unique(self.bucketed_num_tokens)
- ]
- else:
- self.buckets = None
- self.pad_to_multiple = pad_to_multiple
-
- def get_batch_shapes(self):
- return self.buckets
-
- def __getitem__(self, index):
- tgt_item = self.tgt[index] if self.tgt is not None else None
- src_item = self.src[index]
- # Append EOS to end of tgt sentence if it does not have an EOS and remove
- # EOS from end of src sentence if it exists. This is useful when we use
- # use existing datasets for opposite directions i.e., when we want to
- # use tgt_dataset as src_dataset and vice versa
- if self.append_eos_to_target:
- eos = self.tgt_dict.eos() if self.tgt_dict else self.src_dict.eos()
- if self.tgt and self.tgt[index][-1] != eos:
- tgt_item = torch.cat([self.tgt[index], torch.LongTensor([eos])])
-
- if self.append_bos:
- bos = self.tgt_dict.bos() if self.tgt_dict else self.src_dict.bos()
- if self.tgt and self.tgt[index][0] != bos:
- tgt_item = torch.cat([torch.LongTensor([bos]), self.tgt[index]])
-
- bos = self.src_dict.bos()
- if self.src[index][0] != bos:
- src_item = torch.cat([torch.LongTensor([bos]), self.src[index]])
-
- if self.remove_eos_from_source:
- eos = self.src_dict.eos()
- if self.src[index][-1] == eos:
- src_item = self.src[index][:-1]
-
- example = {
- "id": index,
- "source": src_item,
- "target": tgt_item,
- }
- if self.align_dataset is not None:
- example["alignment"] = self.align_dataset[index]
- if self.constraints is not None:
- example["constraints"] = self.constraints[index]
- return example
-
- def __len__(self):
- return len(self.src)
-
- def collater(self, samples, pad_to_length=None):
- """Merge a list of samples to form a mini-batch.
-
- Args:
- samples (List[dict]): samples to collate
- pad_to_length (dict, optional): a dictionary of
- {'source': source_pad_to_length, 'target': target_pad_to_length}
- to indicate the max length to pad to in source and target respectively.
-
- Returns:
- dict: a mini-batch with the following keys:
-
- - `id` (LongTensor): example IDs in the original input order
- - `ntokens` (int): total number of tokens in the batch
- - `net_input` (dict): the input to the Model, containing keys:
-
- - `src_tokens` (LongTensor): a padded 2D Tensor of tokens in
- the source sentence of shape `(bsz, src_len)`. Padding will
- appear on the left if *left_pad_source* is ``True``.
- - `src_lengths` (LongTensor): 1D Tensor of the unpadded
- lengths of each source sentence of shape `(bsz)`
- - `prev_output_tokens` (LongTensor): a padded 2D Tensor of
- tokens in the target sentence, shifted right by one
- position for teacher forcing, of shape `(bsz, tgt_len)`.
- This key will not be present if *input_feeding* is
- ``False``. Padding will appear on the left if
- *left_pad_target* is ``True``.
- - `src_lang_id` (LongTensor): a long Tensor which contains source
- language IDs of each sample in the batch
-
- - `target` (LongTensor): a padded 2D Tensor of tokens in the
- target sentence of shape `(bsz, tgt_len)`. Padding will appear
- on the left if *left_pad_target* is ``True``.
- - `tgt_lang_id` (LongTensor): a long Tensor which contains target language
- IDs of each sample in the batch
- """
- res = collate(
- samples,
- pad_idx=self.src_dict.pad(),
- eos_idx=self.eos,
- left_pad_source=self.left_pad_source,
- left_pad_target=self.left_pad_target,
- input_feeding=self.input_feeding,
- pad_to_length=pad_to_length,
- pad_to_multiple=self.pad_to_multiple,
- )
- if self.src_lang_id is not None or self.tgt_lang_id is not None:
- src_tokens = res["net_input"]["src_tokens"]
- bsz = src_tokens.size(0)
- if self.src_lang_id is not None:
- res["net_input"]["src_lang_id"] = (
- torch.LongTensor([[self.src_lang_id]]).expand(bsz, 1).to(src_tokens)
- )
- if self.tgt_lang_id is not None:
- res["tgt_lang_id"] = (
- torch.LongTensor([[self.tgt_lang_id]]).expand(bsz, 1).to(src_tokens)
- )
- return res
-
- def num_tokens(self, index):
- """Return the number of tokens in a sample. This value is used to
- enforce ``--max-tokens`` during batching."""
- return max(
- self.src_sizes[index],
- self.tgt_sizes[index] if self.tgt_sizes is not None else 0,
- )
-
- def num_tokens_vec(self, indices):
- """Return the number of tokens for a set of positions defined by indices.
- This value is used to enforce ``--max-tokens`` during batching."""
- sizes = self.src_sizes[indices]
- if self.tgt_sizes is not None:
- sizes = np.maximum(sizes, self.tgt_sizes[indices])
- return sizes
-
- def size(self, index):
- """Return an example's size as a float or tuple. This value is used when
- filtering a dataset with ``--max-positions``."""
- return (
- self.src_sizes[index],
- self.tgt_sizes[index] if self.tgt_sizes is not None else 0,
- )
-
- def ordered_indices(self):
- """Return an ordered list of indices. Batches will be constructed based
- on this order."""
- if self.shuffle:
- indices = np.random.permutation(len(self)).astype(np.int64)
- else:
- indices = np.arange(len(self), dtype=np.int64)
- if self.buckets is None:
- # sort by target length, then source length
- if self.tgt_sizes is not None:
- indices = indices[np.argsort(self.tgt_sizes[indices], kind="mergesort")]
- return indices[np.argsort(self.src_sizes[indices], kind="mergesort")]
- else:
- # sort by bucketed_num_tokens, which is:
- # max(padded_src_len, padded_tgt_len)
- return indices[
- np.argsort(self.bucketed_num_tokens[indices], kind="mergesort")
- ]
-
- @property
- def supports_prefetch(self):
- return getattr(self.src, "supports_prefetch", False) and (
- getattr(self.tgt, "supports_prefetch", False) or self.tgt is None
- )
-
- def prefetch(self, indices):
- self.src.prefetch(indices)
- if self.tgt is not None:
- self.tgt.prefetch(indices)
- if self.align_dataset is not None:
- self.align_dataset.prefetch(indices)
-
- def filter_indices_by_size(self, indices, max_sizes):
- """Filter a list of sample indices. Remove those that are longer
- than specified in max_sizes.
-
- Args:
- indices (np.array): original array of sample indices
- max_sizes (int or list[int] or tuple[int]): max sample size,
- can be defined separately for src and tgt (then list or tuple)
-
- Returns:
- np.array: filtered sample array
- list: list of removed indices
- """
- return data_utils.filter_paired_dataset_indices_by_size(
- self.src_sizes, self.tgt_sizes, indices, max_sizes,
- )
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/strip_token_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/strip_token_dataset.py
deleted file mode 100644
index cae39ba4d2f8106398eccd7eb0cf5c2194ec0db5..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/strip_token_dataset.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import BaseWrapperDataset
-
-
-class StripTokenDataset(BaseWrapperDataset):
- def __init__(self, dataset, id_to_strip):
- super().__init__(dataset)
- self.id_to_strip = id_to_strip
-
- def __getitem__(self, index):
- item = self.dataset[index]
- while len(item) > 0 and item[-1] == self.id_to_strip:
- item = item[:-1]
- while len(item) > 0 and item[0] == self.id_to_strip:
- item = item[1:]
- return item
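
`StripTokenDataset` only trims a single token ID (typically EOS) from both ends of each item. The same trimming logic on a bare tensor, shown outside the `FairseqDataset` wrapper:

import torch

eos = 2
item = torch.tensor([2, 2, 10, 11, 12, 2])

while len(item) > 0 and item[-1] == eos:
    item = item[:-1]
while len(item) > 0 and item[0] == eos:
    item = item[1:]

print(item)  # tensor([10, 11, 12])
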
diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/util.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/util.py
deleted file mode 100644
index 06053e5defb87977f9ab07e69bf4da12201de9b7..0000000000000000000000000000000000000000
--- a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/util.py
+++ /dev/null
@@ -1,157 +0,0 @@
-import os, hashlib
-import requests
-from tqdm import tqdm
-
-URL_MAP = {
- "vgg_lpips": "https://heibox.uni-heidelberg.de/f/607503859c864bc1b30b/?dl=1"
-}
-
-CKPT_MAP = {
- "vgg_lpips": "vgg.pth"
-}
-
-MD5_MAP = {
- "vgg_lpips": "d507d7349b931f0638a25a48a722f98a"
-}
-
-
-def download(url, local_path, chunk_size=1024):
- os.makedirs(os.path.split(local_path)[0], exist_ok=True)
- with requests.get(url, stream=True) as r:
- total_size = int(r.headers.get("content-length", 0))
- with tqdm(total=total_size, unit="B", unit_scale=True) as pbar:
- with open(local_path, "wb") as f:
- for data in r.iter_content(chunk_size=chunk_size):
- if data:
- f.write(data)
- pbar.update(chunk_size)
-
-
-def md5_hash(path):
- with open(path, "rb") as f:
- content = f.read()
- return hashlib.md5(content).hexdigest()
-
-
-def get_ckpt_path(name, root, check=False):
- assert name in URL_MAP
- path = os.path.join(root, CKPT_MAP[name])
- if not os.path.exists(path) or (check and not md5_hash(path) == MD5_MAP[name]):
- print("Downloading {} model from {} to {}".format(name, URL_MAP[name], path))
- download(URL_MAP[name], path)
- md5 = md5_hash(path)
- assert md5 == MD5_MAP[name], md5
- return path
-
-
-class KeyNotFoundError(Exception):
- def __init__(self, cause, keys=None, visited=None):
- self.cause = cause
- self.keys = keys
- self.visited = visited
- messages = list()
- if keys is not None:
- messages.append("Key not found: {}".format(keys))
- if visited is not None:
- messages.append("Visited: {}".format(visited))
- messages.append("Cause:\n{}".format(cause))
- message = "\n".join(messages)
- super().__init__(message)
-
-
-def retrieve(
- list_or_dict, key, splitval="/", default=None, expand=True, pass_success=False
-):
- """Given a nested list or dict return the desired value at key expanding
- callable nodes if necessary and :attr:`expand` is ``True``. The expansion
- is done in-place.
-
- Parameters
- ----------
- list_or_dict : list or dict
- Possibly nested list or dictionary.
- key : str
- key/to/value, path like string describing all keys necessary to
- consider to get to the desired value. List indices can also be
- passed here.
- splitval : str
- String that defines the delimiter between keys of the
- different depth levels in `key`.
- default : obj
- Value returned if :attr:`key` is not found.
- expand : bool
- Whether to expand callable nodes on the path or not.
-
- Returns
- -------
- The desired value or if :attr:`default` is not ``None`` and the
- :attr:`key` is not found returns ``default``.
-
- Raises
- ------
- Exception if ``key`` not in ``list_or_dict`` and :attr:`default` is
- ``None``.
- """
-
- keys = key.split(splitval)
-
- success = True
- try:
- visited = []
- parent = None
- last_key = None
- for key in keys:
- if callable(list_or_dict):
- if not expand:
- raise KeyNotFoundError(
- ValueError(
- "Trying to get past callable node with expand=False."
- ),
- keys=keys,
- visited=visited,
- )
- list_or_dict = list_or_dict()
- parent[last_key] = list_or_dict
-
- last_key = key
- parent = list_or_dict
-
- try:
- if isinstance(list_or_dict, dict):
- list_or_dict = list_or_dict[key]
- else:
- list_or_dict = list_or_dict[int(key)]
- except (KeyError, IndexError, ValueError) as e:
- raise KeyNotFoundError(e, keys=keys, visited=visited)
-
- visited += [key]
- # final expansion of retrieved value
- if expand and callable(list_or_dict):
- list_or_dict = list_or_dict()
- parent[last_key] = list_or_dict
- except KeyNotFoundError as e:
- if default is None:
- raise e
- else:
- list_or_dict = default
- success = False
-
- if not pass_success:
- return list_or_dict
- else:
- return list_or_dict, success
-
-
-if __name__ == "__main__":
- config = {"keya": "a",
- "keyb": "b",
- "keyc":
- {"cc1": 1,
- "cc2": 2,
- }
- }
- from omegaconf import OmegaConf
- config = OmegaConf.create(config)
- print(config)
- retrieve(config, "keya")
-
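
The `__main__` block only exercises a flat key; `retrieve` also resolves slash-separated paths and expands callable nodes in place. A small illustration, assuming `retrieve` from this module is in scope (the config dict here is invented for the example):

config = {
    "model": {
        "params": {"lr": 1e-4},
        "backbone": lambda: {"depth": 50},  # callable node, expanded on first access
    }
}

print(retrieve(config, "model/params/lr"))           # 0.0001
print(retrieve(config, "model/backbone/depth"))      # 50; config["model"]["backbone"] is now a dict
print(retrieve(config, "model/missing", default=0))  # 0, default instead of KeyNotFoundError
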
diff --git a/spaces/IVentureISB/Gen-AI/scrape_create_context.py b/spaces/IVentureISB/Gen-AI/scrape_create_context.py
deleted file mode 100644
index cf43c36d554ce9b1203ed01dfc56054f278a795c..0000000000000000000000000000000000000000
--- a/spaces/IVentureISB/Gen-AI/scrape_create_context.py
+++ /dev/null
@@ -1,331 +0,0 @@
-# -*- coding: utf-8 -*-
-"""ISB chatbot.ipynb
-
-Original file is located at
-    https://colab.research.google.com/drive/1GYmsZSR4MWuvORNpSWFWrXz79lQKb6oc
-"""
-
-"""# Scrape"""
-
-# Regex to match a URL
-# HTTP_URL_PATTERN = r'^http[s]{0,1}://.+$'
-
-# Define root domain to crawl
-domain = "i-venture.org"
-sitemap_url = "https://i-venture.org/sitemap.xml"
-full_url = "https://i-venture.org/"
-
-import os
-
-RESULTS_DIR = "scraped_files/"
-os.makedirs(RESULTS_DIR, exist_ok=True)
-
-import requests
-import re
-import urllib.request
-from bs4 import BeautifulSoup
-from collections import deque
-from html.parser import HTMLParser
-from urllib.parse import urlparse
-import os
-import pandas as pd
-import numpy as np
-
-def get_sitemap(url=sitemap_url):
- try:
- with urllib.request.urlopen(url) as response:
- xml = BeautifulSoup(response,
- 'lxml-xml',
- from_encoding=response.info().get_param('charset'))
-
- urls = xml.find_all("url")
- locs = []
-
- for url in urls:
-
- if xml.find("loc"):
- loc = url.findNext("loc").text
- locs.append(loc)
-
- return locs
- except Exception as e:
- print(e)
- return []
-
-
-def crawl(url):
- # Parse the URL and get the domain
- # local_domain = urlparse(url).netloc
-
- queue = deque(get_sitemap())
-
- os.makedirs(RESULTS_DIR + "text/", exist_ok=True)
- os.makedirs(RESULTS_DIR + "processed", exist_ok=True)
-
- # While the queue is not empty, continue crawling
- while queue:
- # Get the next URL from the queue
- url = queue.pop()
- print(url) # for debugging and to see the progress
-
- # Save text from the url to a .txt file
- with open(f'{RESULTS_DIR}text/'+ url.strip("/").replace("/", "_") + ".txt", "w", encoding="UTF-8") as f:
-
- soup = BeautifulSoup(requests.get(url).text, "html.parser")
- text = soup.get_text()
-
- # If the crawler gets to a page that requires JavaScript, it will stop the crawl
- if ("You need to enable JavaScript to run this app." in text):
- print("Unable to parse page " + url + " due to JavaScript being required")
-
- f.write(text)
-
- # # Get the hyperlinks from the URL and add them to the queue
- # for link in get_domain_hyperlinks(local_domain, url):
- # if link not in seen:
- # queue.append(link)
- # seen.add(link)
-
-def remove_newlines(serie):
- serie = serie.str.replace('\n', ' ')
- serie = serie.str.replace('\\n', ' ')
- serie = serie.str.replace(' ', ' ')
- serie = serie.str.replace(' ', ' ')
- return serie
-
-
-def get_df():
- # Create a list to store the text files
- texts=[]
-
- for file in os.listdir(RESULTS_DIR + "text/"):
-        with open(RESULTS_DIR + "text/" + file, "r", encoding="UTF-8") as f:
- text = f.read()
-
-        # Keep the filename (with any '#update' suffix removed) together with the file's text
- texts.append((file.replace('#update',''), text))
-
- # Create a dataframe from the list of texts
- df = pd.DataFrame(texts, columns = ['fname', 'text'])
-
- # Set the text column to be the raw text with the newlines removed
- df['text'] = df.fname + ". " + remove_newlines(df.text)
- return df
-
-SCRAPING_DONE = False
-if not SCRAPING_DONE:
- crawl(full_url)
- df = get_df()
- df.to_csv(RESULTS_DIR + 'processed/scraped.csv')
- df.head()
-    os.system("zip -r iventure_scrape.zip scraped_files")  # shell command; "!" magics only work in notebooks
-else:
-    os.system("unzip iventure_scrape.zip")  # shell command; "!" magics only work in notebooks
-
-"""# Create Embeddings
-
-## Clean
-"""
-
-
-import tiktoken
-from openai.embeddings_utils import distances_from_embeddings, cosine_similarity
-
-# Load the cl100k_base tokenizer which is designed to work with the ada-002 model
-tokenizer = tiktoken.get_encoding("cl100k_base")
-
-df = pd.read_csv(RESULTS_DIR + 'processed/scraped.csv', index_col=0)
-df.columns = ['title', 'text']
-
-# Tokenize the text and save the number of tokens to a new column
-df['n_tokens'] = df.text.apply(lambda x: len(tokenizer.encode(x)))
-
-# Visualize the distribution of the number of tokens per row using a histogram
-df.n_tokens.hist()
-
-max_tokens = 500
-
-# Function to split the text into chunks of a maximum number of tokens
-def split_into_many(text, max_tokens = max_tokens):
-
- # Split the text into sentences
- sentences = text.split('. ')
-
- # Get the number of tokens for each sentence
- n_tokens = [len(tokenizer.encode(" " + sentence)) for sentence in sentences]
-
- chunks = []
- tokens_so_far = 0
- chunk = []
-
- # Loop through the sentences and tokens joined together in a tuple
- for sentence, token in zip(sentences, n_tokens):
-
- # If the number of tokens so far plus the number of tokens in the current sentence is greater
- # than the max number of tokens, then add the chunk to the list of chunks and reset
- # the chunk and tokens so far
- if tokens_so_far + token > max_tokens:
- chunks.append(". ".join(chunk) + ".")
- chunk = []
- tokens_so_far = 0
-
- # If the number of tokens in the current sentence is greater than the max number of
- # tokens, go to the next sentence
- if token > max_tokens:
- continue
-
- # Otherwise, add the sentence to the chunk and add the number of tokens to the total
- chunk.append(sentence)
- tokens_so_far += token + 1
-
- # Add the last chunk to the list of chunks
- if chunk:
- chunks.append(". ".join(chunk) + ".")
-
- return chunks
-
-def shorten(df):
- shortened = []
-
- # Loop through the dataframe
- for row in df.iterrows():
-
- # If the text is None, go to the next row
- if row[1]['text'] is None:
- continue
-
- # If the number of tokens is greater than the max number of tokens, split the text into chunks
- if row[1]['n_tokens'] > max_tokens:
- shortened += split_into_many(row[1]['text'])
-
- # Otherwise, add the text to the list of shortened texts
- else:
- shortened.append( row[1]['text'] )
-
- new_df = pd.DataFrame(shortened, columns = ['text'])
- new_df['n_tokens'] = new_df.text.apply(lambda x: len(tokenizer.encode(x)))
- return new_df
-
-df = shorten(df)
-df.n_tokens.hist()
-
-"""## Create embeds"""
-
-
-
-import openai
-from dotenv import load_dotenv
-load_dotenv()
-
-SECRET_IN_ENV = False
-
-import os
-
-
-def load_api_key():
-    # Read the key from a local file; strip the trailing newline so the raw key is returned
-    with open("secret.txt", "r") as f:
-        return f.read().strip()
-
-if SECRET_IN_ENV:
-    SECRET_TOKEN = os.getenv("SECRET_TOKEN")
-else:
-    SECRET_TOKEN = load_api_key()
-
-openai.api_key = SECRET_TOKEN
-
-# Note that you may run into rate limit issues depending on how many files you try to embed
-# Please check rate limit guide to learn more on how to handle this: https://platform.openai.com/docs/guides/rate-limits
-
-df['embeddings'] = df.text.apply(lambda x: openai.Embedding.create(input=x, engine='text-embedding-ada-002')['data'][0]['embedding'])
-df.to_csv('processed/embeddings.csv')
-df.head()
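The per-row `openai.Embedding.create` call above issues one API request per chunk, which is exactly where the rate limits mentioned in the note tend to bite on larger scrapes. A minimal retry sketch with exponential backoff (the attempt count and wait times are arbitrary illustrative choices, not part of the original script):

```python
import time
import openai

def embed_with_retry(text, max_attempts=5, base_wait=2.0):
    """Call the embeddings endpoint, backing off and retrying on transient errors."""
    for attempt in range(max_attempts):
        try:
            resp = openai.Embedding.create(input=text, engine='text-embedding-ada-002')
            return resp['data'][0]['embedding']
        except Exception as exc:  # rate limits, timeouts, transient network errors
            if attempt == max_attempts - 1:
                raise
            wait = base_wait * (2 ** attempt)
            print(f"Embedding call failed ({exc}); retrying in {wait:.0f}s")
            time.sleep(wait)

# drop-in replacement for the apply() above:
# df['embeddings'] = df.text.apply(embed_with_retry)
```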
-
-"""# QnA"""
-
-from ast import literal_eval
-
-df = pd.read_csv('processed/embeddings.csv', index_col=0)
-df['embeddings'] = df['embeddings'].apply(literal_eval).apply(np.array)
-
-
-def create_context(
- question, df, max_len=1800, size="ada"
-):
- """
- Create a context for a question by finding the most similar context from the dataframe
- """
-
- # Get the embeddings for the question
- q_embeddings = openai.Embedding.create(input=question, engine='text-embedding-ada-002')['data'][0]['embedding']
-
- # Get the distances from the embeddings
- df['distances'] = distances_from_embeddings(q_embeddings, df['embeddings'].values, distance_metric='cosine')
-
-
- returns = []
- cur_len = 0
-
- # Sort by distance and add the text to the context until the context is too long
- for i, row in df.sort_values('distances', ascending=True).iterrows():
-
- # Add the length of the text to the current length
- cur_len += row['n_tokens'] + 4
-
- # If the context is too long, break
- if cur_len > max_len:
- break
-
- # Else add it to the text that is being returned
- returns.append(row["text"])
-
- # Return the context
- return "\n\n###\n\n".join(returns)
-
-def answer_question(
- df,
- model="text-davinci-003",
- question="Am I allowed to publish model outputs to Twitter, without a human review?",
- max_len=1800,
- size="ada",
- debug=False,
- max_tokens=150,
- stop_sequence=None
-):
- """
- Answer a question based on the most similar context from the dataframe texts
- """
- context = create_context(
- question,
- df,
- max_len=max_len,
- size=size,
- )
- # If debug, print the raw model response
- if debug:
- print("Context:\n" + context)
- print("\n\n")
-
- try:
-        # Create a completion using the question and context
- response = openai.Completion.create(
- prompt=f"Answer the question based on the context below, and if the question can't be answered based on the context, say \"I don't know\"\n\nContext: {context}\n\n---\n\nQuestion: {question}\nAnswer:",
- temperature=0,
- max_tokens=max_tokens,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0,
- stop=stop_sequence,
- model=model,
- )
- return response["choices"][0]["text"].strip()
- except Exception as e:
- print(e)
- return ""
-
-print(answer_question(df, question="What day is it?", debug=False))
-
-print(answer_question(df, question="What is our newest embeddings model?"))
-
diff --git a/spaces/Illumotion/Koboldcpp/convert-gptneox-hf-to-gguf.py b/spaces/Illumotion/Koboldcpp/convert-gptneox-hf-to-gguf.py
deleted file mode 100644
index 782410e44f2d1d92d832c699d749f830e0334434..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/convert-gptneox-hf-to-gguf.py
+++ /dev/null
@@ -1,251 +0,0 @@
-#!/usr/bin/env python3
-# HF gptneox --> gguf conversion
-
-from __future__ import annotations
-
-import argparse
-import json
-import os
-import struct
-import sys
-from pathlib import Path
-from typing import Any
-
-import numpy as np
-import torch
-from transformers import AutoTokenizer # type: ignore[import]
-
-if 'NO_LOCAL_GGUF' not in os.environ:
- sys.path.insert(1, str(Path(__file__).parent / 'gguf-py' / 'gguf'))
-import gguf
-
-# ref: https://github.com/openai/gpt-2/blob/master/src/encoder.py
-
-
-def bytes_to_unicode():
- """
-    Returns a dict mapping utf-8 bytes to corresponding unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
- This is a significant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
-    This also avoids mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
- cs = bs[:]
- n = 0
- for b in range(2**8):
- if b not in bs:
- bs.append(b)
- cs.append(2**8+n)
- n += 1
- return dict(zip(bs, (chr(n) for n in cs)))
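The table returned by `bytes_to_unicode()` is what the vocab loop further down uses to turn GPT-2-style token strings back into raw bytes. A small round-trip sketch (the token string is only an illustrative example):

```python
byte_encoder = bytes_to_unicode()
byte_decoder = {v: k for k, v in byte_encoder.items()}

token = "Ġhello"  # 'Ġ' is the printable stand-in for the space byte 0x20
raw = bytes(byte_decoder[c] for c in token)
print(raw)  # b' hello'
```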
-
-
-def count_model_parts(dir_model: Path) -> int:
- num_parts = 0
- for filename in os.listdir(dir_model):
- if filename.startswith("pytorch_model-"):
- num_parts += 1
-
- if num_parts > 0:
- print("gguf: found " + str(num_parts) + " model parts")
- return num_parts
-
-
-def parse_args() -> argparse.Namespace:
- parser = argparse.ArgumentParser(description="Convert a GPT-NeoX model to a GGML compatible file")
- parser.add_argument(
- "--vocab-only", action="store_true",
- help="extract only the vocab",
- )
- parser.add_argument(
- "--outfile", type=Path,
- help="path to write to; default: based on input",
- )
- parser.add_argument(
- "model", type=Path,
- help="directory containing model file, or model file itself (*.bin)",
- )
- parser.add_argument(
- "ftype", type=int, choices=[0, 1], default=1, nargs='?',
- help="output format - use 0 for float32, 1 for float16",
- )
- return parser.parse_args()
-
-args = parse_args()
-
-dir_model = args.model
-ftype = args.ftype
-if not dir_model.is_dir():
- print(f'Error: {args.model} is not a directory', file = sys.stderr)
- sys.exit(1)
-
-# possible tensor data types
-# ftype == 0 -> float32
-# ftype == 1 -> float16
-
-# map from ftype to string
-ftype_str = ["f32", "f16"]
-
-if args.outfile is not None:
- fname_out = args.outfile
-else:
- # output in the same directory as the model by default
- fname_out = dir_model / f'ggml-model-{ftype_str[ftype]}.gguf'
-
-print("gguf: loading model "+dir_model.name)
-
-with open(dir_model / "config.json", "r", encoding="utf-8") as f:
- hparams = json.load(f)
-
-if hparams["architectures"][0] != "GPTNeoXForCausalLM":
- print("Model architecture not supported: " + hparams["architectures"][0])
-
- sys.exit()
-
-# get number of model parts
-num_parts = count_model_parts(dir_model)
-
-ARCH=gguf.MODEL_ARCH.GPTNEOX
-gguf_writer = gguf.GGUFWriter(fname_out, gguf.MODEL_ARCH_NAMES[ARCH])
-
-print("gguf: get model metadata")
-
-block_count = hparams["num_hidden_layers"]
-
-gguf_writer.add_name(dir_model.name)
-gguf_writer.add_context_length(hparams["max_position_embeddings"])
-gguf_writer.add_embedding_length(hparams["hidden_size"])
-gguf_writer.add_block_count(block_count)
-gguf_writer.add_feed_forward_length(hparams["intermediate_size"])
-gguf_writer.add_rope_dimension_count(int(hparams["rotary_pct"]*(hparams["hidden_size"]//hparams["num_attention_heads"])))
-gguf_writer.add_head_count(hparams["num_attention_heads"])
-gguf_writer.add_parallel_residual(hparams["use_parallel_residual"] if "use_parallel_residual" in hparams else True)
-gguf_writer.add_layer_norm_eps(hparams["layer_norm_eps"])
-
-# TOKENIZATION
-
-print("gguf: get tokenizer metadata")
-
-tokens: list[bytearray] = []
-
-tokenizer_json_file = dir_model / 'tokenizer.json'
-if not tokenizer_json_file.is_file():
- print(f'Error: Missing {tokenizer_json_file}', file = sys.stderr)
- sys.exit(1)
-
-# gpt2 tokenizer
-gguf_writer.add_tokenizer_model("gpt2")
-
-with open(tokenizer_json_file, "r", encoding="utf-8") as f:
- tokenizer_json = json.load(f)
-
-print("gguf: get gpt2 tokenizer vocab")
-
-vocab_size = len(tokenizer_json["model"]["vocab"])
-
-# ref: https://github.com/cmp-nct/ggllm.cpp/blob/master/falcon_convert.py
-tokenizer = AutoTokenizer.from_pretrained(dir_model)
-
-reverse_vocab = {id: encoded_tok for encoded_tok, id in tokenizer.vocab.items()}
-byte_encoder = bytes_to_unicode()
-byte_decoder = {v: k for k, v in byte_encoder.items()}
-
-for i in range(vocab_size):
- if i in reverse_vocab:
- try:
- text = bytearray([byte_decoder[c] for c in reverse_vocab[i]])
- except KeyError:
- text = bytearray()
- for c in reverse_vocab[i]:
- if ord(c) < 256: # single byte character
- text.append(byte_decoder[ord(c)])
- else: # multibyte special token character
- text.extend(c.encode('utf-8'))
- else:
- print(f"Key {i} not in tokenizer vocabulary. Padding with an arbitrary token.")
- pad_token = f"[PAD{i}]".encode("utf8")
- text = bytearray(pad_token)
-
- tokens.append(text)
-
-gguf_writer.add_token_list(tokens)
-
-special_vocab = gguf.SpecialVocab(dir_model, load_merges = True)
-special_vocab.add_to_gguf(gguf_writer)
-
-# TENSORS
-
-tensor_map = gguf.get_tensor_name_map(ARCH,block_count)
-
-# tensor info
-print("gguf: get tensor metadata")
-
-if num_parts == 0:
- part_names = iter(("pytorch_model.bin",))
-else:
- part_names = (
- f"pytorch_model-{n:05}-of-{num_parts:05}.bin" for n in range(1, num_parts + 1)
- )
-
-for part_name in part_names:
- if args.vocab_only:
- break
- print("gguf: loading model part '" + part_name + "'")
- model_part = torch.load(f"{dir_model}/{part_name}", map_location="cpu")
-
- for name in model_part.keys():
- data = model_part[name]
-
- # we don't need these
- if name.endswith(".attention.masked_bias") or name.endswith(".attention.bias") or name.endswith(".attention.rotary_emb.inv_freq"):
- continue
-
- old_dtype = data.dtype
-
- # convert any unsupported data types to float32
- if data.dtype != torch.float16 and data.dtype != torch.float32:
- data = data.to(torch.float32)
-
- data = data.squeeze().numpy()
-
- # map tensor names
- new_name = tensor_map.get_name(name, try_suffixes = (".weight", ".bias"))
- if new_name is None:
- print("Can not map tensor '" + name + "'")
- sys.exit()
-
- n_dims = len(data.shape)
- data_dtype = data.dtype
-
- # if f32 desired, convert any float16 to float32
- if ftype == 0 and data_dtype == np.float16:
- data = data.astype(np.float32)
-
-        # TODO: Why can't we use these float16 values as-is? There should be no reason to store float16 as float32
- if ftype == 1 and data_dtype == np.float16 and n_dims == 1:
- data = data.astype(np.float32)
-
- # if f16 desired, convert any float32 2-dim weight tensors to float16
- if ftype == 1 and data_dtype == np.float32 and name.endswith(".weight") and n_dims == 2:
- data = data.astype(np.float16)
-
- print(new_name + ", n_dims = " + str(n_dims) + ", " + str(old_dtype) + " --> " + str(data.dtype))
-
- gguf_writer.add_tensor(new_name, data)
-
-
-print("gguf: write header")
-gguf_writer.write_header_to_file()
-print("gguf: write metadata")
-gguf_writer.write_kv_data_to_file()
-if not args.vocab_only:
- print("gguf: write tensors")
- gguf_writer.write_tensors_to_file()
-
-gguf_writer.close()
-
-print(f"gguf: model successfully exported to '{fname_out}'")
-print("")
diff --git a/spaces/Jaehan/Text-Summarization-1/README.md b/spaces/Jaehan/Text-Summarization-1/README.md
deleted file mode 100644
index b9d05b4ac248fcb83ca0f5064d9bd670151c113b..0000000000000000000000000000000000000000
--- a/spaces/Jaehan/Text-Summarization-1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text Summary 1
-emoji: 📉
-colorFrom: indigo
-colorTo: gray
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/synthesizer/utils/plot.py b/spaces/Kevin676/Real-Time-Voice-Cloning/synthesizer/utils/plot.py
deleted file mode 100644
index f47d2713d4daa6cf387b37970fd879548abc8d88..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Real-Time-Voice-Cloning/synthesizer/utils/plot.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import matplotlib
-matplotlib.use("Agg")
-import matplotlib.pyplot as plt
-import numpy as np
-
-
-def split_title_line(title_text, max_words=5):
- """
-    Split a title string into lines of at most max_words words each,
-    joining the pieces with newlines.
- """
- seq = title_text.split()
- return "\n".join([" ".join(seq[i:i + max_words]) for i in range(0, len(seq), max_words)])
-
-def plot_alignment(alignment, path, title=None, split_title=False, max_len=None):
- if max_len is not None:
- alignment = alignment[:, :max_len]
-
- fig = plt.figure(figsize=(8, 6))
- ax = fig.add_subplot(111)
-
- im = ax.imshow(
- alignment,
- aspect="auto",
- origin="lower",
- interpolation="none")
- fig.colorbar(im, ax=ax)
- xlabel = "Decoder timestep"
-
- if split_title:
- title = split_title_line(title)
-
- plt.xlabel(xlabel)
- plt.title(title)
- plt.ylabel("Encoder timestep")
- plt.tight_layout()
- plt.savefig(path, format="png")
- plt.close()
-
-
-def plot_spectrogram(pred_spectrogram, path, title=None, split_title=False, target_spectrogram=None, max_len=None, auto_aspect=False):
- if max_len is not None:
-        if target_spectrogram is not None:
-            target_spectrogram = target_spectrogram[:max_len]
- pred_spectrogram = pred_spectrogram[:max_len]
-
- if split_title:
- title = split_title_line(title)
-
- fig = plt.figure(figsize=(10, 8))
- # Set common labels
- fig.text(0.5, 0.18, title, horizontalalignment="center", fontsize=16)
-
- #target spectrogram subplot
- if target_spectrogram is not None:
- ax1 = fig.add_subplot(311)
- ax2 = fig.add_subplot(312)
-
- if auto_aspect:
- im = ax1.imshow(np.rot90(target_spectrogram), aspect="auto", interpolation="none")
- else:
- im = ax1.imshow(np.rot90(target_spectrogram), interpolation="none")
- ax1.set_title("Target Mel-Spectrogram")
- fig.colorbar(mappable=im, shrink=0.65, orientation="horizontal", ax=ax1)
- ax2.set_title("Predicted Mel-Spectrogram")
- else:
- ax2 = fig.add_subplot(211)
-
- if auto_aspect:
- im = ax2.imshow(np.rot90(pred_spectrogram), aspect="auto", interpolation="none")
- else:
- im = ax2.imshow(np.rot90(pred_spectrogram), interpolation="none")
- fig.colorbar(mappable=im, shrink=0.65, orientation="horizontal", ax=ax2)
-
- plt.tight_layout()
- plt.savefig(path, format="png")
- plt.close()
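A quick usage sketch for the two helpers above, with dummy arrays (shapes and file names are made up purely for illustration):

```python
import numpy as np

# Fake data: an 80x120 attention matrix and a 200-frame, 80-bin mel spectrogram
alignment = np.random.rand(80, 120)
mel = np.random.rand(200, 80)

plot_alignment(alignment, "alignment_step10k.png", title="Alignment at step 10k")
plot_spectrogram(mel, "mel_step10k.png", title="Mel at step 10k",
                 target_spectrogram=mel, auto_aspect=True)
```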
diff --git a/spaces/Kororinpa/Amadeus_Project/train.py b/spaces/Kororinpa/Amadeus_Project/train.py
deleted file mode 100644
index ef1cf02759017bbf972aef36369231e1d6ea85b6..0000000000000000000000000000000000000000
--- a/spaces/Kororinpa/Amadeus_Project/train.py
+++ /dev/null
@@ -1,295 +0,0 @@
-import os
-import json
-import argparse
-import itertools
-import math
-import torch
-from torch import nn, optim
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.multiprocessing as mp
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-
-import librosa
-import logging
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-
-import commons
-import utils
-from data_utils import (
- TextAudioLoader,
- TextAudioCollate,
- DistributedBucketSampler
-)
-from models import (
- SynthesizerTrn,
- MultiPeriodDiscriminator,
-)
-from losses import (
- generator_loss,
- discriminator_loss,
- feature_loss,
- kl_loss
-)
-from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from text.symbols import symbols
-
-
-torch.backends.cudnn.benchmark = True
-global_step = 0
-
-
-def main():
- """Assume Single Node Multi GPUs Training Only"""
- assert torch.cuda.is_available(), "CPU training is not allowed."
-
- n_gpus = torch.cuda.device_count()
- os.environ['MASTER_ADDR'] = 'localhost'
- os.environ['MASTER_PORT'] = '80000'
-
- hps = utils.get_hparams()
- mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
-
-
-def run(rank, n_gpus, hps):
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
- dist.init_process_group(backend='nccl', init_method='env://', world_size=n_gpus, rank=rank)
- torch.manual_seed(hps.train.seed)
- torch.cuda.set_device(rank)
-
- train_dataset = TextAudioLoader(hps.data.training_files, hps.data)
- train_sampler = DistributedBucketSampler(
- train_dataset,
- hps.train.batch_size,
- [32,300,400,500,600,700,800,900,1000],
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True)
- collate_fn = TextAudioCollate()
- train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True,
- collate_fn=collate_fn, batch_sampler=train_sampler)
- if rank == 0:
- eval_dataset = TextAudioLoader(hps.data.validation_files, hps.data)
- eval_loader = DataLoader(eval_dataset, num_workers=8, shuffle=False,
- batch_size=hps.train.batch_size, pin_memory=True,
- drop_last=False, collate_fn=collate_fn)
-
- net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- **hps.model).cuda(rank)
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
- optim_g = torch.optim.AdamW(
- net_g.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- net_g = DDP(net_g, device_ids=[rank],find_unused_parameters = True)
- net_d = DDP(net_d, device_ids=[rank],find_unused_parameters = True)
-
- try:
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g)
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d)
- global_step = (epoch_str - 1) * len(train_loader)
- except:
- epoch_str = 1
- global_step = 0
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str-2)
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str-2)
-
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank==0:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
- else:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None)
- scheduler_g.step()
- scheduler_d.step()
-
-
-def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
- net_g, net_d = nets
- optim_g, optim_d = optims
- scheduler_g, scheduler_d = schedulers
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(train_loader):
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True)
- spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
-
- with autocast(enabled=hps.train.fp16_run):
- y_hat, l_length, attn, ids_slice, x_mask, z_mask,\
- (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths)
-
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
-
- y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g)
- loss_disc_all = loss_disc
- optim_d.zero_grad()
- scaler.scale(loss_disc_all).backward()
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
- with autocast(enabled=False):
- loss_dur = torch.sum(l_length.float())
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
-
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank==0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]['lr']
- losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl]
- logger.info('Train Epoch: {} [{:.0f}%]'.format(
- epoch,
- 100. * batch_idx / len(train_loader)))
- logger.info([x.item() for x in losses] + [global_step, lr])
-
- scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g}
- scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl})
-
- scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)})
- scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)})
- scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)})
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()),
- "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()),
- "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy())
- }
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict)
-
- if global_step % hps.train.eval_interval == 0:
- evaluate(hps, net_g, eval_loader, writer_eval)
- utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
- utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
- global_step += 1
-
- if rank == 0:
- logger.info('====> Epoch: {}'.format(epoch))
-
-
-def evaluate(hps, generator, eval_loader, writer_eval):
- generator.eval()
- with torch.no_grad():
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(eval_loader):
- x, x_lengths = x.cuda(0), x_lengths.cuda(0)
- spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0)
- y, y_lengths = y.cuda(0), y_lengths.cuda(0)
-
-            # only evaluate the first sample of the first batch
- x = x[:1]
- x_lengths = x_lengths[:1]
- spec = spec[:1]
- spec_lengths = spec_lengths[:1]
- y = y[:1]
- y_lengths = y_lengths[:1]
- break
- y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, max_len=1000)
- y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length
-
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1).float(),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
- image_dict = {
- "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy())
- }
- audio_dict = {
- "gen/audio": y_hat[0,:,:y_hat_lengths[0]]
- }
- if global_step == 0:
- image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())})
- audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]})
-
- utils.summarize(
- writer=writer_eval,
- global_step=global_step,
- images=image_dict,
- audios=audio_dict,
- audio_sampling_rate=hps.data.sampling_rate
- )
- generator.train()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Lewislou/Lewislou-cell-seg-sribd/app.py b/spaces/Lewislou/Lewislou-cell-seg-sribd/app.py
deleted file mode 100644
index a608d6602c5f1021332766c371e3240671281e85..0000000000000000000000000000000000000000
--- a/spaces/Lewislou/Lewislou-cell-seg-sribd/app.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import gradio as gr
-from skimage import io, segmentation, morphology, measure, exposure
-from sribd_cellseg_models import MultiStreamCellSegModel,ModelConfig
-import numpy as np
-import tifffile as tif
-import requests
-import torch
-from PIL import Image
-from overlay import visualize_instances_map
-import cv2
-
-
-def normalize_channel(img, lower=1, upper=99):
- non_zero_vals = img[np.nonzero(img)]
- percentiles = np.percentile(non_zero_vals, [lower, upper])
- if percentiles[1] - percentiles[0] > 0.001:
- img_norm = exposure.rescale_intensity(img, in_range=(percentiles[0], percentiles[1]), out_range='uint8')
- else:
- img_norm = img
- return img_norm.astype(np.uint8)
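A small sketch of what `normalize_channel` does to a channel with outliers (synthetic values chosen only for illustration):

```python
import numpy as np

# A mostly dim 16-bit channel with one hot pixel; the 1-99 percentile rescale
# spreads the bulk of the signal over 0-255 and clips the outlier.
channel = np.zeros((64, 64), dtype=np.uint16)
channel[8:56, 8:56] = np.random.randint(100, 400, size=(48, 48))
channel[0, 0] = 60000

norm = normalize_channel(channel)
print(channel.max(), norm.max(), norm.dtype)  # 60000 255 uint8
```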
-
-def predict(img_name, model=None, device=None, reduce_labels=True):
- if img_name.endswith('.tif') or img_name.endswith('.tiff'):
- img_data = tif.imread(img_name)
- else:
- img_data = io.imread(img_name)
- # normalize image data
- if len(img_data.shape) == 2:
- img_data = np.repeat(np.expand_dims(img_data, axis=-1), 3, axis=-1)
- elif len(img_data.shape) == 3 and img_data.shape[-1] > 3:
- img_data = img_data[:,:, :3]
- else:
- pass
- pre_img_data = np.zeros(img_data.shape, dtype=np.uint8)
- for i in range(3):
- img_channel_i = img_data[:,:,i]
- if len(img_channel_i[np.nonzero(img_channel_i)])>0:
- pre_img_data[:,:,i] = normalize_channel(img_channel_i, lower=1, upper=99)
-
- my_model = MultiStreamCellSegModel.from_pretrained("Lewislou/cellseg_sribd")
- checkpoints = torch.load('model.pt',map_location=torch.device('cpu'))
- my_model.__init__(ModelConfig())
- my_model.load_checkpoints(checkpoints)
- with torch.no_grad():
- output = my_model(pre_img_data)
- print(output.shape)
- overlay = visualize_instances_map(pre_img_data,output)
- print(pre_img_data.shape,overlay.shape)
-#cv2.imwrite('prediction.png', cv2.cvtColor(overlay, cv2.COLOR_RGB2BGR))
- return pre_img_data,overlay
-gr.Interface(
- predict,
- inputs=[gr.components.Image(label="Upload Input Image", type="filepath"),
- gr.components.Textbox(label='Model Name', value='sribd_med', max_lines=1)],
- outputs=[gr.Image(label="Processed Image"),
- gr.Image(label="Pred Image"),
- ],
- title="Cell Segmentation Results",
-).launch()
\ No newline at end of file
diff --git a/spaces/LinkSoul/Chinese-Llama-2-7b/USE_POLICY.md b/spaces/LinkSoul/Chinese-Llama-2-7b/USE_POLICY.md
deleted file mode 100644
index abbcc199b2d1e4feb5d7e40c0bd67e1b0ce29e97..0000000000000000000000000000000000000000
--- a/spaces/LinkSoul/Chinese-Llama-2-7b/USE_POLICY.md
+++ /dev/null
@@ -1,50 +0,0 @@
-# Llama 2 Acceptable Use Policy
-
-Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).
-
-## Prohibited Uses
-We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to:
-
-1. Violate the law or others’ rights, including to:
- 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
- 1. Violence or terrorism
- 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
- 3. Human trafficking, exploitation, and sexual violence
- 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
- 5. Sexual solicitation
- 6. Any other criminal activity
- 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
- 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
- 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
- 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
- 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
- 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
-
-
-
-2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following:
- 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
- 2. Guns and illegal weapons (including weapon development)
- 3. Illegal drugs and regulated/controlled substances
- 4. Operation of critical infrastructure, transportation technologies, or heavy machinery
- 5. Self-harm or harm to others, including suicide, cutting, and eating disorders
- 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
-
-
-
-3. Intentionally deceive or mislead others, including use of Llama 2 related to the following:
- 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
- 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
- 3. Generating, promoting, or further distributing spam
- 4. Impersonating another individual without consent, authorization, or legal right
- 5. Representing that the use of Llama 2 or outputs are human-generated
- 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
-4. Fail to appropriately disclose to end users any known dangers of your AI system
-
-Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
-
-* Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
-* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
-* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
-* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [LlamaUseReport@meta.com](mailto:LlamaUseReport@meta.com)
-
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_pipelines/seg_pipeline.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_pipelines/seg_pipeline.py
deleted file mode 100644
index 378474dfb5341ec93e73bb61047c43ba72d5e127..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_pipelines/seg_pipeline.py
+++ /dev/null
@@ -1,66 +0,0 @@
-img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
-
-gt_label_convertor = dict(
- type='SegConvertor', dict_type='DICT36', with_unknown=True, lower=True)
-
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='RandomPaddingOCR',
- max_ratio=[0.15, 0.2, 0.15, 0.2],
- box_type='char_quads'),
- dict(type='OpencvToPil'),
- dict(
- type='RandomRotateImageBox',
- min_angle=-17,
- max_angle=17,
- box_type='char_quads'),
- dict(type='PilToOpencv'),
- dict(
- type='ResizeOCR',
- height=64,
- min_width=64,
- max_width=512,
- keep_aspect_ratio=True),
- dict(
- type='OCRSegTargets',
- label_convertor=gt_label_convertor,
- box_type='char_quads'),
- dict(type='RandomRotateTextDet', rotate_ratio=0.5, max_angle=15),
- dict(type='ColorJitter', brightness=0.4, contrast=0.4, saturation=0.4),
- dict(type='ToTensorOCR'),
- dict(type='FancyPCA'),
- dict(type='NormalizeOCR', **img_norm_cfg),
- dict(
- type='CustomFormatBundle',
- keys=['gt_kernels'],
- visualize=dict(flag=False, boundary_key=None),
- call_super=False),
- dict(
- type='Collect',
- keys=['img', 'gt_kernels'],
- meta_keys=['filename', 'ori_shape', 'resize_shape'])
-]
-
-test_img_norm_cfg = dict(
- mean=[x * 255 for x in img_norm_cfg['mean']],
- std=[x * 255 for x in img_norm_cfg['std']])
-
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='ResizeOCR',
- height=64,
- min_width=64,
- max_width=None,
- keep_aspect_ratio=True),
- dict(type='Normalize', **test_img_norm_cfg),
- dict(type='DefaultFormatBundle'),
- dict(
- type='Collect',
- keys=['img'],
- meta_keys=[
- 'filename', 'resize_shape', 'img_norm_cfg', 'ori_filename',
- 'img_shape', 'ori_shape'
- ])
-]
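For a quick sanity check, a pipeline file like this can be loaded directly with mmcv's `Config` (the path below is this file's location in the repo; adjust to wherever the config actually lives):

```python
from mmcv import Config

cfg = Config.fromfile('configs/_base_/recog_pipelines/seg_pipeline.py')
print([step['type'] for step in cfg.train_pipeline])
print([step['type'] for step in cfg.test_pipeline])
```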
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_sgd_100k_iters.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_sgd_100k_iters.py
deleted file mode 100644
index df2a3300f057145757b5164ec062b58e9d2f96c6..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/schedules/schedule_sgd_100k_iters.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# optimizer
-optimizer = dict(type='SGD', lr=0.007, momentum=0.9, weight_decay=0.0001)
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(policy='poly', power=0.9, min_lr=1e-7, by_epoch=False)
-# running settings
-runner = dict(type='IterBasedRunner', max_iters=100000)
-checkpoint_config = dict(interval=10000)
diff --git a/spaces/MSLAB/PaperGPT/src/edit.py b/spaces/MSLAB/PaperGPT/src/edit.py
deleted file mode 100644
index 07477ceebc8e0d0f7c1a2143b8e9fbd97d447b48..0000000000000000000000000000000000000000
--- a/spaces/MSLAB/PaperGPT/src/edit.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import logging
-import tiktoken
-import gradio as gr
-from langchain.text_splitter import CharacterTextSplitter
-from utils import fetch_chat
-from typing import List
-
-
-class Editor():
-
- def __init__(self, model: str = "gpt-3.5-turbo"):
- self.encoder = tiktoken.encoding_for_model(model)
- self.model = model
- with open("./sample/sample_abstract.tex", "r") as f:
- self.sample_content = f.read()
-
- def split_chunk(self, text, chunk_size: int = 2000) -> List[str]:
- text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
-            chunk_size=chunk_size, chunk_overlap=0
- )
- text_list = text_splitter.split_text(text)
- return text_list
-
- def generate(self, text: str, openai_key: str):
-
- logging.info("start editing")
-
- try:
- prompt = f"""
- I am a computer science student.
- I am writing my research paper.
- You are my editor.
- Your goal is to improve my paper quality at your best.
- Please edit the following paragraph and return the modified paragraph.
- If the paragraph is written in latex, return the modified paragraph in latex.
-
- ```
- {text}
- ```
- """
- return fetch_chat(prompt, openai_key, model=self.model)
- except Exception as e:
- raise gr.Error(str(e))
\ No newline at end of file
diff --git a/spaces/MacYang/Diamond-Sutra/ingest.py b/spaces/MacYang/Diamond-Sutra/ingest.py
deleted file mode 100644
index 4dc66b602e0559a0dabee70c76fe1e47e01cdd3e..0000000000000000000000000000000000000000
--- a/spaces/MacYang/Diamond-Sutra/ingest.py
+++ /dev/null
@@ -1,54 +0,0 @@
-"""ingeste the jinggang book and save it to a vector store"""
-from typing import List
-import logging
-import pickle
-import os.path
-import re
-from langchain.document_loaders import PDFMinerLoader
-from langchain.docstore.document import Document
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.embeddings import OpenAIEmbeddings
-from langchain.vectorstores.faiss import FAISS
-
-BOOK_PATH = "jinggang.pdf"
-VECTOR_STORE_PATH = "jinggang_embeddings.pkl"
-
-logging.basicConfig(level=logging.INFO)
-
-def _load_book(book_path: str) -> List[Document]:
- loader = PDFMinerLoader(book_path)
- docs = loader.load()
- logging.info("document loaded")
- return docs
-
-def _split_docs(docs: List[Document]) -> List[Document]:
- separators=["。\n\n", "」\n\n", "\n\n", "\n", ""]
- text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=10,
- separators=separators)
- docs = text_splitter.split_documents(docs)
- _rm_redundant_newline(docs)
- logging.info("document splitted to chunks")
- return docs
-
-def _rm_redundant_newline(docs: List[Document]):
- #del "\n\n", "\n" "\x0c"(page breaker), but keep "。\n\n"
- pattern = r"(? 0)
- # Compose image
- im_overlay[binary_mask] = foreground[binary_mask]
- if fade:
- im_overlay[~binary_mask] = im_overlay[~binary_mask] * 0.6
- return im_overlay.astype(image.dtype)
-
-def overlay_popup(image, mask, target_object):
- # Keep foreground colored. Convert background to grayscale.
- im_overlay = image.copy()
-
- binary_mask = ~(np.isin(mask, target_object))
- colored_region = (im_overlay[binary_mask]*grayscale_weights).sum(-1, keepdims=-1)
- im_overlay[binary_mask] = colored_region
- return im_overlay.astype(image.dtype)
-
-def overlay_layer(image, mask, layer, target_object):
- # insert a layer between foreground and background
- # The CPU version is less accurate because we are using the hard mask
- # The GPU version has softer edges as it uses soft probabilities
- obj_mask = (np.isin(mask, target_object)).astype(np.float32)
- layer_alpha = layer[:, :, 3].astype(np.float32) / 255
- layer_rgb = layer[:, :, :3]
- background_alpha = np.maximum(obj_mask, layer_alpha)[:,:,np.newaxis]
- obj_mask = obj_mask[:,:,np.newaxis]
- im_overlay = (image*(1-background_alpha) + layer_rgb*(1-obj_mask) + image*obj_mask).clip(0, 255)
- return im_overlay.astype(image.dtype)
-
-def overlay_davis_torch(image, mask, alpha=0.5, fade=False):
- """ Overlay segmentation on top of RGB image. from davis official"""
- # Changes the image in-place to avoid copying
- image = image.permute(1, 2, 0)
- im_overlay = image
- mask = torch.argmax(mask, dim=0)
-
- colored_mask = color_map_torch[mask]
- foreground = image*alpha + (1-alpha)*colored_mask
- binary_mask = (mask > 0)
- # Compose image
- im_overlay[binary_mask] = foreground[binary_mask]
- if fade:
- im_overlay[~binary_mask] = im_overlay[~binary_mask] * 0.6
-
- im_overlay = (im_overlay*255).cpu().numpy()
- im_overlay = im_overlay.astype(np.uint8)
-
- return im_overlay
-
-def overlay_popup_torch(image, mask, target_object):
- # Keep foreground colored. Convert background to grayscale.
- image = image.permute(1, 2, 0)
-
- if len(target_object) == 0:
- obj_mask = torch.zeros_like(mask[0]).unsqueeze(2)
- else:
- # I should not need to convert this to numpy.
-        # Using a list works most of the time but consistently fails
-        # if I include the first object -> exclude it -> include it again.
-        # I checked everywhere and it makes absolutely no sense.
- # I am blaming this on PyTorch and calling it a day
- obj_mask = mask[np.array(target_object,dtype=np.int32)].sum(0).unsqueeze(2)
- gray_image = (image*grayscale_weights_torch).sum(-1, keepdim=True)
- im_overlay = obj_mask*image + (1-obj_mask)*gray_image
-
- im_overlay = (im_overlay*255).cpu().numpy()
- im_overlay = im_overlay.astype(np.uint8)
-
- return im_overlay
-
-def overlay_layer_torch(image, mask, layer, target_object):
- # insert a layer between foreground and background
- # The CPU version is less accurate because we are using the hard mask
- # The GPU version has softer edges as it uses soft probabilities
- image = image.permute(1, 2, 0)
-
- if len(target_object) == 0:
- obj_mask = torch.zeros_like(mask[0])
- else:
- # I should not need to convert this to numpy.
-        # Using a list works most of the time but consistently fails
-        # if I include the first object -> exclude it -> include it again.
-        # I checked everywhere and it makes absolutely no sense.
- # I am blaming this on PyTorch and calling it a day
- obj_mask = mask[np.array(target_object,dtype=np.int32)].sum(0)
- layer_alpha = layer[:, :, 3]
- layer_rgb = layer[:, :, :3]
- background_alpha = torch.maximum(obj_mask, layer_alpha).unsqueeze(2)
- obj_mask = obj_mask.unsqueeze(2)
- im_overlay = (image*(1-background_alpha) + layer_rgb*(1-obj_mask) + image*obj_mask).clip(0, 1)
-
- im_overlay = (im_overlay*255).cpu().numpy()
- im_overlay = im_overlay.astype(np.uint8)
-
- return im_overlay
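A rough usage sketch for the NumPy overlays above, using a dummy frame and instance mask (this assumes the `grayscale_weights` and colour-map constants defined earlier in the original file are available in scope):

```python
import numpy as np

frame = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
mask = np.zeros((120, 160), dtype=np.int32)
mask[30:60, 40:90] = 1   # object 1
mask[70:100, 20:70] = 2  # object 2

# Keep object 2 in colour and grey out everything else
popup = overlay_popup(frame, mask, target_object=[2])
print(popup.shape, popup.dtype)  # (120, 160, 3) uint8
```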
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/checkpoint.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/checkpoint.py
deleted file mode 100644
index b29ca320679164432f446adad893e33fb2b4b29e..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/checkpoint.py
+++ /dev/null
@@ -1,707 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import io
-import os
-import os.path as osp
-import pkgutil
-import re
-import time
-import warnings
-from collections import OrderedDict
-from importlib import import_module
-from tempfile import TemporaryDirectory
-
-import torch
-import torchvision
-from torch.optim import Optimizer
-from torch.utils import model_zoo
-
-import annotator.uniformer.mmcv as mmcv
-from ..fileio import FileClient
-from ..fileio import load as load_file
-from ..parallel import is_module_wrapper
-from ..utils import mkdir_or_exist
-from .dist_utils import get_dist_info
-
-ENV_MMCV_HOME = 'MMCV_HOME'
-ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME'
-DEFAULT_CACHE_DIR = '~/.cache'
-
-
-def _get_mmcv_home():
- mmcv_home = os.path.expanduser(
- os.getenv(
- ENV_MMCV_HOME,
- os.path.join(
- os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv')))
-
- mkdir_or_exist(mmcv_home)
- return mmcv_home
-
-
-def load_state_dict(module, state_dict, strict=False, logger=None):
- """Load state_dict to a module.
-
- This method is modified from :meth:`torch.nn.Module.load_state_dict`.
- Default value for ``strict`` is set to ``False`` and the message for
- param mismatch will be shown even if strict is False.
-
- Args:
- module (Module): Module that receives the state_dict.
- state_dict (OrderedDict): Weights.
- strict (bool): whether to strictly enforce that the keys
- in :attr:`state_dict` match the keys returned by this module's
- :meth:`~torch.nn.Module.state_dict` function. Default: ``False``.
- logger (:obj:`logging.Logger`, optional): Logger to log the error
- message. If not specified, print function will be used.
- """
- unexpected_keys = []
- all_missing_keys = []
- err_msg = []
-
- metadata = getattr(state_dict, '_metadata', None)
- state_dict = state_dict.copy()
- if metadata is not None:
- state_dict._metadata = metadata
-
- # use _load_from_state_dict to enable checkpoint version control
- def load(module, prefix=''):
- # recursively check parallel module in case that the model has a
- # complicated structure, e.g., nn.Module(nn.Module(DDP))
- if is_module_wrapper(module):
- module = module.module
- local_metadata = {} if metadata is None else metadata.get(
- prefix[:-1], {})
- module._load_from_state_dict(state_dict, prefix, local_metadata, True,
- all_missing_keys, unexpected_keys,
- err_msg)
- for name, child in module._modules.items():
- if child is not None:
- load(child, prefix + name + '.')
-
- load(module)
- load = None # break load->load reference cycle
-
- # ignore "num_batches_tracked" of BN layers
- missing_keys = [
- key for key in all_missing_keys if 'num_batches_tracked' not in key
- ]
-
- if unexpected_keys:
- err_msg.append('unexpected key in source '
- f'state_dict: {", ".join(unexpected_keys)}\n')
- if missing_keys:
- err_msg.append(
- f'missing keys in source state_dict: {", ".join(missing_keys)}\n')
-
- rank, _ = get_dist_info()
- if len(err_msg) > 0 and rank == 0:
- err_msg.insert(
- 0, 'The model and loaded state dict do not match exactly\n')
- err_msg = '\n'.join(err_msg)
- if strict:
- raise RuntimeError(err_msg)
- elif logger is not None:
- logger.warning(err_msg)
- else:
- print(err_msg)
-
-
-def get_torchvision_models():
- model_urls = dict()
- for _, name, ispkg in pkgutil.walk_packages(torchvision.models.__path__):
- if ispkg:
- continue
- _zoo = import_module(f'torchvision.models.{name}')
- if hasattr(_zoo, 'model_urls'):
- _urls = getattr(_zoo, 'model_urls')
- model_urls.update(_urls)
- return model_urls
-
-
-def get_external_models():
- mmcv_home = _get_mmcv_home()
- default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/open_mmlab.json')
- default_urls = load_file(default_json_path)
- assert isinstance(default_urls, dict)
- external_json_path = osp.join(mmcv_home, 'open_mmlab.json')
- if osp.exists(external_json_path):
- external_urls = load_file(external_json_path)
- assert isinstance(external_urls, dict)
- default_urls.update(external_urls)
-
- return default_urls
-
-
-def get_mmcls_models():
- mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json')
- mmcls_urls = load_file(mmcls_json_path)
-
- return mmcls_urls
-
-
-def get_deprecated_model_names():
- deprecate_json_path = osp.join(mmcv.__path__[0],
- 'model_zoo/deprecated.json')
- deprecate_urls = load_file(deprecate_json_path)
- assert isinstance(deprecate_urls, dict)
-
- return deprecate_urls
-
-
-def _process_mmcls_checkpoint(checkpoint):
- state_dict = checkpoint['state_dict']
- new_state_dict = OrderedDict()
- for k, v in state_dict.items():
- if k.startswith('backbone.'):
- new_state_dict[k[9:]] = v
- new_checkpoint = dict(state_dict=new_state_dict)
-
- return new_checkpoint
-
-
-class CheckpointLoader:
- """A general checkpoint loader to manage all schemes."""
-
- _schemes = {}
-
- @classmethod
- def _register_scheme(cls, prefixes, loader, force=False):
- if isinstance(prefixes, str):
- prefixes = [prefixes]
- else:
- assert isinstance(prefixes, (list, tuple))
- for prefix in prefixes:
- if (prefix not in cls._schemes) or force:
- cls._schemes[prefix] = loader
- else:
- raise KeyError(
- f'{prefix} is already registered as a loader backend, '
- 'add "force=True" if you want to override it')
- # sort, longer prefixes take priority
- cls._schemes = OrderedDict(
- sorted(cls._schemes.items(), key=lambda t: t[0], reverse=True))
-
- @classmethod
- def register_scheme(cls, prefixes, loader=None, force=False):
- """Register a loader to CheckpointLoader.
-
- This method can be used as a normal class method or a decorator.
-
- Args:
- prefixes (str or list[str] or tuple[str]):
- The prefix of the registered loader.
- loader (function, optional): The loader function to be registered.
- When this method is used as a decorator, loader is None.
- Defaults to None.
- force (bool, optional): Whether to override the loader
- if the prefix has already been registered. Defaults to False.
- """
-
- if loader is not None:
- cls._register_scheme(prefixes, loader, force=force)
- return
-
- def _register(loader_cls):
- cls._register_scheme(prefixes, loader_cls, force=force)
- return loader_cls
-
- return _register
-
- @classmethod
- def _get_checkpoint_loader(cls, path):
- """Finds a loader that supports the given path. Falls back to the local
- loader if no other loader is found.
-
- Args:
- path (str): checkpoint path
-
- Returns:
- loader (function): checkpoint loader
- """
-
- for p in cls._schemes:
- if path.startswith(p):
- return cls._schemes[p]
-
- @classmethod
- def load_checkpoint(cls, filename, map_location=None, logger=None):
- """load checkpoint through URL scheme path.
-
- Args:
- filename (str): checkpoint file name with given prefix
- map_location (str, optional): Same as :func:`torch.load`.
- Default: None
- logger (:mod:`logging.Logger`, optional): The logger for message.
- Default: None
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
-
- checkpoint_loader = cls._get_checkpoint_loader(filename)
- class_name = checkpoint_loader.__name__
- mmcv.print_log(
- f'load checkpoint from {class_name[10:]} path: {filename}', logger)
- return checkpoint_loader(filename, map_location)
-
-
-@CheckpointLoader.register_scheme(prefixes='')
-def load_from_local(filename, map_location):
- """load checkpoint by local file path.
-
- Args:
- filename (str): local checkpoint file path
- map_location (str, optional): Same as :func:`torch.load`.
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
-
- if not osp.isfile(filename):
- raise IOError(f'{filename} is not a checkpoint file')
- checkpoint = torch.load(filename, map_location=map_location)
- return checkpoint
-
-
-@CheckpointLoader.register_scheme(prefixes=('http://', 'https://'))
-def load_from_http(filename, map_location=None, model_dir=None):
- """load checkpoint through HTTP or HTTPS scheme path. In distributed
- setting, this function only download checkpoint at local rank 0.
-
- Args:
- filename (str): checkpoint file path with modelzoo or
- torchvision prefix
- map_location (str, optional): Same as :func:`torch.load`.
- model_dir (string, optional): directory in which to save the object,
- Default: None
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
- rank, world_size = get_dist_info()
- rank = int(os.environ.get('LOCAL_RANK', rank))
- if rank == 0:
- checkpoint = model_zoo.load_url(
- filename, model_dir=model_dir, map_location=map_location)
- if world_size > 1:
- torch.distributed.barrier()
- if rank > 0:
- checkpoint = model_zoo.load_url(
- filename, model_dir=model_dir, map_location=map_location)
- return checkpoint
-
-
-@CheckpointLoader.register_scheme(prefixes='pavi://')
-def load_from_pavi(filename, map_location=None):
- """load checkpoint through the file path prefixed with pavi. In distributed
- setting, this function download ckpt at all ranks to different temporary
- directories.
-
- Args:
- filename (str): checkpoint file path with pavi prefix
- map_location (str, optional): Same as :func:`torch.load`.
- Default: None
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
- assert filename.startswith('pavi://'), \
- f'Expected filename startswith `pavi://`, but get {filename}'
- model_path = filename[7:]
-
- try:
- from pavi import modelcloud
- except ImportError:
- raise ImportError(
- 'Please install pavi to load checkpoint from modelcloud.')
-
- model = modelcloud.get(model_path)
- with TemporaryDirectory() as tmp_dir:
- downloaded_file = osp.join(tmp_dir, model.name)
- model.download(downloaded_file)
- checkpoint = torch.load(downloaded_file, map_location=map_location)
- return checkpoint
-
-
-@CheckpointLoader.register_scheme(prefixes='s3://')
-def load_from_ceph(filename, map_location=None, backend='petrel'):
- """load checkpoint through the file path prefixed with s3. In distributed
- setting, this function download ckpt at all ranks to different temporary
- directories.
-
- Args:
- filename (str): checkpoint file path with s3 prefix
- map_location (str, optional): Same as :func:`torch.load`.
- backend (str, optional): The storage backend type. Options are 'ceph',
- 'petrel'. Default: 'petrel'.
-
- .. warning::
- :class:`mmcv.fileio.file_client.CephBackend` will be deprecated,
- please use :class:`mmcv.fileio.file_client.PetrelBackend` instead.
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
- allowed_backends = ['ceph', 'petrel']
- if backend not in allowed_backends:
- raise ValueError(f'Load from Backend {backend} is not supported.')
-
- if backend == 'ceph':
- warnings.warn(
- 'CephBackend will be deprecated, please use PetrelBackend instead')
-
- # CephClient and PetrelBackend have the same prefix 's3://' and the latter
- # will be chosen as default. If PetrelBackend can not be instantiated
- # successfully, the CephClient will be chosen.
- try:
- file_client = FileClient(backend=backend)
- except ImportError:
- allowed_backends.remove(backend)
- file_client = FileClient(backend=allowed_backends[0])
-
- with io.BytesIO(file_client.get(filename)) as buffer:
- checkpoint = torch.load(buffer, map_location=map_location)
- return checkpoint
-
-
-@CheckpointLoader.register_scheme(prefixes=('modelzoo://', 'torchvision://'))
-def load_from_torchvision(filename, map_location=None):
- """load checkpoint through the file path prefixed with modelzoo or
- torchvision.
-
- Args:
- filename (str): checkpoint file path with modelzoo or
- torchvision prefix
- map_location (str, optional): Same as :func:`torch.load`.
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
- model_urls = get_torchvision_models()
- if filename.startswith('modelzoo://'):
- warnings.warn('The URL scheme of "modelzoo://" is deprecated, please '
- 'use "torchvision://" instead')
- model_name = filename[11:]
- else:
- model_name = filename[14:]
- return load_from_http(model_urls[model_name], map_location=map_location)
-
-
-@CheckpointLoader.register_scheme(prefixes=('open-mmlab://', 'openmmlab://'))
-def load_from_openmmlab(filename, map_location=None):
- """load checkpoint through the file path prefixed with open-mmlab or
- openmmlab.
-
- Args:
- filename (str): checkpoint file path with open-mmlab or
- openmmlab prefix
- map_location (str, optional): Same as :func:`torch.load`.
- Default: None
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
-
- model_urls = get_external_models()
- prefix_str = 'open-mmlab://'
- if filename.startswith(prefix_str):
- model_name = filename[13:]
- else:
- model_name = filename[12:]
- prefix_str = 'openmmlab://'
-
- deprecated_urls = get_deprecated_model_names()
- if model_name in deprecated_urls:
- warnings.warn(f'{prefix_str}{model_name} is deprecated in favor '
- f'of {prefix_str}{deprecated_urls[model_name]}')
- model_name = deprecated_urls[model_name]
- model_url = model_urls[model_name]
- # check if is url
- if model_url.startswith(('http://', 'https://')):
- checkpoint = load_from_http(model_url, map_location=map_location)
- else:
- filename = osp.join(_get_mmcv_home(), model_url)
- if not osp.isfile(filename):
- raise IOError(f'{filename} is not a checkpoint file')
- checkpoint = torch.load(filename, map_location=map_location)
- return checkpoint
-
-
-@CheckpointLoader.register_scheme(prefixes='mmcls://')
-def load_from_mmcls(filename, map_location=None):
- """load checkpoint through the file path prefixed with mmcls.
-
- Args:
- filename (str): checkpoint file path with mmcls prefix
- map_location (str, optional): Same as :func:`torch.load`.
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
-
- model_urls = get_mmcls_models()
- model_name = filename[8:]
- checkpoint = load_from_http(
- model_urls[model_name], map_location=map_location)
- checkpoint = _process_mmcls_checkpoint(checkpoint)
- return checkpoint
-
-
-def _load_checkpoint(filename, map_location=None, logger=None):
- """Load checkpoint from somewhere (modelzoo, file, url).
-
- Args:
- filename (str): Accept local filepath, URL, ``torchvision://xxx``,
- ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for
- details.
- map_location (str, optional): Same as :func:`torch.load`.
- Default: None.
- logger (:mod:`logging.Logger`, optional): The logger for error message.
- Default: None
-
- Returns:
- dict or OrderedDict: The loaded checkpoint. It can be either an
- OrderedDict storing model weights or a dict containing other
- information, which depends on the checkpoint.
- """
- return CheckpointLoader.load_checkpoint(filename, map_location, logger)
-
-
-def _load_checkpoint_with_prefix(prefix, filename, map_location=None):
- """Load partial pretrained model with specific prefix.
-
- Args:
- prefix (str): The prefix of sub-module.
- filename (str): Accept local filepath, URL, ``torchvision://xxx``,
- ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for
- details.
- map_location (str | None): Same as :func:`torch.load`. Default: None.
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
-
- checkpoint = _load_checkpoint(filename, map_location=map_location)
-
- if 'state_dict' in checkpoint:
- state_dict = checkpoint['state_dict']
- else:
- state_dict = checkpoint
- if not prefix.endswith('.'):
- prefix += '.'
- prefix_len = len(prefix)
-
- state_dict = {
- k[prefix_len:]: v
- for k, v in state_dict.items() if k.startswith(prefix)
- }
-
- assert state_dict, f'{prefix} is not in the pretrained model'
- return state_dict
-
-
-def load_checkpoint(model,
- filename,
- map_location=None,
- strict=False,
- logger=None,
- revise_keys=[(r'^module\.', '')]):
- """Load checkpoint from a file or URI.
-
- Args:
- model (Module): Module to load checkpoint.
- filename (str): Accept local filepath, URL, ``torchvision://xxx``,
- ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for
- details.
- map_location (str): Same as :func:`torch.load`.
- strict (bool): Whether to strictly enforce that the keys in the
- checkpoint state_dict match the keys of the model.
- logger (:mod:`logging.Logger` or None): The logger for error message.
- revise_keys (list): A list of customized keywords to modify the
- state_dict in checkpoint. Each item is a (pattern, replacement)
- pair of the regular expression operations. Default: strip
- the prefix 'module.' by [(r'^module\\.', '')].
-
- Returns:
- dict or OrderedDict: The loaded checkpoint.
- """
- checkpoint = _load_checkpoint(filename, map_location, logger)
- # OrderedDict is a subclass of dict
- if not isinstance(checkpoint, dict):
- raise RuntimeError(
- f'No state_dict found in checkpoint file {filename}')
- # get state_dict from checkpoint
- if 'state_dict' in checkpoint:
- state_dict = checkpoint['state_dict']
- else:
- state_dict = checkpoint
-
- # strip prefix of state_dict
- metadata = getattr(state_dict, '_metadata', OrderedDict())
- for p, r in revise_keys:
- state_dict = OrderedDict(
- {re.sub(p, r, k): v
- for k, v in state_dict.items()})
- # Keep metadata in state_dict
- state_dict._metadata = metadata
-
- # load state_dict
- load_state_dict(model, state_dict, strict, logger)
- return checkpoint
-
-
-def weights_to_cpu(state_dict):
- """Copy a model state_dict to cpu.
-
- Args:
- state_dict (OrderedDict): Model weights on GPU.
-
- Returns:
- OrderedDict: Model weights on CPU.
- """
- state_dict_cpu = OrderedDict()
- for key, val in state_dict.items():
- state_dict_cpu[key] = val.cpu()
- # Keep metadata in state_dict
- state_dict_cpu._metadata = getattr(state_dict, '_metadata', OrderedDict())
- return state_dict_cpu
-
-
-def _save_to_state_dict(module, destination, prefix, keep_vars):
- """Saves module state to `destination` dictionary.
-
- This method is modified from :meth:`torch.nn.Module._save_to_state_dict`.
-
- Args:
- module (nn.Module): The module to generate state_dict.
- destination (dict): A dict where state will be stored.
- prefix (str): The prefix for parameters and buffers used in this
- module.
- keep_vars (bool): Whether to keep the variable property of the
- parameters and buffers, i.e. not detach them.
- """
- for name, param in module._parameters.items():
- if param is not None:
- destination[prefix + name] = param if keep_vars else param.detach()
- for name, buf in module._buffers.items():
- # remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d
- if buf is not None:
- destination[prefix + name] = buf if keep_vars else buf.detach()
-
-
-def get_state_dict(module, destination=None, prefix='', keep_vars=False):
- """Returns a dictionary containing a whole state of the module.
-
- Both parameters and persistent buffers (e.g. running averages) are
- included. Keys are corresponding parameter and buffer names.
-
- This method is modified from :meth:`torch.nn.Module.state_dict` to
- recursively check parallel module in case that the model has a complicated
- structure, e.g., nn.Module(nn.Module(DDP)).
-
- Args:
- module (nn.Module): The module to generate state_dict.
- destination (OrderedDict): Returned dict for the state of the
- module.
- prefix (str): Prefix of the key.
- keep_vars (bool): Whether to keep the variable property of the
- parameters. Default: False.
-
- Returns:
- dict: A dictionary containing a whole state of the module.
- """
- # recursively check parallel module in case that the model has a
- # complicated structure, e.g., nn.Module(nn.Module(DDP))
- if is_module_wrapper(module):
- module = module.module
-
- # below is the same as torch.nn.Module.state_dict()
- if destination is None:
- destination = OrderedDict()
- destination._metadata = OrderedDict()
- destination._metadata[prefix[:-1]] = local_metadata = dict(
- version=module._version)
- _save_to_state_dict(module, destination, prefix, keep_vars)
- for name, child in module._modules.items():
- if child is not None:
- get_state_dict(
- child, destination, prefix + name + '.', keep_vars=keep_vars)
- for hook in module._state_dict_hooks.values():
- hook_result = hook(module, destination, prefix, local_metadata)
- if hook_result is not None:
- destination = hook_result
- return destination
-
-
-def save_checkpoint(model,
- filename,
- optimizer=None,
- meta=None,
- file_client_args=None):
- """Save checkpoint to file.
-
- The checkpoint will have 3 fields: ``meta``, ``state_dict`` and
- ``optimizer``. By default ``meta`` will contain version and time info.
-
- Args:
- model (Module): Module whose params are to be saved.
- filename (str): Checkpoint filename.
- optimizer (:obj:`Optimizer`, optional): Optimizer to be saved.
- meta (dict, optional): Metadata to be saved in checkpoint.
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
- `New in version 1.3.16.`
- """
- if meta is None:
- meta = {}
- elif not isinstance(meta, dict):
- raise TypeError(f'meta must be a dict or None, but got {type(meta)}')
- meta.update(mmcv_version=mmcv.__version__, time=time.asctime())
-
- if is_module_wrapper(model):
- model = model.module
-
- if hasattr(model, 'CLASSES') and model.CLASSES is not None:
- # save class name to the meta
- meta.update(CLASSES=model.CLASSES)
-
- checkpoint = {
- 'meta': meta,
- 'state_dict': weights_to_cpu(get_state_dict(model))
- }
- # save optimizer state dict in the checkpoint
- if isinstance(optimizer, Optimizer):
- checkpoint['optimizer'] = optimizer.state_dict()
- elif isinstance(optimizer, dict):
- checkpoint['optimizer'] = {}
- for name, optim in optimizer.items():
- checkpoint['optimizer'][name] = optim.state_dict()
-
- if filename.startswith('pavi://'):
- if file_client_args is not None:
- raise ValueError(
- 'file_client_args should be "None" if filename starts with '
- f'"pavi://", but got {file_client_args}')
- try:
- from pavi import modelcloud
- from pavi import exception
- except ImportError:
- raise ImportError(
- 'Please install pavi to load checkpoint from modelcloud.')
- model_path = filename[7:]
- root = modelcloud.Folder()
- model_dir, model_name = osp.split(model_path)
- try:
- model = modelcloud.get(model_dir)
- except exception.NodeNotFoundError:
- model = root.create_training_model(model_dir)
- with TemporaryDirectory() as tmp_dir:
- checkpoint_file = osp.join(tmp_dir, model_name)
- with open(checkpoint_file, 'wb') as f:
- torch.save(checkpoint, f)
- f.flush()
- model.create_file(checkpoint_file, name=model_name)
- else:
- file_client = FileClient.infer_client(file_client_args, filename)
- with io.BytesIO() as f:
- torch.save(checkpoint, f)
- file_client.put(f.getvalue(), filename)
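For reference, a minimal sketch of how the checkpoint helpers above are typically used. It assumes mmcv (1.3.16 or later) and PyTorch are installed; the model and file name are placeholders, not anything from this repository.

```python
import torch.nn as nn
from mmcv.runner import load_checkpoint, save_checkpoint

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 16, 3))

# Save: the checkpoint gets 'meta' and 'state_dict' fields ('optimizer' if one is passed).
save_checkpoint(model, 'demo_checkpoint.pth', meta=dict(epoch=1))

# Load back, stripping a possible 'module.' prefix left by (Distributed)DataParallel.
checkpoint = load_checkpoint(
    model,
    'demo_checkpoint.pth',
    map_location='cpu',
    strict=True,
    revise_keys=[(r'^module\.', '')])
```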
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/hooks/logger/base.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/hooks/logger/base.py
deleted file mode 100644
index f845256729458ced821762a1b8ef881e17ff9955..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/hooks/logger/base.py
+++ /dev/null
@@ -1,166 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numbers
-from abc import ABCMeta, abstractmethod
-
-import numpy as np
-import torch
-
-from ..hook import Hook
-
-
-class LoggerHook(Hook, metaclass=ABCMeta):
- """Base class for logger hooks.
-
- Args:
- interval (int): Logging interval (every k iterations).
- ignore_last (bool): Ignore the log of the last iterations in each epoch
- if the number of remaining iterations is less than `interval`.
- reset_flag (bool): Whether to clear the output buffer after logging.
- by_epoch (bool): Whether EpochBasedRunner is used.
- """
-
- def __init__(self,
- interval=10,
- ignore_last=True,
- reset_flag=False,
- by_epoch=True):
- self.interval = interval
- self.ignore_last = ignore_last
- self.reset_flag = reset_flag
- self.by_epoch = by_epoch
-
- @abstractmethod
- def log(self, runner):
- pass
-
- @staticmethod
- def is_scalar(val, include_np=True, include_torch=True):
- """Tell the input variable is a scalar or not.
-
- Args:
- val: Input variable.
- include_np (bool): Whether to include a 0-d np.ndarray as a scalar.
- include_torch (bool): Whether to include a torch.Tensor with a single
- element as a scalar.
-
- Returns:
- bool: True or False.
- """
- if isinstance(val, numbers.Number):
- return True
- elif include_np and isinstance(val, np.ndarray) and val.ndim == 0:
- return True
- elif include_torch and isinstance(val, torch.Tensor) and len(val) == 1:
- return True
- else:
- return False
-
- def get_mode(self, runner):
- if runner.mode == 'train':
- if 'time' in runner.log_buffer.output:
- mode = 'train'
- else:
- mode = 'val'
- elif runner.mode == 'val':
- mode = 'val'
- else:
- raise ValueError(f"runner mode should be 'train' or 'val', "
- f'but got {runner.mode}')
- return mode
-
- def get_epoch(self, runner):
- if runner.mode == 'train':
- epoch = runner.epoch + 1
- elif runner.mode == 'val':
- # normal val mode
- # runner.epoch += 1 has been done before val workflow
- epoch = runner.epoch
- else:
- raise ValueError(f"runner mode should be 'train' or 'val', "
- f'but got {runner.mode}')
- return epoch
-
- def get_iter(self, runner, inner_iter=False):
- """Get the current training iteration step."""
- if self.by_epoch and inner_iter:
- current_iter = runner.inner_iter + 1
- else:
- current_iter = runner.iter + 1
- return current_iter
-
- def get_lr_tags(self, runner):
- tags = {}
- lrs = runner.current_lr()
- if isinstance(lrs, dict):
- for name, value in lrs.items():
- tags[f'learning_rate/{name}'] = value[0]
- else:
- tags['learning_rate'] = lrs[0]
- return tags
-
- def get_momentum_tags(self, runner):
- tags = {}
- momentums = runner.current_momentum()
- if isinstance(momentums, dict):
- for name, value in momentums.items():
- tags[f'momentum/{name}'] = value[0]
- else:
- tags['momentum'] = momentums[0]
- return tags
-
- def get_loggable_tags(self,
- runner,
- allow_scalar=True,
- allow_text=False,
- add_mode=True,
- tags_to_skip=('time', 'data_time')):
- tags = {}
- for var, val in runner.log_buffer.output.items():
- if var in tags_to_skip:
- continue
- if self.is_scalar(val) and not allow_scalar:
- continue
- if isinstance(val, str) and not allow_text:
- continue
- if add_mode:
- var = f'{self.get_mode(runner)}/{var}'
- tags[var] = val
- tags.update(self.get_lr_tags(runner))
- tags.update(self.get_momentum_tags(runner))
- return tags
-
- def before_run(self, runner):
- for hook in runner.hooks[::-1]:
- if isinstance(hook, LoggerHook):
- hook.reset_flag = True
- break
-
- def before_epoch(self, runner):
- runner.log_buffer.clear() # clear logs of last epoch
-
- def after_train_iter(self, runner):
- if self.by_epoch and self.every_n_inner_iters(runner, self.interval):
- runner.log_buffer.average(self.interval)
- elif not self.by_epoch and self.every_n_iters(runner, self.interval):
- runner.log_buffer.average(self.interval)
- elif self.end_of_epoch(runner) and not self.ignore_last:
- # not precise but more stable
- runner.log_buffer.average(self.interval)
-
- if runner.log_buffer.ready:
- self.log(runner)
- if self.reset_flag:
- runner.log_buffer.clear_output()
-
- def after_train_epoch(self, runner):
- if runner.log_buffer.ready:
- self.log(runner)
- if self.reset_flag:
- runner.log_buffer.clear_output()
-
- def after_val_epoch(self, runner):
- runner.log_buffer.average()
- self.log(runner)
- if self.reset_flag:
- runner.log_buffer.clear_output()
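A sketch of how the abstract `log` method above is normally filled in by a concrete hook; the class name is invented for illustration, and it only prints scalar tags to stdout.

```python
from mmcv.runner.hooks import HOOKS
from mmcv.runner.hooks.logger import LoggerHook


@HOOKS.register_module()
class PrintLoggerHook(LoggerHook):
    """Toy logger hook that prints scalar tags every `interval` iterations."""

    def log(self, runner):
        tags = self.get_loggable_tags(runner)
        print(f'[{self.get_mode(runner)}] iter {self.get_iter(runner)}: {tags}')
```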
diff --git a/spaces/MestikonAgency/README/download.sh b/spaces/MestikonAgency/README/download.sh
deleted file mode 100644
index 8625963e02dd580ebe8cd13fbbc2c7fe7e2ff66b..0000000000000000000000000000000000000000
--- a/spaces/MestikonAgency/README/download.sh
+++ /dev/null
@@ -1,70 +0,0 @@
-#!/bin/bash
-
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
-
-set -e
-
-read -p "Enter the URL from email: " PRESIGNED_URL
-echo ""
-read -p "Enter the list of models to download without spaces (7B,13B,70B,7B-chat,13B-chat,70B-chat), or press Enter for all: " MODEL_SIZE
-TARGET_FOLDER="." # where all files should end up
-mkdir -p ${TARGET_FOLDER}
-
-if [[ $MODEL_SIZE == "" ]]; then
- MODEL_SIZE="7B,13B,70B,7B-chat,13B-chat,70B-chat"
-fi
-
-echo "Downloading LICENSE and Acceptable Usage Policy"
-wget --continue ${PRESIGNED_URL/'*'/"LICENSE"} -O ${TARGET_FOLDER}"/LICENSE"
-wget --continue ${PRESIGNED_URL/'*'/"USE_POLICY.md"} -O ${TARGET_FOLDER}"/USE_POLICY.md"
-
-echo "Downloading tokenizer"
-wget --continue ${PRESIGNED_URL/'*'/"tokenizer.model"} -O ${TARGET_FOLDER}"/tokenizer.model"
-wget --continue ${PRESIGNED_URL/'*'/"tokenizer_checklist.chk"} -O ${TARGET_FOLDER}"/tokenizer_checklist.chk"
-CPU_ARCH=$(uname -m)
- if [ "$CPU_ARCH" = "arm64" ]; then
- (cd ${TARGET_FOLDER} && md5 tokenizer_checklist.chk)
- else
- (cd ${TARGET_FOLDER} && md5sum -c tokenizer_checklist.chk)
- fi
-
-for m in ${MODEL_SIZE//,/ }
-do
- if [[ $m == "7B" ]]; then
- SHARD=0
- MODEL_PATH="llama-2-7b"
- elif [[ $m == "7B-chat" ]]; then
- SHARD=0
- MODEL_PATH="llama-2-7b-chat"
- elif [[ $m == "13B" ]]; then
- SHARD=1
- MODEL_PATH="llama-2-13b"
- elif [[ $m == "13B-chat" ]]; then
- SHARD=1
- MODEL_PATH="llama-2-13b-chat"
- elif [[ $m == "70B" ]]; then
- SHARD=7
- MODEL_PATH="llama-2-70b"
- elif [[ $m == "70B-chat" ]]; then
- SHARD=7
- MODEL_PATH="llama-2-70b-chat"
- fi
-
- echo "Downloading ${MODEL_PATH}"
- mkdir -p ${TARGET_FOLDER}"/${MODEL_PATH}"
-
- for s in $(seq -f "0%g" 0 ${SHARD})
- do
- wget ${PRESIGNED_URL/'*'/"${MODEL_PATH}/consolidated.${s}.pth"} -O ${TARGET_FOLDER}"/${MODEL_PATH}/consolidated.${s}.pth"
- done
-
- wget --continue ${PRESIGNED_URL/'*'/"${MODEL_PATH}/params.json"} -O ${TARGET_FOLDER}"/${MODEL_PATH}/params.json"
- wget --continue ${PRESIGNED_URL/'*'/"${MODEL_PATH}/checklist.chk"} -O ${TARGET_FOLDER}"/${MODEL_PATH}/checklist.chk"
- echo "Checking checksums"
- if [ "$CPU_ARCH" = "arm64" ]; then
- (cd ${TARGET_FOLDER}"/${MODEL_PATH}" && md5 checklist.chk)
- else
- (cd ${TARGET_FOLDER}"/${MODEL_PATH}" && md5sum -c checklist.chk)
- fi
-done
\ No newline at end of file
diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/kie/sdmgr/README.md b/spaces/Mountchicken/MAERec-Gradio/configs/kie/sdmgr/README.md
deleted file mode 100644
index 921af5310e46803c937168c6e1c0bdf17a372798..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/configs/kie/sdmgr/README.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# SDMGR
-
-> [Spatial Dual-Modality Graph Reasoning for Key Information Extraction](https://arxiv.org/abs/2103.14470)
-
-
-
-## Abstract
-
-Key information extraction from document images is of paramount importance in office automation. Conventional template matching based approaches fail to generalize well to document images of unseen templates, and are not robust against text recognition errors. In this paper, we propose an end-to-end Spatial Dual-Modality Graph Reasoning method (SDMG-R) to extract key information from unstructured document images. We model document images as dual-modality graphs, nodes of which encode both the visual and textual features of detected text regions, and edges of which represent the spatial relations between neighboring text regions. The key information extraction is solved by iteratively propagating messages along graph edges and reasoning the categories of graph nodes. In order to roundly evaluate our proposed method as well as boost the future research, we release a new dataset named WildReceipt, which is collected and annotated tailored for the evaluation of key information extraction from document images of unseen templates in the wild. It contains 25 key information categories, a total of about 69000 text boxes, and is about 2 times larger than the existing public datasets. Extensive experiments validate that all information including visual features, textual features and spatial relations can benefit key information extraction. It has been shown that SDMG-R can effectively extract key information from document images of unseen templates, and obtain new state-of-the-art results on the recent popular benchmark SROIE and our WildReceipt. Our code and dataset will be publicly released.
-
-
-
-
-
-## Results and models
-
-### WildReceipt
-
-| Method | Modality | Macro F1-Score | Download |
-| :--------------------------------------------------------------------: | :--------------: | :------------: | :--------------------------------------------------------------------------------------------------: |
-| [sdmgr_unet16](/configs/kie/sdmgr/sdmgr_unet16_60e_wildreceipt.py) | Visual + Textual | 0.890 | [model](https://download.openmmlab.com/mmocr/kie/sdmgr/sdmgr_unet16_60e_wildreceipt/sdmgr_unet16_60e_wildreceipt_20220825_151648-22419f37.pth) \| [log](https://download.openmmlab.com/mmocr/kie/sdmgr/sdmgr_unet16_60e_wildreceipt/20220825_151648.log) |
-| [sdmgr_novisual](/configs/kie/sdmgr/sdmgr_novisual_60e_wildreceipt.py) | Textual | 0.873 | [model](https://download.openmmlab.com/mmocr/kie/sdmgr/sdmgr_novisual_60e_wildreceipt/sdmgr_novisual_60e_wildreceipt_20220831_193317-827649d8.pth) \| [log](https://download.openmmlab.com/mmocr/kie/sdmgr/sdmgr_novisual_60e_wildreceipt/20220831_193317.log) |
-
-### WildReceiptOpenset
-
-| Method | Modality | Edge F1-Score | Node Macro F1-Score | Node Micro F1-Score | Download |
-| :-------------------------------------------------------------------: | :------: | :-----------: | :-----------------: | :-----------------: | :----------------------------------------------------------------------: |
-| [sdmgr_novisual_openset](/configs/kie/sdmgr/sdmgr_novisual_60e_wildreceipt-openset.py) | Textual | 0.792 | 0.931 | 0.940 | [model](https://download.openmmlab.com/mmocr/kie/sdmgr/sdmgr_novisual_60e_wildreceipt-openset/sdmgr_novisual_60e_wildreceipt-openset_20220831_200807-dedf15ec.pth) \| [log](https://download.openmmlab.com/mmocr/kie/sdmgr/sdmgr_novisual_60e_wildreceipt-openset/20220831_200807.log) |
-
-## Citation
-
-```bibtex
-@misc{sun2021spatial,
- title={Spatial Dual-Modality Graph Reasoning for Key Information Extraction},
- author={Hongbin Sun and Zhanghui Kuang and Xiaoyu Yue and Chenhao Lin and Wayne Zhang},
- year={2021},
- eprint={2103.14470},
- archivePrefix={arXiv},
- primaryClass={cs.CV}
-}
-```
diff --git a/spaces/NATSpeech/DiffSpeech/data_gen/tts/wav_processors/base_processor.py b/spaces/NATSpeech/DiffSpeech/data_gen/tts/wav_processors/base_processor.py
deleted file mode 100644
index e8200dc58a9388ac94a5ec34b8a65f75e380255b..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/DiffSpeech/data_gen/tts/wav_processors/base_processor.py
+++ /dev/null
@@ -1,25 +0,0 @@
-REGISTERED_WAV_PROCESSORS = {}
-
-
-def register_wav_processors(name):
- def _f(cls):
- REGISTERED_WAV_PROCESSORS[name] = cls
- return cls
-
- return _f
-
-
-def get_wav_processor_cls(name):
- return REGISTERED_WAV_PROCESSORS.get(name, None)
-
-
-class BaseWavProcessor:
- @property
- def name(self):
- raise NotImplementedError
-
- def output_fn(self, input_fn):
- return f'{input_fn[:-4]}_{self.name}.wav'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- raise NotImplementedError
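A hypothetical registration example for the small registry above. The processor name `identity`, its body, and the `(output_fn, sr)` return value are assumptions made for illustration; the import path mirrors the file location.

```python
import shutil

from data_gen.tts.wav_processors.base_processor import (
    BaseWavProcessor, get_wav_processor_cls, register_wav_processors)


@register_wav_processors('identity')
class IdentityWavProcessor(BaseWavProcessor):
    @property
    def name(self):
        return 'identity'

    def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
        output_fn = self.output_fn(input_fn)  # e.g. 'x.wav' -> 'x_identity.wav'
        shutil.copy(input_fn, output_fn)      # a real processor would transform the audio
        return output_fn, sr


assert get_wav_processor_cls('identity') is IdentityWavProcessor
```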
diff --git a/spaces/NATSpeech/PortaSpeech/modules/tts/portaspeech/portaspeech.py b/spaces/NATSpeech/PortaSpeech/modules/tts/portaspeech/portaspeech.py
deleted file mode 100644
index 589aa108fdde0ea062b06f2dc3aeac28b364c17b..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/modules/tts/portaspeech/portaspeech.py
+++ /dev/null
@@ -1,226 +0,0 @@
-import math
-import torch
-import torch.nn.functional as F
-from torch import nn
-from torch.nn import Linear
-
-from modules.commons.conv import ConvBlocks, ConditionalConvBlocks
-from modules.commons.layers import Embedding
-from modules.commons.rel_transformer import RelTransformerEncoder
-from modules.commons.transformer import MultiheadAttention, FFTBlocks
-from modules.tts.commons.align_ops import clip_mel2token_to_multiple, build_word_mask, expand_states, mel2ph_to_mel2word
-from modules.tts.fs import FS_DECODERS, FastSpeech
-from modules.tts.portaspeech.fvae import FVAE
-from utils.commons.meters import Timer
-from utils.nn.seq_utils import group_hidden_by_segs
-
-
-class SinusoidalPosEmb(nn.Module):
- def __init__(self, dim):
- super().__init__()
- self.dim = dim
-
- def forward(self, x):
- """
-
- :param x: [B, T]
- :return: [B, T, H]
- """
- device = x.device
- half_dim = self.dim // 2
- emb = math.log(10000) / (half_dim - 1)
- emb = torch.exp(torch.arange(half_dim, device=device) * -emb)
- emb = x[:, :, None] * emb[None, :]
- emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
- return emb
-
-
-class PortaSpeech(FastSpeech):
- def __init__(self, ph_dict_size, word_dict_size, hparams, out_dims=None):
- super().__init__(ph_dict_size, hparams, out_dims)
- # build linguistic encoder
- if hparams['use_word_encoder']:
- self.word_encoder = RelTransformerEncoder(
- word_dict_size, self.hidden_size, self.hidden_size, self.hidden_size, 2,
- hparams['word_enc_layers'], hparams['enc_ffn_kernel_size'])
- if hparams['dur_level'] == 'word':
- if hparams['word_encoder_type'] == 'rel_fft':
- self.ph2word_encoder = RelTransformerEncoder(
- 0, self.hidden_size, self.hidden_size, self.hidden_size, 2,
- hparams['word_enc_layers'], hparams['enc_ffn_kernel_size'])
- if hparams['word_encoder_type'] == 'fft':
- self.ph2word_encoder = FFTBlocks(
- self.hidden_size, hparams['word_enc_layers'], 1, num_heads=hparams['num_heads'])
- self.sin_pos = SinusoidalPosEmb(self.hidden_size)
- self.enc_pos_proj = nn.Linear(2 * self.hidden_size, self.hidden_size)
- self.dec_query_proj = nn.Linear(2 * self.hidden_size, self.hidden_size)
- self.dec_res_proj = nn.Linear(2 * self.hidden_size, self.hidden_size)
- self.attn = MultiheadAttention(self.hidden_size, 1, encoder_decoder_attention=True, bias=False)
- self.attn.enable_torch_version = False
- if hparams['text_encoder_postnet']:
- self.text_encoder_postnet = ConvBlocks(
- self.hidden_size, self.hidden_size, [1] * 3, 5, layers_in_block=2)
- else:
- self.sin_pos = SinusoidalPosEmb(self.hidden_size)
- # build VAE decoder
- if hparams['use_fvae']:
- del self.decoder
- del self.mel_out
- self.fvae = FVAE(
- c_in_out=self.out_dims,
- hidden_size=hparams['fvae_enc_dec_hidden'], c_latent=hparams['latent_size'],
- kernel_size=hparams['fvae_kernel_size'],
- enc_n_layers=hparams['fvae_enc_n_layers'],
- dec_n_layers=hparams['fvae_dec_n_layers'],
- c_cond=self.hidden_size,
- use_prior_flow=hparams['use_prior_flow'],
- flow_hidden=hparams['prior_flow_hidden'],
- flow_kernel_size=hparams['prior_flow_kernel_size'],
- flow_n_steps=hparams['prior_flow_n_blocks'],
- strides=[hparams['fvae_strides']],
- encoder_type=hparams['fvae_encoder_type'],
- decoder_type=hparams['fvae_decoder_type'],
- )
- else:
- self.decoder = FS_DECODERS[hparams['decoder_type']](hparams)
- self.mel_out = Linear(self.hidden_size, self.out_dims, bias=True)
- if hparams['use_pitch_embed']:
- self.pitch_embed = Embedding(300, self.hidden_size, 0)
- if self.hparams['add_word_pos']:
- self.word_pos_proj = Linear(self.hidden_size, self.hidden_size)
-
- def build_embedding(self, dictionary, embed_dim):
- num_embeddings = len(dictionary)
- emb = Embedding(num_embeddings, embed_dim, self.padding_idx)
- return emb
-
- def forward(self, txt_tokens, word_tokens, ph2word, word_len, mel2word=None, mel2ph=None,
- spk_embed=None, spk_id=None, pitch=None, infer=False, tgt_mels=None,
- global_step=None, *args, **kwargs):
- ret = {}
- x, tgt_nonpadding = self.run_text_encoder(
- txt_tokens, word_tokens, ph2word, word_len, mel2word, mel2ph, ret)
- style_embed = self.forward_style_embed(spk_embed, spk_id)
- x = x + style_embed
- x = x * tgt_nonpadding
- ret['nonpadding'] = tgt_nonpadding
- if self.hparams['use_pitch_embed']:
- x = x + self.pitch_embed(pitch)
- ret['decoder_inp'] = x
- ret['mel_out_fvae'] = ret['mel_out'] = self.run_decoder(x, tgt_nonpadding, ret, infer, tgt_mels, global_step)
- return ret
-
- def run_text_encoder(self, txt_tokens, word_tokens, ph2word, word_len, mel2word, mel2ph, ret):
- word2word = torch.arange(word_len)[None, :].to(ph2word.device) + 1 # [1, T_word]
- src_nonpadding = (txt_tokens > 0).float()[:, :, None]
- ph_encoder_out = self.encoder(txt_tokens) * src_nonpadding
- if self.hparams['use_word_encoder']:
- word_encoder_out = self.word_encoder(word_tokens)
- ph_encoder_out = ph_encoder_out + expand_states(word_encoder_out, ph2word)
- if self.hparams['dur_level'] == 'word':
- word_encoder_out = 0
- h_ph_gb_word = group_hidden_by_segs(ph_encoder_out, ph2word, word_len)[0]
- word_encoder_out = word_encoder_out + self.ph2word_encoder(h_ph_gb_word)
- if self.hparams['use_word_encoder']:
- word_encoder_out = word_encoder_out + self.word_encoder(word_tokens)
- mel2word = self.forward_dur(ph_encoder_out, mel2word, ret, ph2word=ph2word, word_len=word_len)
- mel2word = clip_mel2token_to_multiple(mel2word, self.hparams['frames_multiple'])
- tgt_nonpadding = (mel2word > 0).float()[:, :, None]
- enc_pos = self.get_pos_embed(word2word, ph2word) # [B, T_ph, H]
- dec_pos = self.get_pos_embed(word2word, mel2word) # [B, T_mel, H]
- dec_word_mask = build_word_mask(mel2word, ph2word) # [B, T_mel, T_ph]
- x, weight = self.attention(ph_encoder_out, enc_pos, word_encoder_out, dec_pos, mel2word, dec_word_mask)
- if self.hparams['add_word_pos']:
- x = x + self.word_pos_proj(dec_pos)
- ret['attn'] = weight
- else:
- mel2ph = self.forward_dur(ph_encoder_out, mel2ph, ret)
- mel2ph = clip_mel2token_to_multiple(mel2ph, self.hparams['frames_multiple'])
- mel2word = mel2ph_to_mel2word(mel2ph, ph2word)
- x = expand_states(ph_encoder_out, mel2ph)
- if self.hparams['add_word_pos']:
- dec_pos = self.get_pos_embed(word2word, mel2word) # [B, T_mel, H]
- x = x + self.word_pos_proj(dec_pos)
- tgt_nonpadding = (mel2ph > 0).float()[:, :, None]
- if self.hparams['use_word_encoder']:
- x = x + expand_states(word_encoder_out, mel2word)
- return x, tgt_nonpadding
-
- def attention(self, ph_encoder_out, enc_pos, word_encoder_out, dec_pos, mel2word, dec_word_mask):
- ph_kv = self.enc_pos_proj(torch.cat([ph_encoder_out, enc_pos], -1))
- word_enc_out_expend = expand_states(word_encoder_out, mel2word)
- word_enc_out_expend = torch.cat([word_enc_out_expend, dec_pos], -1)
- if self.hparams['text_encoder_postnet']:
- word_enc_out_expend = self.dec_res_proj(word_enc_out_expend)
- word_enc_out_expend = self.text_encoder_postnet(word_enc_out_expend)
- dec_q = x_res = word_enc_out_expend
- else:
- dec_q = self.dec_query_proj(word_enc_out_expend)
- x_res = self.dec_res_proj(word_enc_out_expend)
- ph_kv, dec_q = ph_kv.transpose(0, 1), dec_q.transpose(0, 1)
- x, (weight, _) = self.attn(dec_q, ph_kv, ph_kv, attn_mask=(1 - dec_word_mask) * -1e9)
- x = x.transpose(0, 1)
- x = x + x_res
- return x, weight
-
- def run_decoder(self, x, tgt_nonpadding, ret, infer, tgt_mels=None, global_step=0):
- if not self.hparams['use_fvae']:
- x = self.decoder(x)
- x = self.mel_out(x)
- ret['kl'] = 0
- return x * tgt_nonpadding
- else:
- decoder_inp = x
- x = x.transpose(1, 2) # [B, H, T]
- tgt_nonpadding_BHT = tgt_nonpadding.transpose(1, 2) # [B, H, T]
- if infer:
- z = self.fvae(cond=x, infer=True)
- else:
- tgt_mels = tgt_mels.transpose(1, 2) # [B, 80, T]
- z, ret['kl'], ret['z_p'], ret['m_q'], ret['logs_q'] = self.fvae(
- tgt_mels, tgt_nonpadding_BHT, cond=x)
- if global_step < self.hparams['posterior_start_steps']:
- z = torch.randn_like(z)
- x_recon = self.fvae.decoder(z, nonpadding=tgt_nonpadding_BHT, cond=x).transpose(1, 2)
- ret['pre_mel_out'] = x_recon
- return x_recon
-
- def forward_dur(self, dur_input, mel2word, ret, **kwargs):
- """
-
- :param dur_input: [B, T_txt, H]
- :param mel2word: [B, T_mel]
- :param ret:
- :return:
- """
- src_padding = dur_input.data.abs().sum(-1) == 0
- dur_input = dur_input.detach() + self.hparams['predictor_grad'] * (dur_input - dur_input.detach())
- dur = self.dur_predictor(dur_input, src_padding)
- if self.hparams['dur_level'] == 'word':
- word_len = kwargs['word_len']
- ph2word = kwargs['ph2word']
- B, T_ph = ph2word.shape
- dur = torch.zeros([B, word_len.max() + 1]).to(ph2word.device).scatter_add(1, ph2word, dur)
- dur = dur[:, 1:]
- ret['dur'] = dur
- if mel2word is None:
- mel2word = self.length_regulator(dur).detach()
- return mel2word
-
- def get_pos_embed(self, word2word, x2word):
- x_pos = build_word_mask(word2word, x2word).float() # [B, T_word, T_ph]
- x_pos = (x_pos.cumsum(-1) / x_pos.sum(-1).clamp(min=1)[..., None] * x_pos).sum(1)
- x_pos = self.sin_pos(x_pos.float()) # [B, T_ph, H]
- return x_pos
-
- def store_inverse_all(self):
- def remove_weight_norm(m):
- try:
- if hasattr(m, 'store_inverse'):
- m.store_inverse()
- nn.utils.remove_weight_norm(m)
- except ValueError: # this module didn't have weight norm
- return
-
- self.apply(remove_weight_norm)
diff --git a/spaces/NeuralInternet/Text-Generation_Playground/api-example.py b/spaces/NeuralInternet/Text-Generation_Playground/api-example.py
deleted file mode 100644
index 0306b7ab8a3fa3d6f57d8474ad74d67f13557b6d..0000000000000000000000000000000000000000
--- a/spaces/NeuralInternet/Text-Generation_Playground/api-example.py
+++ /dev/null
@@ -1,59 +0,0 @@
-'''
-
-This is an example of how to use the API for oobabooga/text-generation-webui.
-
-Make sure to start the web UI with the following flags:
-
-python server.py --model MODEL --listen --no-stream
-
-Optionally, you can also add the --share flag to generate a public gradio URL,
-allowing you to use the API remotely.
-
-'''
-import requests
-
-# Server address
-server = "127.0.0.1"
-
-# Generation parameters
-# Reference: https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig
-params = {
- 'max_new_tokens': 200,
- 'do_sample': True,
- 'temperature': 0.5,
- 'top_p': 0.9,
- 'typical_p': 1,
- 'repetition_penalty': 1.05,
- 'top_k': 0,
- 'min_length': 0,
- 'no_repeat_ngram_size': 0,
- 'num_beams': 1,
- 'penalty_alpha': 0,
- 'length_penalty': 1,
- 'early_stopping': False,
-}
-
-# Input prompt
-prompt = "What I would like to say is the following: "
-
-response = requests.post(f"http://{server}:7860/run/textgen", json={
- "data": [
- prompt,
- params['max_new_tokens'],
- params['do_sample'],
- params['temperature'],
- params['top_p'],
- params['typical_p'],
- params['repetition_penalty'],
- params['top_k'],
- params['min_length'],
- params['no_repeat_ngram_size'],
- params['num_beams'],
- params['penalty_alpha'],
- params['length_penalty'],
- params['early_stopping'],
- ]
-}).json()
-
-reply = response["data"][0]
-print(reply)
diff --git a/spaces/NimaBoscarino/climategan/climategan/deeplab/mobilenet_v3.py b/spaces/NimaBoscarino/climategan/climategan/deeplab/mobilenet_v3.py
deleted file mode 100644
index 1ba08e60a2ab529627c93505b9a2cb81522ab518..0000000000000000000000000000000000000000
--- a/spaces/NimaBoscarino/climategan/climategan/deeplab/mobilenet_v3.py
+++ /dev/null
@@ -1,324 +0,0 @@
-"""
-from https://github.com/LikeLy-Journey/SegmenTron/blob/
-4bc605eedde7d680314f63d329277b73f83b1c5f/segmentron/modules/basic.py#L34
-"""
-
-from collections import OrderedDict
-from pathlib import Path
-
-import torch
-import torch.nn as nn
-from climategan.blocks import InterpolateNearest2d
-
-
-class SeparableConv2d(nn.Module):
- def __init__(
- self,
- inplanes,
- planes,
- kernel_size=3,
- stride=1,
- dilation=1,
- relu_first=True,
- bias=False,
- norm_layer=nn.BatchNorm2d,
- ):
- super().__init__()
- depthwise = nn.Conv2d(
- inplanes,
- inplanes,
- kernel_size,
- stride=stride,
- padding=dilation,
- dilation=dilation,
- groups=inplanes,
- bias=bias,
- )
- bn_depth = norm_layer(inplanes)
- pointwise = nn.Conv2d(inplanes, planes, 1, bias=bias)
- bn_point = norm_layer(planes)
-
- if relu_first:
- self.block = nn.Sequential(
- OrderedDict(
- [
- ("relu", nn.ReLU()),
- ("depthwise", depthwise),
- ("bn_depth", bn_depth),
- ("pointwise", pointwise),
- ("bn_point", bn_point),
- ]
- )
- )
- else:
- self.block = nn.Sequential(
- OrderedDict(
- [
- ("depthwise", depthwise),
- ("bn_depth", bn_depth),
- ("relu1", nn.ReLU(inplace=True)),
- ("pointwise", pointwise),
- ("bn_point", bn_point),
- ("relu2", nn.ReLU(inplace=True)),
- ]
- )
- )
-
- def forward(self, x):
- return self.block(x)
-
-
-class _ConvBNReLU(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- relu6=False,
- norm_layer=nn.BatchNorm2d,
- ):
- super(_ConvBNReLU, self).__init__()
- self.conv = nn.Conv2d(
- in_channels,
- out_channels,
- kernel_size,
- stride,
- padding,
- dilation,
- groups,
- bias=False,
- )
- self.bn = norm_layer(out_channels)
- self.relu = nn.ReLU6(True) if relu6 else nn.ReLU(True)
-
- def forward(self, x):
- x = self.conv(x)
- x = self.bn(x)
- x = self.relu(x)
- return x
-
-
-class _DepthwiseConv(nn.Module):
- """conv_dw in MobileNet"""
-
- def __init__(
- self, in_channels, out_channels, stride, norm_layer=nn.BatchNorm2d, **kwargs
- ):
- super(_DepthwiseConv, self).__init__()
- self.conv = nn.Sequential(
- _ConvBNReLU(
- in_channels,
- in_channels,
- 3,
- stride,
- 1,
- groups=in_channels,
- norm_layer=norm_layer,
- ),
- _ConvBNReLU(in_channels, out_channels, 1, norm_layer=norm_layer),
- )
-
- def forward(self, x):
- return self.conv(x)
-
-
-class InvertedResidual(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- stride,
- expand_ratio,
- dilation=1,
- norm_layer=nn.BatchNorm2d,
- ):
- super(InvertedResidual, self).__init__()
- assert stride in [1, 2]
- self.use_res_connect = stride == 1 and in_channels == out_channels
-
- layers = list()
- inter_channels = int(round(in_channels * expand_ratio))
- if expand_ratio != 1:
- # pw
- layers.append(
- _ConvBNReLU(
- in_channels, inter_channels, 1, relu6=True, norm_layer=norm_layer
- )
- )
- layers.extend(
- [
- # dw
- _ConvBNReLU(
- inter_channels,
- inter_channels,
- 3,
- stride,
- dilation,
- dilation,
- groups=inter_channels,
- relu6=True,
- norm_layer=norm_layer,
- ),
- # pw-linear
- nn.Conv2d(inter_channels, out_channels, 1, bias=False),
- norm_layer(out_channels),
- ]
- )
- self.conv = nn.Sequential(*layers)
-
- def forward(self, x):
- if self.use_res_connect:
- return x + self.conv(x)
- else:
- return self.conv(x)
-
-
-class MobileNetV2(nn.Module):
- def __init__(self, norm_layer=nn.BatchNorm2d, pretrained_path=None, no_init=False):
- super(MobileNetV2, self).__init__()
- output_stride = 16
- self.multiplier = 1.0
- if output_stride == 32:
- dilations = [1, 1]
- elif output_stride == 16:
- dilations = [1, 2]
- elif output_stride == 8:
- dilations = [2, 4]
- else:
- raise NotImplementedError
- inverted_residual_setting = [
- # t, c, n, s
- [1, 16, 1, 1],
- [6, 24, 2, 2],
- [6, 32, 3, 2],
- [6, 64, 4, 2],
- [6, 96, 3, 1],
- [6, 160, 3, 2],
- [6, 320, 1, 1],
- ]
- # building first layer
- input_channels = int(32 * self.multiplier) if self.multiplier > 1.0 else 32
- # last_channels = int(1280 * multiplier) if multiplier > 1.0 else 1280
- self.conv1 = _ConvBNReLU(
- 3, input_channels, 3, 2, 1, relu6=True, norm_layer=norm_layer
- )
-
- # building inverted residual blocks
- self.planes = input_channels
- self.block1 = self._make_layer(
- InvertedResidual,
- self.planes,
- inverted_residual_setting[0:1],
- norm_layer=norm_layer,
- )
- self.block2 = self._make_layer(
- InvertedResidual,
- self.planes,
- inverted_residual_setting[1:2],
- norm_layer=norm_layer,
- )
- self.block3 = self._make_layer(
- InvertedResidual,
- self.planes,
- inverted_residual_setting[2:3],
- norm_layer=norm_layer,
- )
- self.block4 = self._make_layer(
- InvertedResidual,
- self.planes,
- inverted_residual_setting[3:5],
- dilations[0],
- norm_layer=norm_layer,
- )
- self.block5 = self._make_layer(
- InvertedResidual,
- self.planes,
- inverted_residual_setting[5:],
- dilations[1],
- norm_layer=norm_layer,
- )
- self.last_inp_channels = self.planes
-
- self.up2 = InterpolateNearest2d()
-
- # weight initialization
- if not no_init:
- self.pretrained_path = pretrained_path
- if pretrained_path is not None:
- self._load_pretrained_model()
- else:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode="fan_out")
- if m.bias is not None:
- nn.init.zeros_(m.bias)
- elif isinstance(m, nn.BatchNorm2d):
- nn.init.ones_(m.weight)
- nn.init.zeros_(m.bias)
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.zeros_(m.bias)
-
- def _make_layer(
- self,
- block,
- planes,
- inverted_residual_setting,
- dilation=1,
- norm_layer=nn.BatchNorm2d,
- ):
- features = list()
- for t, c, n, s in inverted_residual_setting:
- out_channels = int(c * self.multiplier)
- stride = s if dilation == 1 else 1
- features.append(
- block(planes, out_channels, stride, t, dilation, norm_layer)
- )
- planes = out_channels
- for i in range(n - 1):
- features.append(
- block(planes, out_channels, 1, t, norm_layer=norm_layer)
- )
- planes = out_channels
- self.planes = planes
- return nn.Sequential(*features)
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.block1(x)
- c1 = self.block2(x)
- c2 = self.block3(c1)
- c3 = self.block4(c2)
- c4 = self.up2(self.block5(c3))
-
- # x = self.features(x)
- # x = self.classifier(x.view(x.size(0), x.size(1)))
- return c4, c1
-
- def _load_pretrained_model(self):
- assert self.pretrained_path is not None
- assert Path(self.pretrained_path).exists()
-
- pretrain_dict = torch.load(self.pretrained_path)
- pretrain_dict = {k.replace("encoder.", ""): v for k, v in pretrain_dict.items()}
- model_dict = {}
- state_dict = self.state_dict()
- ignored = []
- for k, v in pretrain_dict.items():
- if k in state_dict:
- model_dict[k] = v
- else:
- ignored.append(k)
- state_dict.update(model_dict)
- self.load_state_dict(state_dict)
- self.loaded_pre_trained = True
- print(
- " - Loaded pre-trained MobileNetV2: ignored {}/{} keys".format(
- len(ignored), len(pretrain_dict)
- )
- )
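A quick shape check for the backbone above, assuming the `climategan` package is importable. `no_init=True` skips pretrained-weight loading, so the sketch runs stand-alone with random weights.

```python
import torch
from climategan.deeplab.mobilenet_v3 import MobileNetV2

net = MobileNetV2(no_init=True).eval()
x = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    c4, c1 = net(x)  # deep (upsampled) features and early low-level features
print(c4.shape, c1.shape)  # roughly (1, 320, 32, 32) and (1, 24, 64, 64)
```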
diff --git a/spaces/NiuTaipu/moe-tts-test01/transforms.py b/spaces/NiuTaipu/moe-tts-test01/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/NiuTaipu/moe-tts-test01/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
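A small smoke test of `piecewise_rational_quadratic_transform` with linear tails; note that `num_bins - 1` unnormalized derivatives are expected, since the function pads them to `num_bins + 1` internally. The import assumes the file is on the path as `transforms`.

```python
import torch
from transforms import piecewise_rational_quadratic_transform

batch, length, num_bins = 2, 5, 10
inputs = torch.randn(batch, length)
unnormalized_widths = torch.randn(batch, length, num_bins)
unnormalized_heights = torch.randn(batch, length, num_bins)
unnormalized_derivatives = torch.randn(batch, length, num_bins - 1)

outputs, logabsdet = piecewise_rational_quadratic_transform(
    inputs,
    unnormalized_widths,
    unnormalized_heights,
    unnormalized_derivatives,
    inverse=False,
    tails='linear',
    tail_bound=5.0)
print(outputs.shape, logabsdet.shape)  # both (2, 5)
```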
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/modules/qlinear.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/modules/qlinear.py
deleted file mode 100644
index 9bdd25a8685bb7c7b32e1f02372aaeb26d8ba53a..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/modules/qlinear.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class PQLinear(nn.Module):
- """
- Quantized counterpart of the nn.Linear module. Stores the centroids, the
- assignments and the non-quantized bias. The full weight is re-instantiated
- at each forward pass.
-
- Args:
- - centroids: centroids of size n_centroids x block_size
- - assignments: assignments of the centroids to the subvectors
- of size self.out_features x n_blocks
- - bias: the non-quantized bias
-
- Remarks:
- - We refer the reader to the official documentation of the nn.Linear module
- for the other arguments and the behavior of the module
- - Performance tests on GPU show that this implementation is 15% slower than
- the non-quantized nn.Linear module for a standard training loop.
- """
-
- def __init__(self, centroids, assignments, bias, in_features, out_features):
- super(PQLinear, self).__init__()
- self.block_size = centroids.size(1)
- self.n_centroids = centroids.size(0)
- self.in_features = in_features
- self.out_features = out_features
- # check compatibility
- if self.in_features % self.block_size != 0:
- raise ValueError("Wrong PQ sizes")
- if len(assignments) % self.out_features != 0:
- raise ValueError("Wrong PQ sizes")
- # define parameters
- self.centroids = nn.Parameter(centroids, requires_grad=True)
- self.register_buffer("assignments", assignments)
- self.register_buffer("counts", torch.bincount(assignments).type_as(centroids))
- if bias is not None:
- self.bias = nn.Parameter(bias)
- else:
- self.register_parameter("bias", None)
-
- @property
- def weight(self):
- return (
- self.centroids[self.assignments]
- .reshape(-1, self.out_features, self.block_size)
- .permute(1, 0, 2)
- .flatten(1, 2)
- )
-
- def forward(self, x):
- return F.linear(
- x,
- self.weight,
- self.bias,
- )
-
- def extra_repr(self):
- return f"in_features={self.in_features},\
- out_features={self.out_features},\
- n_centroids={self.n_centroids},\
- block_size={self.block_size},\
- bias={self.bias is not None}"
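A self-contained construction of `PQLinear` with made-up centroids and assignments, only to show the expected tensor shapes; real centroids and assignments would come from the product-quantization step, which is not shown here.

```python
import torch
from fairseq.modules.quantization.pq.modules.qlinear import PQLinear

in_features, out_features, block_size, n_centroids = 16, 32, 4, 8
n_blocks = in_features // block_size  # subvectors per output row

centroids = torch.randn(n_centroids, block_size)
# out_features * n_blocks centroid indices (block-major layout).
assignments = torch.randint(n_centroids, (out_features * n_blocks,))
bias = torch.zeros(out_features)

layer = PQLinear(centroids, assignments, bias, in_features, out_features)
print(layer.weight.shape)                        # (32, 16), rebuilt from the centroids
print(layer(torch.randn(3, in_features)).shape)  # (3, 32)
```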
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/num_samples_dataset.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/num_samples_dataset.py
deleted file mode 100644
index 99a17495c701d8a05e0268f98bf453905e11d078..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/num_samples_dataset.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import FairseqDataset
-
-
-class NumSamplesDataset(FairseqDataset):
- def __getitem__(self, index):
- return 1
-
- def __len__(self):
- return 0
-
- def collater(self, samples):
- return sum(samples)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/tasks/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/tasks/__init__.py
deleted file mode 100644
index 2d78ca98708121261aa365738a65c051b5b40626..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/discriminative_reranking_nmt/tasks/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from .discriminative_reranking_task import DiscriminativeRerankingNMTTask
-
-
-__all__ = [
- "DiscriminativeRerankingNMTTask",
-]
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/dump_w2v2_feature.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/dump_w2v2_feature.py
deleted file mode 100644
index a1f0d902acf0756580a1f4604feee8fc499a9a63..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/dump_w2v2_feature.py
+++ /dev/null
@@ -1,95 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import sys
-
-import fairseq
-import soundfile as sf
-import torch
-import torch.nn.functional as F
-
-from feature_utils import get_path_iterator, dump_feature
-
-
-logging.basicConfig(
- format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- level=os.environ.get("LOGLEVEL", "INFO").upper(),
- stream=sys.stdout,
-)
-logger = logging.getLogger("dump_w2v2_feature")
-
-
-class Wav2Vec2FeatureReader(object):
- def __init__(self, ckpt_path, layer, max_chunk=1600000):
- (
- model,
- cfg,
- task,
- ) = fairseq.checkpoint_utils.load_model_ensemble_and_task([ckpt_path])
- self.model = model[0].eval().cuda()
- self.task = task
- self.layer = layer # assume this is 1-based like HuBERT
- self.max_chunk = max_chunk
- logger.info(f"TASK CONFIG:\n{self.task.cfg}")
- logger.info(f" max_chunk = {self.max_chunk}")
- logger.info(f" model:\n{self.model}")
-
- def read_audio(self, path, ref_len=None):
- wav, sr = sf.read(path)
- assert sr == self.task.cfg.sample_rate, sr
- if wav.ndim == 2:
- wav = wav.mean(-1)
- assert wav.ndim == 1, wav.ndim
- if ref_len is not None and abs(ref_len - len(wav)) > 160:
- logging.warning(f"ref {ref_len} != read {len(wav)} ({path})")
- return wav
-
- def get_feats(self, path, ref_len=None):
- x = self.read_audio(path, ref_len)
- with torch.no_grad():
- x = torch.from_numpy(x).float().cuda()
- if self.task.cfg.normalize:
- x = F.layer_norm(x, x.shape)
- x = x.view(1, -1)
-
- feat = []
- for start in range(0, x.size(1), self.max_chunk):
- x_chunk = x[:, start: start + self.max_chunk]
- res = self.model.extract_features(
- source=x_chunk,
- padding_mask=None,
- mask=False,
- layer=self.layer - 1,
- )
- feat_chunk = res["x"]
- feat.append(feat_chunk)
- return torch.cat(feat, 1).squeeze(0)
-
-
-def main(tsv_dir, split, ckpt_path, layer, nshard, rank, feat_dir, max_chunk):
- reader = Wav2Vec2FeatureReader(ckpt_path, layer, max_chunk)
- generator, num = get_path_iterator(f"{tsv_dir}/{split}.tsv", nshard, rank)
- dump_feature(reader, generator, num, split, nshard, rank, feat_dir)
-
-
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("tsv_dir")
- parser.add_argument("split")
- parser.add_argument("ckpt_path")
- parser.add_argument("layer", type=int)
- parser.add_argument("nshard", type=int)
- parser.add_argument("rank", type=int)
- parser.add_argument("feat_dir")
- parser.add_argument("--max_chunk", type=int, default=1600000)
- args = parser.parse_args()
- logger.info(args)
-
- main(**vars(args))
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/new/decoders/decoder_config.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/new/decoders/decoder_config.py
deleted file mode 100644
index 659eb94a9b8187a7c126d7b439ac2742f9d72022..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/new/decoders/decoder_config.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass, field
-from typing import Optional
-
-from fairseq.dataclass.configs import FairseqDataclass
-from fairseq.dataclass.constants import ChoiceEnum
-from omegaconf import MISSING
-
-
-DECODER_CHOICES = ChoiceEnum(["viterbi", "kenlm", "fairseqlm"])
-
-
-@dataclass
-class DecoderConfig(FairseqDataclass):
- type: DECODER_CHOICES = field(
- default="viterbi",
- metadata={"help": "The type of decoder to use"},
- )
-
-
-@dataclass
-class FlashlightDecoderConfig(FairseqDataclass):
- nbest: int = field(
- default=1,
- metadata={"help": "Number of decodings to return"},
- )
- unitlm: bool = field(
- default=False,
- metadata={"help": "If set, use unit language model"},
- )
- lmpath: str = field(
- default=MISSING,
- metadata={"help": "Language model for KenLM decoder"},
- )
- lexicon: Optional[str] = field(
- default=None,
- metadata={"help": "Lexicon for Flashlight decoder"},
- )
- beam: int = field(
- default=50,
- metadata={"help": "Number of beams to use for decoding"},
- )
- beamthreshold: float = field(
- default=50.0,
- metadata={"help": "Threshold for beam search decoding"},
- )
- beamsizetoken: Optional[int] = field(
- default=None, metadata={"help": "Beam size to use"}
- )
- wordscore: float = field(
- default=-1,
- metadata={"help": "Word score for KenLM decoder"},
- )
- unkweight: float = field(
- default=-math.inf,
- metadata={"help": "Unknown weight for KenLM decoder"},
- )
- silweight: float = field(
- default=0,
- metadata={"help": "Silence weight for KenLM decoder"},
- )
- lmweight: float = field(
- default=2,
- metadata={"help": "Weight for LM while interpolating score"},
- )
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py
deleted file mode 100644
index 27792ebda842057e33fed3dc53dd9d8a594d0483..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py
+++ /dev/null
@@ -1,637 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass
-from enum import Enum, auto
-import math
-import numpy as np
-from typing import Tuple, List, Optional, Dict
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import autograd
-
-from fairseq import checkpoint_utils, utils
-from fairseq.dataclass import FairseqDataclass
-from fairseq.models import BaseFairseqModel, register_model
-from fairseq.modules import (
- SamePad,
- TransposeLast,
-)
-
-
-class SegmentationType(Enum):
- NONE = auto()
- RANDOM = auto()
- UNIFORM_RANDOM = auto()
- UNIFORM_RANDOM_JOIN = auto()
- JOIN = auto()
-
-
-@dataclass
-class SegmentationConfig(FairseqDataclass):
- type: SegmentationType = SegmentationType.NONE
- subsample_rate: float = 0.25
- mean_pool: bool = True
- mean_pool_join: bool = False
- remove_zeros: bool = False
-
-
-@dataclass
-class Wav2vec_UConfig(FairseqDataclass):
-
- discriminator_kernel: int = 3
- discriminator_dilation: int = 1
- discriminator_dim: int = 256
- discriminator_causal: bool = True
- discriminator_linear_emb: bool = False
- discriminator_depth: int = 1
- discriminator_max_pool: bool = False
- discriminator_act_after_linear: bool = False
- discriminator_dropout: float = 0.0
- discriminator_spectral_norm: bool = False
- discriminator_weight_norm: bool = False
-
- generator_kernel: int = 4
- generator_dilation: int = 1
- generator_stride: int = 1
- generator_bias: bool = False
- generator_dropout: float = 0.0
-
- blank_weight: float = 0
- blank_mode: str = "add"
- blank_is_sil: bool = False
- no_softmax: bool = False
-
- smoothness_weight: float = 0.0
- smoothing: float = 0.0
- smoothing_one_sided: bool = False
- gradient_penalty: float = 0.0
- probabilistic_grad_penalty_slicing: bool = False
- code_penalty: float = 0.0
- gumbel: bool = False
- hard_gumbel: bool = True
- temp: Tuple[float, float, float] = (2, 0.1, 0.99995)
- input_dim: int = 128
-
- segmentation: SegmentationConfig = SegmentationConfig()
-
-
-class Segmenter(nn.Module):
- cfg: SegmentationConfig
-
- def __init__(self, cfg: SegmentationConfig):
- super().__init__()
- self.cfg = cfg
- self.subsample_rate = cfg.subsample_rate
-
- def pre_segment(self, dense_x, dense_padding_mask):
- return dense_x, dense_padding_mask
-
- def logit_segment(self, logits, padding_mask):
- return logits, padding_mask
-
-
-class RandomSegmenter(Segmenter):
- def pre_segment(self, dense_x, dense_padding_mask):
- target_num = math.ceil(dense_x.size(1) * self.subsample_rate)
- ones = torch.ones(dense_x.shape[:-1], device=dense_x.device)
- indices, _ = ones.multinomial(target_num).sort(dim=-1)
- indices_ld = indices.unsqueeze(-1).expand(-1, -1, dense_x.size(-1))
- dense_x = dense_x.gather(1, indices_ld)
- dense_padding_mask = dense_padding_mask.gather(1, index=indices)
- return dense_x, dense_padding_mask
-
-
-class UniformRandomSegmenter(Segmenter):
- def pre_segment(self, dense_x, dense_padding_mask):
- bsz, tsz, fsz = dense_x.shape
-
- target_num = math.ceil(tsz * self.subsample_rate)
-
- rem = tsz % target_num
-
- if rem > 0:
- dense_x = F.pad(dense_x, [0, 0, 0, target_num - rem])
- dense_padding_mask = F.pad(
- dense_padding_mask, [0, target_num - rem], value=True
- )
-
- dense_x = dense_x.view(bsz, target_num, -1, fsz)
- dense_padding_mask = dense_padding_mask.view(bsz, target_num, -1)
-
- if self.cfg.mean_pool:
- dense_x = dense_x.mean(dim=-2)
- dense_padding_mask = dense_padding_mask.all(dim=-1)
- else:
- ones = torch.ones((bsz, dense_x.size(2)), device=dense_x.device)
- indices = ones.multinomial(1)
- indices = indices.unsqueeze(-1).expand(-1, target_num, -1)
- indices_ld = indices.unsqueeze(-1).expand(-1, -1, -1, fsz)
- dense_x = dense_x.gather(2, indices_ld).reshape(bsz, -1, fsz)
- dense_padding_mask = dense_padding_mask.gather(2, index=indices).reshape(
- bsz, -1
- )
- return dense_x, dense_padding_mask
-
-
-class JoinSegmenter(Segmenter):
- def logit_segment(self, logits, padding_mask):
- preds = logits.argmax(dim=-1)
-
- if padding_mask.any():
- preds[padding_mask] = -1 # mark pad
- uniques = []
-
- bsz, tsz, csz = logits.shape
-
- for p in preds:
- uniques.append(
- p.cpu().unique_consecutive(return_inverse=True, return_counts=True)
- )
-
- new_tsz = max(u[0].numel() for u in uniques)
- new_logits = logits.new_zeros(bsz, new_tsz, csz)
- new_pad = padding_mask.new_zeros(bsz, new_tsz)
-
- for b in range(bsz):
- u, idx, c = uniques[b]
- keep = u != -1
-
- if self.cfg.remove_zeros:
- keep.logical_and_(u != 0)
-
- if self.training and not self.cfg.mean_pool_join:
- u[0] = 0
- u[1:] = c.cumsum(0)[:-1]
- m = c > 1
- r = torch.rand(m.sum())
- o = (c[m] * r).long()
- u[m] += o
- new_logits[b, : u.numel()] = logits[b, u]
- else:
- new_logits[b].index_add_(
- dim=0, index=idx.to(new_logits.device), source=logits[b]
- )
- new_logits[b, : c.numel()] /= c.unsqueeze(-1).to(new_logits.device)
-
- new_sz = keep.sum()
- if not keep.all():
- kept_logits = new_logits[b, : c.numel()][keep]
- new_logits[b, :new_sz] = kept_logits
-
- if new_sz < new_tsz:
- pad = new_tsz - new_sz
- new_logits[b, -pad:] = 0
- new_pad[b, -pad:] = True
-
- return new_logits, new_pad
-
-
-class UniformRandomJoinSegmenter(UniformRandomSegmenter, JoinSegmenter):
- pass
-
-
-SEGMENT_FACTORY = {
- SegmentationType.NONE: Segmenter,
- SegmentationType.RANDOM: RandomSegmenter,
- SegmentationType.UNIFORM_RANDOM: UniformRandomSegmenter,
- SegmentationType.UNIFORM_RANDOM_JOIN: UniformRandomJoinSegmenter,
- SegmentationType.JOIN: JoinSegmenter,
-}
-
-
-class Discriminator(nn.Module):
- def __init__(self, dim, cfg: Wav2vec_UConfig):
- super().__init__()
-
- inner_dim = cfg.discriminator_dim
- kernel = cfg.discriminator_kernel
- dilation = cfg.discriminator_dilation
- self.max_pool = cfg.discriminator_max_pool
-
- if cfg.discriminator_causal:
- padding = kernel - 1
- else:
- padding = kernel // 2
-
- def make_conv(in_d, out_d, k, p=0, has_dilation=True):
- conv = nn.Conv1d(
- in_d,
- out_d,
- kernel_size=k,
- padding=p,
- dilation=dilation if has_dilation else 1,
- )
- if cfg.discriminator_spectral_norm:
- conv = nn.utils.spectral_norm(conv)
- elif cfg.discriminator_weight_norm:
- conv = nn.utils.weight_norm(conv)
- return conv
-
- inner_net = [
- nn.Sequential(
- make_conv(inner_dim, inner_dim, kernel, padding),
- SamePad(kernel_size=kernel, causal=cfg.discriminator_causal),
- nn.Dropout(cfg.discriminator_dropout),
- nn.GELU(),
- )
- for _ in range(cfg.discriminator_depth - 1)
- ] + [
- make_conv(inner_dim, 1, kernel, padding, has_dilation=False),
- SamePad(kernel_size=kernel, causal=cfg.discriminator_causal),
- ]
-
- if cfg.discriminator_linear_emb:
- emb_net = [make_conv(dim, inner_dim, 1)]
- else:
- emb_net = [
- make_conv(dim, inner_dim, kernel, padding),
- SamePad(kernel_size=kernel, causal=cfg.discriminator_causal),
- ]
-
- if cfg.discriminator_act_after_linear:
- emb_net.append(nn.GELU())
-
- self.net = nn.Sequential(
- *emb_net,
- nn.Dropout(cfg.discriminator_dropout),
- *inner_net,
- )
-
- def forward(self, x, padding_mask):
- x = x.transpose(1, 2) # BTC -> BCT
- x = self.net(x)
- x = x.transpose(1, 2)
- x_sz = x.size(1)
- if padding_mask is not None and padding_mask.any() and padding_mask.dim() > 1:
- padding_mask = padding_mask[:, : x.size(1)]
- x[padding_mask] = float("-inf") if self.max_pool else 0
- x_sz = x_sz - padding_mask.sum(dim=-1)
- x = x.squeeze(-1)
- if self.max_pool:
- x, _ = x.max(dim=-1)
- else:
- x = x.sum(dim=-1)
- x = x / x_sz
- return x
-
-
-class Generator(nn.Module):
- def __init__(self, input_dim, output_dim, cfg: Wav2vec_UConfig):
- super().__init__()
-
- self.cfg = cfg
- self.output_dim = output_dim
- self.stride = cfg.generator_stride
- self.dropout = nn.Dropout(cfg.generator_dropout)
-
- padding = cfg.generator_kernel // 2
- self.proj = nn.Sequential(
- TransposeLast(),
- nn.Conv1d(
- input_dim,
- output_dim,
- kernel_size=cfg.generator_kernel,
- stride=cfg.generator_stride,
- dilation=cfg.generator_dilation,
- padding=padding,
- bias=cfg.generator_bias,
- ),
- TransposeLast(),
- )
-
- def forward(self, dense_x, tokens, dense_padding_mask):
- dense_x = self.dropout(dense_x)
-
- dense_x = self.proj(dense_x)
- if self.stride > 1:
- dense_padding_mask = dense_padding_mask[:, :: self.stride]
-
- if dense_padding_mask.size(1) != dense_x.size(1):
- new_padding = dense_padding_mask.new_zeros(dense_x.shape[:-1])
- diff = new_padding.size(1) - dense_padding_mask.size(1)
- assert (
- diff > 0
- ), f"{new_padding.shape}, {dense_padding_mask.shape}, {dense_x.shape}, {diff}"
- if diff > 0:
- new_padding[:, diff:] = dense_padding_mask
- else:
- assert diff < 0
- new_padding = dense_padding_mask[:, :diff]
-
- dense_padding_mask = new_padding
-
- result = {}
-
- token_x = None
- if tokens is not None:
- token_x = dense_x.new_zeros(tokens.numel(), self.output_dim)
- token_x.scatter_(1, tokens.view(-1, 1).long(), 1)
- token_x = token_x.view(tokens.shape + (self.output_dim,))
-
- result["dense_x"] = dense_x
- result["token_x"] = token_x
- result["dense_padding_mask"] = dense_padding_mask
-
- return result
-
-
-@register_model("wav2vec_u", dataclass=Wav2vec_UConfig)
-class Wav2vec_U(BaseFairseqModel):
- def calc_gradient_penalty(self, real_data, fake_data):
-
- b_size = min(real_data.size(0), fake_data.size(0))
- t_size = min(real_data.size(1), fake_data.size(1))
-
- if self.cfg.probabilistic_grad_penalty_slicing:
-
- def get_slice(data, dim, target_size):
-
- size = data.size(dim)
- diff = size - target_size
- if diff <= 0:
- return data
-
- start = np.random.randint(0, diff + 1)
- return data.narrow(dim=dim, start=start, length=target_size)
-
- real_data = get_slice(real_data, 0, b_size)
- real_data = get_slice(real_data, 1, t_size)
- fake_data = get_slice(fake_data, 0, b_size)
- fake_data = get_slice(fake_data, 1, t_size)
-
- else:
- real_data = real_data[:b_size, :t_size]
- fake_data = fake_data[:b_size, :t_size]
-
- alpha = torch.rand(real_data.size(0), 1, 1)
- alpha = alpha.expand(real_data.size())
- alpha = alpha.to(real_data.device)
-
- interpolates = alpha * real_data + ((1 - alpha) * fake_data)
-
- disc_interpolates = self.discriminator(interpolates, None)
-
- gradients = autograd.grad(
- outputs=disc_interpolates,
- inputs=interpolates,
- grad_outputs=torch.ones(disc_interpolates.size(), device=real_data.device),
- create_graph=True,
- retain_graph=True,
- only_inputs=True,
- )[0]
-
- gradient_penalty = (gradients.norm(2, dim=1) - 1) ** 2
- return gradient_penalty
-
- def set_num_updates(self, num_updates):
- super().set_num_updates(num_updates)
- self.update_num = num_updates
- self.curr_temp = max(
- self.max_temp * self.temp_decay ** num_updates, self.min_temp
- )
-
- def discrim_step(self, num_updates):
- return num_updates % 2 == 1
-
- def get_groups_for_update(self, num_updates):
- return "discriminator" if self.discrim_step(num_updates) else "generator"
-
- def __init__(self, cfg: Wav2vec_UConfig, target_dict):
- super().__init__()
-
- self.cfg = cfg
- self.zero_index = target_dict.index("") if "" in target_dict else 0
- self.smoothness_weight = cfg.smoothness_weight
-
- output_size = len(target_dict)
- self.pad = target_dict.pad()
- self.eos = target_dict.eos()
- self.smoothing = cfg.smoothing
- self.smoothing_one_sided = cfg.smoothing_one_sided
- self.no_softmax = cfg.no_softmax
- self.gumbel = cfg.gumbel
- self.hard_gumbel = cfg.hard_gumbel
- self.last_acc = None
-
- self.gradient_penalty = cfg.gradient_penalty
- self.code_penalty = cfg.code_penalty
- self.blank_weight = cfg.blank_weight
- self.blank_mode = cfg.blank_mode
- self.blank_index = target_dict.index("") if cfg.blank_is_sil else 0
- assert self.blank_index != target_dict.unk()
-
- self.discriminator = Discriminator(output_size, cfg)
- for p in self.discriminator.parameters():
- p.param_group = "discriminator"
-
- self.pca_A = self.pca_b = None
- d = cfg.input_dim
-
- self.segmenter = SEGMENT_FACTORY[cfg.segmentation.type](cfg.segmentation)
-
- self.generator = Generator(d, output_size, cfg)
-
- for p in self.generator.parameters():
- p.param_group = "generator"
-
- for p in self.segmenter.parameters():
- p.param_group = "generator"
-
- self.max_temp, self.min_temp, self.temp_decay = cfg.temp
- self.curr_temp = self.max_temp
- self.update_num = 0
-
- @classmethod
- def build_model(cls, cfg, task):
- return cls(cfg, task.target_dictionary)
-
- def get_logits(
- self,
- net_output: Optional[Dict[str, List[Optional[torch.Tensor]]]],
- normalize: bool = False,
- ):
- logits = net_output["logits"]
-
- if self.blank_weight != 0:
- if self.blank_mode == "add":
- logits[..., self.blank_index] += self.blank_weight
- elif self.blank_mode == "set":
- logits[..., self.blank_index] = self.blank_weight
- else:
- raise Exception(f"invalid blank mode {self.blank_mode}")
-
- padding = net_output["padding_mask"]
- if padding.any():
- logits[padding] = float("-inf")
- logits[padding][..., self.blank_index] = float("inf")
-
- if normalize:
- logits = utils.log_softmax(logits.float(), dim=-1)
-
- return logits.transpose(0, 1)
-
- def get_normalized_probs(
- self,
- net_output: Tuple[
- torch.Tensor, Optional[Dict[str, List[Optional[torch.Tensor]]]]
- ],
- log_probs: bool,
- sample: Optional[Dict[str, torch.Tensor]] = None,
- ):
- logits = self.get_logits(net_output)
-
- probs = super().get_normalized_probs(logits, log_probs, sample)
- # BTC -> TBC for ctc
- probs = probs.transpose(0, 1)
- return probs
-
- def normalize(self, dense_x):
-
- bsz, tsz, csz = dense_x.shape
-
- if dense_x.numel() == 0:
- raise Exception(dense_x.shape)
- _, k = dense_x.max(-1)
- hard_x = (
- dense_x.new_zeros(bsz * tsz, csz)
- .scatter_(-1, k.view(-1, 1), 1.0)
- .view(-1, csz)
- )
- hard_probs = torch.mean(hard_x.float(), dim=0)
- code_perplexity = torch.exp(
- -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), dim=-1)
- )
-
- avg_probs = torch.softmax(dense_x.reshape(-1, csz).float(), dim=-1).mean(dim=0)
- prob_perplexity = torch.exp(
- -torch.sum(avg_probs * torch.log(avg_probs + 1e-7), dim=-1)
- )
-
- if not self.no_softmax:
- if self.training and self.gumbel:
- dense_x = F.gumbel_softmax(
- dense_x.float(), tau=self.curr_temp, hard=self.hard_gumbel
- ).type_as(dense_x)
- else:
- dense_x = dense_x.softmax(-1)
-
- return dense_x, code_perplexity, prob_perplexity
-
- def forward(
- self,
- features,
- padding_mask,
- random_label=None,
- dense_x_only=False,
- segment=True,
- ):
- if segment:
- features, padding_mask = self.segmenter.pre_segment(features, padding_mask)
-
- orig_size = features.size(0) * features.size(1) - padding_mask.sum()
-
- gen_result = self.generator(features, random_label, padding_mask)
-
- orig_dense_x, token_x = gen_result["dense_x"], gen_result["token_x"]
- orig_dense_padding_mask = gen_result["dense_padding_mask"]
-
- if segment:
- dense_x, dense_padding_mask = self.segmenter.logit_segment(
- orig_dense_x, orig_dense_padding_mask
- )
- else:
- dense_x = orig_dense_x
- dense_padding_mask = orig_dense_padding_mask
-
- dense_logits = dense_x
- prob_perplexity = None
- code_perplexity = None
-
- if not (self.no_softmax and dense_x_only):
- dense_x, code_perplexity, prob_perplexity = self.normalize(dense_logits)
-
- if dense_x_only or self.discriminator is None:
- return {
- "logits": dense_x,
- "padding_mask": dense_padding_mask,
- }
-
- token_padding_mask = random_label == self.pad
-
- dense_y = self.discriminator(dense_x, dense_padding_mask)
- token_y = self.discriminator(token_x, token_padding_mask)
-
- sample_size = features.size(0)
-
- d_step = self.discrim_step(self.update_num)
-
- fake_smooth = self.smoothing
- real_smooth = self.smoothing
- if self.smoothing_one_sided:
- fake_smooth = 0
-
- zero_loss = None
- smoothness_loss = None
- code_pen = None
-
- if d_step:
- loss_dense = F.binary_cross_entropy_with_logits(
- dense_y,
- dense_y.new_ones(dense_y.shape) - fake_smooth,
- reduction="sum",
- )
- loss_token = F.binary_cross_entropy_with_logits(
- token_y,
- token_y.new_zeros(token_y.shape) + real_smooth,
- reduction="sum",
- )
- if self.training and self.gradient_penalty > 0:
- grad_pen = self.calc_gradient_penalty(token_x, dense_x)
- grad_pen = grad_pen.sum() * self.gradient_penalty
- else:
- grad_pen = None
- else:
- grad_pen = None
- loss_token = None
- loss_dense = F.binary_cross_entropy_with_logits(
- dense_y,
- dense_y.new_zeros(dense_y.shape) + fake_smooth,
- reduction="sum",
- )
- num_vars = dense_x.size(-1)
- if prob_perplexity is not None:
- code_pen = (num_vars - prob_perplexity) / num_vars
- code_pen = code_pen * sample_size * self.code_penalty
-
- if self.smoothness_weight > 0:
- smoothness_loss = F.mse_loss(
- dense_logits[:, :-1], dense_logits[:, 1:], reduction="none"
- )
- smoothness_loss[dense_padding_mask[:, 1:]] = 0
- smoothness_loss = (
- smoothness_loss.mean() * sample_size * self.smoothness_weight
- )
-
- result = {
- "losses": {
- "grad_pen": grad_pen,
- "code_pen": code_pen,
- "smoothness": smoothness_loss,
- },
- "temp": self.curr_temp,
- "code_ppl": code_perplexity,
- "prob_ppl": prob_perplexity,
- "d_steps": int(d_step),
- "sample_size": sample_size,
- }
-
- suff = "_d" if d_step else "_g"
- result["losses"]["dense" + suff] = loss_dense
- result["losses"]["token" + suff] = loss_token
-
- return result
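
The model deleted above tags each parameter with a `param_group` attribute and flips between discriminator and generator updates via `discrim_step` (odd update numbers train the discriminator). The snippet below is only a sketch of how a training loop might consume those tags; it is an assumption, not the actual fairseq unsupervised training code, and the learning rates are placeholders.

```python
# Sketch (assumed usage, not the original task code): split parameters by the
# `param_group` attribute set in Wav2vec_U.__init__ and alternate GAN updates.
import torch


def split_param_groups(model):
    groups = {"generator": [], "discriminator": []}
    for p in model.parameters():
        groups[getattr(p, "param_group", "generator")].append(p)
    return groups


def train_step(model, opt_g, opt_d, batch, num_updates):
    model.set_num_updates(num_updates)       # also anneals the Gumbel temperature
    out = model(**batch)                     # features, padding_mask, random_label
    loss = sum(v for v in out["losses"].values() if v is not None)
    # discrim_step(): odd update numbers -> discriminator, even -> generator
    opt = opt_d if model.discrim_step(num_updates) else opt_g
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.detach()


# groups = split_param_groups(model)
# opt_g = torch.optim.Adam(groups["generator"], lr=4e-4)     # placeholder LR
# opt_d = torch.optim.Adam(groups["discriminator"], lr=5e-5) # placeholder LR
```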
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/vggblock.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/vggblock.py
deleted file mode 100644
index ee5ee19a34816c7350c21fba7c4907fec8ca7a61..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/vggblock.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-from collections.abc import Iterable
-from itertools import repeat
-
-import torch
-import torch.nn as nn
-
-
-def _pair(v):
- if isinstance(v, Iterable):
- assert len(v) == 2, "len(v) != 2"
- return v
- return tuple(repeat(v, 2))
-
-
-def infer_conv_output_dim(conv_op, input_dim, sample_inchannel):
- sample_seq_len = 200
- sample_bsz = 10
- x = torch.randn(sample_bsz, sample_inchannel, sample_seq_len, input_dim)
- # N x C x H x W
- # N: sample_bsz, C: sample_inchannel, H: sample_seq_len, W: input_dim
- x = conv_op(x)
- # N x C x H x W
- x = x.transpose(1, 2)
- # N x H x C x W
- bsz, seq = x.size()[:2]
- per_channel_dim = x.size()[3]
- # bsz: N, seq: H, CxW the rest
- return x.contiguous().view(bsz, seq, -1).size(-1), per_channel_dim
-
-
-class VGGBlock(torch.nn.Module):
- """
- VGG motivated cnn module https://arxiv.org/pdf/1409.1556.pdf
-
- Args:
- in_channels: (int) number of input channels (typically 1)
- out_channels: (int) number of output channels
- conv_kernel_size: convolution kernel size
- pooling_kernel_size: the size of the pooling window to take a max over
- num_conv_layers: (int) number of convolution layers
- input_dim: (int) input dimension
- conv_stride: the stride of the convolving kernel.
- Can be a single number or a tuple (sH, sW) Default: 1
- padding: implicit paddings on both sides of the input.
- Can be a single number or a tuple (padH, padW). Default: None
- layer_norm: (bool) if layer norm is going to be applied. Default: False
-
- Shape:
- Input: BxCxTxfeat, i.e. (batch_size, input_size, timesteps, features)
- Output: BxCxTxfeat, i.e. (batch_size, input_size, timesteps, features)
- """
-
- def __init__(
- self,
- in_channels,
- out_channels,
- conv_kernel_size,
- pooling_kernel_size,
- num_conv_layers,
- input_dim,
- conv_stride=1,
- padding=None,
- layer_norm=False,
- ):
- assert (
- input_dim is not None
- ), "Need input_dim for LayerNorm and infer_conv_output_dim"
- super(VGGBlock, self).__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.conv_kernel_size = _pair(conv_kernel_size)
- self.pooling_kernel_size = _pair(pooling_kernel_size)
- self.num_conv_layers = num_conv_layers
- self.padding = (
- tuple(e // 2 for e in self.conv_kernel_size)
- if padding is None
- else _pair(padding)
- )
- self.conv_stride = _pair(conv_stride)
-
- self.layers = nn.ModuleList()
- for layer in range(num_conv_layers):
- conv_op = nn.Conv2d(
- in_channels if layer == 0 else out_channels,
- out_channels,
- self.conv_kernel_size,
- stride=self.conv_stride,
- padding=self.padding,
- )
- self.layers.append(conv_op)
- if layer_norm:
- conv_output_dim, per_channel_dim = infer_conv_output_dim(
- conv_op, input_dim, in_channels if layer == 0 else out_channels
- )
- self.layers.append(nn.LayerNorm(per_channel_dim))
- input_dim = per_channel_dim
- self.layers.append(nn.ReLU())
-
- if self.pooling_kernel_size is not None:
- pool_op = nn.MaxPool2d(kernel_size=self.pooling_kernel_size, ceil_mode=True)
- self.layers.append(pool_op)
- self.total_output_dim, self.output_dim = infer_conv_output_dim(
- pool_op, input_dim, out_channels
- )
-
- def forward(self, x):
- for i, _ in enumerate(self.layers):
- x = self.layers[i](x)
- return x
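
Since the docstring above spells out the expected (batch, channels, timesteps, features) input, here is a small usage sketch; it is illustrative only, with the import taken from the deleted file's path and the feature size chosen arbitrarily.

```python
# Sketch: running the deleted VGGBlock over dummy log-mel features.
import torch
from fairseq.modules.vggblock import VGGBlock

block = VGGBlock(
    in_channels=1,
    out_channels=32,
    conv_kernel_size=3,
    pooling_kernel_size=2,
    num_conv_layers=2,
    input_dim=80,        # e.g. 80 mel bins (arbitrary choice)
    layer_norm=True,
)

x = torch.randn(4, 1, 200, 80)        # B x C x T x feat
y = block(x)
print(y.shape)                        # time/feature dims halved by the max-pool
print(block.output_dim, block.total_output_dim)
```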
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/__init__.py
deleted file mode 100644
index 5b3dbc023aa4a6f7bfb8403b8204d71ca432f79c..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/__init__.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""isort:skip_file"""
-
-import importlib
-import os
-
-from fairseq import registry
-from fairseq.optim.lr_scheduler.fairseq_lr_scheduler import ( # noqa
- FairseqLRScheduler,
- LegacyFairseqLRScheduler,
-)
-from omegaconf import DictConfig
-
-
-(
- build_lr_scheduler_,
- register_lr_scheduler,
- LR_SCHEDULER_REGISTRY,
- LR_SCHEDULER_DATACLASS_REGISTRY,
-) = registry.setup_registry(
- "--lr-scheduler", base_class=FairseqLRScheduler, default="fixed"
-)
-
-
-def build_lr_scheduler(cfg: DictConfig, optimizer):
- return build_lr_scheduler_(cfg, optimizer)
-
-
-# automatically import any Python files in the optim/lr_scheduler/ directory
-for file in sorted(os.listdir(os.path.dirname(__file__))):
- if file.endswith(".py") and not file.startswith("_"):
- file_name = file[: file.find(".py")]
- importlib.import_module("fairseq.optim.lr_scheduler." + file_name)
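
Because the deleted __init__.py auto-imports every module in the directory and exposes `register_lr_scheduler`, a new scheduler only needs to register itself to become selectable with `--lr-scheduler`. The following is a rough sketch of that plugin pattern; the FairseqLRScheduler method names and optimizer calls are assumptions about fairseq's API, and the schedule itself is purely illustrative.

```python
# Sketch of the plugin pattern (API details assumed, schedule illustrative only):
# dropping this file into fairseq/optim/lr_scheduler/ would make it selectable
# with --lr-scheduler constant_decay thanks to the auto-import loop above.
from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler


@register_lr_scheduler("constant_decay")
class ConstantDecaySchedule(FairseqLRScheduler):
    def __init__(self, cfg, optimizer):
        super().__init__(cfg, optimizer)
        self.base_lr = optimizer.get_lr()

    def step_update(self, num_updates):
        # halve the learning rate every 10k updates (illustrative only)
        lr = self.base_lr * (0.5 ** (num_updates // 10000))
        self.optimizer.set_lr(lr)
        return lr
```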
diff --git a/spaces/ORI-Muchim/NahidaTTS/monotonic_align/__init__.py b/spaces/ORI-Muchim/NahidaTTS/monotonic_align/__init__.py
deleted file mode 100644
index 40b6f64aa116c74cac2f6a33444c9eeea2fdb38c..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/NahidaTTS/monotonic_align/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from numpy import zeros, int32, float32
-from torch import from_numpy
-
-from .core import maximum_path_jit
-
-
-def maximum_path(neg_cent, mask):
- """ numba optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(float32)
- path = zeros(neg_cent.shape, dtype=int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
- maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
- return from_numpy(path).to(device=device, dtype=dtype)
-
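
A toy invocation of the wrapper deleted above may help; it assumes the compiled `monotonic_align.core` extension (providing `maximum_path_jit`) has been built, and the tensor sizes are arbitrary.

```python
# Sketch: hard monotonic alignment over toy scores.
# neg_cent: per-(text step, frame) alignment scores; mask marks valid cells.
import torch
from monotonic_align import maximum_path   # needs the compiled .core extension

b, t_t, t_s = 2, 5, 8                      # batch, text length, frame count
neg_cent = torch.randn(b, t_t, t_s)
mask = torch.ones(b, t_t, t_s)

path = maximum_path(neg_cent, mask)        # 0/1 path tensor, same shape as neg_cent
print(path.shape, path.sum(-1))            # every text step gets at least one frame
```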
diff --git a/spaces/Omdena-Milan/milan-chapter-agrifoods/README.md b/spaces/Omdena-Milan/milan-chapter-agrifoods/README.md
deleted file mode 100644
index 89c77a839aa257094b0c3eac612ac52892c8ea01..0000000000000000000000000000000000000000
--- a/spaces/Omdena-Milan/milan-chapter-agrifoods/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Milan Chapter Agrifoods
-emoji: 📉
-colorFrom: red
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/pixel_decoder/ops/modules/__init__.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/pixel_decoder/ops/modules/__init__.py
deleted file mode 100644
index 855ca1095aacc2a0418722497a8fdcaa56427b45..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/pixel_decoder/ops/modules/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# ------------------------------------------------------------------------------------------------
-# Deformable DETR
-# Copyright (c) 2020 SenseTime. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------------------------------
-# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-# ------------------------------------------------------------------------------------------------
-
-
-from .ms_deform_attn import MSDeformAttn
diff --git a/spaces/PAIR/PAIR-Diffusion/ldm/modules/distributions/__init__.py b/spaces/PAIR/PAIR-Diffusion/ldm/modules/distributions/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/midas/api.py b/spaces/PAIR/Text2Video-Zero/annotator/midas/api.py
deleted file mode 100644
index 1ab9f15bf96bbaffcee0e3e29fc9d3979d6c32e8..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/midas/api.py
+++ /dev/null
@@ -1,169 +0,0 @@
-# based on https://github.com/isl-org/MiDaS
-
-import cv2
-import os
-import torch
-import torch.nn as nn
-from torchvision.transforms import Compose
-
-from .midas.dpt_depth import DPTDepthModel
-from .midas.midas_net import MidasNet
-from .midas.midas_net_custom import MidasNet_small
-from .midas.transforms import Resize, NormalizeImage, PrepareForNet
-from annotator.util import annotator_ckpts_path
-
-
-ISL_PATHS = {
- "dpt_large": os.path.join(annotator_ckpts_path, "dpt_large-midas-2f21e586.pt"),
- "dpt_hybrid": os.path.join(annotator_ckpts_path, "dpt_hybrid-midas-501f0c75.pt"),
- "midas_v21": "",
- "midas_v21_small": "",
-}
-
-remote_model_path = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt"
-
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-
-def load_midas_transform(model_type):
- # https://github.com/isl-org/MiDaS/blob/master/run.py
- # load transform only
- if model_type == "dpt_large": # DPT-Large
- net_w, net_h = 384, 384
- resize_mode = "minimal"
- normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
-
- elif model_type == "dpt_hybrid": # DPT-Hybrid
- net_w, net_h = 384, 384
- resize_mode = "minimal"
- normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
-
- elif model_type == "midas_v21":
- net_w, net_h = 384, 384
- resize_mode = "upper_bound"
- normalization = NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
-
- elif model_type == "midas_v21_small":
- net_w, net_h = 256, 256
- resize_mode = "upper_bound"
- normalization = NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
-
- else:
- assert False, f"model_type '{model_type}' not implemented, use: --model_type large"
-
- transform = Compose(
- [
- Resize(
- net_w,
- net_h,
- resize_target=None,
- keep_aspect_ratio=True,
- ensure_multiple_of=32,
- resize_method=resize_mode,
- image_interpolation_method=cv2.INTER_CUBIC,
- ),
- normalization,
- PrepareForNet(),
- ]
- )
-
- return transform
-
-
-def load_model(model_type):
- # https://github.com/isl-org/MiDaS/blob/master/run.py
- # load network
- model_path = ISL_PATHS[model_type]
- if model_type == "dpt_large": # DPT-Large
- model = DPTDepthModel(
- path=model_path,
- backbone="vitl16_384",
- non_negative=True,
- )
- net_w, net_h = 384, 384
- resize_mode = "minimal"
- normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
-
- elif model_type == "dpt_hybrid": # DPT-Hybrid
- if not os.path.exists(model_path):
- from basicsr.utils.download_util import load_file_from_url
- load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path)
-
- model = DPTDepthModel(
- path=model_path,
- backbone="vitb_rn50_384",
- non_negative=True,
- )
- net_w, net_h = 384, 384
- resize_mode = "minimal"
- normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
-
- elif model_type == "midas_v21":
- model = MidasNet(model_path, non_negative=True)
- net_w, net_h = 384, 384
- resize_mode = "upper_bound"
- normalization = NormalizeImage(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
- )
-
- elif model_type == "midas_v21_small":
- model = MidasNet_small(model_path, features=64, backbone="efficientnet_lite3", exportable=True,
- non_negative=True, blocks={'expand': True})
- net_w, net_h = 256, 256
- resize_mode = "upper_bound"
- normalization = NormalizeImage(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
- )
-
- else:
- print(f"model_type '{model_type}' not implemented, use: --model_type large")
- assert False
-
- transform = Compose(
- [
- Resize(
- net_w,
- net_h,
- resize_target=None,
- keep_aspect_ratio=True,
- ensure_multiple_of=32,
- resize_method=resize_mode,
- image_interpolation_method=cv2.INTER_CUBIC,
- ),
- normalization,
- PrepareForNet(),
- ]
- )
-
- return model.eval(), transform
-
-
-class MiDaSInference(nn.Module):
- MODEL_TYPES_TORCH_HUB = [
- "DPT_Large",
- "DPT_Hybrid",
- "MiDaS_small"
- ]
- MODEL_TYPES_ISL = [
- "dpt_large",
- "dpt_hybrid",
- "midas_v21",
- "midas_v21_small",
- ]
-
- def __init__(self, model_type):
- super().__init__()
- assert (model_type in self.MODEL_TYPES_ISL)
- model, _ = load_model(model_type)
- self.model = model
- self.model.train = disabled_train
-
- def forward(self, x):
- with torch.no_grad():
- prediction = self.model(x)
- return prediction
-
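
To show how the deleted wrapper fits together end to end, here is a hedged usage sketch; the image path is a placeholder, a CUDA device is assumed, and the dpt_hybrid checkpoint is fetched by `load_model` on first use.

```python
# Sketch: monocular depth with the deleted annotator/midas/api.py wrapper.
import cv2
import torch
from annotator.midas.api import MiDaSInference, load_midas_transform

model = MiDaSInference("dpt_hybrid").cuda().eval()
transform = load_midas_transform("dpt_hybrid")

img = cv2.imread("example.jpg")                     # placeholder path, BGR uint8
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0  # float RGB in [0, 1]
sample = transform({"image": img})                  # resize + normalize + HWC->CHW
x = torch.from_numpy(sample["image"]).unsqueeze(0).float().cuda()

with torch.no_grad():
    depth = model(x)                                # (1, H', W') inverse depth map
print(depth.shape)
```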
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/image/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/image/__init__.py
deleted file mode 100644
index d0051d609d3de4e7562e3fe638335c66617c4d91..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/image/__init__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .colorspace import (bgr2gray, bgr2hls, bgr2hsv, bgr2rgb, bgr2ycbcr,
- gray2bgr, gray2rgb, hls2bgr, hsv2bgr, imconvert,
- rgb2bgr, rgb2gray, rgb2ycbcr, ycbcr2bgr, ycbcr2rgb)
-from .geometric import (cutout, imcrop, imflip, imflip_, impad,
- impad_to_multiple, imrescale, imresize, imresize_like,
- imresize_to_multiple, imrotate, imshear, imtranslate,
- rescale_size)
-from .io import imfrombytes, imread, imwrite, supported_backends, use_backend
-from .misc import tensor2imgs
-from .photometric import (adjust_brightness, adjust_color, adjust_contrast,
- adjust_lighting, adjust_sharpness, auto_contrast,
- clahe, imdenormalize, imequalize, iminvert,
- imnormalize, imnormalize_, lut_transform, posterize,
- solarize)
-
-__all__ = [
- 'bgr2gray', 'bgr2hls', 'bgr2hsv', 'bgr2rgb', 'gray2bgr', 'gray2rgb',
- 'hls2bgr', 'hsv2bgr', 'imconvert', 'rgb2bgr', 'rgb2gray', 'imrescale',
- 'imresize', 'imresize_like', 'imresize_to_multiple', 'rescale_size',
- 'imcrop', 'imflip', 'imflip_', 'impad', 'impad_to_multiple', 'imrotate',
- 'imfrombytes', 'imread', 'imwrite', 'supported_backends', 'use_backend',
- 'imdenormalize', 'imnormalize', 'imnormalize_', 'iminvert', 'posterize',
- 'solarize', 'rgb2ycbcr', 'bgr2ycbcr', 'ycbcr2rgb', 'ycbcr2bgr',
- 'tensor2imgs', 'imshear', 'imtranslate', 'adjust_color', 'imequalize',
- 'adjust_brightness', 'adjust_contrast', 'lut_transform', 'clahe',
- 'adjust_sharpness', 'auto_contrast', 'cutout', 'adjust_lighting'
-]
diff --git a/spaces/PKUWilliamYang/StyleGANEX/utils/train_utils.py b/spaces/PKUWilliamYang/StyleGANEX/utils/train_utils.py
deleted file mode 100644
index 0c55177f7442010bc1fcc64de3d142585c22adc0..0000000000000000000000000000000000000000
--- a/spaces/PKUWilliamYang/StyleGANEX/utils/train_utils.py
+++ /dev/null
@@ -1,13 +0,0 @@
-
-def aggregate_loss_dict(agg_loss_dict):
- mean_vals = {}
- for output in agg_loss_dict:
- for key in output:
- mean_vals[key] = mean_vals.setdefault(key, []) + [output[key]]
- for key in mean_vals:
- if len(mean_vals[key]) > 0:
- mean_vals[key] = sum(mean_vals[key]) / len(mean_vals[key])
- else:
- print('{} has no value'.format(key))
- mean_vals[key] = 0
- return mean_vals
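
A tiny worked example of the helper deleted above (the import path simply mirrors the deleted file's location): it averages each key over a list of per-batch loss dicts and falls back to 0 with a warning when a key has no values.

```python
# Illustration of aggregate_loss_dict from the deleted train_utils.py.
from utils.train_utils import aggregate_loss_dict   # path mirrors the deleted file

per_batch = [
    {"loss": 1.0, "id_loss": 0.25},
    {"loss": 0.5, "id_loss": 0.75},
]
print(aggregate_loss_dict(per_batch))   # {'loss': 0.75, 'id_loss': 0.5}
```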
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/bin/abc2ly.py b/spaces/Pattr/DrumClassification/lilypond-2.24.2/bin/abc2ly.py
deleted file mode 100644
index 6893957051479fcfb5205ef6c99cb843079d8b84..0000000000000000000000000000000000000000
--- a/spaces/Pattr/DrumClassification/lilypond-2.24.2/bin/abc2ly.py
+++ /dev/null
@@ -1,1557 +0,0 @@
-#!/home/lily/lilypond-2.24.2/release/binaries/dependencies/install/Python-3.10.8/bin/python3.10
-# -*- coding: utf-8 -*-
-
-# once upon a rainy monday afternoon.
-
-# This file is part of LilyPond, the GNU music typesetter.
-#
-# Copyright (C) 1999--2022 Han-Wen Nienhuys
-# Jan Nieuwenhuizen
-#
-# LilyPond is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# LilyPond is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with LilyPond. If not, see <http://www.gnu.org/licenses/>.
-
-#
-# ...
-#
-# (not finished.)
-# ABC standard v1.6: http://abcnotation.com/
-#
-# Enhancements (Roy R. Rankin)
-#
-# Header section moved to top of lilypond file
-# handle treble, treble-8, alto, and bass clef
-# Handle voices (V: headers) with clef and part names, multiple voices
-# Handle w: lyrics with multiple verses
-# Handle key mode names for minor, major, phrygian, ionian, locrian, aeolian,
-# mixolydian, lydian, dorian
-# Handle part names from V: header
-# Tuplets handling fixed up
-# Lines starting with |: not discarded as header lines
-# Multiple T: and C: header entries handled
-# Accidental maintained until next bar check
-# Silent rests supported
-# articulations fermata, upbow, downbow, ltoe, accent, tenuto supported
-# Chord strings([-^]"string") can contain a '#'
-# Header fields enclosed by [] in notes string processed
-# W: words output after tune as abc2ps does it (they failed before)
-
-# Enhancements (Laura Conrad)
-#
-# Barring now preserved between ABC and lilypond
-# the default placement for text in abc is above the staff.
-# %%LY now supported.
-# \breve and \longa supported.
-# M:none doesn't crash lily.
-# lilypond '--' supported.
-
-# Enhancements (Guy Gascoigne-Piggford)
-#
-# Add support for maintaining ABC's notion of beaming, this is selectable
-# from the command line with a -b or --beam option.
-# Fixed a problem where on cygwin empty lines weren't being correctly identified
-# and so the script was complaining, but still generating the correct output.
-
-# Limitations
-#
-# Multiple tunes in single file not supported
-# Blank T: header lines should write score and open a new score
-# Not all header fields supported
-# ABC line breaks are ignored
-# Block comments generate error and are ignored
-# Postscript commands are ignored
-# lyrics not resynchronized by line breaks (lyrics must fully match notes)
-# %%LY slyrics can't be directly before a w: line.
-# ???
-
-
-# TODO:
-#
-# * coding style
-# * lilylib
-# * GNU style messages: warning:FILE:LINE:
-# * l10n
-#
-# Convert to new chord styles.
-#
-# UNDEF -> None
-#
-
-import __main__
-import getopt
-import gettext
-import os
-import re
-import sys
-
-"""
-
-# relocate-preamble.py.in
-#
-# This file is part of LilyPond, the GNU music typesetter.
-#
-# Copyright (C) 2007--2022 Han-Wen Nienhuys
-#
-# LilyPond is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# LilyPond is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with LilyPond. If not, see <http://www.gnu.org/licenses/>.
-#
-
-This is generic code, used for all python scripts.
-
-The quotes are to ensure that the source .py file can still be
-run as a python script, but does not include any sys.path handling.
-Otherwise, the lilypond-book calls inside the build
-might modify installed .pyc files.
-
-"""
-
-# This is needed for installations with a non-default layout, ie where share/
-# is not next to bin/.
-sys.path.insert (0, os.path.join ('/home/lily/lilypond-2.24.2/release/binaries/mingw/lilypond/install/share/lilypond/2.24.2', 'python'))
-
-# Dynamic relocation, for installations with a default layout including GUB,
-# but also for execution from the build directory.
-bindir = os.path.abspath (os.path.dirname (sys.argv[0]))
-topdir = os.path.dirname (bindir)
-if bindir.endswith (r'/scripts/out'):
- topdir = os.path.join (os.path.dirname (topdir), 'out')
-datadir = os.path.abspath (os.path.join (topdir, 'share', 'lilypond'))
-for v in [ 'current', '2.24.2' ]:
- sys.path.insert (0, os.path.join (datadir, v, 'python'))
-
-"""
-"""
-
-# Load translation and install _() into Python's builtins namespace.
-gettext.install('lilypond', '/home/lily/lilypond-2.24.2/release/binaries/mingw/lilypond/install/share/locale')
-
-import lilylib as ly
-
-version = '2.24.2'
-if version == '@' + 'TOPLEVEL_VERSION' + '@':
- version = '(unknown version)' # uGUHGUHGHGUGH
-
-UNDEF = 255
-state = UNDEF
-voice_idx_dict = {}
-header = {}
-header['footnotes'] = ''
-lyrics = []
-slyrics = []
-voices = []
-state_list = []
-repeat_state = [0] * 8
-current_voice_idx = -1
-current_lyric_idx = -1
-lyric_idx = -1
-part_names = 0
-default_len = 8
-length_specified = 0
-nobarlines = 0
-global_key = [0] * 7 # UGH
-names = ["One", "Two", "Three"]
-DIGITS = '0123456789'
-HSPACE = ' \t'
-midi_specs = ''
-
-
-def error(msg):
- sys.stderr.write(msg)
- if global_options.strict:
- sys.exit(1)
-
-
-def alphabet(i):
- return chr(i + ord('A'))
-
-
-def check_clef(s):
- # the number gives the base_octave
- clefs = [("treble", "treble", 0),
- ("treble1", "french", 0),
- ("bass3", "varbaritone", 0),
- ("bass", "bass", 0),
- ("alto4", "tenor", 0),
- ("alto2", "mezzosoprano", 0),
- ("alto1", "soprano", 0),
- ("alto", "alto", 0),
- ("perc", "percussion", 0)]
- modifier = [("-8va", "_8", -1),
- ("-8", "_8", -1),
- (r"\+8", "^8", +1),
- ("8", "_8", -1)]
-
- if not s:
- return ''
- clef = None
- octave = 0
- for c in clefs:
- m = re.match('^'+c[0], s)
- if m:
- (clef, octave) = (c[1], c[2])
- s = s[m.end():]
- break
- if not clef:
- return s
-
- mod = ""
- for md in modifier:
- m = re.match('^'+md[0], s)
- if m:
- mod = md[1]
- octave += md[2]
- s = s[m.end():]
- break
-
- state.base_octave = octave
- voices_append("\\clef \""+clef+mod+"\"\n")
- return s
-
-
-def select_voice(name, rol):
- if name not in voice_idx_dict:
- state_list.append(Parser_state())
- voices.append('')
- slyrics.append([])
- voice_idx_dict[name] = len(voices) - 1
- __main__.current_voice_idx = voice_idx_dict[name]
- __main__.state = state_list[current_voice_idx]
- while rol != '':
- m = re.match('^([^ \t=]*)=(.*)$', rol) # find keyword
- if m:
- keyword = m.group(1)
- rol = m.group(2)
- a = re.match('^("[^"]*"|[^ \t]*) *(.*)$', rol)
- if a:
- value = a.group(1)
- rol = a.group(2)
- if keyword == 'clef':
- check_clef(value)
- elif keyword == "name":
- value = re.sub('\\\\', '\\\\\\\\', value)
- # < 2.2
- voices_append("\\set Staff.instrument = %s\n" % value)
-
- __main__.part_names = 1
- elif keyword == "sname" or keyword == "snm":
- voices_append("\\set Staff.instr = %s\n" % value)
- else:
- break
-
-
-def dump_header(outf, hdr):
- outf.write('\\header {\n')
- ks = sorted(hdr.keys())
- for k in ks:
- hdr[k] = re.sub('"', '\\"', hdr[k])
- outf.write('\t%s = "%s"\n' % (k, hdr[k]))
- outf.write('}')
-
-
-def dump_lyrics(outf):
- if lyrics:
- outf.write("\n\\markup \\column {\n")
- for i in range(len(lyrics)):
- outf.write(lyrics[i])
- outf.write("\n")
- outf.write("}\n")
-
-
-def dump_default_bar(outf):
- """
- Nowadays abc2ly outputs explicits barlines (?)
- """
- # < 2.2
- outf.write("\n\\set Score.measureBarType = \"\"\n")
-
-
-def dump_slyrics(outf):
- ks = sorted(voice_idx_dict.keys())
- for k in ks:
- if re.match('[1-9]', k):
- m = alphabet(int(k))
- else:
- m = k
- for i in range(len(slyrics[voice_idx_dict[k]])):
- l = alphabet(i)
- outf.write("\nwords%sV%s = \\lyricmode {" % (m, l))
- outf.write("\n" + slyrics[voice_idx_dict[k]][i])
- outf.write("\n}")
-
-
-def dump_voices(outf):
- global doing_alternative, in_repeat
- ks = sorted(voice_idx_dict.keys())
- for k in ks:
- if re.match('[1-9]', k):
- m = alphabet(int(k))
- else:
- m = k
- outf.write("\nvoice%s = {" % m)
- dump_default_bar(outf)
- if repeat_state[voice_idx_dict[k]]:
- outf.write("\n\\repeat volta 2 {")
- outf.write("\n" + voices[voice_idx_dict[k]])
- if not using_old:
- if doing_alternative[voice_idx_dict[k]]:
- outf.write("}")
- if in_repeat[voice_idx_dict[k]]:
- outf.write("}")
- outf.write("\n}")
-
-
-def try_parse_q(a):
- # assume that Q takes the form "Q:'opt. description' 1/4=120"
- # There are other possibilities, but they are deprecated
- r = re.compile(r'^(.*) *([0-9]+) */ *([0-9]+) *=* *([0-9]+)\s*')
- m = r.match(a)
- if m:
- descr = m.group(1) # possibly empty
- numerator = int(m.group(2))
- denominator = int(m.group(3))
- tempo = m.group(4)
- dur = duration_to_lilypond_duration((numerator, denominator), 1, 0)
- voices_append("\\tempo " + descr + " " + dur + "=" + tempo + "\n")
- else:
- # Parsing of numeric tempi, as these are fairly
- # common. The spec says the number is a "beat" so using
- # a quarter note as the standard time
- numericQ = re.compile('[0-9]+')
- m = numericQ.match(a)
- if m:
- voices_append("\\tempo 4=" + m.group(0))
- else:
- sys.stderr.write(
- "abc2ly: Warning, unable to parse Q specification: %s\n" % a)
-
-
-def dump_score(outf):
- outf.write(r"""
-
-\score{
- <<
-""")
-
- ks = sorted(voice_idx_dict.keys())
- for k in ks:
- if re.match('[1-9]', k):
- m = alphabet(int(k))
- else:
- m = k
- if k == 'default' and len(voice_idx_dict) > 1:
- break
- outf.write("\n\t\\context Staff=\"%s\"\n\t{\n" % k)
- if k != 'default':
- outf.write("\t \\voicedefault\n")
- outf.write("\t \\voice%s " % m)
- outf.write("\n\t}\n")
-
- l = ord('A')
- for lyrics in slyrics[voice_idx_dict[k]]:
- outf.write("\n\t\\addlyrics {\n")
- if re.match('[1-9]', k):
- m = alphabet(int(k))
- else:
- m = k
-
- outf.write(" \\words%sV%s } " % (m, chr(l)))
- l += 1
-
- outf.write("\n >>")
- outf.write("\n\t\\layout {\n")
- outf.write("\t}\n\t\\midi {%s}\n}\n" % midi_specs)
-
-
-def set_default_length(s):
- global length_specified
- m = re.search('1/([0-9]+)', s)
- if m:
- __main__.default_len = int(m.group(1))
- length_specified = 1
-
-
-def set_default_len_from_time_sig(s):
- m = re.search('([0-9]+)/([0-9]+)', s)
- if m:
- n = int(m.group(1))
- d = int(m.group(2))
- if (n * 1.0)/(d * 1.0) < 0.75:
- __main__.default_len = 16
- else:
- __main__.default_len = 8
-
-
-def gulp_file(f):
- try:
- i = open(f, encoding="utf8")
- i.seek(0, 2)
- n = i.tell()
- i.seek(0, 0)
- except FileNotFoundError:
- sys.stderr.write("cannot open file: `%s'\n" % f)
- return ''
- s = i.read(n)
- if len(s) <= 0:
- sys.stderr.write("gulped empty file: `%s'\n" % f)
- i.close()
- return s
-
-
-# pitch manipulation. Tuples are (name, alteration).
-# 0 is (central) C. Alteration -1 is a flat, Alteration +1 is a sharp
-# pitch in semitones.
-def semitone_pitch(tup):
- p = 0
-
- t = tup[0]
- p = p + 12 * (t // 7)
- t = t % 7
-
- if t > 2:
- p = p - 1
-
- p = p + t * 2 + tup[1]
- return p
-
-
-def fifth_above_pitch(tup):
- (n, a) = (tup[0] + 4, tup[1])
-
- difference = 7 - (semitone_pitch((n, a)) - semitone_pitch(tup))
- a = a + difference
-
- return (n, a)
-
-
-def sharp_keys():
- p = (0, 0)
- l = []
- k = 0
- while True:
- l.append(p)
- (t, a) = fifth_above_pitch(p)
- if semitone_pitch((t, a)) % 12 == 0:
- break
-
- p = (t % 7, a)
- return l
-
-
-def flat_keys():
- p = (0, 0)
- l = []
- k = 0
- while True:
- l.append(p)
- (t, a) = quart_above_pitch(p)
- if semitone_pitch((t, a)) % 12 == 0:
- break
-
- p = (t % 7, a)
- return l
-
-
-def quart_above_pitch(tup):
- (n, a) = (tup[0] + 3, tup[1])
-
- difference = 5 - (semitone_pitch((n, a)) - semitone_pitch(tup))
- a = a + difference
-
- return (n, a)
-
-
-key_lookup = { # abc to lilypond key mode names
- 'm': 'minor',
- 'min': 'minor',
- 'maj': 'major',
- 'major': 'major',
- 'phr': 'phrygian',
- 'ion': 'ionian',
- 'loc': 'locrian',
- 'aeo': 'aeolian',
- 'mix': 'mixolydian',
- 'mixolydian': 'mixolydian',
- 'lyd': 'lydian',
- 'dor': 'dorian',
- 'dorian': 'dorian'
-}
-
-
-def lily_key(k):
- if k == 'none':
- return
- orig = "" + k
- # UGR
- k = k.lower()
- key = k[0]
- # UGH
- k = k[1:]
- if k and k[0] == '#':
- key = key + 'is'
- k = k[1:]
- elif k and k[0] == 'b':
- key = key + 'es'
- k = k[1:]
- if not k:
- return '%s \\major' % key
-
- type = k[0:3]
- if type not in key_lookup:
- # ugh, use lilylib, say WARNING:FILE:LINE:
- sys.stderr.write("abc2ly:warning:")
- sys.stderr.write("ignoring unknown key: `%s'" % orig)
- sys.stderr.write('\n')
- return 0
- return "%s \\%s" % (key, key_lookup[type])
-
-
-def shift_key(note, acc, shift):
- s = semitone_pitch((note, acc))
- s = (s + shift + 12) % 12
- if s <= 4:
- n = s // 2
- a = s % 2
- else:
- n = (s + 1) // 2
- a = (s + 1) % 2
- if a:
- n = n + 1
- a = -1
- return (n, a)
-
-
-key_shift = { # semitone shifts for key mode names
- 'm': 3,
- 'min': 3,
- 'minor': 3,
- 'maj': 0,
- 'major': 0,
- 'phr': -4,
- 'phrygian': -4,
- 'ion': 0,
- 'ionian': 0,
- 'loc': 1,
- 'locrian': 1,
- 'aeo': 3,
- 'aeolian': 3,
- 'mix': 5,
- 'mixolydian': 5,
- 'lyd': -5,
- 'lydian': -5,
- 'dor': -2,
- 'dorian': -2
-}
-
-
-def compute_key(k):
- k = k.lower()
- intkey = (ord(k[0]) - ord('a') + 5) % 7
- intkeyacc = 0
- k = k[1:]
-
- if k and k[0] == 'b':
- intkeyacc = -1
- k = k[1:]
- elif k and k[0] == '#':
- intkeyacc = 1
- k = k[1:]
- k = k[0:3]
- if k and k in key_shift:
- (intkey, intkeyacc) = shift_key(intkey, intkeyacc, key_shift[k])
- keytup = (intkey, intkeyacc)
-
- sharp_key_seq = sharp_keys()
- flat_key_seq = flat_keys()
-
- accseq = None
- accsign = 0
- if keytup in sharp_key_seq:
- accsign = 1
- key_count = sharp_key_seq.index(keytup)
- accseq = [(4*x - 1) % 7 for x in range(1, key_count + 1)]
-
- elif keytup in flat_key_seq:
- accsign = -1
- key_count = flat_key_seq.index(keytup)
- accseq = [(3*x + 3) % 7 for x in range(1, key_count + 1)]
- else:
- error("Huh?")
- raise Exception("Huh")
-
- key_table = [0] * 7
- for a in accseq:
- key_table[a] = key_table[a] + accsign
-
- return key_table
-
-
-tup_lookup = {
- '2': '3/2',
- '3': '2/3',
- '4': '4/3',
- '5': '4/5',
- '6': '4/6',
- '7': '6/7',
- '9': '8/9',
-}
-
-
-def try_parse_tuplet_begin(s, state):
- if re.match(r'\([2-9]', s):
- dig = s[1]
- s = s[2:]
- prev_tuplet_state = state.parsing_tuplet
- state.parsing_tuplet = int(dig[0])
- if prev_tuplet_state:
- voices_append("}")
- voices_append("\\times %s {" % tup_lookup[dig])
- return s
-
-
-def try_parse_group_end(s, state):
- if s and s[0] in HSPACE:
- s = s[1:]
- close_beam_state(state)
- return s
-
-
-def header_append(key, a):
- s = ''
- if key in header:
- s = header[key] + "\n"
- header[key] = s + a
-
-
-def wordwrap(a, v):
- linelen = len(v) - v.rfind('\n')
- if linelen + len(a) > 80:
- v = v + '\n'
- return v + a + ' '
-
-
-def stuff_append(stuff, idx, a):
- if not stuff:
- stuff.append(a)
- else:
- stuff[idx] = wordwrap(a, stuff[idx])
-
-# ignore wordwrap since we are adding to the previous word
-
-
-def stuff_append_back(stuff, idx, a):
- if not stuff:
- stuff.append(a)
- else:
- point = len(stuff[idx])-1
- while stuff[idx][point] == ' ':
- point = point - 1
- point = point + 1
- stuff[idx] = stuff[idx][:point] + a + stuff[idx][point:]
-
-
-def voices_append(a):
- if current_voice_idx < 0:
- select_voice('default', '')
- stuff_append(voices, current_voice_idx, a)
-
-# word wrap really makes it hard to bind beams to the end of notes since it
-# pushes out whitespace on every call. The _back functions do an append
-# prior to the last space, effectively tagging whatever they are given
-# onto the last note
-
-
-def voices_append_back(a):
- if current_voice_idx < 0:
- select_voice('default', '')
- stuff_append_back(voices, current_voice_idx, a)
-
-
-def repeat_prepend():
- global repeat_state
- if current_voice_idx < 0:
- select_voice('default', '')
- if not using_old:
- repeat_state[current_voice_idx] = 't'
-
-
-def lyrics_append(a):
- a = re.sub('#', '\\#', a) # latex does not like naked #'s
- a = re.sub('"', '\\"', a) # latex does not like naked "'s
- a = ' \\line { "' + a + '" }\n'
- stuff_append(lyrics, current_lyric_idx, a)
-
-# break lyrics to words and put "'s around words containing numbers and '"'s
-
-
-def fix_lyric(s):
- ret = ''
- while s != '':
- m = re.match('[ \t]*([^ \t]*)[ \t]*(.*$)', s)
- if m:
- word = m.group(1)
- s = m.group(2)
- word = re.sub('"', '\\"', word) # escape "
- if re.match(r'.*[0-9"\(]', word):
- word = re.sub('_', ' ', word) # _ causes probs inside ""
- ret = ret + '\"' + word + '\" '
- else:
- ret = ret + word + ' '
- else:
- return ret
- return ret
-
-
-def slyrics_append(a):
- a = re.sub('_', ' _ ', a) # _ to ' _ '
- # split words with "-" unless was originally "--"
- a = re.sub('([^-])-([^-])', '\\1- \\2', a)
- a = re.sub('\\\\- ', '-', a) # unless \-
- a = re.sub('~', '_', a) # ~ to space('_')
- a = re.sub(r'\*', '_ ', a) # * to to space
- a = re.sub('#', '\\#', a) # latex does not like naked #'s
- if re.match(r'.*[0-9"\(]', a): # put numbers and " and ( into quoted string
- a = fix_lyric(a)
- a = re.sub('$', ' ', a) # ensure space between lines
- __main__.lyric_idx = lyric_idx + 1
- if len(slyrics[current_voice_idx]) <= lyric_idx:
- slyrics[current_voice_idx].append(a)
- else:
- v = slyrics[current_voice_idx][lyric_idx]
- slyrics[current_voice_idx][lyric_idx] = wordwrap(
- a, slyrics[current_voice_idx][lyric_idx])
-
-
-def try_parse_header_line(ln, state):
- global length_specified
- m = re.match('^([A-Za-z]): *(.*)$', ln)
-
- if m:
- g = m.group(1)
- a = m.group(2)
- if g == 'T': # title
- a = re.sub('[ \t]*$', '', a) # strip trailing blanks
- if 'title' in header:
- if a:
- if len(header['title']):
- # the non-ascii character
- # in the string below is a
- # punctuation dash. (TeX ---)
- header['title'] = header['title'] + ' — ' + a
- else:
- header['subtitle'] = a
- else:
- header['title'] = a
- if g == 'M': # Meter
- if a == 'C':
- if not state.common_time:
- state.common_time = 1
- voices_append(
- " \\override Staff.TimeSignature.style = #'C\n")
- a = '4/4'
- if a == 'C|':
- if not state.common_time:
- state.common_time = 1
- voices_append(
- "\\override Staff.TimeSignature.style = #'C\n")
- a = '2/2'
- if not length_specified:
- set_default_len_from_time_sig(a)
- else:
- length_specified = 0
- if not a == 'none':
- voices_append('\\time %s' % a)
- state.next_bar = ''
- if g == 'K': # KEY
- a = check_clef(a)
- if a:
- # separate clef info
- m = re.match('^([^ \t]*) *([^ ]*)( *)(.*)$', a)
- if m:
- # there may or may not be a space
- # between the key letter and the mode
- # convert the mode to lower-case before comparing
- mode = m.group(2)[0:3].lower()
- if mode in key_lookup:
- # use the full mode, not only the first three letters
- key_info = m.group(1) + m.group(2).lower()
- clef_info = a[m.start(4):]
- else:
- key_info = m.group(1)
- clef_info = a[m.start(2):]
- __main__.global_key = compute_key(key_info)
- k = lily_key(key_info)
- if k:
- voices_append('\\key %s' % k)
- check_clef(clef_info)
- else:
- __main__.global_key = compute_key(a)
- k = lily_key(a)
- if k:
- voices_append('\\key %s \\major' % k)
- if g == 'N': # Notes
- header['footnotes'] = header['footnotes'] + '\\\\\\\\' + a
- if g == 'O': # Origin
- header['origin'] = a
- if g == 'X': # Reference Number
- header['crossRefNumber'] = a
- if g == 'A': # Area
- header['area'] = a
- if g == 'H': # History
- header_append('history', a)
- if g == 'B': # Book
- header['book'] = a
- if g == 'C': # Composer
- if 'composer' in header:
- if a:
- header['composer'] = header['composer'] + '\\\\\\\\' + a
- else:
- header['composer'] = a
- if g == 'S':
- header['subtitle'] = a
- if g == 'L': # Default note length
- set_default_length(ln)
- if g == 'V': # Voice
- voice = re.sub(' .*$', '', a)
- rest = re.sub('^[^ \t]* *', '', a)
- if state.next_bar:
- voices_append(state.next_bar)
- state.next_bar = ''
- select_voice(voice, rest)
- if g == 'W': # Words
- lyrics_append(a)
- if g == 'w': # vocals
- slyrics_append(a)
- if g == 'Q': # tempo
- try_parse_q(a)
- if g == 'R': # Rhythm (e.g. jig, reel, hornpipe)
- header['meter'] = a
- if g == 'Z': # Transcription (e.g. Steve Mansfield 1/2/2000)
- header['transcription'] = a
- return ''
- return ln
-
-# we use in this order specified accidental, active accidental for bar,
-# active accidental for key
-
-
-def pitch_to_lilypond_name(name, acc, bar_acc, key):
- s = ''
- if acc == UNDEF:
- if not nobarlines:
- acc = bar_acc
- if acc == UNDEF:
- acc = key
- if acc == -1:
- s = 'es'
- elif acc == 1:
- s = 'is'
-
- if name > 4:
- name = name - 7
- return chr(name + ord('c')) + s
-
-
-def octave_to_lilypond_quotes(o):
- o = o + 2
- s = ''
- if o < 0:
- o = -o
- s = ','
- else:
- s = '\''
-
- return s * o
-
-
-def parse_num(s):
- durstr = ''
- while s and s[0] in DIGITS:
- durstr = durstr + s[0]
- s = s[1:]
-
- n = None
- if durstr:
- n = int(durstr)
- return (s, n)
-
-
-def duration_to_lilypond_duration(multiply_tup, defaultlen, dots):
- base = 1
- # (num / den) / defaultlen < 1/base
- while base * multiply_tup[0] < multiply_tup[1]:
- base = base * 2
- if base == 1:
- if (multiply_tup[0] / multiply_tup[1]) == 2:
- base = '\\breve'
- if (multiply_tup[0] / multiply_tup[1]) == 3:
- base = '\\breve'
- dots = 1
- if (multiply_tup[0] / multiply_tup[1]) == 4:
- base = '\\longa'
- return '%s%s' % (base, '.' * dots)
-
-
-class Parser_state:
- def __init__(self):
- self.in_acc = {}
- self.next_articulation = ''
- self.next_bar = ''
- self.next_dots = 0
- self.next_den = 1
- self.parsing_tuplet = 0
- self.plus_chord = 0
- self.base_octave = 0
- self.common_time = 0
- self.parsing_beam = 0
-
-
-# return (str, num,den,dots)
-def parse_duration(s, parser_state):
- num = 0
- den = parser_state.next_den
- parser_state.next_den = 1
-
- (s, num) = parse_num(s)
- if not num:
- num = 1
- if len(s):
- if s[0] == '/':
- if len(s[0]):
- while s[:1] == '/':
- s = s[1:]
- d = 2
- if s[0] in DIGITS:
- (s, d) = parse_num(s)
-
- den = den * d
-
- den = den * default_len
-
- current_dots = parser_state.next_dots
- parser_state.next_dots = 0
- if re.match('[ \t]*[<>]', s):
- while s[0] in HSPACE:
- s = s[1:]
- while s[0] == '>':
- s = s[1:]
- current_dots = current_dots + 1
- parser_state.next_den = parser_state.next_den * 2
-
- while s[0] == '<':
- s = s[1:]
- den = den * 2
- parser_state.next_dots = parser_state.next_dots + 1
-
- try_dots = [3, 2, 1]
- for d in try_dots:
- f = 1 << d
- multiplier = (2*f-1)
- if num % multiplier == 0 and den % f == 0:
- num = num / multiplier
- den = den / f
- current_dots = current_dots + d
-
- return (s, num, den, current_dots)
-
-
-def try_parse_rest(s, parser_state):
- if not s or s[0] != 'z' and s[0] != 'x':
- return s
-
- __main__.lyric_idx = -1
-
- if parser_state.next_bar:
- voices_append(parser_state.next_bar)
- parser_state.next_bar = ''
-
- if s[0] == 'z':
- rest = 'r'
- else:
- rest = 's'
- s = s[1:]
-
- (s, num, den, d) = parse_duration(s, parser_state)
- voices_append(
- '%s%s' % (rest, duration_to_lilypond_duration((num, den), default_len, d)))
- if parser_state.next_articulation:
- voices_append(parser_state.next_articulation)
- parser_state.next_articulation = ''
-
- return s
-
-
-artic_tbl = {
- '.': '-.',
- 'T': '^\\trill',
- 'H': '^\\fermata',
- 'u': '^\\upbow',
- 'K': '^\\ltoe',
- 'k': '^\\accent',
- 'M': '^\\tenuto',
- '~': '^"~" ',
- 'J': '', # ignore slide
- 'R': '', # ignore roll
- 'S': '^\\segno',
- 'O': '^\\coda',
- 'v': '^\\downbow'
-}
-
-
-def try_parse_articulation(s, state):
- while s and s[:1] in artic_tbl:
- state.next_articulation = state.next_articulation + artic_tbl[s[:1]]
- if not artic_tbl[s[:1]]:
- sys.stderr.write("Warning: ignoring `%s'\n" % s[:1])
-
- s = s[1:]
-
- # s7m2 input doesn't care about spaces
- if re.match(r'[ \t]*\(', s):
- s = s.lstrip()
-
- slur_begin = 0
- while s[:1] == '(' and s[1] not in DIGITS:
- slur_begin = slur_begin + 1
- state.next_articulation = state.next_articulation + '('
- s = s[1:]
-
- return s
-
-#
-# remember accidental for rest of bar
-#
-
-
-def set_bar_acc(note, octave, acc, state):
- if acc == UNDEF:
- return
- n_oct = note + octave * 7
- state.in_acc[n_oct] = acc
-
-# get accidental set in this bar or UNDEF if not set
-
-
-def get_bar_acc(note, octave, state):
- n_oct = note + octave * 7
- if n_oct in state.in_acc:
- return state.in_acc[n_oct]
- else:
- return UNDEF
-
-
-def clear_bar_acc(state):
- state.in_acc = {}
-
-
-# if we are parsing a beam, close it off
-def close_beam_state(state):
- if state.parsing_beam and global_options.beams:
- state.parsing_beam = 0
- voices_append_back(']')
-
-
-# WHAT AN UNBELIEVABLE PROGRAMMING MESS ABC IS!
-def try_parse_note(s, parser_state):
- mud = ''
-
- slur_begin = 0
- if not s:
- return s
-
- articulation = ''
- acc = UNDEF
- if s[0] in '^=_':
- c = s[0]
- s = s[1:]
- if c == '^':
- acc = 1
- if c == '=':
- acc = 0
- if c == '_':
- acc = -1
-
- octave = parser_state.base_octave
- if s[0] in "ABCDEFG":
- s = s[0].lower() + s[1:]
- octave = octave - 1
-
- notename = 0
- if s[0] in "abcdefg":
- notename = (ord(s[0]) - ord('a') + 5) % 7
- s = s[1:]
- else:
- return s # failed; not a note!
-
- __main__.lyric_idx = -1
-
- if parser_state.next_bar:
- voices_append(parser_state.next_bar)
- parser_state.next_bar = ''
-
- while s[0] == ',':
- octave = octave - 1
- s = s[1:]
- while s[0] == '\'':
- octave = octave + 1
- s = s[1:]
-
- (s, num, den, current_dots) = parse_duration(s, parser_state)
-
- if re.match(r'[ \t]*\)', s):
- s = s.lstrip()
-
- slur_end = 0
- while s[:1] == ')':
- slur_end = slur_end + 1
- s = s[1:]
-
- bar_acc = get_bar_acc(notename, octave, parser_state)
- pit = pitch_to_lilypond_name(notename, acc, bar_acc, global_key[notename])
- oct = octave_to_lilypond_quotes(octave)
- if acc != UNDEF and (acc == global_key[notename] or acc == bar_acc):
- mod = '!'
- else:
- mod = ''
- voices_append("%s%s%s%s" %
- (pit, oct, mod,
- duration_to_lilypond_duration((num, den), default_len, current_dots)))
-
- set_bar_acc(notename, octave, acc, parser_state)
- if parser_state.next_articulation:
- articulation = articulation + parser_state.next_articulation
- parser_state.next_articulation = ''
-
- voices_append(articulation)
-
- if slur_begin:
- voices_append('-(' * slur_begin)
- if slur_end:
- voices_append('-)' * slur_end)
-
- if parser_state.parsing_tuplet:
- parser_state.parsing_tuplet = parser_state.parsing_tuplet - 1
- if not parser_state.parsing_tuplet:
- voices_append("}")
-
- if global_options.beams and \
- s[0] in '^=_ABCDEFGabcdefg' and \
- not parser_state.parsing_beam and \
- not parser_state.parsing_tuplet:
- parser_state.parsing_beam = 1
- voices_append_back('[')
-
- return s
-
-
-def junk_space(s, state):
- while s and s[0] in '\t\n\r ':
- s = s[1:]
- close_beam_state(state)
-
- return s
-
-
-def try_parse_guitar_chord(s, state):
- if s[:1] == '"':
- s = s[1:]
- gc = ''
- if s[0] == '_' or (s[0] == '^'):
- position = s[0]
- s = s[1:]
- else:
- position = '^'
- while s and s[0] != '"':
- gc = gc + s[0]
- s = s[1:]
-
- if s:
- s = s[1:]
- gc = re.sub('#', '\\#', gc) # escape '#'s
- state.next_articulation = ("%c\"%s\"" % (position, gc)) \
- + state.next_articulation
- return s
-
-
-def try_parse_escape(s):
- if not s or s[0] != '\\':
- return s
-
- s = s[1:]
- if s[:1] == 'K':
- key_table = compute_key()
- return s
-
-
-#
-# |] thin-thick double bar line
-# || thin-thin double bar line
-# [| thick-thin double bar line
-# :| left repeat
-# |: right repeat
-# :: left-right repeat
-# |1 volta 1
-# |2 volta 2
-old_bar_dict = {
- '|]': '|.',
- '||': '||',
- '[|': '||',
- ':|': ':|.',
- '|:': '|:',
- '::': ':|.|:',
- '|1': '|',
- '|2': '|',
- ':|2': ':|.',
- '|': '|'
-}
-bar_dict = {
- '|]': '\\bar "|."',
- '||': '\\bar "||"',
- '[|': '\\bar "||"',
- ':|': '}',
- '|:': '\\repeat volta 2 {',
- '::': '} \\repeat volta 2 {',
- '|1': '} \\alternative{{',
- '|2': '} {',
- ':|2': '} {',
- '|': '\\bar "|"'
-}
-
-
-warn_about = ['|:', '::', ':|', '|1', ':|2', '|2']
-alternative_opener = ['|1', '|2', ':|2']
-repeat_ender = ['::', ':|']
-repeat_opener = ['::', '|:']
-in_repeat = [''] * 8
-doing_alternative = [''] * 8
-using_old = ''
-
-def try_parse_bar(string, state):
- global in_repeat, doing_alternative, using_old
- do_curly = ''
- bs = None
- if current_voice_idx < 0:
- select_voice('default', '')
- # first try the longer one
- for trylen in [3, 2, 1]:
- if string[:trylen] and string[:trylen] in bar_dict:
- s = string[:trylen]
- if using_old:
- bs = "\\bar \"%s\"" % old_bar_dict[s]
- else:
- bs = "%s" % bar_dict[s]
- string = string[trylen:]
- if s in alternative_opener:
- if not in_repeat[current_voice_idx]:
- using_old = 't'
- bs = "\\bar \"%s\"" % old_bar_dict[s]
- else:
- doing_alternative[current_voice_idx] = 't'
-
- if s in repeat_ender:
- if not in_repeat[current_voice_idx]:
- sys.stderr.write(
- "Warning: inserting repeat to beginning of notes.\n")
- repeat_prepend()
- in_repeat[current_voice_idx] = ''
- else:
- if doing_alternative[current_voice_idx]:
- do_curly = 't'
- if using_old:
- bs = "\\bar \"%s\"" % old_bar_dict[s]
- else:
- bs = bar_dict[s]
- doing_alternative[current_voice_idx] = ''
- in_repeat[current_voice_idx] = ''
- if s in repeat_opener:
- in_repeat[current_voice_idx] = 't'
- if using_old:
- bs = "\\bar \"%s\"" % old_bar_dict[s]
- else:
- bs = bar_dict[s]
- break
- if string[:1] == '|':
- state.next_bar = '|\n'
- string = string[1:]
- clear_bar_acc(state)
- close_beam_state(state)
-
- if string[:1] == '}':
- close_beam_state(state)
-
- if bs is not None or state.next_bar != '':
- if state.parsing_tuplet:
- state.parsing_tuplet = 0
- voices_append('} ')
-
- if bs is not None:
- clear_bar_acc(state)
- close_beam_state(state)
- voices_append(bs)
- if do_curly != '':
- voices_append("} ")
- do_curly = ''
- return string
-
-
-def try_parse_tie(s):
- if s[:1] == '-':
- s = s[1:]
- voices_append(' ~ ')
- return s
-
-
-def bracket_escape(s, state):
- m = re.match(r'^([^\]]*)] *(.*)$', s)
- if m:
- cmd = m.group(1)
- s = m.group(2)
- try_parse_header_line(cmd, state)
- return s
-
-
-def try_parse_chord_delims(s, state):
- if s[:1] == '[':
- s = s[1:]
- if re.match('[A-Z]:', s): # bracket escape
- return bracket_escape(s, state)
- if state.next_bar:
- voices_append(state.next_bar)
- state.next_bar = ''
- voices_append('<<')
-
- if s[:1] == '+':
- s = s[1:]
- if state.plus_chord:
- voices_append('>>')
- state.plus_chord = 0
- else:
- if state.next_bar:
- voices_append(state.next_bar)
- state.next_bar = ''
- voices_append('<<')
- state.plus_chord = 1
-
- ch = ''
- if s[:1] == ']':
- s = s[1:]
- ch = '>>'
-
- end = 0
- while s[:1] == ')':
- end = end + 1
- s = s[1:]
-
- voices_append("\\spanrequest \\stop \"slur\"" * end)
- voices_append(ch)
- return s
-
-
-def try_parse_grace_delims(s, state):
- if s[:1] == '{':
- if state.next_bar:
- voices_append(state.next_bar)
- state.next_bar = ''
- s = s[1:]
- voices_append('\\grace { ')
-
- if s[:1] == '}':
- s = s[1:]
- voices_append('}')
-
- return s
-
-
-def try_parse_comment(s):
- global nobarlines
- if s[0] == '%':
- if s[0:5] == '%MIDI':
- # the nobarlines option is necessary for an abc to lilypond translator for
- # exactly the same reason abc2midi needs it: abc requires the user to enter
- # the note that will be printed, and MIDI and lilypond expect entry of the
- # pitch that will be played.
- #
- # In standard 19th century musical notation, the algorithm for translating
- # between printed note and pitch involves using the barlines to determine
- # the scope of the accidentals.
- #
- # Since ABC is frequently used for music in styles that do not use this
- # convention, such as most music written before 1700, or ethnic music in
- # non-western scales, it is necessary to be able to tell a translator that
- # the barlines should not affect its interpretation of the pitch.
- if 'nobarlines' in s:
- nobarlines = 1
- elif s[0:3] == '%LY':
- p = s.find('voices')
- if p > -1:
- voices_append(s[p+7:])
- voices_append("\n")
- p = s.find('slyrics')
- if p > -1:
- slyrics_append(s[p+8:])
-
-# write other kinds of appending if we ever need them.
- return s
-
-
-lineno = 0
-happy_count = 100
-
-
-def parse_file(fn):
- f = open(fn, encoding='utf-8')
- ls = f.readlines()
- ls = [re.sub("\r$", '', x) for x in ls]
-
- select_voice('default', '')
- global lineno
- lineno = 0
- if not global_options.quiet:
- sys.stderr.write("Line ... ")
- sys.stderr.flush()
- __main__.state = state_list[current_voice_idx]
-
- for ln in ls:
- lineno = lineno + 1
-
- if not lineno % happy_count:
- sys.stderr.write('[%d]' % lineno)
- sys.stderr.flush()
- m = re.match('^([^%]*)%(.*)$', ln) # add comments to current voice
- if m:
- if m.group(2):
- try_parse_comment(m.group(2))
- voices_append('%% %s\n' % m.group(2))
- ln = m.group(1)
-
- orig_ln = ln
-
- ln = junk_space(ln, state)
- ln = try_parse_header_line(ln, state)
-
- # Try nibbling characters off until the line doesn't change.
- prev_ln = ''
- while ln != prev_ln:
- prev_ln = ln
- ln = try_parse_chord_delims(ln, state)
- ln = try_parse_rest(ln, state)
- ln = try_parse_articulation(ln, state)
- ln = try_parse_note(ln, state)
- ln = try_parse_bar(ln, state)
- ln = try_parse_tie(ln)
- ln = try_parse_escape(ln)
- ln = try_parse_guitar_chord(ln, state)
- ln = try_parse_tuplet_begin(ln, state)
- ln = try_parse_group_end(ln, state)
- ln = try_parse_grace_delims(ln, state)
- ln = junk_space(ln, state)
-
- if ln:
- error("%s: %d: Huh? Don't understand\n" % (fn, lineno))
- left = orig_ln[0:-len(ln)]
- sys.stderr.write(left + '\n')
- sys.stderr.write(' ' * len(left) + ln + '\n')
-
-
-def identify():
- if not global_options.quiet:
- sys.stderr.write("%s from LilyPond %s\n" % (ly.program_name, version))
-
-
-authors = """
-Written by Han-Wen Nienhuys, Laura Conrad, and Roy Rankin.
-"""
-
-
-def print_version():
- print(r"""abc2ly (GNU lilypond) %s""" % version)
-
-
-def get_option_parser():
- p = ly.get_option_parser(usage=_("%s [OPTION]... FILE") % 'abc2ly',
- description=_('''abc2ly converts ABC music files (see
-%s) to LilyPond input.
-''') % 'http://abcnotation.com/abc2mtex/abc.txt',
- add_help_option=False)
-
- p.version = "abc2ly (LilyPond) 2.24.2"
- p.add_option("--version",
- action="version",
- help=_("show version number and exit"))
- p.add_option("-h", "--help",
- action="help",
- help=_("show this help and exit"))
- p.add_option("-o", "--output", metavar='FILE',
- action="store",
- help=_("write output to FILE"))
- p.add_option("-s", "--strict",
- action="store_true",
- help=_("be strict about success"))
- p.add_option('-b', '--beams',
- action="store_true",
- help=_("preserve ABC's notion of beams"))
- p.add_option('-q', '--quiet',
- action="store_true",
- help=_("suppress progress messages"))
- p.add_option_group('',
- description=(
- _('Report bugs via %s')
- % 'bug-lilypond@gnu.org') + '\n')
- return p
-
-
-option_parser = get_option_parser()
-(global_options, files) = option_parser.parse_args()
-
-
-identify()
-
-header['tagline'] = 'Lily was here %s -- automatically converted from ABC' % version
-for f in files:
- if f == '-':
- f = ''
-
- if not global_options.quiet:
- sys.stderr.write('Parsing `%s\'...\n' % f)
- parse_file(f)
-
- if not global_options.output:
- global_options.output = os.path.basename(
- os.path.splitext(f)[0]) + ".ly"
- if not global_options.quiet:
- sys.stderr.write('lilypond output to: `%s\'...' %
- global_options.output)
- outf = open(global_options.output, 'w', encoding='utf-8')
-
-# don't substitute @VERSION@. We want this to reflect
-# the last version that was verified to work.
- outf.write('\\version "2.7.40"\n')
-
-# dump_global (outf)
- dump_header(outf, header)
- dump_slyrics(outf)
- dump_voices(outf)
- dump_score(outf)
- dump_lyrics(outf)
- if not global_options.quiet:
- sys.stderr.write('\n')
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/list.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/list.go
deleted file mode 100644
index 4a120bfda0b2de2553b8d9b20559417f2951fe5d..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/list.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-9/gnu.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-9/gnu.go
deleted file mode 100644
index d200e62c9fe5ce0bf9ada3f46157fce8ff8b4954..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-9/gnu.go and /dev/null differ
diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/memory/weaviate.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/memory/weaviate.py
deleted file mode 100644
index 5408e9a97aa3594ad443448cfc31f2546a01eb09..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/AutoGPT/autogpt/memory/weaviate.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import uuid
-
-import weaviate
-from weaviate import Client
-from weaviate.embedded import EmbeddedOptions
-from weaviate.util import generate_uuid5
-
-from autogpt.config import Config
-from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding
-
-
-def default_schema(weaviate_index):
- return {
- "class": weaviate_index,
- "properties": [
- {
- "name": "raw_text",
- "dataType": ["text"],
- "description": "original text for the embedding",
- }
- ],
- }
-
-
-class WeaviateMemory(MemoryProviderSingleton):
- def __init__(self, cfg):
- auth_credentials = self._build_auth_credentials(cfg)
-
- url = f"{cfg.weaviate_protocol}://{cfg.weaviate_host}:{cfg.weaviate_port}"
-
- if cfg.use_weaviate_embedded:
- self.client = Client(
- embedded_options=EmbeddedOptions(
- hostname=cfg.weaviate_host,
- port=int(cfg.weaviate_port),
- persistence_data_path=cfg.weaviate_embedded_path,
- )
- )
-
- print(
- f"Weaviate Embedded running on: {url} with persistence path: {cfg.weaviate_embedded_path}"
- )
- else:
- self.client = Client(url, auth_client_secret=auth_credentials)
-
- self.index = WeaviateMemory.format_classname(cfg.memory_index)
- self._create_schema()
-
- @staticmethod
- def format_classname(index):
- # weaviate uses capitalised index names
- # The python client uses the following code to format
- # index names before the corresponding class is created
- if len(index) == 1:
- return index.capitalize()
- return index[0].capitalize() + index[1:]
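-
-    # For example: format_classname("auto_gpt") -> "Auto_gpt" and
-    # format_classname("a") -> "A"; only the first character is upper-cased,
-    # mirroring how the Weaviate Python client normalises class names.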
-
- def _create_schema(self):
- schema = default_schema(self.index)
- if not self.client.schema.contains(schema):
- self.client.schema.create_class(schema)
-
- def _build_auth_credentials(self, cfg):
- if cfg.weaviate_username and cfg.weaviate_password:
- return weaviate.AuthClientPassword(
- cfg.weaviate_username, cfg.weaviate_password
- )
- if cfg.weaviate_api_key:
- return weaviate.AuthApiKey(api_key=cfg.weaviate_api_key)
- else:
- return None
-
- def add(self, data):
- vector = get_ada_embedding(data)
-
- doc_uuid = generate_uuid5(data, self.index)
- data_object = {"raw_text": data}
-
- with self.client.batch as batch:
- batch.add_data_object(
- uuid=doc_uuid,
- data_object=data_object,
- class_name=self.index,
- vector=vector,
- )
-
- return f"Inserting data into memory at uuid: {doc_uuid}:\n data: {data}"
-
- def get(self, data):
- return self.get_relevant(data, 1)
-
- def clear(self):
- self.client.schema.delete_all()
-
- # weaviate does not yet have a neat way to just remove the items in an index
- # without removing the entire schema, therefore we need to re-create it
- # after a call to delete_all
- self._create_schema()
-
- return "Obliterated"
-
- def get_relevant(self, data, num_relevant=5):
- query_embedding = get_ada_embedding(data)
- try:
- results = (
- self.client.query.get(self.index, ["raw_text"])
- .with_near_vector({"vector": query_embedding, "certainty": 0.7})
- .with_limit(num_relevant)
- .do()
- )
-
- if len(results["data"]["Get"][self.index]) > 0:
- return [
- str(item["raw_text"]) for item in results["data"]["Get"][self.index]
- ]
- else:
- return []
-
- except Exception as err:
- print(f"Unexpected error {err=}, {type(err)=}")
- return []
-
- def get_stats(self):
- result = self.client.query.aggregate(self.index).with_meta_count().do()
- class_data = result["data"]["Aggregate"][self.index]
-
- return class_data[0]["meta"] if class_data else {}
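-
-
-# Illustrative usage sketch (cfg stands for an autogpt Config instance carrying the
-# weaviate_* and memory_index settings read in __init__ above; the stored text and
-# the query are arbitrary):
-#     memory = WeaviateMemory(cfg)
-#     memory.add("LilyPond renders sheet music from plain text input.")
-#     memory.get_relevant("what does LilyPond do?", num_relevant=1)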
diff --git a/spaces/PeepDaSlan9/De-limiter/utils/read_wave_utils.py b/spaces/PeepDaSlan9/De-limiter/utils/read_wave_utils.py
deleted file mode 100644
index 9f5cf510c69547162c435b9c30fcca04f3218e57..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/De-limiter/utils/read_wave_utils.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import random
-import math
-
-import numpy as np
-import librosa
-import torchaudio
-
-
-def load_wav_arbitrary_position_mono(filename, sample_rate, seq_duration):
- # mono
- # seq_duration[second]
- length = torchaudio.info(filename).num_frames
-
- read_length = librosa.time_to_samples(seq_duration, sr=sample_rate)
- if length > read_length:
- random_start = random.randint(0, int(length - read_length - 1)) / sample_rate
- X, sr = librosa.load(
- filename, sr=None, offset=random_start, duration=seq_duration
- )
- else:
- random_start = 0
- total_pad_length = read_length - length
- X, sr = librosa.load(filename, sr=None, offset=0, duration=seq_duration)
- pad_left = random.randint(0, total_pad_length)
- X = np.pad(X, (pad_left, total_pad_length - pad_left))
-
- return X
-
-
-def load_wav_specific_position_mono(
- filename, sample_rate, seq_duration, start_position
-):
- # mono
- # seq_duration[second]
- # start_position[second]
- length = torchaudio.info(filename).num_frames
- read_length = librosa.time_to_samples(seq_duration, sr=sample_rate)
-
- start_pos_sec = max(
- start_position, 0
-    )  # if start_position is negative, then start from 0.
- start_pos_sample = librosa.time_to_samples(start_pos_sec, sr=sample_rate)
-
- if (
- length <= start_pos_sample
- ): # if start position exceeds audio length, then start from 0.
- start_pos_sec = 0
- start_pos_sample = 0
- X, sr = librosa.load(filename, sr=None, offset=start_pos_sec, duration=seq_duration)
-
- if length < start_pos_sample + read_length:
- X = np.pad(X, (0, (start_pos_sample + read_length) - length))
-
- return X
-
-
-# load wav file from arbitrary positions of 16bit stereo wav file
-def load_wav_arbitrary_position_stereo(
- filename, sample_rate, seq_duration, return_pos=False
-):
- # stereo
- # seq_duration[second]
- length = torchaudio.info(filename).num_frames
- read_length = librosa.time_to_samples(seq_duration, sr=sample_rate)
-
- random_start_sample = random.randint(
- 0, int(length - math.ceil(seq_duration * sample_rate) - 1)
- )
- random_start_sec = librosa.samples_to_time(random_start_sample, sr=sample_rate)
- X, sr = librosa.load(
- filename, sr=None, mono=False, offset=random_start_sec, duration=seq_duration
- )
-
- if length < random_start_sample + read_length:
- X = np.pad(X, ((0, 0), (0, (random_start_sample + read_length) - length)))
-
- if return_pos:
- return X, random_start_sec
- else:
- return X
-
-
-def load_wav_specific_position_stereo(
- filename, sample_rate, seq_duration, start_position
-):
- # stereo
- # seq_duration[second]
- # start_position[second]
- length = torchaudio.info(filename).num_frames
- read_length = librosa.time_to_samples(seq_duration, sr=sample_rate)
-
- start_pos_sec = max(
- start_position, 0
-    )  # if start_position is negative, then start from 0.
- start_pos_sample = librosa.time_to_samples(start_pos_sec, sr=sample_rate)
-
- if (
- length <= start_pos_sample
- ): # if start position exceeds audio length, then start from 0.
- start_pos_sec = 0
- start_pos_sample = 0
- X, sr = librosa.load(
- filename, sr=None, mono=False, offset=start_pos_sec, duration=seq_duration
- )
-
- if length < start_pos_sample + read_length:
- X = np.pad(X, ((0, 0), (0, (start_pos_sample + read_length) - length)))
-
- return X
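-
-
-# Illustrative usage (the file name is a placeholder and the file is assumed to be a
-# stereo recording at the requested sample rate): read 3 seconds starting 10 seconds in.
-#     x = load_wav_specific_position_stereo("mixture.wav", 44100, 3.0, 10.0)
-#     # x.shape == (2, 132300); zero-padded at the end if the file is shorter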
diff --git a/spaces/Plurigrid/LifeSim/src/components/ui/card.tsx b/spaces/Plurigrid/LifeSim/src/components/ui/card.tsx
deleted file mode 100644
index 6583ebc1bb942bfb94e00fb4e7c7d685073c7b2a..0000000000000000000000000000000000000000
--- a/spaces/Plurigrid/LifeSim/src/components/ui/card.tsx
+++ /dev/null
@@ -1,79 +0,0 @@
-import * as React from "react"
-
-import { cn } from "@/lib/utils"
-
-// NOTE: the concrete element styling of this Space is not preserved here; the element
-// tags and class strings below are minimal stand-ins, and each component simply
-// forwards the caller-supplied className through cn().
-const Card = React.forwardRef<
-  HTMLDivElement,
-  React.HTMLAttributes<HTMLDivElement>
->(({ className, ...props }, ref) => (
-  <div ref={ref} className={cn(className)} {...props} />
-))
-Card.displayName = "Card"
-
-const CardHeader = React.forwardRef<
-  HTMLDivElement,
-  React.HTMLAttributes<HTMLDivElement>
->(({ className, ...props }, ref) => (
-  <div ref={ref} className={cn(className)} {...props} />
-))
-CardHeader.displayName = "CardHeader"
-
-const CardTitle = React.forwardRef<
-  HTMLParagraphElement,
-  React.HTMLAttributes<HTMLParagraphElement>
->(({ className, ...props }, ref) => (
-  <p ref={ref} className={cn(className)} {...props} />
-))
-CardTitle.displayName = "CardTitle"
-
-const CardDescription = React.forwardRef<
-  HTMLParagraphElement,
-  React.HTMLAttributes<HTMLParagraphElement>
->(({ className, ...props }, ref) => (
-  <p ref={ref} className={cn(className)} {...props} />
-))
-CardDescription.displayName = "CardDescription"
-
-const CardContent = React.forwardRef<
-  HTMLDivElement,
-  React.HTMLAttributes<HTMLDivElement>
->(({ className, ...props }, ref) => (
-  <div ref={ref} className={cn(className)} {...props} />
-))
-CardContent.displayName = "CardContent"
-
-const CardFooter = React.forwardRef<
-  HTMLDivElement,
-  React.HTMLAttributes<HTMLDivElement>
->(({ className, ...props }, ref) => (
-  <div ref={ref} className={cn(className)} {...props} />
-))
-CardFooter.displayName = "CardFooter"
-
-export { Card, CardHeader, CardFooter, CardTitle, CardDescription, CardContent }
diff --git a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/attentions.py b/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/attentions.py
deleted file mode 100644
index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000
--- a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/attentions.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from lib.infer_pack import commons
-from lib.infer_pack import modules
-from lib.infer_pack.modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- window_size=10,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- window_size=window_size,
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
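-# Illustrative shapes (hyperparameter values are hypothetical): x is
-# [batch, hidden_channels, frames] and x_mask is [batch, 1, frames] with ones marking
-# valid frames; the encoder returns a tensor of the same shape as x.
-#     enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2, n_layers=6)
-#     y = enc(torch.randn(4, 192, 100), torch.ones(4, 1, 100))  # -> [4, 192, 100]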
-
-class Decoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- proximal_bias=False,
- proximal_init=True,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- proximal_bias=proximal_bias,
- proximal_init=proximal_init,
- )
- )
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(
- MultiHeadAttention(
- hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- causal=True,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
- device=x.device, dtype=x.dtype
- )
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(
- self,
- channels,
- out_channels,
- n_heads,
- p_dropout=0.0,
- window_size=None,
- heads_share=True,
- block_length=None,
- proximal_bias=False,
- proximal_init=False,
- ):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
- self.emb_rel_v = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert (
- t_s == t_t
- ), "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(
- query / math.sqrt(self.k_channels), key_relative_embeddings
- )
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(
- device=scores.device, dtype=scores.dtype
- )
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert (
- t_s == t_t
- ), "Local attention is only available for self-attention."
- block_mask = (
- torch.ones_like(scores)
- .triu(-self.block_length)
- .tril(self.block_length)
- )
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(
- self.emb_rel_v, t_s
- )
- output = output + self._matmul_with_relative_values(
- relative_weights, value_relative_embeddings
- )
- output = (
- output.transpose(2, 3).contiguous().view(b, d, t_t)
- ) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
- )
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[
- :, slice_start_position:slice_end_position
- ]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(
- x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
- )
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
- :, :, :length, length - 1 :
- ]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along column
- x = F.pad(
- x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
- )
- x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- filter_channels,
- kernel_size,
- p_dropout=0.0,
- activation=None,
- causal=False,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/Pranjal12345/Text_to_Speech/tortoise/models/cvvp.py b/spaces/Pranjal12345/Text_to_Speech/tortoise/models/cvvp.py
deleted file mode 100644
index 544ca47b21a31c8d26d4ea407b9783e7d59e8126..0000000000000000000000000000000000000000
--- a/spaces/Pranjal12345/Text_to_Speech/tortoise/models/cvvp.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import einsum
-
-from tortoise.models.arch_util import AttentionBlock
-from tortoise.models.xtransformers import ContinuousTransformerWrapper, Encoder
-
-
-def exists(val):
- return val is not None
-
-
-def masked_mean(t, mask):
- t = t.masked_fill(~mask, 0.)
- return t.sum(dim=1) / mask.sum(dim=1)
-
-
-class CollapsingTransformer(nn.Module):
- def __init__(self, model_dim, output_dims, heads, dropout, depth, mask_percentage=0, **encoder_kwargs):
- super().__init__()
- self.transformer = ContinuousTransformerWrapper(
- max_seq_len=-1,
- use_pos_emb=False,
- attn_layers=Encoder(
- dim=model_dim,
- depth=depth,
- heads=heads,
- ff_dropout=dropout,
- ff_mult=1,
- attn_dropout=dropout,
- use_rmsnorm=True,
- ff_glu=True,
- rotary_pos_emb=True,
- **encoder_kwargs,
- ))
- self.pre_combiner = nn.Sequential(nn.Conv1d(model_dim, output_dims, 1),
- AttentionBlock(
- output_dims, num_heads=heads, do_checkpoint=False),
- nn.Conv1d(output_dims, output_dims, 1))
- self.mask_percentage = mask_percentage
-
- def forward(self, x, **transformer_kwargs):
- h = self.transformer(x, **transformer_kwargs)
- h = h.permute(0, 2, 1)
- h = self.pre_combiner(h).permute(0, 2, 1)
- if self.training:
- mask = torch.rand_like(h.float()) > self.mask_percentage
- else:
- mask = torch.ones_like(h.float()).bool()
- return masked_mean(h, mask)
-
-
-class ConvFormatEmbedding(nn.Module):
- def __init__(self, *args, **kwargs):
- super().__init__()
- self.emb = nn.Embedding(*args, **kwargs)
-
- def forward(self, x):
- y = self.emb(x)
- return y.permute(0, 2, 1)
-
-
-class CVVP(nn.Module):
- def __init__(
- self,
- model_dim=512,
- transformer_heads=8,
- dropout=.1,
- conditioning_enc_depth=8,
- cond_mask_percentage=0,
- mel_channels=80,
- mel_codes=None,
- speech_enc_depth=8,
- speech_mask_percentage=0,
- latent_multiplier=1,
- ):
- super().__init__()
- latent_dim = latent_multiplier*model_dim
- self.temperature = nn.Parameter(torch.tensor(1.))
-
- self.cond_emb = nn.Sequential(nn.Conv1d(mel_channels, model_dim//2, kernel_size=5, stride=2, padding=2),
- nn.Conv1d(model_dim//2, model_dim, kernel_size=3, stride=2, padding=1))
- self.conditioning_transformer = CollapsingTransformer(
- model_dim, model_dim, transformer_heads, dropout, conditioning_enc_depth, cond_mask_percentage)
- self.to_conditioning_latent = nn.Linear(
- latent_dim, latent_dim, bias=False)
-
- if mel_codes is None:
- self.speech_emb = nn.Conv1d(
- mel_channels, model_dim, kernel_size=5, padding=2)
- else:
- self.speech_emb = ConvFormatEmbedding(mel_codes, model_dim)
- self.speech_transformer = CollapsingTransformer(
- model_dim, latent_dim, transformer_heads, dropout, speech_enc_depth, speech_mask_percentage)
- self.to_speech_latent = nn.Linear(
- latent_dim, latent_dim, bias=False)
-
- def get_grad_norm_parameter_groups(self):
- return {
- 'conditioning': list(self.conditioning_transformer.parameters()),
- 'speech': list(self.speech_transformer.parameters()),
- }
-
- def forward(
- self,
- mel_cond,
- mel_input,
- return_loss=False
- ):
- cond_emb = self.cond_emb(mel_cond).permute(0, 2, 1)
- enc_cond = self.conditioning_transformer(cond_emb)
- cond_latents = self.to_conditioning_latent(enc_cond)
-
- speech_emb = self.speech_emb(mel_input).permute(0, 2, 1)
- enc_speech = self.speech_transformer(speech_emb)
- speech_latents = self.to_speech_latent(enc_speech)
-
- cond_latents, speech_latents = map(lambda t: F.normalize(
- t, p=2, dim=-1), (cond_latents, speech_latents))
- temp = self.temperature.exp()
-
- if not return_loss:
- sim = einsum('n d, n d -> n', cond_latents,
- speech_latents) * temp
- return sim
-
- sim = einsum('i d, j d -> i j', cond_latents,
- speech_latents) * temp
- labels = torch.arange(
- cond_latents.shape[0], device=mel_input.device)
- loss = (F.cross_entropy(sim, labels) +
- F.cross_entropy(sim.t(), labels)) / 2
-
- return loss
-
-
-if __name__ == '__main__':
- clvp = CVVP()
- clvp(torch.randn(2, 80, 100),
- torch.randn(2, 80, 95),
- return_loss=True)
diff --git a/spaces/PulsarAI/thebloke-quantized-models/README.md b/spaces/PulsarAI/thebloke-quantized-models/README.md
deleted file mode 100644
index cfbb32b5998acac57883540fee0ea14aa05fcd95..0000000000000000000000000000000000000000
--- a/spaces/PulsarAI/thebloke-quantized-models/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: TheBloke Quantized Models
-emoji: 👾
-colorFrom: yellow
-colorTo: green
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/RMXK/RVC_HFF/infer/lib/uvr5_pack/lib_v5/nets_537227KB.py b/spaces/RMXK/RVC_HFF/infer/lib/uvr5_pack/lib_v5/nets_537227KB.py
deleted file mode 100644
index 823b44fb64898e8dcbb12180ba45d1718f9b03f7..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/infer/lib/uvr5_pack/lib_v5/nets_537227KB.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_537238KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 64)
- self.stg1_high_band_net = BaseASPPNet(2, 64)
-
- self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(32, 64)
-
- self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(64, 128)
-
- self.out = nn.Conv2d(128, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(64, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/Rakesh30/Sentence_Embedding-App/README.md b/spaces/Rakesh30/Sentence_Embedding-App/README.md
deleted file mode 100644
index 2cc7a3cb2035d33616003efec784c15d6edcc7b7..0000000000000000000000000000000000000000
--- a/spaces/Rakesh30/Sentence_Embedding-App/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Sentence Embedding-App
-emoji: 🐠
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/msgpack/ext.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/msgpack/ext.py
deleted file mode 100644
index 25544c555648c13762e150ea559d3a69674bdd34..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/msgpack/ext.py
+++ /dev/null
@@ -1,193 +0,0 @@
-# coding: utf-8
-from collections import namedtuple
-import datetime
-import sys
-import struct
-
-
-PY2 = sys.version_info[0] == 2
-
-if PY2:
- int_types = (int, long)
- _utc = None
-else:
- int_types = int
- try:
- _utc = datetime.timezone.utc
- except AttributeError:
- _utc = datetime.timezone(datetime.timedelta(0))
-
-
-class ExtType(namedtuple("ExtType", "code data")):
- """ExtType represents ext type in msgpack."""
-
- def __new__(cls, code, data):
- if not isinstance(code, int):
- raise TypeError("code must be int")
- if not isinstance(data, bytes):
- raise TypeError("data must be bytes")
- if not 0 <= code <= 127:
- raise ValueError("code must be 0~127")
- return super(ExtType, cls).__new__(cls, code, data)
-
-
-class Timestamp(object):
- """Timestamp represents the Timestamp extension type in msgpack.
-
- When built with Cython, msgpack uses C methods to pack and unpack `Timestamp`. When using pure-Python
- msgpack, :func:`to_bytes` and :func:`from_bytes` are used to pack and unpack `Timestamp`.
-
- This class is immutable: Do not override seconds and nanoseconds.
- """
-
- __slots__ = ["seconds", "nanoseconds"]
-
- def __init__(self, seconds, nanoseconds=0):
- """Initialize a Timestamp object.
-
- :param int seconds:
- Number of seconds since the UNIX epoch (00:00:00 UTC Jan 1 1970, minus leap seconds).
- May be negative.
-
- :param int nanoseconds:
- Number of nanoseconds to add to `seconds` to get fractional time.
- Maximum is 999_999_999. Default is 0.
-
- Note: Negative times (before the UNIX epoch) are represented as negative seconds + positive ns.
- """
- if not isinstance(seconds, int_types):
-            raise TypeError("seconds must be an integer")
- if not isinstance(nanoseconds, int_types):
- raise TypeError("nanoseconds must be an integer")
-        if not (0 <= nanoseconds < 10**9):
-            raise ValueError(
-                "nanoseconds must be a non-negative integer less than 1000000000."
-            )
- self.seconds = seconds
- self.nanoseconds = nanoseconds
-
- def __repr__(self):
- """String representation of Timestamp."""
- return "Timestamp(seconds={0}, nanoseconds={1})".format(
- self.seconds, self.nanoseconds
- )
-
- def __eq__(self, other):
- """Check for equality with another Timestamp object"""
- if type(other) is self.__class__:
- return (
- self.seconds == other.seconds and self.nanoseconds == other.nanoseconds
- )
- return False
-
- def __ne__(self, other):
- """not-equals method (see :func:`__eq__()`)"""
- return not self.__eq__(other)
-
- def __hash__(self):
- return hash((self.seconds, self.nanoseconds))
-
- @staticmethod
- def from_bytes(b):
- """Unpack bytes into a `Timestamp` object.
-
- Used for pure-Python msgpack unpacking.
-
- :param b: Payload from msgpack ext message with code -1
- :type b: bytes
-
- :returns: Timestamp object unpacked from msgpack ext payload
- :rtype: Timestamp
- """
- if len(b) == 4:
- seconds = struct.unpack("!L", b)[0]
- nanoseconds = 0
- elif len(b) == 8:
- data64 = struct.unpack("!Q", b)[0]
- seconds = data64 & 0x00000003FFFFFFFF
- nanoseconds = data64 >> 34
- elif len(b) == 12:
- nanoseconds, seconds = struct.unpack("!Iq", b)
- else:
- raise ValueError(
- "Timestamp type can only be created from 32, 64, or 96-bit byte objects"
- )
- return Timestamp(seconds, nanoseconds)
-
- def to_bytes(self):
- """Pack this Timestamp object into bytes.
-
- Used for pure-Python msgpack packing.
-
- :returns data: Payload for EXT message with code -1 (timestamp type)
- :rtype: bytes
- """
- if (self.seconds >> 34) == 0: # seconds is non-negative and fits in 34 bits
- data64 = self.nanoseconds << 34 | self.seconds
- if data64 & 0xFFFFFFFF00000000 == 0:
- # nanoseconds is zero and seconds < 2**32, so timestamp 32
- data = struct.pack("!L", data64)
- else:
- # timestamp 64
- data = struct.pack("!Q", data64)
- else:
- # timestamp 96
- data = struct.pack("!Iq", self.nanoseconds, self.seconds)
- return data
-
- @staticmethod
- def from_unix(unix_sec):
- """Create a Timestamp from posix timestamp in seconds.
-
- :param unix_float: Posix timestamp in seconds.
- :type unix_float: int or float.
- """
- seconds = int(unix_sec // 1)
- nanoseconds = int((unix_sec % 1) * 10**9)
- return Timestamp(seconds, nanoseconds)
-
- def to_unix(self):
- """Get the timestamp as a floating-point value.
-
- :returns: posix timestamp
- :rtype: float
- """
- return self.seconds + self.nanoseconds / 1e9
-
- @staticmethod
- def from_unix_nano(unix_ns):
- """Create a Timestamp from posix timestamp in nanoseconds.
-
- :param int unix_ns: Posix timestamp in nanoseconds.
- :rtype: Timestamp
- """
- return Timestamp(*divmod(unix_ns, 10**9))
-
- def to_unix_nano(self):
- """Get the timestamp as a unixtime in nanoseconds.
-
- :returns: posix timestamp in nanoseconds
- :rtype: int
- """
- return self.seconds * 10**9 + self.nanoseconds
-
- def to_datetime(self):
- """Get the timestamp as a UTC datetime.
-
- Python 2 is not supported.
-
- :rtype: datetime.
- """
- return datetime.datetime.fromtimestamp(0, _utc) + datetime.timedelta(
- seconds=self.to_unix()
- )
-
- @staticmethod
- def from_datetime(dt):
- """Create a Timestamp from datetime with tzinfo.
-
- Python 2 is not supported.
-
- :rtype: Timestamp
- """
- return Timestamp.from_unix(dt.timestamp())
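-
-
-# A small round-trip sketch of the pure-Python path described above (the timestamp
-# value is arbitrary): to_bytes() picks the 32/64/96-bit layout and from_bytes()
-# reverses it.
-#     ts = Timestamp.from_unix(1234567890.5)
-#     assert Timestamp.from_bytes(ts.to_bytes()) == ts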
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/cygwinccompiler.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/cygwinccompiler.py
deleted file mode 100644
index 2c4da5b57e5fda8b1510a61c2f14a61fac1c0916..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/cygwinccompiler.py
+++ /dev/null
@@ -1,364 +0,0 @@
-"""distutils.cygwinccompiler
-
-Provides the CygwinCCompiler class, a subclass of UnixCCompiler that
-handles the Cygwin port of the GNU C compiler to Windows. It also contains
-the Mingw32CCompiler class which handles the mingw32 port of GCC (same as
-cygwin in no-cygwin mode).
-"""
-
-import os
-import sys
-import copy
-import shlex
-import warnings
-from subprocess import check_output
-
-from distutils.unixccompiler import UnixCCompiler
-from distutils.file_util import write_file
-from distutils.errors import (
- DistutilsExecError,
- DistutilsPlatformError,
- CCompilerError,
- CompileError,
-)
-from distutils.version import LooseVersion, suppress_known_deprecation
-
-
-def get_msvcr():
- """Include the appropriate MSVC runtime library if Python was built
- with MSVC 7.0 or later.
- """
- msc_pos = sys.version.find('MSC v.')
- if msc_pos != -1:
- msc_ver = sys.version[msc_pos + 6 : msc_pos + 10]
- if msc_ver == '1300':
- # MSVC 7.0
- return ['msvcr70']
- elif msc_ver == '1310':
- # MSVC 7.1
- return ['msvcr71']
- elif msc_ver == '1400':
- # VS2005 / MSVC 8.0
- return ['msvcr80']
- elif msc_ver == '1500':
- # VS2008 / MSVC 9.0
- return ['msvcr90']
- elif msc_ver == '1600':
- # VS2010 / MSVC 10.0
- return ['msvcr100']
- elif msc_ver == '1700':
- # VS2012 / MSVC 11.0
- return ['msvcr110']
- elif msc_ver == '1800':
- # VS2013 / MSVC 12.0
- return ['msvcr120']
- elif 1900 <= int(msc_ver) < 2000:
- # VS2015 / MSVC 14.0
- return ['ucrt', 'vcruntime140']
- else:
- raise ValueError("Unknown MS Compiler version %s " % msc_ver)
-
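-# For example, a CPython built with Visual Studio 2017 reports "MSC v.1916" in
-# sys.version, so get_msvcr() returns ['ucrt', 'vcruntime140']; on non-MSVC builds
-# the "MSC v." marker is absent and the function implicitly returns None.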
-
-_runtime_library_dirs_msg = (
- "Unable to set runtime library search path on Windows, "
- "usually indicated by `runtime_library_dirs` parameter to Extension"
-)
-
-
-class CygwinCCompiler(UnixCCompiler):
- """Handles the Cygwin port of the GNU C compiler to Windows."""
-
- compiler_type = 'cygwin'
- obj_extension = ".o"
- static_lib_extension = ".a"
- shared_lib_extension = ".dll.a"
- dylib_lib_extension = ".dll"
- static_lib_format = "lib%s%s"
- shared_lib_format = "lib%s%s"
- dylib_lib_format = "cyg%s%s"
- exe_extension = ".exe"
-
- def __init__(self, verbose=0, dry_run=0, force=0):
-
- super().__init__(verbose, dry_run, force)
-
- status, details = check_config_h()
- self.debug_print(
- "Python's GCC status: {} (details: {})".format(status, details)
- )
- if status is not CONFIG_H_OK:
- self.warn(
- "Python's pyconfig.h doesn't seem to support your compiler. "
- "Reason: %s. "
- "Compiling may fail because of undefined preprocessor macros." % details
- )
-
- self.cc = os.environ.get('CC', 'gcc')
- self.cxx = os.environ.get('CXX', 'g++')
-
- self.linker_dll = self.cc
- shared_option = "-shared"
-
- self.set_executables(
- compiler='%s -mcygwin -O -Wall' % self.cc,
- compiler_so='%s -mcygwin -mdll -O -Wall' % self.cc,
- compiler_cxx='%s -mcygwin -O -Wall' % self.cxx,
- linker_exe='%s -mcygwin' % self.cc,
- linker_so=('{} -mcygwin {}'.format(self.linker_dll, shared_option)),
- )
-
- # Include the appropriate MSVC runtime library if Python was built
- # with MSVC 7.0 or later.
- self.dll_libraries = get_msvcr()
-
- @property
- def gcc_version(self):
-        # Older numpy depended on this existing to check for ancient
- # gcc versions. This doesn't make much sense with clang etc so
- # just hardcode to something recent.
- # https://github.com/numpy/numpy/pull/20333
- warnings.warn(
- "gcc_version attribute of CygwinCCompiler is deprecated. "
- "Instead of returning actual gcc version a fixed value 11.2.0 is returned.",
- DeprecationWarning,
- stacklevel=2,
- )
- with suppress_known_deprecation():
- return LooseVersion("11.2.0")
-
- def _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts):
- """Compiles the source by spawning GCC and windres if needed."""
- if ext == '.rc' or ext == '.res':
- # gcc needs '.res' and '.rc' compiled to object files !!!
- try:
- self.spawn(["windres", "-i", src, "-o", obj])
- except DistutilsExecError as msg:
- raise CompileError(msg)
- else: # for other files use the C-compiler
- try:
- self.spawn(
- self.compiler_so + cc_args + [src, '-o', obj] + extra_postargs
- )
- except DistutilsExecError as msg:
- raise CompileError(msg)
-
- def link(
- self,
- target_desc,
- objects,
- output_filename,
- output_dir=None,
- libraries=None,
- library_dirs=None,
- runtime_library_dirs=None,
- export_symbols=None,
- debug=0,
- extra_preargs=None,
- extra_postargs=None,
- build_temp=None,
- target_lang=None,
- ):
- """Link the objects."""
- # use separate copies, so we can modify the lists
- extra_preargs = copy.copy(extra_preargs or [])
- libraries = copy.copy(libraries or [])
- objects = copy.copy(objects or [])
-
- if runtime_library_dirs:
- self.warn(_runtime_library_dirs_msg)
-
- # Additional libraries
- libraries.extend(self.dll_libraries)
-
- # handle export symbols by creating a def-file
- # with executables this only works with gcc/ld as linker
- if (export_symbols is not None) and (
- target_desc != self.EXECUTABLE or self.linker_dll == "gcc"
- ):
- # (The linker doesn't do anything if output is up-to-date.
- # So it would probably better to check if we really need this,
- # but for this we had to insert some unchanged parts of
- # UnixCCompiler, and this is not what we want.)
-
- # we want to put some files in the same directory as the
- # object files are, build_temp doesn't help much
- # where are the object files
- temp_dir = os.path.dirname(objects[0])
- # name of dll to give the helper files the same base name
- (dll_name, dll_extension) = os.path.splitext(
- os.path.basename(output_filename)
- )
-
- # generate the filenames for these files
- def_file = os.path.join(temp_dir, dll_name + ".def")
-
- # Generate .def file
- contents = ["LIBRARY %s" % os.path.basename(output_filename), "EXPORTS"]
- for sym in export_symbols:
- contents.append(sym)
- self.execute(write_file, (def_file, contents), "writing %s" % def_file)
-
- # next add options for def-file
-
- # for gcc/ld the def-file is specified as any object files
- objects.append(def_file)
-
- # end: if ((export_symbols is not None) and
- # (target_desc != self.EXECUTABLE or self.linker_dll == "gcc")):
-
- # who wants symbols and a many times larger output file
- # should explicitly switch the debug mode on
- # otherwise we let ld strip the output file
- # (On my machine: 10KiB < stripped_file < ??100KiB
- # unstripped_file = stripped_file + XXX KiB
- # ( XXX=254 for a typical python extension))
- if not debug:
- extra_preargs.append("-s")
-
- UnixCCompiler.link(
- self,
- target_desc,
- objects,
- output_filename,
- output_dir,
- libraries,
- library_dirs,
- runtime_library_dirs,
- None, # export_symbols, we do this in our def-file
- debug,
- extra_preargs,
- extra_postargs,
- build_temp,
- target_lang,
- )
-
- def runtime_library_dir_option(self, dir):
- # cygwin doesn't support rpath. While in theory we could error
- # out like MSVC does, code might expect it to work like on Unix, so
- # just warn and hope for the best.
- self.warn(_runtime_library_dirs_msg)
- return []
-
- # -- Miscellaneous methods -----------------------------------------
-
- def _make_out_path(self, output_dir, strip_dir, src_name):
- # use normcase to make sure '.rc' is really '.rc' and not '.RC'
- norm_src_name = os.path.normcase(src_name)
- return super()._make_out_path(output_dir, strip_dir, norm_src_name)
-
- @property
- def out_extensions(self):
- """
- Add support for rc and res files.
- """
- return {
- **super().out_extensions,
- **{ext: ext + self.obj_extension for ext in ('.res', '.rc')},
- }
-
-
-# the same as cygwin plus some additional parameters
-class Mingw32CCompiler(CygwinCCompiler):
- """Handles the Mingw32 port of the GNU C compiler to Windows."""
-
- compiler_type = 'mingw32'
-
- def __init__(self, verbose=0, dry_run=0, force=0):
-
- super().__init__(verbose, dry_run, force)
-
- shared_option = "-shared"
-
- if is_cygwincc(self.cc):
- raise CCompilerError('Cygwin gcc cannot be used with --compiler=mingw32')
-
- self.set_executables(
- compiler='%s -O -Wall' % self.cc,
- compiler_so='%s -mdll -O -Wall' % self.cc,
- compiler_cxx='%s -O -Wall' % self.cxx,
- linker_exe='%s' % self.cc,
- linker_so='{} {}'.format(self.linker_dll, shared_option),
- )
-
- # Maybe we should also append -mthreads, but then the finished
- # dlls need another dll (mingwm10.dll see Mingw32 docs)
- # (-mthreads: Support thread-safe exception handling on `Mingw32')
-
- # no additional libraries needed
- self.dll_libraries = []
-
- # Include the appropriate MSVC runtime library if Python was built
- # with MSVC 7.0 or later.
- self.dll_libraries = get_msvcr()
-
- def runtime_library_dir_option(self, dir):
- raise DistutilsPlatformError(_runtime_library_dirs_msg)
-
-
-# Because these compilers aren't configured in Python's pyconfig.h file by
-# default, we should at least warn the user if they are using an unmodified
-# version.
-
-CONFIG_H_OK = "ok"
-CONFIG_H_NOTOK = "not ok"
-CONFIG_H_UNCERTAIN = "uncertain"
-
-
-def check_config_h():
- """Check if the current Python installation appears amenable to building
- extensions with GCC.
-
- Returns a tuple (status, details), where 'status' is one of the following
- constants:
-
- - CONFIG_H_OK: all is well, go ahead and compile
- - CONFIG_H_NOTOK: doesn't look good
- - CONFIG_H_UNCERTAIN: not sure -- unable to read pyconfig.h
-
- 'details' is a human-readable string explaining the situation.
-
- Note there are two ways to conclude "OK": either 'sys.version' contains
- the string "GCC" (implying that this Python was built with GCC), or the
- installed "pyconfig.h" contains the string "__GNUC__".
- """
-
- # XXX since this function also checks sys.version, it's not strictly a
- # "pyconfig.h" check -- should probably be renamed...
-
- from distutils import sysconfig
-
- # if sys.version contains GCC then python was compiled with GCC, and the
- # pyconfig.h file should be OK
- if "GCC" in sys.version:
- return CONFIG_H_OK, "sys.version mentions 'GCC'"
-
- # Clang would also work
- if "Clang" in sys.version:
- return CONFIG_H_OK, "sys.version mentions 'Clang'"
-
- # let's see if __GNUC__ is mentioned in python.h
- fn = sysconfig.get_config_h_filename()
-    try:
-        with open(fn) as config_h:
-            if "__GNUC__" in config_h.read():
-                return CONFIG_H_OK, "'%s' mentions '__GNUC__'" % fn
-            return CONFIG_H_NOTOK, "'%s' does not mention '__GNUC__'" % fn
-    except OSError as exc:
-        return (CONFIG_H_UNCERTAIN, "couldn't read '{}': {}".format(fn, exc.strerror))
-
-
-def is_cygwincc(cc):
- '''Try to determine if the compiler that would be used is from cygwin.'''
- out_string = check_output(shlex.split(cc) + ['-dumpmachine'])
- return out_string.strip().endswith(b'cygwin')
-
-
-get_versions = None
-"""
-A stand-in for the previous get_versions() function to prevent failures
-when monkeypatched. See pypa/setuptools#2969.
-"""
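
For context on the pyconfig.h check deleted above, here is a minimal standalone sketch of the same logic. It deliberately avoids importing the vendored module (whose import path depends on how the surrounding package is laid out), and the helper name `looks_gcc_friendly` is purely illustrative.

import sys
import sysconfig

def looks_gcc_friendly():
    # Mirrors check_config_h(): first consult sys.version, then pyconfig.h.
    if "GCC" in sys.version or "Clang" in sys.version:
        return True, "sys.version mentions GCC/Clang"
    fn = sysconfig.get_config_h_filename()
    try:
        with open(fn) as f:
            ok = "__GNUC__" in f.read()
        return ok, "'%s' %s '__GNUC__'" % (fn, "mentions" if ok else "does not mention")
    except OSError as exc:
        return False, "couldn't read '%s': %s" % (fn, exc.strerror)

print(looks_gcc_friendly())
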
diff --git a/spaces/Reha2704/VToonify/vtoonify/style_transfer.py b/spaces/Reha2704/VToonify/vtoonify/style_transfer.py
deleted file mode 100644
index 3e6ba13ca84dc595dfa9eb9ef85a638889d8cdd3..0000000000000000000000000000000000000000
--- a/spaces/Reha2704/VToonify/vtoonify/style_transfer.py
+++ /dev/null
@@ -1,232 +0,0 @@
-import os
-#os.environ['CUDA_VISIBLE_DEVICES'] = "0"
-import argparse
-import numpy as np
-import cv2
-import dlib
-import torch
-from torchvision import transforms
-import torch.nn.functional as F
-from tqdm import tqdm
-from model.vtoonify import VToonify
-from model.bisenet.model import BiSeNet
-from model.encoder.align_all_parallel import align_face
-from util import save_image, load_image, visualize, load_psp_standalone, get_video_crop_parameter, tensor2cv2
-
-
-class TestOptions():
- def __init__(self):
-
- self.parser = argparse.ArgumentParser(description="Style Transfer")
- self.parser.add_argument("--content", type=str, default='./data/077436.jpg', help="path of the content image/video")
- self.parser.add_argument("--style_id", type=int, default=26, help="the id of the style image")
- self.parser.add_argument("--style_degree", type=float, default=0.5, help="style degree for VToonify-D")
- self.parser.add_argument("--color_transfer", action="store_true", help="transfer the color of the style")
- self.parser.add_argument("--ckpt", type=str, default='./checkpoint/vtoonify_d_cartoon/vtoonify_s_d.pt', help="path of the saved model")
- self.parser.add_argument("--output_path", type=str, default='./output/', help="path of the output images")
- self.parser.add_argument("--scale_image", action="store_true", help="resize and crop the image to best fit the model")
- self.parser.add_argument("--style_encoder_path", type=str, default='./checkpoint/encoder.pt', help="path of the style encoder")
- self.parser.add_argument("--exstyle_path", type=str, default=None, help="path of the extrinsic style code")
- self.parser.add_argument("--faceparsing_path", type=str, default='./checkpoint/faceparsing.pth', help="path of the face parsing model")
- self.parser.add_argument("--video", action="store_true", help="if true, video stylization; if false, image stylization")
- self.parser.add_argument("--cpu", action="store_true", help="if true, only use cpu")
- self.parser.add_argument("--backbone", type=str, default='dualstylegan', help="dualstylegan | toonify")
- self.parser.add_argument("--padding", type=int, nargs=4, default=[200,200,200,200], help="left, right, top, bottom paddings to the face center")
- self.parser.add_argument("--batch_size", type=int, default=4, help="batch size of frames when processing video")
- self.parser.add_argument("--parsing_map_path", type=str, default=None, help="path of the refined parsing map of the target video")
-
- def parse(self):
- self.opt = self.parser.parse_args()
- if self.opt.exstyle_path is None:
- self.opt.exstyle_path = os.path.join(os.path.dirname(self.opt.ckpt), 'exstyle_code.npy')
- args = vars(self.opt)
- print('Load options')
- for name, value in sorted(args.items()):
- print('%s: %s' % (str(name), str(value)))
- return self.opt
-
-if __name__ == "__main__":
-
- parser = TestOptions()
- args = parser.parse()
- print('*'*98)
-
-
- device = "cpu" if args.cpu else "cuda"
-
- transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]),
- ])
-
- vtoonify = VToonify(backbone = args.backbone)
- vtoonify.load_state_dict(torch.load(args.ckpt, map_location=lambda storage, loc: storage)['g_ema'])
- vtoonify.to(device)
-
- parsingpredictor = BiSeNet(n_classes=19)
- parsingpredictor.load_state_dict(torch.load(args.faceparsing_path, map_location=lambda storage, loc: storage))
- parsingpredictor.to(device).eval()
-
- modelname = './checkpoint/shape_predictor_68_face_landmarks.dat'
- if not os.path.exists(modelname):
- import wget, bz2
- wget.download('http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2', modelname+'.bz2')
- zipfile = bz2.BZ2File(modelname+'.bz2')
- data = zipfile.read()
- open(modelname, 'wb').write(data)
- landmarkpredictor = dlib.shape_predictor(modelname)
-
- pspencoder = load_psp_standalone(args.style_encoder_path, device)
-
- if args.backbone == 'dualstylegan':
- exstyles = np.load(args.exstyle_path, allow_pickle='TRUE').item()
- stylename = list(exstyles.keys())[args.style_id]
- exstyle = torch.tensor(exstyles[stylename]).to(device)
- with torch.no_grad():
- exstyle = vtoonify.zplus2wplus(exstyle)
-
- if args.video and args.parsing_map_path is not None:
- x_p_hat = torch.tensor(np.load(args.parsing_map_path))
-
- print('Load models successfully!')
-
-
- filename = args.content
- basename = os.path.basename(filename).split('.')[0]
- scale = 1
- kernel_1d = np.array([[0.125],[0.375],[0.375],[0.125]])
- print('Processing ' + os.path.basename(filename) + ' with vtoonify_' + args.backbone[0])
- if args.video:
- cropname = os.path.join(args.output_path, basename + '_input.mp4')
- savename = os.path.join(args.output_path, basename + '_vtoonify_' + args.backbone[0] + '.mp4')
-
- video_cap = cv2.VideoCapture(filename)
-        num = int(video_cap.get(7))  # 7 == cv2.CAP_PROP_FRAME_COUNT
-
- first_valid_frame = True
- batch_frames = []
- for i in tqdm(range(num)):
- success, frame = video_cap.read()
-            if not success:
-                raise RuntimeError('failed to read video frame %d' % i)
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-            # We preprocess the video by detecting the face in the first frame,
- # and resizing the frame so that the eye distance is 64 pixels.
- # Centered on the eyes, we crop the first frame to almost 400x400 (based on args.padding).
- # All other frames use the same resizing and cropping parameters as the first frame.
- if first_valid_frame:
- if args.scale_image:
- paras = get_video_crop_parameter(frame, landmarkpredictor, args.padding)
- if paras is None:
- continue
- h,w,top,bottom,left,right,scale = paras
- H, W = int(bottom-top), int(right-left)
- # for HR video, we apply gaussian blur to the frames to avoid flickers caused by bilinear downsampling
- # this can also prevent over-sharp stylization results.
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
- else:
- H, W = frame.shape[0], frame.shape[1]
-
- fourcc = cv2.VideoWriter_fourcc(*'mp4v')
- videoWriter = cv2.VideoWriter(cropname, fourcc, video_cap.get(5), (W, H))
- videoWriter2 = cv2.VideoWriter(savename, fourcc, video_cap.get(5), (4*W, 4*H))
-
- # For each video, we detect and align the face in the first frame for pSp to obtain the style code.
- # This style code is used for all other frames.
- with torch.no_grad():
- I = align_face(frame, landmarkpredictor)
- I = transform(I).unsqueeze(dim=0).to(device)
- s_w = pspencoder(I)
- s_w = vtoonify.zplus2wplus(s_w)
- if vtoonify.backbone == 'dualstylegan':
- if args.color_transfer:
- s_w = exstyle
- else:
- s_w[:,:7] = exstyle[:,:7]
- first_valid_frame = False
- elif args.scale_image:
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
-
- videoWriter.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
-
- batch_frames += [transform(frame).unsqueeze(dim=0).to(device)]
-
- if len(batch_frames) == args.batch_size or (i+1) == num:
- x = torch.cat(batch_frames, dim=0)
- batch_frames = []
- with torch.no_grad():
-                    # parsing network works best on 512x512 images, so we predict parsing maps on upsampled frames
- # followed by downsampling the parsing maps
- if args.video and args.parsing_map_path is not None:
- x_p = x_p_hat[i+1-x.size(0):i+1].to(device)
- else:
- x_p = F.interpolate(parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- # we give parsing maps lower weight (1/16)
- inputs = torch.cat((x, x_p/16.), dim=1)
- # d_s has no effect when backbone is toonify
- y_tilde = vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), d_s = args.style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
- for k in range(y_tilde.size(0)):
- videoWriter2.write(tensor2cv2(y_tilde[k].cpu()))
-
- videoWriter.release()
- videoWriter2.release()
- video_cap.release()
-
-
- else:
- cropname = os.path.join(args.output_path, basename + '_input.jpg')
- savename = os.path.join(args.output_path, basename + '_vtoonify_' + args.backbone[0] + '.jpg')
-
- frame = cv2.imread(filename)
-        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-
- # We detect the face in the image, and resize the image so that the eye distance is 64 pixels.
- # Centered on the eyes, we crop the image to almost 400x400 (based on args.padding).
- if args.scale_image:
- paras = get_video_crop_parameter(frame, landmarkpredictor, args.padding)
- if paras is not None:
- h,w,top,bottom,left,right,scale = paras
- H, W = int(bottom-top), int(right-left)
- # for HR image, we apply gaussian blur to it to avoid over-sharp stylization results
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
-
- with torch.no_grad():
- I = align_face(frame, landmarkpredictor)
- I = transform(I).unsqueeze(dim=0).to(device)
- s_w = pspencoder(I)
- s_w = vtoonify.zplus2wplus(s_w)
- if vtoonify.backbone == 'dualstylegan':
- if args.color_transfer:
- s_w = exstyle
- else:
- s_w[:,:7] = exstyle[:,:7]
-
- x = transform(frame).unsqueeze(dim=0).to(device)
-        # parsing network works best on 512x512 images, so we predict parsing maps on upsampled frames
- # followed by downsampling the parsing maps
- x_p = F.interpolate(parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- # we give parsing maps lower weight (1/16)
- inputs = torch.cat((x, x_p/16.), dim=1)
- # d_s has no effect when backbone is toonify
- y_tilde = vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), d_s = args.style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
-
- cv2.imwrite(cropname, cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
- save_image(y_tilde[0].cpu(), savename)
-
- print('Transfer style successfully!')
\ No newline at end of file
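
The comments in the deleted script above describe how the parsing map is predicted on a 2x-upsampled frame, downsampled back, down-weighted by 1/16 and concatenated with the frame before being fed to VToonify. Below is a minimal sketch of just that tensor plumbing, using random tensors as stand-ins; the 19-class parsing output and the frame size are assumptions taken from the comments, not values verified against the model.

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 256, 256)          # stand-in for a normalized RGB frame
parsing = torch.randn(1, 19, 512, 512)   # stand-in for BiSeNet output on the 2x-upsampled frame
x_p = F.interpolate(parsing, scale_factor=0.5, mode='bilinear', align_corners=False)
inputs = torch.cat((x, x_p / 16.0), dim=1)   # parsing map gets 1/16 of the weight
print(inputs.shape)                          # torch.Size([1, 22, 256, 256])
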
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/mask/mask_target.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/mask/mask_target.py
deleted file mode 100644
index 15d26a88bbf3710bd92813335918407db8c4e053..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/mask/mask_target.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import numpy as np
-import torch
-from torch.nn.modules.utils import _pair
-
-
-def mask_target(pos_proposals_list, pos_assigned_gt_inds_list, gt_masks_list,
- cfg):
- """Compute mask target for positive proposals in multiple images.
-
- Args:
- pos_proposals_list (list[Tensor]): Positive proposals in multiple
- images.
- pos_assigned_gt_inds_list (list[Tensor]): Assigned GT indices for each
- positive proposals.
- gt_masks_list (list[:obj:`BaseInstanceMasks`]): Ground truth masks of
- each image.
- cfg (dict): Config dict that specifies the mask size.
-
- Returns:
- list[Tensor]: Mask target of each image.
-
- Example:
- >>> import mmcv
- >>> import mmdet
- >>> from mmdet.core.mask import BitmapMasks
- >>> from mmdet.core.mask.mask_target import *
- >>> H, W = 17, 18
- >>> cfg = mmcv.Config({'mask_size': (13, 14)})
- >>> rng = np.random.RandomState(0)
- >>> # Positive proposals (tl_x, tl_y, br_x, br_y) for each image
- >>> pos_proposals_list = [
- >>> torch.Tensor([
- >>> [ 7.2425, 5.5929, 13.9414, 14.9541],
- >>> [ 7.3241, 3.6170, 16.3850, 15.3102],
- >>> ]),
- >>> torch.Tensor([
- >>> [ 4.8448, 6.4010, 7.0314, 9.7681],
- >>> [ 5.9790, 2.6989, 7.4416, 4.8580],
- >>> [ 0.0000, 0.0000, 0.1398, 9.8232],
- >>> ]),
- >>> ]
- >>> # Corresponding class index for each proposal for each image
- >>> pos_assigned_gt_inds_list = [
- >>> torch.LongTensor([7, 0]),
- >>> torch.LongTensor([5, 4, 1]),
- >>> ]
- >>> # Ground truth mask for each true object for each image
- >>> gt_masks_list = [
- >>> BitmapMasks(rng.rand(8, H, W), height=H, width=W),
- >>> BitmapMasks(rng.rand(6, H, W), height=H, width=W),
- >>> ]
- >>> mask_targets = mask_target(
- >>> pos_proposals_list, pos_assigned_gt_inds_list,
- >>> gt_masks_list, cfg)
- >>> assert mask_targets.shape == (5,) + cfg['mask_size']
- """
- cfg_list = [cfg for _ in range(len(pos_proposals_list))]
- mask_targets = map(mask_target_single, pos_proposals_list,
- pos_assigned_gt_inds_list, gt_masks_list, cfg_list)
- mask_targets = list(mask_targets)
- if len(mask_targets) > 0:
- mask_targets = torch.cat(mask_targets)
- return mask_targets
-
-
-def mask_target_single(pos_proposals, pos_assigned_gt_inds, gt_masks, cfg):
- """Compute mask target for each positive proposal in the image.
-
- Args:
- pos_proposals (Tensor): Positive proposals.
- pos_assigned_gt_inds (Tensor): Assigned GT inds of positive proposals.
- gt_masks (:obj:`BaseInstanceMasks`): GT masks in the format of Bitmap
- or Polygon.
-        cfg (dict): Config dict that indicates the mask size.
-
- Returns:
- Tensor: Mask target of each positive proposals in the image.
-
- Example:
- >>> import mmcv
- >>> import mmdet
- >>> from mmdet.core.mask import BitmapMasks
- >>> from mmdet.core.mask.mask_target import * # NOQA
- >>> H, W = 32, 32
- >>> cfg = mmcv.Config({'mask_size': (7, 11)})
- >>> rng = np.random.RandomState(0)
- >>> # Masks for each ground truth box (relative to the image)
- >>> gt_masks_data = rng.rand(3, H, W)
- >>> gt_masks = BitmapMasks(gt_masks_data, height=H, width=W)
- >>> # Predicted positive boxes in one image
- >>> pos_proposals = torch.FloatTensor([
- >>> [ 16.2, 5.5, 19.9, 20.9],
- >>> [ 17.3, 13.6, 19.3, 19.3],
- >>> [ 14.8, 16.4, 17.0, 23.7],
- >>> [ 0.0, 0.0, 16.0, 16.0],
- >>> [ 4.0, 0.0, 20.0, 16.0],
- >>> ])
- >>> # For each predicted proposal, its assignment to a gt mask
- >>> pos_assigned_gt_inds = torch.LongTensor([0, 1, 2, 1, 1])
- >>> mask_targets = mask_target_single(
- >>> pos_proposals, pos_assigned_gt_inds, gt_masks, cfg)
- >>> assert mask_targets.shape == (5,) + cfg['mask_size']
- """
- device = pos_proposals.device
- mask_size = _pair(cfg.mask_size)
- num_pos = pos_proposals.size(0)
- if num_pos > 0:
- proposals_np = pos_proposals.cpu().numpy()
- maxh, maxw = gt_masks.height, gt_masks.width
- proposals_np[:, [0, 2]] = np.clip(proposals_np[:, [0, 2]], 0, maxw)
- proposals_np[:, [1, 3]] = np.clip(proposals_np[:, [1, 3]], 0, maxh)
- pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy()
-
- mask_targets = gt_masks.crop_and_resize(
- proposals_np, mask_size, device=device,
- inds=pos_assigned_gt_inds).to_ndarray()
-
- mask_targets = torch.from_numpy(mask_targets).float().to(device)
- else:
- mask_targets = pos_proposals.new_zeros((0, ) + mask_size)
-
- return mask_targets
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/evaluation.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/evaluation.py
deleted file mode 100644
index 4d00999ce5665c53bded8de9e084943eee2d230d..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/evaluation.py
+++ /dev/null
@@ -1,509 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-import warnings
-from math import inf
-
-import torch.distributed as dist
-from torch.nn.modules.batchnorm import _BatchNorm
-from torch.utils.data import DataLoader
-
-from annotator.uniformer.mmcv.fileio import FileClient
-from annotator.uniformer.mmcv.utils import is_seq_of
-from .hook import Hook
-from .logger import LoggerHook
-
-
-class EvalHook(Hook):
- """Non-Distributed evaluation hook.
-
- This hook will regularly perform evaluation in a given interval when
- performing in non-distributed environment.
-
- Args:
- dataloader (DataLoader): A PyTorch dataloader, whose dataset has
- implemented ``evaluate`` function.
- start (int | None, optional): Evaluation starting epoch. It enables
- evaluation before the training starts if ``start`` <= the resuming
- epoch. If None, whether to evaluate is merely decided by
- ``interval``. Default: None.
- interval (int): Evaluation interval. Default: 1.
-        by_epoch (bool): Whether to perform evaluation by epoch or by iteration.
- If set to True, it will perform by epoch. Otherwise, by iteration.
- Default: True.
- save_best (str, optional): If a metric is specified, it would measure
- the best checkpoint during evaluation. The information about best
- checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep
- best score value and best checkpoint path, which will be also
- loaded when resume checkpoint. Options are the evaluation metrics
- on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox
- detection and instance segmentation. ``AR@100`` for proposal
- recall. If ``save_best`` is ``auto``, the first key of the returned
- ``OrderedDict`` result will be used. Default: None.
- rule (str | None, optional): Comparison rule for best score. If set to
-            None, it will infer a reasonable rule. Keys such as 'acc', 'top',
-            etc. will be inferred by the 'greater' rule. Keys containing 'loss'
-            will be inferred by the 'less' rule. Options are 'greater', 'less', None.
- Default: None.
- test_fn (callable, optional): test a model with samples from a
- dataloader, and return the test results. If ``None``, the default
- test function ``mmcv.engine.single_gpu_test`` will be used.
- (default: ``None``)
- greater_keys (List[str] | None, optional): Metric keys that will be
- inferred by 'greater' comparison rule. If ``None``,
- _default_greater_keys will be used. (default: ``None``)
- less_keys (List[str] | None, optional): Metric keys that will be
- inferred by 'less' comparison rule. If ``None``, _default_less_keys
- will be used. (default: ``None``)
- out_dir (str, optional): The root directory to save checkpoints. If not
- specified, `runner.work_dir` will be used by default. If specified,
- the `out_dir` will be the concatenation of `out_dir` and the last
- level directory of `runner.work_dir`.
- `New in version 1.3.16.`
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details. Default: None.
- `New in version 1.3.16.`
- **eval_kwargs: Evaluation arguments fed into the evaluate function of
- the dataset.
-
- Notes:
- If new arguments are added for EvalHook, tools/test.py,
- tools/eval_metric.py may be affected.
- """
-
- # Since the key for determine greater or less is related to the downstream
- # tasks, downstream repos may need to overwrite the following inner
- # variable accordingly.
-
- rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y}
- init_value_map = {'greater': -inf, 'less': inf}
- _default_greater_keys = [
- 'acc', 'top', 'AR@', 'auc', 'precision', 'mAP', 'mDice', 'mIoU',
- 'mAcc', 'aAcc'
- ]
- _default_less_keys = ['loss']
-
- def __init__(self,
- dataloader,
- start=None,
- interval=1,
- by_epoch=True,
- save_best=None,
- rule=None,
- test_fn=None,
- greater_keys=None,
- less_keys=None,
- out_dir=None,
- file_client_args=None,
- **eval_kwargs):
- if not isinstance(dataloader, DataLoader):
- raise TypeError(f'dataloader must be a pytorch DataLoader, '
- f'but got {type(dataloader)}')
-
- if interval <= 0:
- raise ValueError(f'interval must be a positive number, '
- f'but got {interval}')
-
- assert isinstance(by_epoch, bool), '``by_epoch`` should be a boolean'
-
- if start is not None and start < 0:
- raise ValueError(f'The evaluation start epoch {start} is smaller '
- f'than 0')
-
- self.dataloader = dataloader
- self.interval = interval
- self.start = start
- self.by_epoch = by_epoch
-
- assert isinstance(save_best, str) or save_best is None, \
-            '"save_best" should be a str or None ' \
- f'rather than {type(save_best)}'
- self.save_best = save_best
- self.eval_kwargs = eval_kwargs
- self.initial_flag = True
-
- if test_fn is None:
- from annotator.uniformer.mmcv.engine import single_gpu_test
- self.test_fn = single_gpu_test
- else:
- self.test_fn = test_fn
-
- if greater_keys is None:
- self.greater_keys = self._default_greater_keys
- else:
- if not isinstance(greater_keys, (list, tuple)):
- greater_keys = (greater_keys, )
- assert is_seq_of(greater_keys, str)
- self.greater_keys = greater_keys
-
- if less_keys is None:
- self.less_keys = self._default_less_keys
- else:
- if not isinstance(less_keys, (list, tuple)):
- less_keys = (less_keys, )
- assert is_seq_of(less_keys, str)
- self.less_keys = less_keys
-
- if self.save_best is not None:
- self.best_ckpt_path = None
- self._init_rule(rule, self.save_best)
-
- self.out_dir = out_dir
- self.file_client_args = file_client_args
-
- def _init_rule(self, rule, key_indicator):
- """Initialize rule, key_indicator, comparison_func, and best score.
-
- Here is the rule to determine which rule is used for key indicator
- when the rule is not specific (note that the key indicator matching
- is case-insensitive):
- 1. If the key indicator is in ``self.greater_keys``, the rule will be
- specified as 'greater'.
- 2. Or if the key indicator is in ``self.less_keys``, the rule will be
- specified as 'less'.
-        3. Or if any item in ``self.greater_keys`` appears as a substring of the
-           key indicator, the rule will be specified as 'greater'.
-        4. Or if any item in ``self.less_keys`` appears as a substring of the
-           key indicator, the rule will be specified as 'less'.
-
- Args:
- rule (str | None): Comparison rule for best score.
- key_indicator (str | None): Key indicator to determine the
- comparison rule.
- """
- if rule not in self.rule_map and rule is not None:
- raise KeyError(f'rule must be greater, less or None, '
- f'but got {rule}.')
-
- if rule is None:
- if key_indicator != 'auto':
- # `_lc` here means we use the lower case of keys for
- # case-insensitive matching
- key_indicator_lc = key_indicator.lower()
- greater_keys = [key.lower() for key in self.greater_keys]
- less_keys = [key.lower() for key in self.less_keys]
-
- if key_indicator_lc in greater_keys:
- rule = 'greater'
- elif key_indicator_lc in less_keys:
- rule = 'less'
- elif any(key in key_indicator_lc for key in greater_keys):
- rule = 'greater'
- elif any(key in key_indicator_lc for key in less_keys):
- rule = 'less'
- else:
- raise ValueError(f'Cannot infer the rule for key '
- f'{key_indicator}, thus a specific rule '
- f'must be specified.')
- self.rule = rule
- self.key_indicator = key_indicator
- if self.rule is not None:
- self.compare_func = self.rule_map[self.rule]
-
- def before_run(self, runner):
- if not self.out_dir:
- self.out_dir = runner.work_dir
-
- self.file_client = FileClient.infer_client(self.file_client_args,
- self.out_dir)
-
- # if `self.out_dir` is not equal to `runner.work_dir`, it means that
- # `self.out_dir` is set so the final `self.out_dir` is the
- # concatenation of `self.out_dir` and the last level directory of
- # `runner.work_dir`
- if self.out_dir != runner.work_dir:
- basename = osp.basename(runner.work_dir.rstrip(osp.sep))
- self.out_dir = self.file_client.join_path(self.out_dir, basename)
- runner.logger.info(
- (f'The best checkpoint will be saved to {self.out_dir} by '
- f'{self.file_client.name}'))
-
- if self.save_best is not None:
- if runner.meta is None:
- warnings.warn('runner.meta is None. Creating an empty one.')
- runner.meta = dict()
- runner.meta.setdefault('hook_msgs', dict())
- self.best_ckpt_path = runner.meta['hook_msgs'].get(
- 'best_ckpt', None)
-
- def before_train_iter(self, runner):
- """Evaluate the model only at the start of training by iteration."""
- if self.by_epoch or not self.initial_flag:
- return
- if self.start is not None and runner.iter >= self.start:
- self.after_train_iter(runner)
- self.initial_flag = False
-
- def before_train_epoch(self, runner):
- """Evaluate the model only at the start of training by epoch."""
- if not (self.by_epoch and self.initial_flag):
- return
- if self.start is not None and runner.epoch >= self.start:
- self.after_train_epoch(runner)
- self.initial_flag = False
-
- def after_train_iter(self, runner):
- """Called after every training iter to evaluate the results."""
- if not self.by_epoch and self._should_evaluate(runner):
- # Because the priority of EvalHook is higher than LoggerHook, the
- # training log and the evaluating log are mixed. Therefore,
- # we need to dump the training log and clear it before evaluating
- # log is generated. In addition, this problem will only appear in
- # `IterBasedRunner` whose `self.by_epoch` is False, because
- # `EpochBasedRunner` whose `self.by_epoch` is True calls
- # `_do_evaluate` in `after_train_epoch` stage, and at this stage
- # the training log has been printed, so it will not cause any
- # problem. more details at
- # https://github.com/open-mmlab/mmsegmentation/issues/694
- for hook in runner._hooks:
- if isinstance(hook, LoggerHook):
- hook.after_train_iter(runner)
- runner.log_buffer.clear()
-
- self._do_evaluate(runner)
-
- def after_train_epoch(self, runner):
- """Called after every training epoch to evaluate the results."""
- if self.by_epoch and self._should_evaluate(runner):
- self._do_evaluate(runner)
-
- def _do_evaluate(self, runner):
- """perform evaluation and save ckpt."""
- results = self.test_fn(runner.model, self.dataloader)
- runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
- key_score = self.evaluate(runner, results)
- # the key_score may be `None` so it needs to skip the action to save
- # the best checkpoint
- if self.save_best and key_score:
- self._save_ckpt(runner, key_score)
-
- def _should_evaluate(self, runner):
- """Judge whether to perform evaluation.
-
- Here is the rule to judge whether to perform evaluation:
- 1. It will not perform evaluation during the epoch/iteration interval,
- which is determined by ``self.interval``.
- 2. It will not perform evaluation if the start time is larger than
- current time.
-        3. It will not perform evaluation when the current time is past the
-           start time but does not fall on the epoch/iteration interval.
-
- Returns:
- bool: The flag indicating whether to perform evaluation.
- """
- if self.by_epoch:
- current = runner.epoch
- check_time = self.every_n_epochs
- else:
- current = runner.iter
- check_time = self.every_n_iters
-
- if self.start is None:
- if not check_time(runner, self.interval):
- # No evaluation during the interval.
- return False
- elif (current + 1) < self.start:
- # No evaluation if start is larger than the current time.
- return False
- else:
- # Evaluation only at epochs/iters 3, 5, 7...
- # if start==3 and interval==2
- if (current + 1 - self.start) % self.interval:
- return False
- return True
-
- def _save_ckpt(self, runner, key_score):
- """Save the best checkpoint.
-
- It will compare the score according to the compare function, write
- related information (best score, best checkpoint path) and save the
- best checkpoint into ``work_dir``.
- """
- if self.by_epoch:
- current = f'epoch_{runner.epoch + 1}'
- cur_type, cur_time = 'epoch', runner.epoch + 1
- else:
- current = f'iter_{runner.iter + 1}'
- cur_type, cur_time = 'iter', runner.iter + 1
-
- best_score = runner.meta['hook_msgs'].get(
- 'best_score', self.init_value_map[self.rule])
- if self.compare_func(key_score, best_score):
- best_score = key_score
- runner.meta['hook_msgs']['best_score'] = best_score
-
- if self.best_ckpt_path and self.file_client.isfile(
- self.best_ckpt_path):
- self.file_client.remove(self.best_ckpt_path)
- runner.logger.info(
- (f'The previous best checkpoint {self.best_ckpt_path} was '
- 'removed'))
-
- best_ckpt_name = f'best_{self.key_indicator}_{current}.pth'
- self.best_ckpt_path = self.file_client.join_path(
- self.out_dir, best_ckpt_name)
- runner.meta['hook_msgs']['best_ckpt'] = self.best_ckpt_path
-
- runner.save_checkpoint(
- self.out_dir, best_ckpt_name, create_symlink=False)
- runner.logger.info(
- f'Now best checkpoint is saved as {best_ckpt_name}.')
- runner.logger.info(
- f'Best {self.key_indicator} is {best_score:0.4f} '
- f'at {cur_time} {cur_type}.')
-
- def evaluate(self, runner, results):
- """Evaluate the results.
-
- Args:
-            runner (:obj:`mmcv.Runner`): The underlying training runner.
- results (list): Output results.
- """
- eval_res = self.dataloader.dataset.evaluate(
- results, logger=runner.logger, **self.eval_kwargs)
-
- for name, val in eval_res.items():
- runner.log_buffer.output[name] = val
- runner.log_buffer.ready = True
-
- if self.save_best is not None:
-            # If the performance of the model is poor, the `eval_res` may be an
-            # empty dict, which would raise an exception when `self.save_best` is
- # not None. More details at
- # https://github.com/open-mmlab/mmdetection/issues/6265.
- if not eval_res:
- warnings.warn(
- 'Since `eval_res` is an empty dict, the behavior to save '
- 'the best checkpoint will be skipped in this evaluation.')
- return None
-
- if self.key_indicator == 'auto':
- # infer from eval_results
- self._init_rule(self.rule, list(eval_res.keys())[0])
- return eval_res[self.key_indicator]
-
- return None
-
-
-class DistEvalHook(EvalHook):
- """Distributed evaluation hook.
-
- This hook will regularly perform evaluation in a given interval when
- performing in distributed environment.
-
- Args:
- dataloader (DataLoader): A PyTorch dataloader, whose dataset has
- implemented ``evaluate`` function.
- start (int | None, optional): Evaluation starting epoch. It enables
- evaluation before the training starts if ``start`` <= the resuming
- epoch. If None, whether to evaluate is merely decided by
- ``interval``. Default: None.
- interval (int): Evaluation interval. Default: 1.
-        by_epoch (bool): Whether to perform evaluation by epoch or by iteration.
- If set to True, it will perform by epoch. Otherwise, by iteration.
- default: True.
- save_best (str, optional): If a metric is specified, it would measure
- the best checkpoint during evaluation. The information about best
- checkpoint would be saved in ``runner.meta['hook_msgs']`` to keep
- best score value and best checkpoint path, which will be also
- loaded when resume checkpoint. Options are the evaluation metrics
- on the test dataset. e.g., ``bbox_mAP``, ``segm_mAP`` for bbox
- detection and instance segmentation. ``AR@100`` for proposal
- recall. If ``save_best`` is ``auto``, the first key of the returned
- ``OrderedDict`` result will be used. Default: None.
- rule (str | None, optional): Comparison rule for best score. If set to
-            None, it will infer a reasonable rule. Keys such as 'acc', 'top',
-            etc. will be inferred by the 'greater' rule. Keys containing 'loss'
-            will be inferred by the 'less' rule. Options are 'greater', 'less', None.
- Default: None.
- test_fn (callable, optional): test a model with samples from a
- dataloader in a multi-gpu manner, and return the test results. If
- ``None``, the default test function ``mmcv.engine.multi_gpu_test``
- will be used. (default: ``None``)
- tmpdir (str | None): Temporary directory to save the results of all
- processes. Default: None.
- gpu_collect (bool): Whether to use gpu or cpu to collect results.
- Default: False.
- broadcast_bn_buffer (bool): Whether to broadcast the
- buffer(running_mean and running_var) of rank 0 to other rank
- before evaluation. Default: True.
- out_dir (str, optional): The root directory to save checkpoints. If not
- specified, `runner.work_dir` will be used by default. If specified,
- the `out_dir` will be the concatenation of `out_dir` and the last
- level directory of `runner.work_dir`.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details. Default: None.
- **eval_kwargs: Evaluation arguments fed into the evaluate function of
- the dataset.
- """
-
- def __init__(self,
- dataloader,
- start=None,
- interval=1,
- by_epoch=True,
- save_best=None,
- rule=None,
- test_fn=None,
- greater_keys=None,
- less_keys=None,
- broadcast_bn_buffer=True,
- tmpdir=None,
- gpu_collect=False,
- out_dir=None,
- file_client_args=None,
- **eval_kwargs):
-
- if test_fn is None:
- from annotator.uniformer.mmcv.engine import multi_gpu_test
- test_fn = multi_gpu_test
-
- super().__init__(
- dataloader,
- start=start,
- interval=interval,
- by_epoch=by_epoch,
- save_best=save_best,
- rule=rule,
- test_fn=test_fn,
- greater_keys=greater_keys,
- less_keys=less_keys,
- out_dir=out_dir,
- file_client_args=file_client_args,
- **eval_kwargs)
-
- self.broadcast_bn_buffer = broadcast_bn_buffer
- self.tmpdir = tmpdir
- self.gpu_collect = gpu_collect
-
- def _do_evaluate(self, runner):
- """perform evaluation and save ckpt."""
- # Synchronization of BatchNorm's buffer (running_mean
- # and running_var) is not supported in the DDP of pytorch,
- # which may cause the inconsistent performance of models in
- # different ranks, so we broadcast BatchNorm's buffers
- # of rank 0 to other ranks to avoid this.
- if self.broadcast_bn_buffer:
- model = runner.model
- for name, module in model.named_modules():
- if isinstance(module,
- _BatchNorm) and module.track_running_stats:
- dist.broadcast(module.running_var, 0)
- dist.broadcast(module.running_mean, 0)
-
- tmpdir = self.tmpdir
- if tmpdir is None:
- tmpdir = osp.join(runner.work_dir, '.eval_hook')
-
- results = self.test_fn(
- runner.model,
- self.dataloader,
- tmpdir=tmpdir,
- gpu_collect=self.gpu_collect)
- if runner.rank == 0:
- print('\n')
- runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
- key_score = self.evaluate(runner, results)
- # the key_score may be `None` so it needs to skip the action to
- # save the best checkpoint
- if self.save_best and key_score:
- self._save_ckpt(runner, key_score)
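
The `_init_rule` docstring above spells out how a comparison rule is inferred from the `save_best` key. Below is a condensed, standalone illustration of that matching order; this is not the mmcv API itself, just a toy restatement of the rules described in the docstring.

GREATER = ['acc', 'top', 'ar@', 'auc', 'precision', 'map', 'mdice', 'miou', 'macc', 'aacc']
LESS = ['loss']

def infer_rule(key_indicator):
    # Case-insensitive: exact membership first, then substring matching.
    key = key_indicator.lower()
    if key in GREATER or any(k in key for k in GREATER):
        return 'greater'
    if key in LESS or any(k in key for k in LESS):
        return 'less'
    raise ValueError('cannot infer a rule for %r' % key_indicator)

print(infer_rule('bbox_mAP'))   # -> greater
print(infer_rule('val_loss'))   # -> less
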
diff --git a/spaces/Rongjiehuang/ProDiff/vocoders/pwg.py b/spaces/Rongjiehuang/ProDiff/vocoders/pwg.py
deleted file mode 100644
index ca9b6891ab2ba5cb413eeca97a41534e5db129d5..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/ProDiff/vocoders/pwg.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import glob
-import re
-import librosa
-import torch
-import yaml
-from sklearn.preprocessing import StandardScaler
-from torch import nn
-from modules.parallel_wavegan.models import ParallelWaveGANGenerator
-from modules.parallel_wavegan.utils import read_hdf5
-from utils.hparams import hparams
-from utils.pitch_utils import f0_to_coarse
-from vocoders.base_vocoder import BaseVocoder, register_vocoder
-import numpy as np
-
-
-def load_pwg_model(config_path, checkpoint_path, stats_path):
- # load config
- with open(config_path) as f:
- config = yaml.load(f, Loader=yaml.Loader)
-
- # setup
- if torch.cuda.is_available():
- device = torch.device("cuda")
- else:
- device = torch.device("cpu")
- model = ParallelWaveGANGenerator(**config["generator_params"])
-
- ckpt_dict = torch.load(checkpoint_path, map_location="cpu")
- if 'state_dict' not in ckpt_dict: # official vocoder
- model.load_state_dict(torch.load(checkpoint_path, map_location="cpu")["model"]["generator"])
- scaler = StandardScaler()
- if config["format"] == "hdf5":
- scaler.mean_ = read_hdf5(stats_path, "mean")
- scaler.scale_ = read_hdf5(stats_path, "scale")
- elif config["format"] == "npy":
- scaler.mean_ = np.load(stats_path)[0]
- scaler.scale_ = np.load(stats_path)[1]
- else:
- raise ValueError("support only hdf5 or npy format.")
- else: # custom PWG vocoder
- fake_task = nn.Module()
- fake_task.model_gen = model
- fake_task.load_state_dict(torch.load(checkpoint_path, map_location="cpu")["state_dict"], strict=False)
- scaler = None
-
- model.remove_weight_norm()
- model = model.eval().to(device)
- print(f"| Loaded model parameters from {checkpoint_path}.")
- print(f"| PWG device: {device}.")
- return model, scaler, config, device
-
-
-@register_vocoder
-class PWG(BaseVocoder):
- def __init__(self):
- if hparams['vocoder_ckpt'] == '': # load LJSpeech PWG pretrained model
- base_dir = 'wavegan_pretrained'
- ckpts = glob.glob(f'{base_dir}/checkpoint-*steps.pkl')
- ckpt = sorted(ckpts, key=
- lambda x: int(re.findall(f'{base_dir}/checkpoint-(\d+)steps.pkl', x)[0]))[-1]
- config_path = f'{base_dir}/config.yaml'
- print('| load PWG: ', ckpt)
- self.model, self.scaler, self.config, self.device = load_pwg_model(
- config_path=config_path,
- checkpoint_path=ckpt,
- stats_path=f'{base_dir}/stats.h5',
- )
- else:
- base_dir = hparams['vocoder_ckpt']
- print(base_dir)
- config_path = f'{base_dir}/config.yaml'
- ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key=
- lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x)[0]))[-1]
- print('| load PWG: ', ckpt)
- self.scaler = None
- self.model, _, self.config, self.device = load_pwg_model(
- config_path=config_path,
- checkpoint_path=ckpt,
- stats_path=f'{base_dir}/stats.h5',
- )
-
- def spec2wav(self, mel, **kwargs):
- # start generation
- config = self.config
- device = self.device
- pad_size = (config["generator_params"]["aux_context_window"],
- config["generator_params"]["aux_context_window"])
- c = mel
- if self.scaler is not None:
- c = self.scaler.transform(c)
-
- with torch.no_grad():
- z = torch.randn(1, 1, c.shape[0] * config["hop_size"]).to(device)
- c = np.pad(c, (pad_size, (0, 0)), "edge")
- c = torch.FloatTensor(c).unsqueeze(0).transpose(2, 1).to(device)
- p = kwargs.get('f0')
- if p is not None:
- p = f0_to_coarse(p)
- p = np.pad(p, (pad_size,), "edge")
- p = torch.LongTensor(p[None, :]).to(device)
- y = self.model(z, c, p).view(-1)
- wav_out = y.cpu().numpy()
- return wav_out
-
- @staticmethod
- def wav2spec(wav_fn, return_linear=False):
- from data_gen.tts.data_gen_utils import process_utterance
- res = process_utterance(
- wav_fn, fft_size=hparams['fft_size'],
- hop_size=hparams['hop_size'],
- win_length=hparams['win_size'],
- num_mels=hparams['audio_num_mel_bins'],
- fmin=hparams['fmin'],
- fmax=hparams['fmax'],
- sample_rate=hparams['audio_sample_rate'],
- loud_norm=hparams['loud_norm'],
- min_level_db=hparams['min_level_db'],
- return_linear=return_linear, vocoder='pwg', eps=float(hparams.get('wav2spec_eps', 1e-10)))
- if return_linear:
- return res[0], res[1].T, res[2].T # [T, 80], [T, n_fft]
- else:
- return res[0], res[1].T
-
- @staticmethod
- def wav2mfcc(wav_fn):
- fft_size = hparams['fft_size']
- hop_size = hparams['hop_size']
- win_length = hparams['win_size']
- sample_rate = hparams['audio_sample_rate']
- wav, _ = librosa.core.load(wav_fn, sr=sample_rate)
- mfcc = librosa.feature.mfcc(y=wav, sr=sample_rate, n_mfcc=13,
- n_fft=fft_size, hop_length=hop_size,
- win_length=win_length, pad_mode="constant", power=1.0)
- mfcc_delta = librosa.feature.delta(mfcc, order=1)
- mfcc_delta_delta = librosa.feature.delta(mfcc, order=2)
- mfcc = np.concatenate([mfcc, mfcc_delta, mfcc_delta_delta]).T
- return mfcc
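
`spec2wav` above pads the mel spectrogram on the time axis by the generator's `aux_context_window` using edge replication before synthesis. Here is a tiny sketch of that padding step in isolation; the window size and mel shape are arbitrary stand-ins, not values from a real ParallelWaveGAN config.

import numpy as np

aux_context_window = 2                  # assumed; read from the generator config in practice
mel = np.random.randn(100, 80)          # [T, num_mels] stand-in spectrogram
pad = (aux_context_window, aux_context_window)
c = np.pad(mel, (pad, (0, 0)), "edge")  # replicate edges along the time axis only
print(mel.shape, '->', c.shape)         # (100, 80) -> (104, 80)
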
diff --git a/spaces/RugNlpFlashcards/Speech_Language_Processing_Jurafsky_Martin/results/retrieval_time.tex b/spaces/RugNlpFlashcards/Speech_Language_Processing_Jurafsky_Martin/results/retrieval_time.tex
deleted file mode 100644
index 6835f3b9feab933e871a3e1eaf41da046a886025..0000000000000000000000000000000000000000
--- a/spaces/RugNlpFlashcards/Speech_Language_Processing_Jurafsky_Martin/results/retrieval_time.tex
+++ /dev/null
@@ -1,14 +0,0 @@
-\begin{tabular}{lrrrr}
-\toprule
-{} & faiss\_dpr.retrieve & faiss\_longformer.retrieve & es\_dpr.retrieve & es\_longformer.retrieve \\
-\midrule
-count & 59.000000 & 59.000000 & 59.000000 & 59.000000 \\
-mean & 0.056994 & 0.854546 & 0.013451 & 0.013016 \\
-std & 0.038737 & 0.165768 & 0.003771 & 0.002781 \\
-min & 0.035896 & 0.729217 & 0.008990 & 0.009167 \\
-25\% & 0.043558 & 0.775807 & 0.010590 & 0.011279 \\
-50\% & 0.046970 & 0.795175 & 0.011699 & 0.012060 \\
-75\% & 0.056887 & 0.838984 & 0.016232 & 0.013151 \\
-max & 0.303843 & 1.465686 & 0.026489 & 0.020290 \\
-\bottomrule
-\end{tabular}
diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/__init__.py b/spaces/SankarSrin/image-matting-app/ppmatting/__init__.py
deleted file mode 100644
index c1094808e27aa683fc3b5766e9968712b3021532..0000000000000000000000000000000000000000
--- a/spaces/SankarSrin/image-matting-app/ppmatting/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from . import ml, metrics, transforms, datasets, models
diff --git a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/ONNXVITS_modules.py b/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/ONNXVITS_modules.py
deleted file mode 100644
index 6cf676ce37c1eaf8428c4094e749f862182cb0c3..0000000000000000000000000000000000000000
--- a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/ONNXVITS_modules.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from ONNXVITS_transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
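
`ElementwiseAffine` and `ResidualCouplingLayer` above both follow the same invertible-affine pattern: `y = m + x * exp(logs)` in the forward pass, `x = (y - m) * exp(-logs)` in reverse, with log-determinant `sum(logs)`. A self-contained numerical check of that identity is sketched below; shapes and values are arbitrary stand-ins, not taken from a trained VITS model.

import torch

x = torch.randn(2, 4, 8)                        # [batch, channels, time]
m = torch.randn(4, 1)
logs = 0.1 * torch.randn(4, 1)
x_mask = torch.ones(2, 1, 8)

y = (m + torch.exp(logs) * x) * x_mask          # forward pass
logdet = torch.sum(logs * x_mask, [1, 2])       # per-sample log|det J|
x_rec = (y - m) * torch.exp(-logs) * x_mask     # reverse pass
print(torch.allclose(x, x_rec, atol=1e-5), logdet.shape)
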
diff --git a/spaces/SaulLu/test-demo/README.md b/spaces/SaulLu/test-demo/README.md
deleted file mode 100644
index 9006df2a94744a0546c467cd088a55196119e27d..0000000000000000000000000000000000000000
--- a/spaces/SaulLu/test-demo/README.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: Test Demo
-emoji: 🔥
-colorFrom: indigo
-colorTo: red
-sdk: static
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/SeViLA/SeViLA/lavis/runners/__init__.py b/spaces/SeViLA/SeViLA/lavis/runners/__init__.py
deleted file mode 100644
index 38494f1cdf68b0a419c3e0c401871b653a481963..0000000000000000000000000000000000000000
--- a/spaces/SeViLA/SeViLA/lavis/runners/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-from lavis.runners.runner_base import RunnerBase
-from lavis.runners.runner_iter import RunnerIter
-
-__all__ = ["RunnerBase", "RunnerIter"]
diff --git a/spaces/Senayfre/CropHealth/README.md b/spaces/Senayfre/CropHealth/README.md
deleted file mode 100644
index 3247a36f276d90dfea2ce2e79fa8ef208cf080e5..0000000000000000000000000000000000000000
--- a/spaces/Senayfre/CropHealth/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: CropHealth
-emoji: 👁
-colorFrom: purple
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/SilenWang/ReviewGPT/utils/__init__.py b/spaces/SilenWang/ReviewGPT/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Silentlin/DiffSinger/vocoders/hifigan.py b/spaces/Silentlin/DiffSinger/vocoders/hifigan.py
deleted file mode 100644
index 810d3c931b556387f8a2e85537f4964add1e76b0..0000000000000000000000000000000000000000
--- a/spaces/Silentlin/DiffSinger/vocoders/hifigan.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import glob
-import json
-import os
-import re
-
-import librosa
-import torch
-
-import utils
-from modules.hifigan.hifigan import HifiGanGenerator
-from utils.hparams import hparams, set_hparams
-from vocoders.base_vocoder import register_vocoder
-from vocoders.pwg import PWG
-from vocoders.vocoder_utils import denoise
-
-
-def load_model(config_path, checkpoint_path):
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- ckpt_dict = torch.load(checkpoint_path, map_location="cpu")
- if '.yaml' in config_path:
- config = set_hparams(config_path, global_hparams=False)
- state = ckpt_dict["state_dict"]["model_gen"]
- elif '.json' in config_path:
- config = json.load(open(config_path, 'r'))
- state = ckpt_dict["generator"]
-
- model = HifiGanGenerator(config)
- model.load_state_dict(state, strict=True)
- model.remove_weight_norm()
- model = model.eval().to(device)
- print(f"| Loaded model parameters from {checkpoint_path}.")
- print(f"| HifiGAN device: {device}.")
- return model, config, device
-
-
-total_time = 0
-
-
-@register_vocoder
-class HifiGAN(PWG):
- def __init__(self):
- base_dir = hparams['vocoder_ckpt']
- config_path = f'{base_dir}/config.yaml'
- if os.path.exists(config_path):
- ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key=
- lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x)[0]))[-1]
- print('| load HifiGAN: ', ckpt)
- self.model, self.config, self.device = load_model(config_path=config_path, checkpoint_path=ckpt)
- else:
- config_path = f'{base_dir}/config.json'
- ckpt = f'{base_dir}/generator_v1'
- if os.path.exists(config_path):
- self.model, self.config, self.device = load_model(config_path=config_path, checkpoint_path=ckpt)
-
- def spec2wav(self, mel, **kwargs):
- device = self.device
- with torch.no_grad():
- c = torch.FloatTensor(mel).unsqueeze(0).transpose(2, 1).to(device)
- with utils.Timer('hifigan', print_time=hparams['profile_infer']):
- f0 = kwargs.get('f0')
- if f0 is not None and hparams.get('use_nsf'):
- f0 = torch.FloatTensor(f0[None, :]).to(device)
- y = self.model(c, f0).view(-1)
- else:
- y = self.model(c).view(-1)
- wav_out = y.cpu().numpy()
- if hparams.get('vocoder_denoise_c', 0.0) > 0:
- wav_out = denoise(wav_out, v=hparams['vocoder_denoise_c'])
- return wav_out
-
- # @staticmethod
- # def wav2spec(wav_fn, **kwargs):
- # wav, _ = librosa.core.load(wav_fn, sr=hparams['audio_sample_rate'])
- # wav_torch = torch.FloatTensor(wav)[None, :]
- # mel = mel_spectrogram(wav_torch, hparams).numpy()[0]
- # return wav, mel.T
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/guisupport.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/guisupport.py
deleted file mode 100644
index 4d532d0f4d589efa103890a10fa41d047a223ead..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/guisupport.py
+++ /dev/null
@@ -1,155 +0,0 @@
-# coding: utf-8
-"""
-Support for creating GUI apps and starting event loops.
-
-IPython's GUI integration allows interactive plotting and GUI usage in an IPython
-session. IPython has two different types of GUI integration:
-
-1. The terminal based IPython supports GUI event loops through Python's
- PyOS_InputHook. PyOS_InputHook is a hook that Python calls periodically
- whenever raw_input is waiting for a user to type code. We implement GUI
- support in the terminal by setting PyOS_InputHook to a function that
- iterates the event loop for a short while. It is important to note that
- in this situation, the real GUI event loop is NOT run in the normal
- manner, so you can't use the normal means to detect that it is running.
-2. In the two process IPython kernel/frontend, the GUI event loop is run in
- the kernel. In this case, the event loop is run in the normal manner by
- calling the function or method of the GUI toolkit that starts the event
- loop.
-
-In addition to starting the GUI event loops in one of these two ways, IPython
-will *always* create an appropriate GUI application object when GUi
-integration is enabled.
-
-If you want your GUI apps to run in IPython you need to do two things:
-
-1. Test to see if there is already an existing main application object. If
- there is, you should use it. If there is not an existing application object
- you should create one.
-2. Test to see if the GUI event loop is running. If it is, you should not
- start it. If the event loop is not running you may start it.
-
-This module contains functions for each toolkit that perform these things
-in a consistent manner. Because of how PyOS_InputHook runs the event loop
-you cannot detect if the event loop is running using the traditional calls
-(such as ``wx.GetApp().IsMainLoopRunning()`` in wxPython). If PyOS_InputHook is
-set, these methods will return a false negative. That is, they will say the
-event loop is not running when it actually is. To work around this limitation
-we proposed the following informal protocol:
-
-* Whenever someone starts the event loop, they *must* set the ``_in_event_loop``
- attribute of the main application object to ``True``. This should be done
- regardless of how the event loop is actually run.
-* Whenever someone stops the event loop, they *must* set the ``_in_event_loop``
- attribute of the main application object to ``False``.
-* If you want to see if the event loop is running, you *must* use ``hasattr``
- to see if ``_in_event_loop`` attribute has been set. If it is set, you
- *must* use its value. If it has not been set, you can query the toolkit
- in the normal manner.
-* If you want GUI support and no one else has created an application or
- started the event loop you *must* do this. We don't want projects to
- attempt to defer these things to someone else if they themselves need it.
-
-The functions below implement this logic for each GUI toolkit. If you need
-to create custom application subclasses, you will likely have to modify this
-code for your own purposes. This code can be copied into your own project
-so you don't have to depend on IPython.
-
-"""
-
-# Copyright (c) IPython Development Team.
-# Distributed under the terms of the Modified BSD License.
-
-from IPython.core.getipython import get_ipython
-
-#-----------------------------------------------------------------------------
-# wx
-#-----------------------------------------------------------------------------
-
-def get_app_wx(*args, **kwargs):
-    """Create a new wx app or return an existing one."""
- import wx
- app = wx.GetApp()
- if app is None:
- if 'redirect' not in kwargs:
- kwargs['redirect'] = False
- app = wx.PySimpleApp(*args, **kwargs)
- return app
-
-def is_event_loop_running_wx(app=None):
- """Is the wx event loop running."""
- # New way: check attribute on shell instance
- ip = get_ipython()
- if ip is not None:
- if ip.active_eventloop and ip.active_eventloop == 'wx':
- return True
- # Fall through to checking the application, because Wx has a native way
- # to check if the event loop is running, unlike Qt.
-
- # Old way: check Wx application
- if app is None:
- app = get_app_wx()
- if hasattr(app, '_in_event_loop'):
- return app._in_event_loop
- else:
- return app.IsMainLoopRunning()
-
-def start_event_loop_wx(app=None):
- """Start the wx event loop in a consistent manner."""
- if app is None:
- app = get_app_wx()
- if not is_event_loop_running_wx(app):
- app._in_event_loop = True
- app.MainLoop()
- app._in_event_loop = False
- else:
- app._in_event_loop = True
-
-#-----------------------------------------------------------------------------
-# Qt
-#-----------------------------------------------------------------------------
-
-def get_app_qt4(*args, **kwargs):
- """Create a new Qt app or return an existing one."""
- from IPython.external.qt_for_kernel import QtGui
- app = QtGui.QApplication.instance()
- if app is None:
- if not args:
- args = ([""],)
- app = QtGui.QApplication(*args, **kwargs)
- return app
-
-def is_event_loop_running_qt4(app=None):
- """Is the qt event loop running."""
- # New way: check attribute on shell instance
- ip = get_ipython()
- if ip is not None:
- return ip.active_eventloop and ip.active_eventloop.startswith('qt')
-
- # Old way: check attribute on QApplication singleton
- if app is None:
- app = get_app_qt4([""])
- if hasattr(app, '_in_event_loop'):
- return app._in_event_loop
- else:
-        # Does qt provide another way to detect this?
- return False
-
-def start_event_loop_qt4(app=None):
- """Start the qt event loop in a consistent manner."""
- if app is None:
- app = get_app_qt4([""])
- if not is_event_loop_running_qt4(app):
- app._in_event_loop = True
- app.exec_()
- app._in_event_loop = False
- else:
- app._in_event_loop = True
-
-#-----------------------------------------------------------------------------
-# Tk
-#-----------------------------------------------------------------------------
-
-#-----------------------------------------------------------------------------
-# gtk
-#-----------------------------------------------------------------------------
diff --git a/spaces/TH5314/newbing/src/components/chat-scroll-anchor.tsx b/spaces/TH5314/newbing/src/components/chat-scroll-anchor.tsx
deleted file mode 100644
index ac809f4486a48e134cb69314c3d0dae5e68d614e..0000000000000000000000000000000000000000
--- a/spaces/TH5314/newbing/src/components/chat-scroll-anchor.tsx
+++ /dev/null
@@ -1,29 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { useInView } from 'react-intersection-observer'
-
-import { useAtBottom } from '@/lib/hooks/use-at-bottom'
-
-interface ChatScrollAnchorProps {
- trackVisibility?: boolean
-}
-
-export function ChatScrollAnchor({ trackVisibility }: ChatScrollAnchorProps) {
- const isAtBottom = useAtBottom()
- const { ref, entry, inView } = useInView({
- trackVisibility,
- delay: 100,
- rootMargin: '0px 0px -150px 0px'
- })
-
- React.useEffect(() => {
- if (isAtBottom && trackVisibility && !inView) {
- entry?.target.scrollIntoView({
- block: 'start'
- })
- }
- }, [inView, entry, isAtBottom, trackVisibility])
-
-  return <div ref={ref} className="h-px w-full" />
-}
diff --git a/spaces/TNR-5/chatorO/config.py b/spaces/TNR-5/chatorO/config.py
deleted file mode 100644
index d24686ff888211748ad74614ea4f8c5cf372b4f3..0000000000000000000000000000000000000000
--- a/spaces/TNR-5/chatorO/config.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from dotenv import load_dotenv
-import os
-
-load_dotenv(dotenv_path=".env") # Load environment variables from .env file
-
-# DATABASE_URL = os.getenv("DATABASE_URL")
-# OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
-# OCR_API_KEY = os.getenv("OCR_API_KEY")
-NGROK_AUTH_TOKEN = os.getenv("NGROK_AUTH_TOKEN")
\ No newline at end of file
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/build/wheel.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/build/wheel.py
deleted file mode 100644
index 064811ad11bb07b2b7bc8e30ec6c03f21997d6b2..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/build/wheel.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import logging
-import os
-from typing import Optional
-
-from pip._vendor.pyproject_hooks import BuildBackendHookCaller
-
-from pip._internal.utils.subprocess import runner_with_spinner_message
-
-logger = logging.getLogger(__name__)
-
-
-def build_wheel_pep517(
- name: str,
- backend: BuildBackendHookCaller,
- metadata_directory: str,
- tempd: str,
-) -> Optional[str]:
- """Build one InstallRequirement using the PEP 517 build process.
-
- Returns path to wheel if successfully built. Otherwise, returns None.
- """
- assert metadata_directory is not None
- try:
- logger.debug("Destination directory: %s", tempd)
-
- runner = runner_with_spinner_message(
- f"Building wheel for {name} (pyproject.toml)"
- )
- with backend.subprocess_runner(runner):
- wheel_name = backend.build_wheel(
- tempd,
- metadata_directory=metadata_directory,
- )
- except Exception:
- logger.error("Failed building wheel for %s", name)
- return None
- return os.path.join(tempd, wheel_name)
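A tiny sketch of the calling contract above: the helper returns the absolute path of the built wheel on success and None on failure. `FakeBackend` below is a stand-in for pip's real BuildBackendHookCaller, included only to show that PEP 517 `build_wheel` hooks return the wheel's basename, which the caller then joins onto the temporary build directory.

```python
import os
import tempfile

class FakeBackend:
    def build_wheel(self, wheel_directory, metadata_directory=None):
        name = "demo-1.0-py3-none-any.whl"
        open(os.path.join(wheel_directory, name), "wb").close()
        return name  # hooks return just the filename, not a full path

def build_wheel_sketch(name, backend, metadata_directory, tempd):
    try:
        wheel_name = backend.build_wheel(tempd, metadata_directory=metadata_directory)
    except Exception:
        return None
    return os.path.join(tempd, wheel_name)

with tempfile.TemporaryDirectory() as tmp:
    print(build_wheel_sketch("demo", FakeBackend(), metadata_directory=tmp, tempd=tmp))
```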
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/contrib/_securetransport/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/contrib/_securetransport/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/typing_extensions.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/typing_extensions.py
deleted file mode 100644
index 9f1c7aa31e20a7d0ef2e6877ea325c068d50e406..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/typing_extensions.py
+++ /dev/null
@@ -1,2296 +0,0 @@
-import abc
-import collections
-import collections.abc
-import operator
-import sys
-import typing
-
-# After PEP 560, internal typing API was substantially reworked.
-# This is especially important for Protocol class which uses internal APIs
-# quite extensively.
-PEP_560 = sys.version_info[:3] >= (3, 7, 0)
-
-if PEP_560:
- GenericMeta = type
-else:
- # 3.6
- from typing import GenericMeta, _type_vars # noqa
-
-# The two functions below are copies of typing internal helpers.
-# They are needed by _ProtocolMeta
-
-
-def _no_slots_copy(dct):
- dict_copy = dict(dct)
- if '__slots__' in dict_copy:
- for slot in dict_copy['__slots__']:
- dict_copy.pop(slot, None)
- return dict_copy
-
-
-def _check_generic(cls, parameters):
- if not cls.__parameters__:
- raise TypeError(f"{cls} is not a generic class")
- alen = len(parameters)
- elen = len(cls.__parameters__)
- if alen != elen:
- raise TypeError(f"Too {'many' if alen > elen else 'few'} arguments for {cls};"
- f" actual {alen}, expected {elen}")
-
-
-# Please keep __all__ alphabetized within each category.
-__all__ = [
- # Super-special typing primitives.
- 'ClassVar',
- 'Concatenate',
- 'Final',
- 'ParamSpec',
- 'Self',
- 'Type',
-
- # ABCs (from collections.abc).
- 'Awaitable',
- 'AsyncIterator',
- 'AsyncIterable',
- 'Coroutine',
- 'AsyncGenerator',
- 'AsyncContextManager',
- 'ChainMap',
-
- # Concrete collection types.
- 'ContextManager',
- 'Counter',
- 'Deque',
- 'DefaultDict',
- 'OrderedDict',
- 'TypedDict',
-
- # Structural checks, a.k.a. protocols.
- 'SupportsIndex',
-
- # One-off things.
- 'Annotated',
- 'final',
- 'IntVar',
- 'Literal',
- 'NewType',
- 'overload',
- 'Protocol',
- 'runtime',
- 'runtime_checkable',
- 'Text',
- 'TypeAlias',
- 'TypeGuard',
- 'TYPE_CHECKING',
-]
-
-if PEP_560:
- __all__.extend(["get_args", "get_origin", "get_type_hints"])
-
-# 3.6.2+
-if hasattr(typing, 'NoReturn'):
- NoReturn = typing.NoReturn
-# 3.6.0-3.6.1
-else:
- class _NoReturn(typing._FinalTypingBase, _root=True):
- """Special type indicating functions that never return.
- Example::
-
- from typing import NoReturn
-
- def stop() -> NoReturn:
- raise Exception('no way')
-
- This type is invalid in other positions, e.g., ``List[NoReturn]``
- will fail in static type checkers.
- """
- __slots__ = ()
-
- def __instancecheck__(self, obj):
- raise TypeError("NoReturn cannot be used with isinstance().")
-
- def __subclasscheck__(self, cls):
- raise TypeError("NoReturn cannot be used with issubclass().")
-
- NoReturn = _NoReturn(_root=True)
-
-# Some unconstrained type variables. These are used by the container types.
-# (These are not for export.)
-T = typing.TypeVar('T') # Any type.
-KT = typing.TypeVar('KT') # Key type.
-VT = typing.TypeVar('VT') # Value type.
-T_co = typing.TypeVar('T_co', covariant=True) # Any type covariant containers.
-T_contra = typing.TypeVar('T_contra', contravariant=True) # Ditto contravariant.
-
-ClassVar = typing.ClassVar
-
-# On older versions of typing there is an internal class named "Final".
-# 3.8+
-if hasattr(typing, 'Final') and sys.version_info[:2] >= (3, 7):
- Final = typing.Final
-# 3.7
-elif sys.version_info[:2] >= (3, 7):
- class _FinalForm(typing._SpecialForm, _root=True):
-
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- def __getitem__(self, parameters):
- item = typing._type_check(parameters,
- f'{self._name} accepts only single type')
- return typing._GenericAlias(self, (item,))
-
- Final = _FinalForm('Final',
- doc="""A special typing construct to indicate that a name
- cannot be re-assigned or overridden in a subclass.
- For example:
-
- MAX_SIZE: Final = 9000
- MAX_SIZE += 1 # Error reported by type checker
-
- class Connection:
- TIMEOUT: Final[int] = 10
- class FastConnector(Connection):
- TIMEOUT = 1 # Error reported by type checker
-
- There is no runtime checking of these properties.""")
-# 3.6
-else:
- class _Final(typing._FinalTypingBase, _root=True):
- """A special typing construct to indicate that a name
- cannot be re-assigned or overridden in a subclass.
- For example:
-
- MAX_SIZE: Final = 9000
- MAX_SIZE += 1 # Error reported by type checker
-
- class Connection:
- TIMEOUT: Final[int] = 10
- class FastConnector(Connection):
- TIMEOUT = 1 # Error reported by type checker
-
- There is no runtime checking of these properties.
- """
-
- __slots__ = ('__type__',)
-
- def __init__(self, tp=None, **kwds):
- self.__type__ = tp
-
- def __getitem__(self, item):
- cls = type(self)
- if self.__type__ is None:
- return cls(typing._type_check(item,
- f'{cls.__name__[1:]} accepts only single type.'),
- _root=True)
- raise TypeError(f'{cls.__name__[1:]} cannot be further subscripted')
-
- def _eval_type(self, globalns, localns):
- new_tp = typing._eval_type(self.__type__, globalns, localns)
- if new_tp == self.__type__:
- return self
- return type(self)(new_tp, _root=True)
-
- def __repr__(self):
- r = super().__repr__()
- if self.__type__ is not None:
- r += f'[{typing._type_repr(self.__type__)}]'
- return r
-
- def __hash__(self):
- return hash((type(self).__name__, self.__type__))
-
- def __eq__(self, other):
- if not isinstance(other, _Final):
- return NotImplemented
- if self.__type__ is not None:
- return self.__type__ == other.__type__
- return self is other
-
- Final = _Final(_root=True)
-
-
-# 3.8+
-if hasattr(typing, 'final'):
- final = typing.final
-# 3.6-3.7
-else:
- def final(f):
- """This decorator can be used to indicate to type checkers that
- the decorated method cannot be overridden, and decorated class
- cannot be subclassed. For example:
-
- class Base:
- @final
- def done(self) -> None:
- ...
- class Sub(Base):
- def done(self) -> None: # Error reported by type checker
- ...
- @final
- class Leaf:
- ...
- class Other(Leaf): # Error reported by type checker
- ...
-
- There is no runtime checking of these properties.
- """
- return f
-
-
-def IntVar(name):
- return typing.TypeVar(name)
-
-
-# 3.8+:
-if hasattr(typing, 'Literal'):
- Literal = typing.Literal
-# 3.7:
-elif sys.version_info[:2] >= (3, 7):
- class _LiteralForm(typing._SpecialForm, _root=True):
-
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- def __getitem__(self, parameters):
- return typing._GenericAlias(self, parameters)
-
- Literal = _LiteralForm('Literal',
- doc="""A type that can be used to indicate to type checkers
- that the corresponding value has a value literally equivalent
- to the provided parameter. For example:
-
- var: Literal[4] = 4
-
- The type checker understands that 'var' is literally equal to
- the value 4 and no other value.
-
- Literal[...] cannot be subclassed. There is no runtime
- checking verifying that the parameter is actually a value
- instead of a type.""")
-# 3.6:
-else:
- class _Literal(typing._FinalTypingBase, _root=True):
- """A type that can be used to indicate to type checkers that the
- corresponding value has a value literally equivalent to the
- provided parameter. For example:
-
- var: Literal[4] = 4
-
- The type checker understands that 'var' is literally equal to the
- value 4 and no other value.
-
- Literal[...] cannot be subclassed. There is no runtime checking
- verifying that the parameter is actually a value instead of a type.
- """
-
- __slots__ = ('__values__',)
-
- def __init__(self, values=None, **kwds):
- self.__values__ = values
-
- def __getitem__(self, values):
- cls = type(self)
- if self.__values__ is None:
- if not isinstance(values, tuple):
- values = (values,)
- return cls(values, _root=True)
- raise TypeError(f'{cls.__name__[1:]} cannot be further subscripted')
-
- def _eval_type(self, globalns, localns):
- return self
-
- def __repr__(self):
- r = super().__repr__()
- if self.__values__ is not None:
- r += f'[{", ".join(map(typing._type_repr, self.__values__))}]'
- return r
-
- def __hash__(self):
- return hash((type(self).__name__, self.__values__))
-
- def __eq__(self, other):
- if not isinstance(other, _Literal):
- return NotImplemented
- if self.__values__ is not None:
- return self.__values__ == other.__values__
- return self is other
-
- Literal = _Literal(_root=True)
-
-
-_overload_dummy = typing._overload_dummy # noqa
-overload = typing.overload
-
-
-# This is not a real generic class. Don't use outside annotations.
-Type = typing.Type
-
-# Various ABCs mimicking those in collections.abc.
-# A few are simply re-exported for completeness.
-
-
-class _ExtensionsGenericMeta(GenericMeta):
- def __subclasscheck__(self, subclass):
- """This mimics a more modern GenericMeta.__subclasscheck__() logic
- (that does not have problems with recursion) to work around interactions
- between collections, typing, and typing_extensions on older
- versions of Python, see https://github.com/python/typing/issues/501.
- """
- if self.__origin__ is not None:
- if sys._getframe(1).f_globals['__name__'] not in ['abc', 'functools']:
- raise TypeError("Parameterized generics cannot be used with class "
- "or instance checks")
- return False
- if not self.__extra__:
- return super().__subclasscheck__(subclass)
- res = self.__extra__.__subclasshook__(subclass)
- if res is not NotImplemented:
- return res
- if self.__extra__ in subclass.__mro__:
- return True
- for scls in self.__extra__.__subclasses__():
- if isinstance(scls, GenericMeta):
- continue
- if issubclass(subclass, scls):
- return True
- return False
-
-
-Awaitable = typing.Awaitable
-Coroutine = typing.Coroutine
-AsyncIterable = typing.AsyncIterable
-AsyncIterator = typing.AsyncIterator
-
-# 3.6.1+
-if hasattr(typing, 'Deque'):
- Deque = typing.Deque
-# 3.6.0
-else:
- class Deque(collections.deque, typing.MutableSequence[T],
- metaclass=_ExtensionsGenericMeta,
- extra=collections.deque):
- __slots__ = ()
-
- def __new__(cls, *args, **kwds):
- if cls._gorg is Deque:
- return collections.deque(*args, **kwds)
- return typing._generic_new(collections.deque, cls, *args, **kwds)
-
-ContextManager = typing.ContextManager
-# 3.6.2+
-if hasattr(typing, 'AsyncContextManager'):
- AsyncContextManager = typing.AsyncContextManager
-# 3.6.0-3.6.1
-else:
- from _collections_abc import _check_methods as _check_methods_in_mro # noqa
-
- class AsyncContextManager(typing.Generic[T_co]):
- __slots__ = ()
-
- async def __aenter__(self):
- return self
-
- @abc.abstractmethod
- async def __aexit__(self, exc_type, exc_value, traceback):
- return None
-
- @classmethod
- def __subclasshook__(cls, C):
- if cls is AsyncContextManager:
- return _check_methods_in_mro(C, "__aenter__", "__aexit__")
- return NotImplemented
-
-DefaultDict = typing.DefaultDict
-
-# 3.7.2+
-if hasattr(typing, 'OrderedDict'):
- OrderedDict = typing.OrderedDict
-# 3.7.0-3.7.2
-elif (3, 7, 0) <= sys.version_info[:3] < (3, 7, 2):
- OrderedDict = typing._alias(collections.OrderedDict, (KT, VT))
-# 3.6
-else:
- class OrderedDict(collections.OrderedDict, typing.MutableMapping[KT, VT],
- metaclass=_ExtensionsGenericMeta,
- extra=collections.OrderedDict):
-
- __slots__ = ()
-
- def __new__(cls, *args, **kwds):
- if cls._gorg is OrderedDict:
- return collections.OrderedDict(*args, **kwds)
- return typing._generic_new(collections.OrderedDict, cls, *args, **kwds)
-
-# 3.6.2+
-if hasattr(typing, 'Counter'):
- Counter = typing.Counter
-# 3.6.0-3.6.1
-else:
- class Counter(collections.Counter,
- typing.Dict[T, int],
- metaclass=_ExtensionsGenericMeta, extra=collections.Counter):
-
- __slots__ = ()
-
- def __new__(cls, *args, **kwds):
- if cls._gorg is Counter:
- return collections.Counter(*args, **kwds)
- return typing._generic_new(collections.Counter, cls, *args, **kwds)
-
-# 3.6.1+
-if hasattr(typing, 'ChainMap'):
- ChainMap = typing.ChainMap
-elif hasattr(collections, 'ChainMap'):
- class ChainMap(collections.ChainMap, typing.MutableMapping[KT, VT],
- metaclass=_ExtensionsGenericMeta,
- extra=collections.ChainMap):
-
- __slots__ = ()
-
- def __new__(cls, *args, **kwds):
- if cls._gorg is ChainMap:
- return collections.ChainMap(*args, **kwds)
- return typing._generic_new(collections.ChainMap, cls, *args, **kwds)
-
-# 3.6.1+
-if hasattr(typing, 'AsyncGenerator'):
- AsyncGenerator = typing.AsyncGenerator
-# 3.6.0
-else:
- class AsyncGenerator(AsyncIterator[T_co], typing.Generic[T_co, T_contra],
- metaclass=_ExtensionsGenericMeta,
- extra=collections.abc.AsyncGenerator):
- __slots__ = ()
-
-NewType = typing.NewType
-Text = typing.Text
-TYPE_CHECKING = typing.TYPE_CHECKING
-
-
-def _gorg(cls):
- """This function exists for compatibility with old typing versions."""
- assert isinstance(cls, GenericMeta)
- if hasattr(cls, '_gorg'):
- return cls._gorg
- while cls.__origin__ is not None:
- cls = cls.__origin__
- return cls
-
-
-_PROTO_WHITELIST = ['Callable', 'Awaitable',
- 'Iterable', 'Iterator', 'AsyncIterable', 'AsyncIterator',
- 'Hashable', 'Sized', 'Container', 'Collection', 'Reversible',
- 'ContextManager', 'AsyncContextManager']
-
-
-def _get_protocol_attrs(cls):
- attrs = set()
- for base in cls.__mro__[:-1]: # without object
- if base.__name__ in ('Protocol', 'Generic'):
- continue
- annotations = getattr(base, '__annotations__', {})
- for attr in list(base.__dict__.keys()) + list(annotations.keys()):
- if (not attr.startswith('_abc_') and attr not in (
- '__abstractmethods__', '__annotations__', '__weakref__',
- '_is_protocol', '_is_runtime_protocol', '__dict__',
- '__args__', '__slots__',
- '__next_in_mro__', '__parameters__', '__origin__',
- '__orig_bases__', '__extra__', '__tree_hash__',
- '__doc__', '__subclasshook__', '__init__', '__new__',
- '__module__', '_MutableMapping__marker', '_gorg')):
- attrs.add(attr)
- return attrs
-
-
-def _is_callable_members_only(cls):
- return all(callable(getattr(cls, attr, None)) for attr in _get_protocol_attrs(cls))
-
-
-# 3.8+
-if hasattr(typing, 'Protocol'):
- Protocol = typing.Protocol
-# 3.7
-elif PEP_560:
- from typing import _collect_type_vars # noqa
-
- def _no_init(self, *args, **kwargs):
- if type(self)._is_protocol:
- raise TypeError('Protocols cannot be instantiated')
-
- class _ProtocolMeta(abc.ABCMeta):
- # This metaclass is a bit unfortunate and exists only because of the lack
- # of __instancehook__.
- def __instancecheck__(cls, instance):
- # We need this method for situations where attributes are
- # assigned in __init__.
- if ((not getattr(cls, '_is_protocol', False) or
- _is_callable_members_only(cls)) and
- issubclass(instance.__class__, cls)):
- return True
- if cls._is_protocol:
- if all(hasattr(instance, attr) and
- (not callable(getattr(cls, attr, None)) or
- getattr(instance, attr) is not None)
- for attr in _get_protocol_attrs(cls)):
- return True
- return super().__instancecheck__(instance)
-
- class Protocol(metaclass=_ProtocolMeta):
- # There is quite a lot of overlapping code with typing.Generic.
- # Unfortunately it is hard to avoid this while these live in two different
- # modules. The duplicated code will be removed when Protocol is moved to typing.
- """Base class for protocol classes. Protocol classes are defined as::
-
- class Proto(Protocol):
- def meth(self) -> int:
- ...
-
- Such classes are primarily used with static type checkers that recognize
- structural subtyping (static duck-typing), for example::
-
- class C:
- def meth(self) -> int:
- return 0
-
- def func(x: Proto) -> int:
- return x.meth()
-
- func(C()) # Passes static type check
-
- See PEP 544 for details. Protocol classes decorated with
- @typing_extensions.runtime act as simple-minded runtime protocol that checks
-        @typing_extensions.runtime act as simple-minded runtime protocols that check
-
- Protocol classes can be generic, they are defined as::
-
- class GenProto(Protocol[T]):
- def meth(self) -> T:
- ...
- """
- __slots__ = ()
- _is_protocol = True
-
- def __new__(cls, *args, **kwds):
- if cls is Protocol:
- raise TypeError("Type Protocol cannot be instantiated; "
- "it can only be used as a base class")
- return super().__new__(cls)
-
- @typing._tp_cache
- def __class_getitem__(cls, params):
- if not isinstance(params, tuple):
- params = (params,)
- if not params and cls is not typing.Tuple:
- raise TypeError(
- f"Parameter list to {cls.__qualname__}[...] cannot be empty")
- msg = "Parameters to generic types must be types."
- params = tuple(typing._type_check(p, msg) for p in params) # noqa
- if cls is Protocol:
- # Generic can only be subscripted with unique type variables.
- if not all(isinstance(p, typing.TypeVar) for p in params):
- i = 0
- while isinstance(params[i], typing.TypeVar):
- i += 1
- raise TypeError(
- "Parameters to Protocol[...] must all be type variables."
- f" Parameter {i + 1} is {params[i]}")
- if len(set(params)) != len(params):
- raise TypeError(
- "Parameters to Protocol[...] must all be unique")
- else:
- # Subscripting a regular Generic subclass.
- _check_generic(cls, params)
- return typing._GenericAlias(cls, params)
-
- def __init_subclass__(cls, *args, **kwargs):
- tvars = []
- if '__orig_bases__' in cls.__dict__:
- error = typing.Generic in cls.__orig_bases__
- else:
- error = typing.Generic in cls.__bases__
- if error:
- raise TypeError("Cannot inherit from plain Generic")
- if '__orig_bases__' in cls.__dict__:
- tvars = _collect_type_vars(cls.__orig_bases__)
- # Look for Generic[T1, ..., Tn] or Protocol[T1, ..., Tn].
- # If found, tvars must be a subset of it.
- # If not found, tvars is it.
- # Also check for and reject plain Generic,
- # and reject multiple Generic[...] and/or Protocol[...].
- gvars = None
- for base in cls.__orig_bases__:
- if (isinstance(base, typing._GenericAlias) and
- base.__origin__ in (typing.Generic, Protocol)):
- # for error messages
- the_base = base.__origin__.__name__
- if gvars is not None:
- raise TypeError(
- "Cannot inherit from Generic[...]"
- " and/or Protocol[...] multiple types.")
- gvars = base.__parameters__
- if gvars is None:
- gvars = tvars
- else:
- tvarset = set(tvars)
- gvarset = set(gvars)
- if not tvarset <= gvarset:
- s_vars = ', '.join(str(t) for t in tvars if t not in gvarset)
- s_args = ', '.join(str(g) for g in gvars)
- raise TypeError(f"Some type variables ({s_vars}) are"
- f" not listed in {the_base}[{s_args}]")
- tvars = gvars
- cls.__parameters__ = tuple(tvars)
-
- # Determine if this is a protocol or a concrete subclass.
- if not cls.__dict__.get('_is_protocol', None):
- cls._is_protocol = any(b is Protocol for b in cls.__bases__)
-
- # Set (or override) the protocol subclass hook.
- def _proto_hook(other):
- if not cls.__dict__.get('_is_protocol', None):
- return NotImplemented
- if not getattr(cls, '_is_runtime_protocol', False):
- if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']:
- return NotImplemented
- raise TypeError("Instance and class checks can only be used with"
- " @runtime protocols")
- if not _is_callable_members_only(cls):
- if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']:
- return NotImplemented
- raise TypeError("Protocols with non-method members"
- " don't support issubclass()")
- if not isinstance(other, type):
- # Same error as for issubclass(1, int)
- raise TypeError('issubclass() arg 1 must be a class')
- for attr in _get_protocol_attrs(cls):
- for base in other.__mro__:
- if attr in base.__dict__:
- if base.__dict__[attr] is None:
- return NotImplemented
- break
- annotations = getattr(base, '__annotations__', {})
- if (isinstance(annotations, typing.Mapping) and
- attr in annotations and
- isinstance(other, _ProtocolMeta) and
- other._is_protocol):
- break
- else:
- return NotImplemented
- return True
- if '__subclasshook__' not in cls.__dict__:
- cls.__subclasshook__ = _proto_hook
-
- # We have nothing more to do for non-protocols.
- if not cls._is_protocol:
- return
-
- # Check consistency of bases.
- for base in cls.__bases__:
- if not (base in (object, typing.Generic) or
- base.__module__ == 'collections.abc' and
- base.__name__ in _PROTO_WHITELIST or
- isinstance(base, _ProtocolMeta) and base._is_protocol):
- raise TypeError('Protocols can only inherit from other'
- f' protocols, got {repr(base)}')
- cls.__init__ = _no_init
-# 3.6
-else:
- from typing import _next_in_mro, _type_check # noqa
-
- def _no_init(self, *args, **kwargs):
- if type(self)._is_protocol:
- raise TypeError('Protocols cannot be instantiated')
-
- class _ProtocolMeta(GenericMeta):
- """Internal metaclass for Protocol.
-
- This exists so Protocol classes can be generic without deriving
- from Generic.
- """
- def __new__(cls, name, bases, namespace,
- tvars=None, args=None, origin=None, extra=None, orig_bases=None):
- # This is just a version copied from GenericMeta.__new__ that
- # includes "Protocol" special treatment. (Comments removed for brevity.)
- assert extra is None # Protocols should not have extra
- if tvars is not None:
- assert origin is not None
- assert all(isinstance(t, typing.TypeVar) for t in tvars), tvars
- else:
- tvars = _type_vars(bases)
- gvars = None
- for base in bases:
- if base is typing.Generic:
- raise TypeError("Cannot inherit from plain Generic")
- if (isinstance(base, GenericMeta) and
- base.__origin__ in (typing.Generic, Protocol)):
- if gvars is not None:
- raise TypeError(
- "Cannot inherit from Generic[...] or"
- " Protocol[...] multiple times.")
- gvars = base.__parameters__
- if gvars is None:
- gvars = tvars
- else:
- tvarset = set(tvars)
- gvarset = set(gvars)
- if not tvarset <= gvarset:
- s_vars = ", ".join(str(t) for t in tvars if t not in gvarset)
- s_args = ", ".join(str(g) for g in gvars)
- cls_name = "Generic" if any(b.__origin__ is typing.Generic
- for b in bases) else "Protocol"
- raise TypeError(f"Some type variables ({s_vars}) are"
- f" not listed in {cls_name}[{s_args}]")
- tvars = gvars
-
- initial_bases = bases
- if (extra is not None and type(extra) is abc.ABCMeta and
- extra not in bases):
- bases = (extra,) + bases
- bases = tuple(_gorg(b) if isinstance(b, GenericMeta) else b
- for b in bases)
- if any(isinstance(b, GenericMeta) and b is not typing.Generic for b in bases):
- bases = tuple(b for b in bases if b is not typing.Generic)
- namespace.update({'__origin__': origin, '__extra__': extra})
- self = super(GenericMeta, cls).__new__(cls, name, bases, namespace,
- _root=True)
- super(GenericMeta, self).__setattr__('_gorg',
- self if not origin else
- _gorg(origin))
- self.__parameters__ = tvars
- self.__args__ = tuple(... if a is typing._TypingEllipsis else
- () if a is typing._TypingEmpty else
- a for a in args) if args else None
- self.__next_in_mro__ = _next_in_mro(self)
- if orig_bases is None:
- self.__orig_bases__ = initial_bases
- elif origin is not None:
- self._abc_registry = origin._abc_registry
- self._abc_cache = origin._abc_cache
- if hasattr(self, '_subs_tree'):
- self.__tree_hash__ = (hash(self._subs_tree()) if origin else
- super(GenericMeta, self).__hash__())
- return self
-
- def __init__(cls, *args, **kwargs):
- super().__init__(*args, **kwargs)
- if not cls.__dict__.get('_is_protocol', None):
- cls._is_protocol = any(b is Protocol or
- isinstance(b, _ProtocolMeta) and
- b.__origin__ is Protocol
- for b in cls.__bases__)
- if cls._is_protocol:
- for base in cls.__mro__[1:]:
- if not (base in (object, typing.Generic) or
- base.__module__ == 'collections.abc' and
- base.__name__ in _PROTO_WHITELIST or
- isinstance(base, typing.TypingMeta) and base._is_protocol or
- isinstance(base, GenericMeta) and
- base.__origin__ is typing.Generic):
- raise TypeError(f'Protocols can only inherit from other'
- f' protocols, got {repr(base)}')
-
- cls.__init__ = _no_init
-
- def _proto_hook(other):
- if not cls.__dict__.get('_is_protocol', None):
- return NotImplemented
- if not isinstance(other, type):
- # Same error as for issubclass(1, int)
- raise TypeError('issubclass() arg 1 must be a class')
- for attr in _get_protocol_attrs(cls):
- for base in other.__mro__:
- if attr in base.__dict__:
- if base.__dict__[attr] is None:
- return NotImplemented
- break
- annotations = getattr(base, '__annotations__', {})
- if (isinstance(annotations, typing.Mapping) and
- attr in annotations and
- isinstance(other, _ProtocolMeta) and
- other._is_protocol):
- break
- else:
- return NotImplemented
- return True
- if '__subclasshook__' not in cls.__dict__:
- cls.__subclasshook__ = _proto_hook
-
- def __instancecheck__(self, instance):
- # We need this method for situations where attributes are
- # assigned in __init__.
- if ((not getattr(self, '_is_protocol', False) or
- _is_callable_members_only(self)) and
- issubclass(instance.__class__, self)):
- return True
- if self._is_protocol:
- if all(hasattr(instance, attr) and
- (not callable(getattr(self, attr, None)) or
- getattr(instance, attr) is not None)
- for attr in _get_protocol_attrs(self)):
- return True
- return super(GenericMeta, self).__instancecheck__(instance)
-
- def __subclasscheck__(self, cls):
- if self.__origin__ is not None:
- if sys._getframe(1).f_globals['__name__'] not in ['abc', 'functools']:
- raise TypeError("Parameterized generics cannot be used with class "
- "or instance checks")
- return False
- if (self.__dict__.get('_is_protocol', None) and
- not self.__dict__.get('_is_runtime_protocol', None)):
- if sys._getframe(1).f_globals['__name__'] in ['abc',
- 'functools',
- 'typing']:
- return False
- raise TypeError("Instance and class checks can only be used with"
- " @runtime protocols")
- if (self.__dict__.get('_is_runtime_protocol', None) and
- not _is_callable_members_only(self)):
- if sys._getframe(1).f_globals['__name__'] in ['abc',
- 'functools',
- 'typing']:
- return super(GenericMeta, self).__subclasscheck__(cls)
- raise TypeError("Protocols with non-method members"
- " don't support issubclass()")
- return super(GenericMeta, self).__subclasscheck__(cls)
-
- @typing._tp_cache
- def __getitem__(self, params):
- # We also need to copy this from GenericMeta.__getitem__ to get
- # special treatment of "Protocol". (Comments removed for brevity.)
- if not isinstance(params, tuple):
- params = (params,)
- if not params and _gorg(self) is not typing.Tuple:
- raise TypeError(
- f"Parameter list to {self.__qualname__}[...] cannot be empty")
- msg = "Parameters to generic types must be types."
- params = tuple(_type_check(p, msg) for p in params)
- if self in (typing.Generic, Protocol):
- if not all(isinstance(p, typing.TypeVar) for p in params):
- raise TypeError(
- f"Parameters to {repr(self)}[...] must all be type variables")
- if len(set(params)) != len(params):
- raise TypeError(
- f"Parameters to {repr(self)}[...] must all be unique")
- tvars = params
- args = params
- elif self in (typing.Tuple, typing.Callable):
- tvars = _type_vars(params)
- args = params
- elif self.__origin__ in (typing.Generic, Protocol):
- raise TypeError(f"Cannot subscript already-subscripted {repr(self)}")
- else:
- _check_generic(self, params)
- tvars = _type_vars(params)
- args = params
-
- prepend = (self,) if self.__origin__ is None else ()
- return self.__class__(self.__name__,
- prepend + self.__bases__,
- _no_slots_copy(self.__dict__),
- tvars=tvars,
- args=args,
- origin=self,
- extra=self.__extra__,
- orig_bases=self.__orig_bases__)
-
- class Protocol(metaclass=_ProtocolMeta):
- """Base class for protocol classes. Protocol classes are defined as::
-
- class Proto(Protocol):
- def meth(self) -> int:
- ...
-
- Such classes are primarily used with static type checkers that recognize
- structural subtyping (static duck-typing), for example::
-
- class C:
- def meth(self) -> int:
- return 0
-
- def func(x: Proto) -> int:
- return x.meth()
-
- func(C()) # Passes static type check
-
- See PEP 544 for details. Protocol classes decorated with
-        @typing_extensions.runtime act as simple-minded runtime protocols that check
- only the presence of given attributes, ignoring their type signatures.
-
- Protocol classes can be generic, they are defined as::
-
- class GenProto(Protocol[T]):
- def meth(self) -> T:
- ...
- """
- __slots__ = ()
- _is_protocol = True
-
- def __new__(cls, *args, **kwds):
- if _gorg(cls) is Protocol:
- raise TypeError("Type Protocol cannot be instantiated; "
- "it can be used only as a base class")
- return typing._generic_new(cls.__next_in_mro__, cls, *args, **kwds)
-
-
-# 3.8+
-if hasattr(typing, 'runtime_checkable'):
- runtime_checkable = typing.runtime_checkable
-# 3.6-3.7
-else:
- def runtime_checkable(cls):
- """Mark a protocol class as a runtime protocol, so that it
- can be used with isinstance() and issubclass(). Raise TypeError
- if applied to a non-protocol class.
-
- This allows a simple-minded structural check very similar to the
- one-offs in collections.abc such as Hashable.
- """
- if not isinstance(cls, _ProtocolMeta) or not cls._is_protocol:
- raise TypeError('@runtime_checkable can be only applied to protocol classes,'
- f' got {cls!r}')
- cls._is_runtime_protocol = True
- return cls
-
-
-# Exists for backwards compatibility.
-runtime = runtime_checkable
-
-
-# 3.8+
-if hasattr(typing, 'SupportsIndex'):
- SupportsIndex = typing.SupportsIndex
-# 3.6-3.7
-else:
- @runtime_checkable
- class SupportsIndex(Protocol):
- __slots__ = ()
-
- @abc.abstractmethod
- def __index__(self) -> int:
- pass
-
-
-if sys.version_info >= (3, 9, 2):
- # The standard library TypedDict in Python 3.8 does not store runtime information
- # about which (if any) keys are optional. See https://bugs.python.org/issue38834
- # The standard library TypedDict in Python 3.9.0/1 does not honour the "total"
- # keyword with old-style TypedDict(). See https://bugs.python.org/issue42059
- TypedDict = typing.TypedDict
-else:
- def _check_fails(cls, other):
- try:
- if sys._getframe(1).f_globals['__name__'] not in ['abc',
- 'functools',
- 'typing']:
- # Typed dicts are only for static structural subtyping.
- raise TypeError('TypedDict does not support instance and class checks')
- except (AttributeError, ValueError):
- pass
- return False
-
- def _dict_new(*args, **kwargs):
- if not args:
- raise TypeError('TypedDict.__new__(): not enough arguments')
- _, args = args[0], args[1:] # allow the "cls" keyword be passed
- return dict(*args, **kwargs)
-
- _dict_new.__text_signature__ = '($cls, _typename, _fields=None, /, **kwargs)'
-
- def _typeddict_new(*args, total=True, **kwargs):
- if not args:
- raise TypeError('TypedDict.__new__(): not enough arguments')
- _, args = args[0], args[1:] # allow the "cls" keyword be passed
- if args:
- typename, args = args[0], args[1:] # allow the "_typename" keyword be passed
- elif '_typename' in kwargs:
- typename = kwargs.pop('_typename')
- import warnings
- warnings.warn("Passing '_typename' as keyword argument is deprecated",
- DeprecationWarning, stacklevel=2)
- else:
- raise TypeError("TypedDict.__new__() missing 1 required positional "
- "argument: '_typename'")
- if args:
- try:
- fields, = args # allow the "_fields" keyword be passed
- except ValueError:
- raise TypeError('TypedDict.__new__() takes from 2 to 3 '
- f'positional arguments but {len(args) + 2} '
- 'were given')
- elif '_fields' in kwargs and len(kwargs) == 1:
- fields = kwargs.pop('_fields')
- import warnings
- warnings.warn("Passing '_fields' as keyword argument is deprecated",
- DeprecationWarning, stacklevel=2)
- else:
- fields = None
-
- if fields is None:
- fields = kwargs
- elif kwargs:
- raise TypeError("TypedDict takes either a dict or keyword arguments,"
- " but not both")
-
- ns = {'__annotations__': dict(fields)}
- try:
- # Setting correct module is necessary to make typed dict classes pickleable.
- ns['__module__'] = sys._getframe(1).f_globals.get('__name__', '__main__')
- except (AttributeError, ValueError):
- pass
-
- return _TypedDictMeta(typename, (), ns, total=total)
-
- _typeddict_new.__text_signature__ = ('($cls, _typename, _fields=None,'
- ' /, *, total=True, **kwargs)')
-
- class _TypedDictMeta(type):
- def __init__(cls, name, bases, ns, total=True):
- super().__init__(name, bases, ns)
-
- def __new__(cls, name, bases, ns, total=True):
- # Create new typed dict class object.
- # This method is called directly when TypedDict is subclassed,
- # or via _typeddict_new when TypedDict is instantiated. This way
- # TypedDict supports all three syntaxes described in its docstring.
- # Subclasses and instances of TypedDict return actual dictionaries
- # via _dict_new.
- ns['__new__'] = _typeddict_new if name == 'TypedDict' else _dict_new
- tp_dict = super().__new__(cls, name, (dict,), ns)
-
- annotations = {}
- own_annotations = ns.get('__annotations__', {})
- own_annotation_keys = set(own_annotations.keys())
- msg = "TypedDict('Name', {f0: t0, f1: t1, ...}); each t must be a type"
- own_annotations = {
- n: typing._type_check(tp, msg) for n, tp in own_annotations.items()
- }
- required_keys = set()
- optional_keys = set()
-
- for base in bases:
- annotations.update(base.__dict__.get('__annotations__', {}))
- required_keys.update(base.__dict__.get('__required_keys__', ()))
- optional_keys.update(base.__dict__.get('__optional_keys__', ()))
-
- annotations.update(own_annotations)
- if total:
- required_keys.update(own_annotation_keys)
- else:
- optional_keys.update(own_annotation_keys)
-
- tp_dict.__annotations__ = annotations
- tp_dict.__required_keys__ = frozenset(required_keys)
- tp_dict.__optional_keys__ = frozenset(optional_keys)
- if not hasattr(tp_dict, '__total__'):
- tp_dict.__total__ = total
- return tp_dict
-
- __instancecheck__ = __subclasscheck__ = _check_fails
-
- TypedDict = _TypedDictMeta('TypedDict', (dict,), {})
- TypedDict.__module__ = __name__
- TypedDict.__doc__ = \
- """A simple typed name space. At runtime it is equivalent to a plain dict.
-
- TypedDict creates a dictionary type that expects all of its
- instances to have a certain set of keys, with each key
- associated with a value of a consistent type. This expectation
- is not checked at runtime but is only enforced by type checkers.
- Usage::
-
- class Point2D(TypedDict):
- x: int
- y: int
- label: str
-
- a: Point2D = {'x': 1, 'y': 2, 'label': 'good'} # OK
- b: Point2D = {'z': 3, 'label': 'bad'} # Fails type check
-
- assert Point2D(x=1, y=2, label='first') == dict(x=1, y=2, label='first')
-
- The type info can be accessed via the Point2D.__annotations__ dict, and
- the Point2D.__required_keys__ and Point2D.__optional_keys__ frozensets.
- TypedDict supports two additional equivalent forms::
-
- Point2D = TypedDict('Point2D', x=int, y=int, label=str)
- Point2D = TypedDict('Point2D', {'x': int, 'y': int, 'label': str})
-
- The class syntax is only supported in Python 3.6+, while two other
- syntax forms work for Python 2.7 and 3.2+
- """
-
-
-# Python 3.9+ has PEP 593 (Annotated and modified get_type_hints)
-if hasattr(typing, 'Annotated'):
- Annotated = typing.Annotated
- get_type_hints = typing.get_type_hints
- # Not exported and not a public API, but needed for get_origin() and get_args()
- # to work.
- _AnnotatedAlias = typing._AnnotatedAlias
-# 3.7-3.8
-elif PEP_560:
- class _AnnotatedAlias(typing._GenericAlias, _root=True):
- """Runtime representation of an annotated type.
-
- At its core 'Annotated[t, dec1, dec2, ...]' is an alias for the type 't'
- with extra annotations. The alias behaves like a normal typing alias,
- instantiating is the same as instantiating the underlying type, binding
- it to types is also the same.
- """
- def __init__(self, origin, metadata):
- if isinstance(origin, _AnnotatedAlias):
- metadata = origin.__metadata__ + metadata
- origin = origin.__origin__
- super().__init__(origin, origin)
- self.__metadata__ = metadata
-
- def copy_with(self, params):
- assert len(params) == 1
- new_type = params[0]
- return _AnnotatedAlias(new_type, self.__metadata__)
-
- def __repr__(self):
- return (f"typing_extensions.Annotated[{typing._type_repr(self.__origin__)}, "
- f"{', '.join(repr(a) for a in self.__metadata__)}]")
-
- def __reduce__(self):
- return operator.getitem, (
- Annotated, (self.__origin__,) + self.__metadata__
- )
-
- def __eq__(self, other):
- if not isinstance(other, _AnnotatedAlias):
- return NotImplemented
- if self.__origin__ != other.__origin__:
- return False
- return self.__metadata__ == other.__metadata__
-
- def __hash__(self):
- return hash((self.__origin__, self.__metadata__))
-
- class Annotated:
- """Add context specific metadata to a type.
-
- Example: Annotated[int, runtime_check.Unsigned] indicates to the
- hypothetical runtime_check module that this type is an unsigned int.
- Every other consumer of this type can ignore this metadata and treat
- this type as int.
-
- The first argument to Annotated must be a valid type (and will be in
- the __origin__ field), the remaining arguments are kept as a tuple in
- the __extra__ field.
-
- Details:
-
- - It's an error to call `Annotated` with less than two arguments.
- - Nested Annotated are flattened::
-
- Annotated[Annotated[T, Ann1, Ann2], Ann3] == Annotated[T, Ann1, Ann2, Ann3]
-
- - Instantiating an annotated type is equivalent to instantiating the
- underlying type::
-
- Annotated[C, Ann1](5) == C(5)
-
- - Annotated can be used as a generic type alias::
-
- Optimized = Annotated[T, runtime.Optimize()]
- Optimized[int] == Annotated[int, runtime.Optimize()]
-
- OptimizedList = Annotated[List[T], runtime.Optimize()]
- OptimizedList[int] == Annotated[List[int], runtime.Optimize()]
- """
-
- __slots__ = ()
-
- def __new__(cls, *args, **kwargs):
- raise TypeError("Type Annotated cannot be instantiated.")
-
- @typing._tp_cache
- def __class_getitem__(cls, params):
- if not isinstance(params, tuple) or len(params) < 2:
- raise TypeError("Annotated[...] should be used "
- "with at least two arguments (a type and an "
- "annotation).")
- msg = "Annotated[t, ...]: t must be a type."
- origin = typing._type_check(params[0], msg)
- metadata = tuple(params[1:])
- return _AnnotatedAlias(origin, metadata)
-
- def __init_subclass__(cls, *args, **kwargs):
- raise TypeError(
- f"Cannot subclass {cls.__module__}.Annotated"
- )
-
- def _strip_annotations(t):
- """Strips the annotations from a given type.
- """
- if isinstance(t, _AnnotatedAlias):
- return _strip_annotations(t.__origin__)
- if isinstance(t, typing._GenericAlias):
- stripped_args = tuple(_strip_annotations(a) for a in t.__args__)
- if stripped_args == t.__args__:
- return t
- res = t.copy_with(stripped_args)
- res._special = t._special
- return res
- return t
-
- def get_type_hints(obj, globalns=None, localns=None, include_extras=False):
- """Return type hints for an object.
-
- This is often the same as obj.__annotations__, but it handles
- forward references encoded as string literals, adds Optional[t] if a
- default value equal to None is set and recursively replaces all
- 'Annotated[T, ...]' with 'T' (unless 'include_extras=True').
-
- The argument may be a module, class, method, or function. The annotations
- are returned as a dictionary. For classes, annotations include also
- inherited members.
-
- TypeError is raised if the argument is not of a type that can contain
- annotations, and an empty dictionary is returned if no annotations are
- present.
-
- BEWARE -- the behavior of globalns and localns is counterintuitive
- (unless you are familiar with how eval() and exec() work). The
- search order is locals first, then globals.
-
- - If no dict arguments are passed, an attempt is made to use the
- globals from obj (or the respective module's globals for classes),
- and these are also used as the locals. If the object does not appear
- to have globals, an empty dictionary is used.
-
- - If one dict argument is passed, it is used for both globals and
- locals.
-
- - If two dict arguments are passed, they specify globals and
- locals, respectively.
- """
- hint = typing.get_type_hints(obj, globalns=globalns, localns=localns)
- if include_extras:
- return hint
- return {k: _strip_annotations(t) for k, t in hint.items()}
-# 3.6
-else:
-
- def _is_dunder(name):
- """Returns True if name is a __dunder_variable_name__."""
- return len(name) > 4 and name.startswith('__') and name.endswith('__')
-
- # Prior to Python 3.7 types did not have `copy_with`. A lot of the equality
-    # checks, argument expansion etc. are done on the _subs_tree. As a result we
- # can't provide a get_type_hints function that strips out annotations.
-
- class AnnotatedMeta(typing.GenericMeta):
- """Metaclass for Annotated"""
-
- def __new__(cls, name, bases, namespace, **kwargs):
- if any(b is not object for b in bases):
- raise TypeError("Cannot subclass " + str(Annotated))
- return super().__new__(cls, name, bases, namespace, **kwargs)
-
- @property
- def __metadata__(self):
- return self._subs_tree()[2]
-
- def _tree_repr(self, tree):
- cls, origin, metadata = tree
- if not isinstance(origin, tuple):
- tp_repr = typing._type_repr(origin)
- else:
- tp_repr = origin[0]._tree_repr(origin)
- metadata_reprs = ", ".join(repr(arg) for arg in metadata)
- return f'{cls}[{tp_repr}, {metadata_reprs}]'
-
- def _subs_tree(self, tvars=None, args=None): # noqa
- if self is Annotated:
- return Annotated
- res = super()._subs_tree(tvars=tvars, args=args)
- # Flatten nested Annotated
- if isinstance(res[1], tuple) and res[1][0] is Annotated:
- sub_tp = res[1][1]
- sub_annot = res[1][2]
- return (Annotated, sub_tp, sub_annot + res[2])
- return res
-
- def _get_cons(self):
- """Return the class used to create instance of this type."""
- if self.__origin__ is None:
- raise TypeError("Cannot get the underlying type of a "
- "non-specialized Annotated type.")
- tree = self._subs_tree()
- while isinstance(tree, tuple) and tree[0] is Annotated:
- tree = tree[1]
- if isinstance(tree, tuple):
- return tree[0]
- else:
- return tree
-
- @typing._tp_cache
- def __getitem__(self, params):
- if not isinstance(params, tuple):
- params = (params,)
- if self.__origin__ is not None: # specializing an instantiated type
- return super().__getitem__(params)
- elif not isinstance(params, tuple) or len(params) < 2:
- raise TypeError("Annotated[...] should be instantiated "
- "with at least two arguments (a type and an "
- "annotation).")
- else:
- msg = "Annotated[t, ...]: t must be a type."
- tp = typing._type_check(params[0], msg)
- metadata = tuple(params[1:])
- return self.__class__(
- self.__name__,
- self.__bases__,
- _no_slots_copy(self.__dict__),
- tvars=_type_vars((tp,)),
- # Metadata is a tuple so it won't be touched by _replace_args et al.
- args=(tp, metadata),
- origin=self,
- )
-
- def __call__(self, *args, **kwargs):
- cons = self._get_cons()
- result = cons(*args, **kwargs)
- try:
- result.__orig_class__ = self
- except AttributeError:
- pass
- return result
-
- def __getattr__(self, attr):
- # For simplicity we just don't relay all dunder names
- if self.__origin__ is not None and not _is_dunder(attr):
- return getattr(self._get_cons(), attr)
- raise AttributeError(attr)
-
- def __setattr__(self, attr, value):
- if _is_dunder(attr) or attr.startswith('_abc_'):
- super().__setattr__(attr, value)
- elif self.__origin__ is None:
- raise AttributeError(attr)
- else:
- setattr(self._get_cons(), attr, value)
-
- def __instancecheck__(self, obj):
- raise TypeError("Annotated cannot be used with isinstance().")
-
- def __subclasscheck__(self, cls):
- raise TypeError("Annotated cannot be used with issubclass().")
-
- class Annotated(metaclass=AnnotatedMeta):
- """Add context specific metadata to a type.
-
- Example: Annotated[int, runtime_check.Unsigned] indicates to the
- hypothetical runtime_check module that this type is an unsigned int.
- Every other consumer of this type can ignore this metadata and treat
- this type as int.
-
- The first argument to Annotated must be a valid type, the remaining
- arguments are kept as a tuple in the __metadata__ field.
-
- Details:
-
- - It's an error to call `Annotated` with less than two arguments.
- - Nested Annotated are flattened::
-
- Annotated[Annotated[T, Ann1, Ann2], Ann3] == Annotated[T, Ann1, Ann2, Ann3]
-
- - Instantiating an annotated type is equivalent to instantiating the
- underlying type::
-
- Annotated[C, Ann1](5) == C(5)
-
- - Annotated can be used as a generic type alias::
-
- Optimized = Annotated[T, runtime.Optimize()]
- Optimized[int] == Annotated[int, runtime.Optimize()]
-
- OptimizedList = Annotated[List[T], runtime.Optimize()]
- OptimizedList[int] == Annotated[List[int], runtime.Optimize()]
- """
-
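A short, hedged illustration of the behaviour described in the docstring above (runnable wherever `typing_extensions` provides `Annotated`; the metadata string is only an example):

    from typing_extensions import Annotated, get_args

    UserId = Annotated[int, "db: primary key"]

    print(get_args(UserId))     # (<class 'int'>, 'db: primary key')
    print(UserId.__metadata__)  # ('db: primary key',)
    print(UserId(5) == 5)       # True -- instantiation falls through to the underlying type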
-# Python 3.8 has get_origin() and get_args() but those implementations aren't
-# Annotated-aware, so we can't use those. Python 3.9's versions don't support
-# ParamSpecArgs and ParamSpecKwargs, so only Python 3.10's versions will do.
-if sys.version_info[:2] >= (3, 10):
- get_origin = typing.get_origin
- get_args = typing.get_args
-# 3.7-3.9
-elif PEP_560:
- try:
- # 3.9+
- from typing import _BaseGenericAlias
- except ImportError:
- _BaseGenericAlias = typing._GenericAlias
- try:
- # 3.9+
- from typing import GenericAlias
- except ImportError:
- GenericAlias = typing._GenericAlias
-
- def get_origin(tp):
- """Get the unsubscripted version of a type.
-
- This supports generic types, Callable, Tuple, Union, Literal, Final, ClassVar
- and Annotated. Return None for unsupported types. Examples::
-
- get_origin(Literal[42]) is Literal
- get_origin(int) is None
- get_origin(ClassVar[int]) is ClassVar
- get_origin(Generic) is Generic
- get_origin(Generic[T]) is Generic
- get_origin(Union[T, int]) is Union
- get_origin(List[Tuple[T, T]][int]) == list
- get_origin(P.args) is P
- """
- if isinstance(tp, _AnnotatedAlias):
- return Annotated
- if isinstance(tp, (typing._GenericAlias, GenericAlias, _BaseGenericAlias,
- ParamSpecArgs, ParamSpecKwargs)):
- return tp.__origin__
- if tp is typing.Generic:
- return typing.Generic
- return None
-
- def get_args(tp):
- """Get type arguments with all substitutions performed.
-
- For unions, basic simplifications used by Union constructor are performed.
- Examples::
- get_args(Dict[str, int]) == (str, int)
- get_args(int) == ()
- get_args(Union[int, Union[T, int], str][int]) == (int, str)
- get_args(Union[int, Tuple[T, int]][str]) == (int, Tuple[str, int])
- get_args(Callable[[], T][int]) == ([], int)
- """
- if isinstance(tp, _AnnotatedAlias):
- return (tp.__origin__,) + tp.__metadata__
- if isinstance(tp, (typing._GenericAlias, GenericAlias)):
- if getattr(tp, "_special", False):
- return ()
- res = tp.__args__
- if get_origin(tp) is collections.abc.Callable and res[0] is not Ellipsis:
- res = (list(res[:-1]), res[-1])
- return res
- return ()
-
-
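A hedged sanity-check sketch for the two helpers above; each assertion exercises a case handled explicitly in the implementations or their docstrings:

    from typing import Callable, Dict, Union
    from typing_extensions import Annotated, get_args, get_origin

    assert get_origin(Dict[str, int]) is dict
    assert get_args(Dict[str, int]) == (str, int)
    assert get_origin(Union[int, str]) is Union
    assert get_args(Callable[[int], str]) == ([int], str)
    assert get_origin(Annotated[int, "meta"]) is Annotated
    assert get_args(Annotated[int, "meta"]) == (int, "meta")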
-# 3.10+
-if hasattr(typing, 'TypeAlias'):
- TypeAlias = typing.TypeAlias
-# 3.9
-elif sys.version_info[:2] >= (3, 9):
- class _TypeAliasForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- @_TypeAliasForm
- def TypeAlias(self, parameters):
- """Special marker indicating that an assignment should
- be recognized as a proper type alias definition by type
- checkers.
-
- For example::
-
- Predicate: TypeAlias = Callable[..., bool]
-
- It's invalid when used anywhere except as in the example above.
- """
- raise TypeError(f"{self} is not subscriptable")
-# 3.7-3.8
-elif sys.version_info[:2] >= (3, 7):
- class _TypeAliasForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- TypeAlias = _TypeAliasForm('TypeAlias',
- doc="""Special marker indicating that an assignment should
- be recognized as a proper type alias definition by type
- checkers.
-
- For example::
-
- Predicate: TypeAlias = Callable[..., bool]
-
- It's invalid when used anywhere except as in the example
- above.""")
-# 3.6
-else:
- class _TypeAliasMeta(typing.TypingMeta):
- """Metaclass for TypeAlias"""
-
- def __repr__(self):
- return 'typing_extensions.TypeAlias'
-
- class _TypeAliasBase(typing._FinalTypingBase, metaclass=_TypeAliasMeta, _root=True):
- """Special marker indicating that an assignment should
- be recognized as a proper type alias definition by type
- checkers.
-
- For example::
-
- Predicate: TypeAlias = Callable[..., bool]
-
- It's invalid when used anywhere except as in the example above.
- """
- __slots__ = ()
-
- def __instancecheck__(self, obj):
- raise TypeError("TypeAlias cannot be used with isinstance().")
-
- def __subclasscheck__(self, cls):
- raise TypeError("TypeAlias cannot be used with issubclass().")
-
- def __repr__(self):
- return 'typing_extensions.TypeAlias'
-
- TypeAlias = _TypeAliasBase(_root=True)
-
-
-# Python 3.10+ has PEP 612
-if hasattr(typing, 'ParamSpecArgs'):
- ParamSpecArgs = typing.ParamSpecArgs
- ParamSpecKwargs = typing.ParamSpecKwargs
-# 3.6-3.9
-else:
- class _Immutable:
- """Mixin to indicate that object should not be copied."""
- __slots__ = ()
-
- def __copy__(self):
- return self
-
- def __deepcopy__(self, memo):
- return self
-
- class ParamSpecArgs(_Immutable):
- """The args for a ParamSpec object.
-
- Given a ParamSpec object P, P.args is an instance of ParamSpecArgs.
-
- ParamSpecArgs objects have a reference back to their ParamSpec:
-
- P.args.__origin__ is P
-
- This type is meant for runtime introspection and has no special meaning to
- static type checkers.
- """
- def __init__(self, origin):
- self.__origin__ = origin
-
- def __repr__(self):
- return f"{self.__origin__.__name__}.args"
-
- class ParamSpecKwargs(_Immutable):
- """The kwargs for a ParamSpec object.
-
- Given a ParamSpec object P, P.kwargs is an instance of ParamSpecKwargs.
-
- ParamSpecKwargs objects have a reference back to their ParamSpec:
-
- P.kwargs.__origin__ is P
-
- This type is meant for runtime introspection and has no special meaning to
- static type checkers.
- """
- def __init__(self, origin):
- self.__origin__ = origin
-
- def __repr__(self):
- return f"{self.__origin__.__name__}.kwargs"
-
-# 3.10+
-if hasattr(typing, 'ParamSpec'):
- ParamSpec = typing.ParamSpec
-# 3.6-3.9
-else:
-
- # Inherits from list as a workaround for Callable checks in Python < 3.9.2.
- class ParamSpec(list):
- """Parameter specification variable.
-
- Usage::
-
- P = ParamSpec('P')
-
- Parameter specification variables exist primarily for the benefit of static
- type checkers. They are used to forward the parameter types of one
- callable to another callable, a pattern commonly found in higher order
- functions and decorators. They are only valid when used in ``Concatenate``,
- or as the first argument to ``Callable``. In Python 3.10 and higher,
- they are also supported in user-defined Generics at runtime.
- See class Generic for more information on generic types. An
- example for annotating a decorator::
-
- T = TypeVar('T')
- P = ParamSpec('P')
-
- def add_logging(f: Callable[P, T]) -> Callable[P, T]:
- '''A type-safe decorator to add logging to a function.'''
- def inner(*args: P.args, **kwargs: P.kwargs) -> T:
- logging.info(f'{f.__name__} was called')
- return f(*args, **kwargs)
- return inner
-
- @add_logging
- def add_two(x: float, y: float) -> float:
- '''Add two numbers together.'''
- return x + y
-
- Parameter specification variables defined with covariant=True or
- contravariant=True can be used to declare covariant or contravariant
- generic types. These keyword arguments are valid, but their actual semantics
- are yet to be decided. See PEP 612 for details.
-
- Parameter specification variables can be introspected. e.g.:
-
- P.__name__ == 'P'
- P.__bound__ == None
- P.__covariant__ == False
- P.__contravariant__ == False
-
- Note that only parameter specification variables defined in global scope can
- be pickled.
- """
-
- # Trick Generic __parameters__.
- __class__ = typing.TypeVar
-
- @property
- def args(self):
- return ParamSpecArgs(self)
-
- @property
- def kwargs(self):
- return ParamSpecKwargs(self)
-
- def __init__(self, name, *, bound=None, covariant=False, contravariant=False):
- super().__init__([self])
- self.__name__ = name
- self.__covariant__ = bool(covariant)
- self.__contravariant__ = bool(contravariant)
- if bound:
- self.__bound__ = typing._type_check(bound, 'Bound must be a type.')
- else:
- self.__bound__ = None
-
- # for pickling:
- try:
- def_mod = sys._getframe(1).f_globals.get('__name__', '__main__')
- except (AttributeError, ValueError):
- def_mod = None
- if def_mod != 'typing_extensions':
- self.__module__ = def_mod
-
- def __repr__(self):
- if self.__covariant__:
- prefix = '+'
- elif self.__contravariant__:
- prefix = '-'
- else:
- prefix = '~'
- return prefix + self.__name__
-
- def __hash__(self):
- return object.__hash__(self)
-
- def __eq__(self, other):
- return self is other
-
- def __reduce__(self):
- return self.__name__
-
- # Hack to get typing._type_check to pass.
- def __call__(self, *args, **kwargs):
- pass
-
- if not PEP_560:
- # Only needed in 3.6.
- def _get_type_vars(self, tvars):
- if self not in tvars:
- tvars.append(self)
-
-
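A minimal runnable sketch of the decorator pattern from the docstring above, plus the introspectable `args`/`kwargs` properties (assuming `typing_extensions` supplies `ParamSpec` on this interpreter; names are illustrative):

    import logging
    from typing import Callable, TypeVar
    from typing_extensions import ParamSpec

    T = TypeVar("T")
    P = ParamSpec("P")

    def logged(f: Callable[P, T]) -> Callable[P, T]:
        def inner(*args: P.args, **kwargs: P.kwargs) -> T:
            logging.info("calling %s", f.__name__)
            return f(*args, **kwargs)
        return inner

    @logged
    def add(x: float, y: float) -> float:
        return x + y

    print(add(1.0, 2.0))      # 3.0
    print(P.args, P.kwargs)   # P.args P.kwargs (see ParamSpecArgs/ParamSpecKwargs above)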
-# 3.6-3.9
-if not hasattr(typing, 'Concatenate'):
- # Inherits from list as a workaround for Callable checks in Python < 3.9.2.
- class _ConcatenateGenericAlias(list):
-
- # Trick Generic into looking into this for __parameters__.
- if PEP_560:
- __class__ = typing._GenericAlias
- else:
- __class__ = typing._TypingBase
-
- # Flag in 3.8.
- _special = False
- # Attribute in 3.6 and earlier.
- _gorg = typing.Generic
-
- def __init__(self, origin, args):
- super().__init__(args)
- self.__origin__ = origin
- self.__args__ = args
-
- def __repr__(self):
- _type_repr = typing._type_repr
- return (f'{_type_repr(self.__origin__)}'
- f'[{", ".join(_type_repr(arg) for arg in self.__args__)}]')
-
- def __hash__(self):
- return hash((self.__origin__, self.__args__))
-
- # Hack to get typing._type_check to pass in Generic.
- def __call__(self, *args, **kwargs):
- pass
-
- @property
- def __parameters__(self):
- return tuple(
- tp for tp in self.__args__ if isinstance(tp, (typing.TypeVar, ParamSpec))
- )
-
- if not PEP_560:
- # Only required in 3.6.
- def _get_type_vars(self, tvars):
- if self.__origin__ and self.__parameters__:
- typing._get_type_vars(self.__parameters__, tvars)
-
-
-# 3.6-3.9
-@typing._tp_cache
-def _concatenate_getitem(self, parameters):
- if parameters == ():
- raise TypeError("Cannot take a Concatenate of no types.")
- if not isinstance(parameters, tuple):
- parameters = (parameters,)
- if not isinstance(parameters[-1], ParamSpec):
- raise TypeError("The last parameter to Concatenate should be a "
- "ParamSpec variable.")
- msg = "Concatenate[arg, ...]: each arg must be a type."
- parameters = tuple(typing._type_check(p, msg) for p in parameters)
- return _ConcatenateGenericAlias(self, parameters)
-
-
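A hedged sketch of the PEP 612 pattern that `Concatenate` exists for: a decorator that injects the first argument and forwards the rest of the signature (the lock-injection example is illustrative only):

    from threading import Lock
    from typing import Callable, List, TypeVar
    from typing_extensions import Concatenate, ParamSpec

    P = ParamSpec("P")
    R = TypeVar("R")
    _lock = Lock()

    def with_lock(f: Callable[Concatenate[Lock, P], R]) -> Callable[P, R]:
        def inner(*args: P.args, **kwargs: P.kwargs) -> R:
            return f(_lock, *args, **kwargs)   # the Lock parameter is supplied here
        return inner

    @with_lock
    def append_safely(lock: Lock, items: List[int], value: int) -> None:
        with lock:
            items.append(value)

    data: List[int] = []
    append_safely(data, 1)   # callers never pass the Lock
    print(data)              # [1]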
-# 3.10+
-if hasattr(typing, 'Concatenate'):
- Concatenate = typing.Concatenate
- _ConcatenateGenericAlias = typing._ConcatenateGenericAlias # noqa
-# 3.9
-elif sys.version_info[:2] >= (3, 9):
- @_TypeAliasForm
- def Concatenate(self, parameters):
- """Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a
- higher order function which adds, removes or transforms parameters of a
- callable.
-
- For example::
-
- Callable[Concatenate[int, P], int]
-
- See PEP 612 for detailed information.
- """
- return _concatenate_getitem(self, parameters)
-# 3.7-8
-elif sys.version_info[:2] >= (3, 7):
- class _ConcatenateForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- def __getitem__(self, parameters):
- return _concatenate_getitem(self, parameters)
-
- Concatenate = _ConcatenateForm(
- 'Concatenate',
- doc="""Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a
- higher order function which adds, removes or transforms parameters of a
- callable.
-
- For example::
-
- Callable[Concatenate[int, P], int]
-
- See PEP 612 for detailed information.
- """)
-# 3.6
-else:
- class _ConcatenateAliasMeta(typing.TypingMeta):
- """Metaclass for Concatenate."""
-
- def __repr__(self):
- return 'typing_extensions.Concatenate'
-
- class _ConcatenateAliasBase(typing._FinalTypingBase,
- metaclass=_ConcatenateAliasMeta,
- _root=True):
- """Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a
- higher order function which adds, removes or transforms parameters of a
- callable.
-
- For example::
-
- Callable[Concatenate[int, P], int]
-
- See PEP 612 for detailed information.
- """
- __slots__ = ()
-
- def __instancecheck__(self, obj):
- raise TypeError("Concatenate cannot be used with isinstance().")
-
- def __subclasscheck__(self, cls):
- raise TypeError("Concatenate cannot be used with issubclass().")
-
- def __repr__(self):
- return 'typing_extensions.Concatenate'
-
- def __getitem__(self, parameters):
- return _concatenate_getitem(self, parameters)
-
- Concatenate = _ConcatenateAliasBase(_root=True)
-
-# 3.10+
-if hasattr(typing, 'TypeGuard'):
- TypeGuard = typing.TypeGuard
-# 3.9
-elif sys.version_info[:2] >= (3, 9):
- class _TypeGuardForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- @_TypeGuardForm
- def TypeGuard(self, parameters):
- """Special typing form used to annotate the return type of a user-defined
- type guard function. ``TypeGuard`` only accepts a single type argument.
- At runtime, functions marked this way should return a boolean.
-
- ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static
- type checkers to determine a more precise type of an expression within a
- program's code flow. Usually type narrowing is done by analyzing
- conditional code flow and applying the narrowing to a block of code. The
- conditional expression here is sometimes referred to as a "type guard".
-
- Sometimes it would be convenient to use a user-defined boolean function
- as a type guard. Such a function should use ``TypeGuard[...]`` as its
- return type to alert static type checkers to this intention.
-
- Using ``-> TypeGuard`` tells the static type checker that for a given
- function:
-
- 1. The return value is a boolean.
- 2. If the return value is ``True``, the type of its argument
- is the type inside ``TypeGuard``.
-
- For example::
-
- def is_str(val: Union[str, float]):
- # "isinstance" type guard
- if isinstance(val, str):
- # Type of ``val`` is narrowed to ``str``
- ...
- else:
- # Else, type of ``val`` is narrowed to ``float``.
- ...
-
- Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower
- form of ``TypeA`` (it can even be a wider form) and this may lead to
- type-unsafe results. The main reason is to allow for things like
- narrowing ``List[object]`` to ``List[str]`` even though the latter is not
- a subtype of the former, since ``List`` is invariant. The responsibility of
- writing type-safe type guards is left to the user.
-
- ``TypeGuard`` also works with type variables. For more information, see
- PEP 647 (User-Defined Type Guards).
- """
- item = typing._type_check(parameters, f'{self} accepts only single type.')
- return typing._GenericAlias(self, (item,))
-# 3.7-3.8
-elif sys.version_info[:2] >= (3, 7):
- class _TypeGuardForm(typing._SpecialForm, _root=True):
-
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- def __getitem__(self, parameters):
- item = typing._type_check(parameters,
- f'{self._name} accepts only a single type')
- return typing._GenericAlias(self, (item,))
-
- TypeGuard = _TypeGuardForm(
- 'TypeGuard',
- doc="""Special typing form used to annotate the return type of a user-defined
- type guard function. ``TypeGuard`` only accepts a single type argument.
- At runtime, functions marked this way should return a boolean.
-
- ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static
- type checkers to determine a more precise type of an expression within a
- program's code flow. Usually type narrowing is done by analyzing
- conditional code flow and applying the narrowing to a block of code. The
- conditional expression here is sometimes referred to as a "type guard".
-
- Sometimes it would be convenient to use a user-defined boolean function
- as a type guard. Such a function should use ``TypeGuard[...]`` as its
- return type to alert static type checkers to this intention.
-
- Using ``-> TypeGuard`` tells the static type checker that for a given
- function:
-
- 1. The return value is a boolean.
- 2. If the return value is ``True``, the type of its argument
- is the type inside ``TypeGuard``.
-
- For example::
-
- def is_str(val: Union[str, float]):
- # "isinstance" type guard
- if isinstance(val, str):
- # Type of ``val`` is narrowed to ``str``
- ...
- else:
- # Else, type of ``val`` is narrowed to ``float``.
- ...
-
- Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower
- form of ``TypeA`` (it can even be a wider form) and this may lead to
- type-unsafe results. The main reason is to allow for things like
- narrowing ``List[object]`` to ``List[str]`` even though the latter is not
- a subtype of the former, since ``List`` is invariant. The responsibility of
- writing type-safe type guards is left to the user.
-
- ``TypeGuard`` also works with type variables. For more information, see
- PEP 647 (User-Defined Type Guards).
- """)
-# 3.6
-else:
- class _TypeGuard(typing._FinalTypingBase, _root=True):
- """Special typing form used to annotate the return type of a user-defined
- type guard function. ``TypeGuard`` only accepts a single type argument.
- At runtime, functions marked this way should return a boolean.
-
- ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static
- type checkers to determine a more precise type of an expression within a
- program's code flow. Usually type narrowing is done by analyzing
- conditional code flow and applying the narrowing to a block of code. The
- conditional expression here is sometimes referred to as a "type guard".
-
- Sometimes it would be convenient to use a user-defined boolean function
- as a type guard. Such a function should use ``TypeGuard[...]`` as its
- return type to alert static type checkers to this intention.
-
- Using ``-> TypeGuard`` tells the static type checker that for a given
- function:
-
- 1. The return value is a boolean.
- 2. If the return value is ``True``, the type of its argument
- is the type inside ``TypeGuard``.
-
- For example::
-
- def is_str(val: Union[str, float]):
- # "isinstance" type guard
- if isinstance(val, str):
- # Type of ``val`` is narrowed to ``str``
- ...
- else:
- # Else, type of ``val`` is narrowed to ``float``.
- ...
-
- Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower
- form of ``TypeA`` (it can even be a wider form) and this may lead to
- type-unsafe results. The main reason is to allow for things like
- narrowing ``List[object]`` to ``List[str]`` even though the latter is not
- a subtype of the former, since ``List`` is invariant. The responsibility of
- writing type-safe type guards is left to the user.
-
- ``TypeGuard`` also works with type variables. For more information, see
- PEP 647 (User-Defined Type Guards).
- """
-
- __slots__ = ('__type__',)
-
- def __init__(self, tp=None, **kwds):
- self.__type__ = tp
-
- def __getitem__(self, item):
- cls = type(self)
- if self.__type__ is None:
- return cls(typing._type_check(item,
- f'{cls.__name__[1:]} accepts only a single type.'),
- _root=True)
- raise TypeError(f'{cls.__name__[1:]} cannot be further subscripted')
-
- def _eval_type(self, globalns, localns):
- new_tp = typing._eval_type(self.__type__, globalns, localns)
- if new_tp == self.__type__:
- return self
- return type(self)(new_tp, _root=True)
-
- def __repr__(self):
- r = super().__repr__()
- if self.__type__ is not None:
- r += f'[{typing._type_repr(self.__type__)}]'
- return r
-
- def __hash__(self):
- return hash((type(self).__name__, self.__type__))
-
- def __eq__(self, other):
- if not isinstance(other, _TypeGuard):
- return NotImplemented
- if self.__type__ is not None:
- return self.__type__ == other.__type__
- return self is other
-
- TypeGuard = _TypeGuard(_root=True)
-
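A small runnable example of the user-defined type-guard idea described in the docstrings above (a sketch, not part of the original module):

    from typing import List
    from typing_extensions import TypeGuard

    def is_str_list(val: List[object]) -> TypeGuard[List[str]]:
        """True only when every element is a str."""
        return all(isinstance(x, str) for x in val)

    def upper_all(val: List[object]) -> List[str]:
        if is_str_list(val):
            return [x.upper() for x in val]   # static checkers narrow val to List[str] here
        return []

    print(upper_all(["a", "b"]))  # ['A', 'B']
    print(upper_all(["a", 1]))    # []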
-if hasattr(typing, "Self"):
- Self = typing.Self
-elif sys.version_info[:2] >= (3, 7):
- # Vendored from cpython typing._SpecialFrom
- class _SpecialForm(typing._Final, _root=True):
- __slots__ = ('_name', '__doc__', '_getitem')
-
- def __init__(self, getitem):
- self._getitem = getitem
- self._name = getitem.__name__
- self.__doc__ = getitem.__doc__
-
- def __getattr__(self, item):
- if item in {'__name__', '__qualname__'}:
- return self._name
-
- raise AttributeError(item)
-
- def __mro_entries__(self, bases):
- raise TypeError(f"Cannot subclass {self!r}")
-
- def __repr__(self):
- return f'typing_extensions.{self._name}'
-
- def __reduce__(self):
- return self._name
-
- def __call__(self, *args, **kwds):
- raise TypeError(f"Cannot instantiate {self!r}")
-
- def __or__(self, other):
- return typing.Union[self, other]
-
- def __ror__(self, other):
- return typing.Union[other, self]
-
- def __instancecheck__(self, obj):
- raise TypeError(f"{self} cannot be used with isinstance()")
-
- def __subclasscheck__(self, cls):
- raise TypeError(f"{self} cannot be used with issubclass()")
-
- @typing._tp_cache
- def __getitem__(self, parameters):
- return self._getitem(self, parameters)
-
- @_SpecialForm
- def Self(self, params):
- """Used to spell the type of "self" in classes.
-
- Example::
-
- from typing import Self
-
- class ReturnsSelf:
- def parse(self, data: bytes) -> Self:
- ...
- return self
-
- """
-
- raise TypeError(f"{self} is not subscriptable")
-else:
- class _Self(typing._FinalTypingBase, _root=True):
- """Used to spell the type of "self" in classes.
-
- Example::
-
- from typing import Self
-
- class ReturnsSelf:
- def parse(self, data: bytes) -> Self:
- ...
- return self
-
- """
-
- __slots__ = ()
-
- def __instancecheck__(self, obj):
- raise TypeError(f"{self} cannot be used with isinstance().")
-
- def __subclasscheck__(self, cls):
- raise TypeError(f"{self} cannot be used with issubclass().")
-
- Self = _Self(_root=True)
-
-
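A brief, hedged sketch of the pattern the `Self` marker is meant for: chained methods whose return type follows the (possibly subclassed) instance:

    from typing_extensions import Self

    class QueryBuilder:
        def __init__(self) -> None:
            self.parts: list = []

        def where(self, clause: str) -> Self:
            self.parts.append(clause)
            return self

        def build(self) -> str:
            return " AND ".join(self.parts)

    print(QueryBuilder().where("a = 1").where("b = 2").build())  # a = 1 AND b = 2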
-if hasattr(typing, 'Required'):
- Required = typing.Required
- NotRequired = typing.NotRequired
-elif sys.version_info[:2] >= (3, 9):
- class _ExtensionsSpecialForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- @_ExtensionsSpecialForm
- def Required(self, parameters):
- """A special typing construct to mark a key of a total=False TypedDict
- as required. For example:
-
- class Movie(TypedDict, total=False):
- title: Required[str]
- year: int
-
- m = Movie(
- title='The Matrix', # typechecker error if key is omitted
- year=1999,
- )
-
- There is no runtime checking that a required key is actually provided
- when instantiating a related TypedDict.
- """
- item = typing._type_check(parameters, f'{self._name} accepts only single type')
- return typing._GenericAlias(self, (item,))
-
- @_ExtensionsSpecialForm
- def NotRequired(self, parameters):
- """A special typing construct to mark a key of a TypedDict as
- potentially missing. For example:
-
- class Movie(TypedDict):
- title: str
- year: NotRequired[int]
-
- m = Movie(
- title='The Matrix', # typechecker error if key is omitted
- year=1999,
- )
- """
- item = typing._type_check(parameters, f'{self._name} accepts only single type')
- return typing._GenericAlias(self, (item,))
-
-elif sys.version_info[:2] >= (3, 7):
- class _RequiredForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- def __getitem__(self, parameters):
- item = typing._type_check(parameters,
- '{} accepts only single type'.format(self._name))
- return typing._GenericAlias(self, (item,))
-
- Required = _RequiredForm(
- 'Required',
- doc="""A special typing construct to mark a key of a total=False TypedDict
- as required. For example:
-
- class Movie(TypedDict, total=False):
- title: Required[str]
- year: int
-
- m = Movie(
- title='The Matrix', # typechecker error if key is omitted
- year=1999,
- )
-
- There is no runtime checking that a required key is actually provided
- when instantiating a related TypedDict.
- """)
- NotRequired = _RequiredForm(
- 'NotRequired',
- doc="""A special typing construct to mark a key of a TypedDict as
- potentially missing. For example:
-
- class Movie(TypedDict):
- title: str
- year: NotRequired[int]
-
- m = Movie(
- title='The Matrix', # typechecker error if key is omitted
- year=1999,
- )
- """)
-else:
- # NOTE: Modeled after _Final's implementation when _FinalTypingBase available
- class _MaybeRequired(typing._FinalTypingBase, _root=True):
- __slots__ = ('__type__',)
-
- def __init__(self, tp=None, **kwds):
- self.__type__ = tp
-
- def __getitem__(self, item):
- cls = type(self)
- if self.__type__ is None:
- return cls(typing._type_check(item,
- '{} accepts only single type.'.format(cls.__name__[1:])),
- _root=True)
- raise TypeError('{} cannot be further subscripted'
- .format(cls.__name__[1:]))
-
- def _eval_type(self, globalns, localns):
- new_tp = typing._eval_type(self.__type__, globalns, localns)
- if new_tp == self.__type__:
- return self
- return type(self)(new_tp, _root=True)
-
- def __repr__(self):
- r = super().__repr__()
- if self.__type__ is not None:
- r += '[{}]'.format(typing._type_repr(self.__type__))
- return r
-
- def __hash__(self):
- return hash((type(self).__name__, self.__type__))
-
- def __eq__(self, other):
- if not isinstance(other, type(self)):
- return NotImplemented
- if self.__type__ is not None:
- return self.__type__ == other.__type__
- return self is other
-
- class _Required(_MaybeRequired, _root=True):
- """A special typing construct to mark a key of a total=False TypedDict
- as required. For example:
-
- class Movie(TypedDict, total=False):
- title: Required[str]
- year: int
-
- m = Movie(
- title='The Matrix', # typechecker error if key is omitted
- year=1999,
- )
-
- There is no runtime checking that a required key is actually provided
- when instantiating a related TypedDict.
- """
-
- class _NotRequired(_MaybeRequired, _root=True):
- """A special typing construct to mark a key of a TypedDict as
- potentially missing. For example:
-
- class Movie(TypedDict):
- title: str
- year: NotRequired[int]
-
- m = Movie(
- title='The Matrix', # typechecker error if key is omitted
- year=1999,
- )
- """
-
- Required = _Required(_root=True)
- NotRequired = _NotRequired(_root=True)
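A hedged sketch of the two markers inside a TypedDict; runtime behaviour is unchanged, only static checkers enforce them, and it assumes a typing_extensions version whose TypedDict accepts these markers:

    from typing_extensions import NotRequired, Required, TypedDict

    class Movie(TypedDict, total=False):
        title: Required[str]   # must be provided even though total=False
        year: int              # optional because of total=False

    class Show(TypedDict):
        title: str
        seasons: NotRequired[int]   # may be omitted even though total=True

    m: Movie = {"title": "The Matrix", "year": 1999}
    s: Show = {"title": "Severance"}
    print(m["title"], s["title"])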
diff --git a/spaces/UCAS/ChatGPT4/app.py b/spaces/UCAS/ChatGPT4/app.py
deleted file mode 100644
index 7e09e57ef928fd2451fd0ed1295d0994ca75d026..0000000000000000000000000000000000000000
--- a/spaces/UCAS/ChatGPT4/app.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import gradio as gr
-import os
-import json
-import requests
-
-#Streaming endpoint
-API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream"
-
-#Huggingface provided GPT4 OpenAI API Key
-OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
-
-#Inference function
-def predict(system_msg, inputs, top_p, temperature, chat_counter, chatbot=[], history=[]):
-
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {OPENAI_API_KEY}"
- }
- print(f"system message is ^^ {system_msg}")
- if system_msg.strip() == '':
- initial_message = [{"role": "user", "content": f"{inputs}"},]
- multi_turn_message = []
- else:
- initial_message= [{"role": "system", "content": system_msg},
- {"role": "user", "content": f"{inputs}"},]
- multi_turn_message = [{"role": "system", "content": system_msg},]
-
- if chat_counter == 0 :
- payload = {
- "model": "gpt-4",
- "messages": initial_message ,
- "temperature" : 1.0,
- "top_p":1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,
- }
- print(f"chat_counter - {chat_counter}")
- else: #if chat_counter != 0 :
- messages=multi_turn_message # Of the type of - [{"role": "system", "content": system_msg},]
- for data in chatbot:
- user = {}
- user["role"] = "user"
- user["content"] = data[0]
- assistant = {}
- assistant["role"] = "assistant"
- assistant["content"] = data[1]
- messages.append(user)
- messages.append(assistant)
- temp = {}
- temp["role"] = "user"
- temp["content"] = inputs
- messages.append(temp)
- #messages
- payload = {
- "model": "gpt-4",
- "messages": messages, # Of the type of [{"role": "user", "content": f"{inputs}"}],
- "temperature" : temperature, #1.0,
- "top_p": top_p, #1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,}
-
- chat_counter+=1
-
- history.append(inputs)
- print(f"Logging : payload is - {payload}")
- # make a POST request to the API endpoint using the requests.post method, passing in stream=True
- response = requests.post(API_URL, headers=headers, json=payload, stream=True)
- print(f"Logging : response code - {response}")
- token_counter = 0
- partial_words = ""
-
- counter=0
- for chunk in response.iter_lines():
- #Skipping first chunk
- if counter == 0:
- counter+=1
- continue
- # check whether each line is non-empty
- if chunk.decode() :
- chunk = chunk.decode()
- # decode each line as response data is in bytes
- if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']:
- partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"]
- if token_counter == 0:
- history.append(" " + partial_words)
- else:
- history[-1] = partial_words
- chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list
- token_counter+=1
- yield chat, history, chat_counter, response # resembles {chatbot: chat, state: history}
-
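For reference, a hedged illustration of what `predict` parses above: each non-empty streamed line is expected to look like `data: {...}` (terminated by `data: [DONE]`), so `chunk[6:]` drops the `data: ` prefix before `json.loads`:

    import json

    sample_line = 'data: {"choices": [{"delta": {"content": "Hel"}}]}'
    delta = json.loads(sample_line[6:])["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"])   # Hel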
-#Resetting to blank
-def reset_textbox():
- return gr.update(value='')
-
-#to set a component as visible=False
-def set_visible_false():
- return gr.update(visible=False)
-
-#to set a component as visible=True
-def set_visible_true():
- return gr.update(visible=True)
-
-title = """
-🔥GPT4 with ChatCompletions API +🚀Gradio-Streaming
-"""
-
-#display message for themes feature
-theme_addon_msg = """
-🌟 Discover Gradio Themes with this Demo, featuring v3.22.0! Gradio v3.23.0 also enables seamless Theme sharing. You can develop or modify a theme, and send it to the hub using simple theme.push_to_hub().
- 🏆Participate in Gradio's Theme Building Hackathon to exhibit your creative flair and win fabulous rewards! Join here - Gradio-Themes-Party🎨 🏆
-"""
-
-#Using info to add additional information about System message in GPT4
-system_msg_info = """A conversation could begin with a system message to gently instruct the assistant.
-System message helps set the behavior of the AI Assistant. For example, the assistant could be instructed with 'You are a helpful assistant.'"""
-
-#Modifying existing Gradio Theme
-theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="green",
- text_size=gr.themes.sizes.text_lg)
-
-with gr.Blocks(css = """#col_container { margin-left: auto; margin-right: auto;} #chatbot {height: 520px; overflow: auto;}""",
- theme=theme) as demo:
- gr.HTML(title)
- gr.HTML("""
-🔥This Huggingface Gradio Demo provides you full access to GPT4 API (4096 token limit). 🎉🥳🎉You don't need any OPENAI API key🙌""")
- gr.HTML(theme_addon_msg)
- gr.HTML('''
-Duplicate the Space and run securely with your OpenAI API Key
-import torch
-from infinibatch.iterators import CheckpointableIterator
-from infinibatch.datasets import chunked_dataset_iterator
-from typing import Union, Iterable, Any
-
-
-# @TODO: This has been tested once, but we have no regression test presently. I am worried tests will fail if Torch is not installed.
-class IterableCheckpointedDataset(torch.utils.data.IterableDataset):
- """
- Wraps a CheckpointableIterator into a PyTorch IterableDataset, which is recognized by its type by
- PyTorch's DataLoader class.
- """
- def __init__(self, source: CheckpointableIterator):
- super().__init__()
- self._source = source
-
- def __iter__(self): # this is called in the forked clone
- worker_info = torch.utils.data.get_worker_info()
- assert worker_info is None or worker_info.num_workers == 1 # not supported since we can't get at the checkpoint for each worker
- return iter(self._source)
-
-
-# @TODO: This is currently untested, and may not work presently.
-class IterableChunkedDataset(torch.utils.data.IterableDataset):
- def __init__(self, paths: Union[str, Iterable[str]], shuffle: bool=True, buffer_size: int=2**20, transform=None, seed: int=None, world_size: int=1, rank: int=0, num_workers_per_rank: int=1):
- super().__init__()
- self.rank = rank
- self.num_workers_per_rank = num_workers_per_rank
- # instance_rank is set assuming that num_workers_per_rank = 1 and adapted dynamically in __iter__
- self.dataset = chunked_dataset_iterator(paths, shuffle=shuffle, buffer_size=buffer_size, transform=transform, seed=seed, num_instances=world_size*num_workers_per_rank, instance_rank=rank)
-
- def __iter__(self):
- worker_info = torch.utils.data.get_worker_info()
- if worker_info is None: # single-process data loading
- self.dataset._instance_rank = self.rank
- else:
- assert worker_info.num_workers == self.num_workers_per_rank
- self.dataset._instance_rank = self.rank * self.num_workers_per_rank + worker_info.id
- return iter(self.dataset)
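A minimal, hedged usage sketch for `IterableCheckpointedDataset`; it assumes infinibatch exposes `NativeCheckpointableIterator` for wrapping a restartable Python iterable, and uses a single-process DataLoader as required by the assertion above:

    import torch
    from infinibatch.iterators import NativeCheckpointableIterator  # assumed available

    source = NativeCheckpointableIterator(list(range(8)))
    ds = IterableCheckpointedDataset(source)
    loader = torch.utils.data.DataLoader(ds, batch_size=4, num_workers=0)
    for batch in loader:
        print(batch)   # tensor([0, 1, 2, 3]) then tensor([4, 5, 6, 7])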
diff --git a/spaces/akhaliq/deeplab2/model/layers/resized_fuse_test.py b/spaces/akhaliq/deeplab2/model/layers/resized_fuse_test.py
deleted file mode 100644
index 3ba8431462e4bb5b4e714834bef2dbb97facdc46..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/model/layers/resized_fuse_test.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Deeplab2 Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Tests for resized_fuse."""
-
-import tensorflow as tf
-
-from deeplab2.model.layers import resized_fuse
-
-
-class ResizedFuseTest(tf.test.TestCase):
-
- def test_resize_and_fuse_features(self):
- batch, height, width, channels = 2, 11, 11, 6
- smaller_height, smaller_width, smaller_channels = 6, 6, 3
- larger_height1, larger_width1 = 21, 21 # Stride 2 conv.
- larger_height2, larger_width2 = 22, 22 # Stride 2 conv.
- larger_height3, larger_width3 = 23, 23 # Conv and resize.
-
- feature_list = []
- feature_list.append(tf.zeros([batch, smaller_height, smaller_width,
- smaller_channels]))
- feature_list.append(tf.zeros([batch, smaller_height, smaller_width,
- channels]))
- feature_list.append(tf.zeros([batch, height, width, smaller_channels]))
- feature_list.append(tf.zeros([batch, height, width, channels]))
- feature_list.append(tf.zeros([batch, larger_height1, larger_width1,
- channels]))
- feature_list.append(tf.zeros([batch, larger_height1, larger_width1,
- smaller_channels]))
- feature_list.append(tf.zeros([batch, larger_height2, larger_width2,
- smaller_channels]))
- feature_list.append(tf.zeros([batch, larger_height3, larger_width3,
- smaller_channels]))
- layer = resized_fuse.ResizedFuse(name='fuse',
- height=height,
- width=width,
- num_channels=channels)
- output = layer(feature_list)
- self.assertEqual(output.get_shape().as_list(), [batch, height, width,
- channels])
-
-if __name__ == '__main__':
- tf.test.main()
diff --git a/spaces/akhaliq/lama/saicinpainting/evaluation/data.py b/spaces/akhaliq/lama/saicinpainting/evaluation/data.py
deleted file mode 100644
index 69ddb8d3c12d0261e459f7c4f66a702d0c477df0..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/lama/saicinpainting/evaluation/data.py
+++ /dev/null
@@ -1,167 +0,0 @@
-import glob
-import os
-
-import cv2
-import PIL.Image as Image
-import numpy as np
-
-from torch.utils.data import Dataset
-import torch.nn.functional as F
-
-
-def load_image(fname, mode='RGB', return_orig=False):
- img = np.array(Image.open(fname).convert(mode))
- if img.ndim == 3:
- img = np.transpose(img, (2, 0, 1))
- out_img = img.astype('float32') / 255
- if return_orig:
- return out_img, img
- else:
- return out_img
-
-
-def ceil_modulo(x, mod):
- if x % mod == 0:
- return x
- return (x // mod + 1) * mod
-
-
-def pad_img_to_modulo(img, mod):
- channels, height, width = img.shape
- out_height = ceil_modulo(height, mod)
- out_width = ceil_modulo(width, mod)
- return np.pad(img, ((0, 0), (0, out_height - height), (0, out_width - width)), mode='symmetric')
-
-
-def pad_tensor_to_modulo(img, mod):
- batch_size, channels, height, width = img.shape
- out_height = ceil_modulo(height, mod)
- out_width = ceil_modulo(width, mod)
- return F.pad(img, pad=(0, out_width - width, 0, out_height - height), mode='reflect')
-
-
-def scale_image(img, factor, interpolation=cv2.INTER_AREA):
- if img.shape[0] == 1:
- img = img[0]
- else:
- img = np.transpose(img, (1, 2, 0))
-
- img = cv2.resize(img, dsize=None, fx=factor, fy=factor, interpolation=interpolation)
-
- if img.ndim == 2:
- img = img[None, ...]
- else:
- img = np.transpose(img, (2, 0, 1))
- return img
-
-
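A short, hedged sketch of the helpers above on a single image/mask pair (file names are placeholders): load, optionally downscale, then pad so each side is a multiple of 8, as the datasets below do:

    image = load_image("example.jpg", mode="RGB")       # (3, H, W), float32 in [0, 1]
    mask = load_image("example_mask.png", mode="L")     # (H, W)
    mask = mask[None, ...]                              # (1, H, W)

    image = scale_image(image, factor=0.5)
    mask = scale_image(mask, factor=0.5, interpolation=cv2.INTER_NEAREST)

    image = pad_img_to_modulo(image, 8)
    mask = pad_img_to_modulo(mask, 8)
    print(image.shape, mask.shape)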
-class InpaintingDataset(Dataset):
- def __init__(self, datadir, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None):
- self.datadir = datadir
- self.mask_filenames = sorted(list(glob.glob(os.path.join(self.datadir, '**', '*mask*.png'), recursive=True)))
- self.img_filenames = [fname.rsplit('_mask', 1)[0] + img_suffix for fname in self.mask_filenames]
- self.pad_out_to_modulo = pad_out_to_modulo
- self.scale_factor = scale_factor
-
- def __len__(self):
- return len(self.mask_filenames)
-
- def __getitem__(self, i):
- image = load_image(self.img_filenames[i], mode='RGB')
- mask = load_image(self.mask_filenames[i], mode='L')
- result = dict(image=image, mask=mask[None, ...])
-
- if self.scale_factor is not None:
- result['image'] = scale_image(result['image'], self.scale_factor)
- result['mask'] = scale_image(result['mask'], self.scale_factor, interpolation=cv2.INTER_NEAREST)
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo)
- result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo)
-
- return result
-
-class OurInpaintingDataset(Dataset):
- def __init__(self, datadir, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None):
- self.datadir = datadir
- self.mask_filenames = sorted(list(glob.glob(os.path.join(self.datadir, 'mask', '**', '*mask*.png'), recursive=True)))
- self.img_filenames = [os.path.join(self.datadir, 'img', os.path.basename(fname.rsplit('-', 1)[0].rsplit('_', 1)[0]) + '.png') for fname in self.mask_filenames]
- self.pad_out_to_modulo = pad_out_to_modulo
- self.scale_factor = scale_factor
-
- def __len__(self):
- return len(self.mask_filenames)
-
- def __getitem__(self, i):
- result = dict(image=load_image(self.img_filenames[i], mode='RGB'),
- mask=load_image(self.mask_filenames[i], mode='L')[None, ...])
-
- if self.scale_factor is not None:
- result['image'] = scale_image(result['image'], self.scale_factor)
- result['mask'] = scale_image(result['mask'], self.scale_factor)
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo)
- result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo)
-
- return result
-
-class PrecomputedInpaintingResultsDataset(InpaintingDataset):
- def __init__(self, datadir, predictdir, inpainted_suffix='_inpainted.jpg', **kwargs):
- super().__init__(datadir, **kwargs)
- if not datadir.endswith('/'):
- datadir += '/'
- self.predictdir = predictdir
- self.pred_filenames = [os.path.join(predictdir, os.path.splitext(fname[len(datadir):])[0] + inpainted_suffix)
- for fname in self.mask_filenames]
-
- def __getitem__(self, i):
- result = super().__getitem__(i)
- result['inpainted'] = load_image(self.pred_filenames[i])
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['inpainted'] = pad_img_to_modulo(result['inpainted'], self.pad_out_to_modulo)
- return result
-
-class OurPrecomputedInpaintingResultsDataset(OurInpaintingDataset):
- def __init__(self, datadir, predictdir, inpainted_suffix="png", **kwargs):
- super().__init__(datadir, **kwargs)
- if not datadir.endswith('/'):
- datadir += '/'
- self.predictdir = predictdir
- self.pred_filenames = [os.path.join(predictdir, os.path.basename(os.path.splitext(fname)[0]) + f'_inpainted.{inpainted_suffix}')
- for fname in self.mask_filenames]
- # self.pred_filenames = [os.path.join(predictdir, os.path.splitext(fname[len(datadir):])[0] + inpainted_suffix)
- # for fname in self.mask_filenames]
-
- def __getitem__(self, i):
- result = super().__getitem__(i)
- result['inpainted'] = self.file_loader(self.pred_filenames[i])
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['inpainted'] = pad_img_to_modulo(result['inpainted'], self.pad_out_to_modulo)
- return result
-
-class InpaintingEvalOnlineDataset(Dataset):
- def __init__(self, indir, mask_generator, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None, **kwargs):
- self.indir = indir
- self.mask_generator = mask_generator
- self.img_filenames = sorted(list(glob.glob(os.path.join(self.indir, '**', f'*{img_suffix}' ), recursive=True)))
- self.pad_out_to_modulo = pad_out_to_modulo
- self.scale_factor = scale_factor
-
- def __len__(self):
- return len(self.img_filenames)
-
- def __getitem__(self, i):
- img, raw_image = load_image(self.img_filenames[i], mode='RGB', return_orig=True)
- mask = self.mask_generator(img, raw_image=raw_image)
- result = dict(image=img, mask=mask)
-
- if self.scale_factor is not None:
- result['image'] = scale_image(result['image'], self.scale_factor)
- result['mask'] = scale_image(result['mask'], self.scale_factor, interpolation=cv2.INTER_NEAREST)
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo)
- result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo)
- return result
\ No newline at end of file
diff --git a/spaces/ali-ghamdan/deoldify/fastai/vision/models/xresnet.py b/spaces/ali-ghamdan/deoldify/fastai/vision/models/xresnet.py
deleted file mode 100644
index accc600a15efb63f3d84d4ee68867d73e9f4a9f8..0000000000000000000000000000000000000000
--- a/spaces/ali-ghamdan/deoldify/fastai/vision/models/xresnet.py
+++ /dev/null
@@ -1,93 +0,0 @@
-import torch.nn as nn
-import torch,math,sys
-import torch.utils.model_zoo as model_zoo
-from functools import partial
-from ...torch_core import Module
-
-__all__ = ['XResNet', 'xresnet18', 'xresnet34', 'xresnet50', 'xresnet101', 'xresnet152']
-
-# or: ELU+init (a=0.54; gain=1.55)
-act_fn = nn.ReLU(inplace=True)
-
-class Flatten(Module):
- def forward(self, x): return x.view(x.size(0), -1)
-
-def init_cnn(m):
- if getattr(m, 'bias', None) is not None: nn.init.constant_(m.bias, 0)
- if isinstance(m, (nn.Conv2d,nn.Linear)): nn.init.kaiming_normal_(m.weight)
- for l in m.children(): init_cnn(l)
-
-def conv(ni, nf, ks=3, stride=1, bias=False):
- return nn.Conv2d(ni, nf, kernel_size=ks, stride=stride, padding=ks//2, bias=bias)
-
-def noop(x): return x
-
-def conv_layer(ni, nf, ks=3, stride=1, zero_bn=False, act=True):
- bn = nn.BatchNorm2d(nf)
- nn.init.constant_(bn.weight, 0. if zero_bn else 1.)
- layers = [conv(ni, nf, ks, stride=stride), bn]
- if act: layers.append(act_fn)
- return nn.Sequential(*layers)
-
-class ResBlock(Module):
- def __init__(self, expansion, ni, nh, stride=1):
- nf,ni = nh*expansion,ni*expansion
- layers = [conv_layer(ni, nh, 3, stride=stride),
- conv_layer(nh, nf, 3, zero_bn=True, act=False)
- ] if expansion == 1 else [
- conv_layer(ni, nh, 1),
- conv_layer(nh, nh, 3, stride=stride),
- conv_layer(nh, nf, 1, zero_bn=True, act=False)
- ]
- self.convs = nn.Sequential(*layers)
- # TODO: check whether act=True works better
- self.idconv = noop if ni==nf else conv_layer(ni, nf, 1, act=False)
- self.pool = noop if stride==1 else nn.AvgPool2d(2, ceil_mode=True)
-
- def forward(self, x): return act_fn(self.convs(x) + self.idconv(self.pool(x)))
-
-def filt_sz(recep): return min(64, 2**math.floor(math.log2(recep*0.75)))
-
-class XResNet(nn.Sequential):
- def __init__(self, expansion, layers, c_in=3, c_out=1000):
- stem = []
- sizes = [c_in,32,32,64]
- for i in range(3):
- stem.append(conv_layer(sizes[i], sizes[i+1], stride=2 if i==0 else 1))
- #nf = filt_sz(c_in*9)
- #stem.append(conv_layer(c_in, nf, stride=2 if i==1 else 1))
- #c_in = nf
-
- block_szs = [64//expansion,64,128,256,512]
- blocks = [self._make_layer(expansion, block_szs[i], block_szs[i+1], l, 1 if i==0 else 2)
- for i,l in enumerate(layers)]
- super().__init__(
- *stem,
- nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
- *blocks,
- nn.AdaptiveAvgPool2d(1), Flatten(),
- nn.Linear(block_szs[-1]*expansion, c_out),
- )
- init_cnn(self)
-
- def _make_layer(self, expansion, ni, nf, blocks, stride):
- return nn.Sequential(
- *[ResBlock(expansion, ni if i==0 else nf, nf, stride if i==0 else 1)
- for i in range(blocks)])
-
-def xresnet(expansion, n_layers, name, pretrained=False, **kwargs):
- model = XResNet(expansion, n_layers, **kwargs)
- if pretrained: model.load_state_dict(model_zoo.load_url(model_urls[name]))
- return model
-
-me = sys.modules[__name__]
-for n,e,l in [
- [ 18 , 1, [2,2,2 ,2] ],
- [ 34 , 1, [3,4,6 ,3] ],
- [ 50 , 4, [3,4,6 ,3] ],
- [ 101, 4, [3,4,23,3] ],
- [ 152, 4, [3,8,36,3] ],
-]:
- name = f'xresnet{n}'
- setattr(me, name, partial(xresnet, expansion=e, n_layers=l, name=name))
-
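A quick, hedged smoke test of the constructors registered by the loop above; `pretrained=True` is not exercised because `model_urls` is not defined in this module:

    import torch

    model = xresnet50(c_in=3, c_out=10)
    out = model(torch.randn(2, 3, 224, 224))
    print(out.shape)   # torch.Size([2, 10])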
diff --git a/spaces/allandclive/Uganda_MMS/uroman/lib/JSON/backportPP/Boolean.pm b/spaces/allandclive/Uganda_MMS/uroman/lib/JSON/backportPP/Boolean.pm
deleted file mode 100644
index 38be6a3817b3b3b5632f4ee6bd3bba7397af567e..0000000000000000000000000000000000000000
--- a/spaces/allandclive/Uganda_MMS/uroman/lib/JSON/backportPP/Boolean.pm
+++ /dev/null
@@ -1,27 +0,0 @@
-=head1 NAME
-
-JSON::PP::Boolean - dummy module providing JSON::PP::Boolean
-
-=head1 SYNOPSIS
-
- # do not "use" yourself
-
-=head1 DESCRIPTION
-
-This module exists only to provide overload resolution for Storable
-and similar modules. See L for more info about this class.
-
-=cut
-
-use JSON::backportPP ();
-use strict;
-
-1;
-
-=head1 AUTHOR
-
-This idea is from L<JSON::XS::Boolean> written by
-Marc Lehmann
-
-=cut
-
diff --git a/spaces/allknowingroger/Image-Models-Test138/README.md b/spaces/allknowingroger/Image-Models-Test138/README.md
deleted file mode 100644
index 157b60126025340e853d024c141cb88a6f2a9dc2..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test138/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-duplicated_from: allknowingroger/Image-Models-Test137
----
-
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test182/README.md b/spaces/allknowingroger/Image-Models-Test182/README.md
deleted file mode 100644
index f91e4b31ab345f987b425de029c057bfb69d9e1b..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test182/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test
----
-
-
\ No newline at end of file
diff --git a/spaces/alsrbdni/magic-to-diffusion/share_btn.py b/spaces/alsrbdni/magic-to-diffusion/share_btn.py
deleted file mode 100644
index 1382fb25a5ef50e843598187e1e660e86ea8dd05..0000000000000000000000000000000000000000
--- a/spaces/alsrbdni/magic-to-diffusion/share_btn.py
+++ /dev/null
@@ -1,88 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
- async function getInputImgFile(imgEl){
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const isPng = imgEl.src.startsWith(`data:image/png`);
- if(isPng){
- const fileName = `magic-prompt-${{imgId}}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- }else{
- const fileName = `magic-prompt-${{imgId}}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- }
- }
- const gradioEl = document.querySelector('body > gradio-app');
- // const gradioEl = document.querySelector("gradio-app").shadowRoot;
- const inputImgEl = gradioEl.querySelector('#input-img img');
- const imgEls = gradioEl.querySelectorAll('#generated-gallery img');
- const promptTxt = gradioEl.querySelector('#translated textarea').value;
- let titleTxt = promptTxt;
- if(titleTxt.length > 100){
- titleTxt = titleTxt.slice(0, 100) + ' ...';
- }
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
- if(!imgEls.length){
- return;
- };
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
- const files = await Promise.all(
- [...imgEls].map(async (imgEl) => {
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const fileName = `sd-perception-${{imgId}}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- })
- );
- const inputFile = await getInputImgFile(inputImgEl);
- files.push(inputFile);
- const urls = await Promise.all(files.map((f) => uploadFile(f)));
- const urlInputImg = urls.pop();
- const htmlImgs = urls.map(url => ``);
- const htmlImgsMd = htmlImgs.join(`\n`);
- const descriptionMd = `#### Input img:
-
-#### Caption:
-${promptTxt}
-#### Generations:
-
-${htmlImgsMd}
-
`;
- const params = new URLSearchParams({
- title: titleTxt,
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/huggingface-projects/magic-diffusion/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
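For context, `share_js` is meant to be wired to a Gradio button as a purely client-side handler. Below is a minimal wiring sketch, assuming Gradio 3.x where event listeners accept a `_js` argument; the `share-btn` element id is the one the script itself queries, while the button label is illustrative.

```python
import gradio as gr
from share_btn import share_js  # the module shown above

with gr.Blocks() as demo:
    share_button = gr.Button("Share to community", elem_id="share-btn")
    # fn=None plus _js runs the handler entirely in the browser; no Python callback is involved.
    share_button.click(None, [], [], _js=share_js)

demo.launch()
```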
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_sine_srate.c b/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_sine_srate.c
deleted file mode 100644
index d4ce81b26095264fb9822d4421af8962c43566e4..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_sine_srate.c
+++ /dev/null
@@ -1,182 +0,0 @@
-/*
- * $Id: patest_sine.c 1097 2006-08-26 08:27:53Z rossb $
- *
- * This program uses the PortAudio Portable Audio Library.
- * For more information see: http://www.portaudio.com/
- * Copyright (c) 1999-2000 Ross Bencina and Phil Burk
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-/** @file patest_sine_srate_mac.c
- @ingroup test_src
- @brief Plays sine waves at 44100 and 48000,
- and forces the hardware to change if this is a mac.
- Designed for use with CoreAudio.
- @author Bjorn Roche
- @author Ross Bencina
- @author Phil Burk
-*/
-
-#include <stdio.h>
-#include <math.h>
-#include "portaudio.h"
-
-#ifdef __APPLE__
-#include "pa_mac_core.h"
-#endif
-
-#define NUM_SECONDS (5)
-#define SAMPLE_RATE1 (44100)
-#define SAMPLE_RATE2 (48000)
-#define FRAMES_PER_BUFFER (64)
-
-#ifndef M_PI
-#define M_PI (3.14159265)
-#endif
-
-#define TABLE_SIZE (200)
-typedef struct
-{
- float sine[TABLE_SIZE];
- int left_phase;
- int right_phase;
-}
-paTestData;
-
-/* This routine will be called by the PortAudio engine when audio is needed.
-** It may be called at interrupt level on some machines so don't do anything
-** that could mess up the system like calling malloc() or free().
-*/
-static int patestCallback( const void *inputBuffer, void *outputBuffer,
- unsigned long framesPerBuffer,
- const PaStreamCallbackTimeInfo* timeInfo,
- PaStreamCallbackFlags statusFlags,
- void *userData )
-{
- paTestData *data = (paTestData*)userData;
- float *out = (float*)outputBuffer;
- unsigned long i;
-
- (void) timeInfo; /* Prevent unused variable warnings. */
- (void) statusFlags;
- (void) inputBuffer;
-
-    for( i=0; i<framesPerBuffer; i++ )
-    {
-        *out++ = data->sine[data->left_phase];  /* left */
- *out++ = data->sine[data->right_phase]; /* right */
- data->left_phase += 1;
- if( data->left_phase >= TABLE_SIZE ) data->left_phase -= TABLE_SIZE;
- data->right_phase += 3; /* higher pitch so we can distinguish left and right. */
- if( data->right_phase >= TABLE_SIZE ) data->right_phase -= TABLE_SIZE;
- }
-
- return paContinue;
-}
-
-/*******************************************************************/
-int main(void);
-int main(void)
-{
- PaStreamParameters outputParameters;
- PaStream *stream;
- PaError err;
- paTestData data;
-#ifdef __APPLE__
- PaMacCoreStreamInfo macInfo;
-#endif
- int i;
-
- /* initialise sinusoidal wavetable */
-    for( i=0; i<TABLE_SIZE; i++ )
-    {
-        data.sine[i] = (float) sin( ((double)i/(double)TABLE_SIZE) * M_PI * 2. );
-    }
-    data.left_phase = data.right_phase = 0;
-
-    err = Pa_Initialize();
-    if( err != paNoError ) goto error;
-
-    outputParameters.device = Pa_GetDefaultOutputDevice(); /* default output device */
-    if (outputParameters.device == paNoDevice) {
-        fprintf(stderr,"Error: No default output device.\n");
-        goto error;
-    }
-    outputParameters.channelCount = 2;       /* stereo output */
-    outputParameters.sampleFormat = paFloat32; /* 32 bit floating point output */
-    outputParameters.suggestedLatency = Pa_GetDeviceInfo( outputParameters.device )->defaultLowOutputLatency;
- /** setup host specific info */
-#ifdef __APPLE__
- PaMacCore_SetupStreamInfo( &macInfo, paMacCorePro );
- outputParameters.hostApiSpecificStreamInfo = &macInfo;
-#else
- printf( "Hardware SR changing not being tested on this platform.\n" );
- outputParameters.hostApiSpecificStreamInfo = NULL;
-#endif
-
-    /* run the test once at each of the two sample rates */
-    for( i=0; i < 2; ++i )
-    {
-        int sr = i ? SAMPLE_RATE2 : SAMPLE_RATE1;
-        printf( "Playing at a sample rate of %d Hz.\n", sr );
-
-        err = Pa_OpenStream(
- &stream,
- NULL, /* no input */
- &outputParameters,
- sr,
- FRAMES_PER_BUFFER,
- paClipOff, /* we won't output out of range samples so don't bother clipping them */
- patestCallback,
- &data );
- if( err != paNoError ) goto error;
-
- err = Pa_StartStream( stream );
- if( err != paNoError ) goto error;
-
- printf("Play for %d seconds.\n", NUM_SECONDS );
- Pa_Sleep( NUM_SECONDS * 1000 );
-
- err = Pa_StopStream( stream );
- if( err != paNoError ) goto error;
-
- err = Pa_CloseStream( stream );
- if( err != paNoError ) goto error;
- }
-
- Pa_Terminate();
- printf("Test finished.\n");
-
- return err;
-error:
- Pa_Terminate();
- fprintf( stderr, "An error occurred while using the portaudio stream\n" );
- fprintf( stderr, "Error number: %d\n", err );
- fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) );
- return err;
-}
diff --git a/spaces/anaclaudia13ct/insect_detection/utils/downloads.py b/spaces/anaclaudia13ct/insect_detection/utils/downloads.py
deleted file mode 100644
index 72ea87340eb9b4f07b2271cce24f86bf3a6872ab..0000000000000000000000000000000000000000
--- a/spaces/anaclaudia13ct/insect_detection/utils/downloads.py
+++ /dev/null
@@ -1,108 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Download utils
-"""
-
-import logging
-import os
-import subprocess
-import urllib
-from pathlib import Path
-
-import requests
-import torch
-
-
-def is_url(url, check=True):
- # Check if string is URL and check if URL exists
- try:
- url = str(url)
- result = urllib.parse.urlparse(url)
- assert all([result.scheme, result.netloc]) # check if is url
- return (urllib.request.urlopen(url).getcode() == 200) if check else True # check if exists online
- except (AssertionError, urllib.request.HTTPError):
- return False
-
-
-def gsutil_getsize(url=''):
- # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du
- s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8')
- return eval(s.split(' ')[0]) if len(s) else 0 # bytes
-
-
-def url_getsize(url='https://ultralytics.com/images/bus.jpg'):
- # Return downloadable file size in bytes
- response = requests.head(url, allow_redirects=True)
- return int(response.headers.get('content-length', -1))
-
-
-def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''):
- # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes
- from utils.general import LOGGER
-
- file = Path(file)
- assert_msg = f"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}"
- try: # url1
- LOGGER.info(f'Downloading {url} to {file}...')
- torch.hub.download_url_to_file(url, str(file), progress=LOGGER.level <= logging.INFO)
- assert file.exists() and file.stat().st_size > min_bytes, assert_msg # check
- except Exception as e: # url2
- if file.exists():
- file.unlink() # remove partial downloads
- LOGGER.info(f'ERROR: {e}\nRe-attempting {url2 or url} to {file}...')
- os.system(f"curl -# -L '{url2 or url}' -o '{file}' --retry 3 -C -") # curl download, retry and resume on fail
- finally:
- if not file.exists() or file.stat().st_size < min_bytes: # check
- if file.exists():
- file.unlink() # remove partial downloads
- LOGGER.info(f"ERROR: {assert_msg}\n{error_msg}")
- LOGGER.info('')
-
-
-def attempt_download(file, repo='ultralytics/yolov5', release='v7.0'):
- # Attempt file download from GitHub release assets if not found locally. release = 'latest', 'v7.0', etc.
- from utils.general import LOGGER
-
- def github_assets(repository, version='latest'):
- # Return GitHub repo tag (i.e. 'v7.0') and assets (i.e. ['yolov5s.pt', 'yolov5m.pt', ...])
- if version != 'latest':
- version = f'tags/{version}' # i.e. tags/v7.0
- response = requests.get(f'https://api.github.com/repos/{repository}/releases/{version}').json() # github api
- return response['tag_name'], [x['name'] for x in response['assets']] # tag, assets
-
- file = Path(str(file).strip().replace("'", ''))
- if not file.exists():
- # URL specified
- name = Path(urllib.parse.unquote(str(file))).name # decode '%2F' to '/' etc.
- if str(file).startswith(('http:/', 'https:/')): # download
- url = str(file).replace(':/', '://') # Pathlib turns :// -> :/
- file = name.split('?')[0] # parse authentication https://url.com/file.txt?auth...
- if Path(file).is_file():
- LOGGER.info(f'Found {url} locally at {file}') # file already exists
- else:
- safe_download(file=file, url=url, min_bytes=1E5)
- return file
-
- # GitHub assets
- assets = [f'yolov5{size}{suffix}.pt' for size in 'nsmlx' for suffix in ('', '6', '-cls', '-seg')] # default
- try:
- tag, assets = github_assets(repo, release)
- except Exception:
- try:
- tag, assets = github_assets(repo) # latest release
- except Exception:
- try:
- tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1]
- except Exception:
- tag = release
-
- file.parent.mkdir(parents=True, exist_ok=True) # make parent dir (if required)
- if name in assets:
- url3 = 'https://drive.google.com/drive/folders/1EFQTEUeXWSFww0luse2jB9M1QNZQGwNl' # backup gdrive mirror
- safe_download(
- file,
- url=f'https://github.com/{repo}/releases/download/{tag}/{name}',
- min_bytes=1E5,
- error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/{tag} or {url3}')
-
- return str(file)
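As a rough usage sketch of the helpers above (assuming the YOLOv5 repo layout, since `safe_download` imports `LOGGER` from `utils.general`; file names and URLs are the repo's own examples):

```python
from utils.downloads import attempt_download, is_url, url_getsize

# Fetches yolov5s.pt from the GitHub release assets if it is not already on disk.
weights = attempt_download("yolov5s.pt")

# Lightweight URL checks used elsewhere in the repo.
print(is_url("https://ultralytics.com/images/bus.jpg"))       # True if reachable
print(url_getsize("https://ultralytics.com/images/bus.jpg"))  # content-length in bytes, -1 if unknown
```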
diff --git a/spaces/annchen2010/ChatGPT/chat_func.py b/spaces/annchen2010/ChatGPT/chat_func.py
deleted file mode 100644
index 374178f3d22c5c23d1dc2952336cdc298a77315d..0000000000000000000000000000000000000000
--- a/spaces/annchen2010/ChatGPT/chat_func.py
+++ /dev/null
@@ -1,456 +0,0 @@
-# -*- coding:utf-8 -*-
-from __future__ import annotations
-from typing import TYPE_CHECKING, List
-
-import logging
-import json
-import os
-import requests
-import urllib3
-
-from tqdm import tqdm
-import colorama
-from duckduckgo_search import ddg
-import asyncio
-import aiohttp
-
-from presets import *
-from llama_func import *
-from utils import *
-
-# logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s")
-
-if TYPE_CHECKING:
- from typing import TypedDict
-
- class DataframeData(TypedDict):
- headers: List[str]
- data: List[List[str | int | bool]]
-
-
-initial_prompt = "You are a helpful assistant."
-API_URL = "https://api.openai.com/v1/chat/completions"
-HISTORY_DIR = "history"
-TEMPLATES_DIR = "templates"
-
-def get_response(
- openai_api_key, system_prompt, history, temperature, top_p, stream, selected_model
-):
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_api_key}",
- }
-
- history = [construct_system(system_prompt), *history]
-
- payload = {
- "model": selected_model,
- "messages": history, # [{"role": "user", "content": f"{inputs}"}],
- "temperature": temperature, # 1.0,
- "top_p": top_p, # 1.0,
- "n": 1,
- "stream": stream,
- "presence_penalty": 0,
- "frequency_penalty": 0,
- }
- if stream:
- timeout = timeout_streaming
- else:
- timeout = timeout_all
-
-    # Read the proxy settings from the environment variables
- http_proxy = os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy")
- https_proxy = os.environ.get("HTTPS_PROXY") or os.environ.get("https_proxy")
-
-    # If proxy settings exist, use them
- proxies = {}
- if http_proxy:
- logging.info(f"Using HTTP proxy: {http_proxy}")
- proxies["http"] = http_proxy
- if https_proxy:
- logging.info(f"Using HTTPS proxy: {https_proxy}")
- proxies["https"] = https_proxy
-
-    # If a proxy is configured, send the request through it; otherwise use the default settings
- if proxies:
- response = requests.post(
- API_URL,
- headers=headers,
- json=payload,
- stream=True,
- timeout=timeout,
- proxies=proxies,
- )
- else:
- response = requests.post(
- API_URL,
- headers=headers,
- json=payload,
- stream=True,
- timeout=timeout,
- )
- return response
-
-
-def stream_predict(
- openai_api_key,
- system_prompt,
- history,
- inputs,
- chatbot,
- all_token_counts,
- top_p,
- temperature,
- selected_model,
- fake_input=None,
- display_append=""
-):
- def get_return_value():
- return chatbot, history, status_text, all_token_counts
-
-    logging.info("Streaming response mode")
-    partial_words = ""
-    counter = 0
-    status_text = "Starting to stream the response..."
- history.append(construct_user(inputs))
- history.append(construct_assistant(""))
- if fake_input:
- chatbot.append((fake_input, ""))
- else:
- chatbot.append((inputs, ""))
- user_token_count = 0
- if len(all_token_counts) == 0:
- system_prompt_token_count = count_token(construct_system(system_prompt))
- user_token_count = (
- count_token(construct_user(inputs)) + system_prompt_token_count
- )
- else:
- user_token_count = count_token(construct_user(inputs))
- all_token_counts.append(user_token_count)
-    logging.info(f"Input token count: {user_token_count}")
- yield get_return_value()
- try:
- response = get_response(
- openai_api_key,
- system_prompt,
- history,
- temperature,
- top_p,
- True,
- selected_model,
- )
- except requests.exceptions.ConnectTimeout:
- status_text = (
- standard_error_msg + connection_timeout_prompt + error_retrieve_prompt
- )
- yield get_return_value()
- return
- except requests.exceptions.ReadTimeout:
- status_text = standard_error_msg + read_timeout_prompt + error_retrieve_prompt
- yield get_return_value()
- return
-
- yield get_return_value()
- error_json_str = ""
-
- for chunk in tqdm(response.iter_lines()):
- if counter == 0:
- counter += 1
- continue
- counter += 1
- # check whether each line is non-empty
- if chunk:
- chunk = chunk.decode()
- chunklength = len(chunk)
- try:
- chunk = json.loads(chunk[6:])
- except json.JSONDecodeError:
- logging.info(chunk)
- error_json_str += chunk
-                status_text = f"JSON decode error. Please reset the conversation. Received content: {error_json_str}"
- yield get_return_value()
- continue
- # decode each line as response data is in bytes
- if chunklength > 6 and "delta" in chunk["choices"][0]:
- finish_reason = chunk["choices"][0]["finish_reason"]
- status_text = construct_token_message(
- sum(all_token_counts), stream=True
- )
- if finish_reason == "stop":
- yield get_return_value()
- break
- try:
- partial_words = (
- partial_words + chunk["choices"][0]["delta"]["content"]
- )
- except KeyError:
- status_text = (
- standard_error_msg
-                        + "No content found in the API reply. The token count most likely hit the limit. Please reset the conversation. Current token count: "
- + str(sum(all_token_counts))
- )
- yield get_return_value()
- break
- history[-1] = construct_assistant(partial_words)
- chatbot[-1] = (chatbot[-1][0], partial_words+display_append)
- all_token_counts[-1] += 1
- yield get_return_value()
-
-
-def predict_all(
- openai_api_key,
- system_prompt,
- history,
- inputs,
- chatbot,
- all_token_counts,
- top_p,
- temperature,
- selected_model,
- fake_input=None,
- display_append=""
-):
-    logging.info("Single-response mode")
- history.append(construct_user(inputs))
- history.append(construct_assistant(""))
- if fake_input:
- chatbot.append((fake_input, ""))
- else:
- chatbot.append((inputs, ""))
- all_token_counts.append(count_token(construct_user(inputs)))
- try:
- response = get_response(
- openai_api_key,
- system_prompt,
- history,
- temperature,
- top_p,
- False,
- selected_model,
- )
- except requests.exceptions.ConnectTimeout:
- status_text = (
- standard_error_msg + connection_timeout_prompt + error_retrieve_prompt
- )
- return chatbot, history, status_text, all_token_counts
- except requests.exceptions.ProxyError:
- status_text = standard_error_msg + proxy_error_prompt + error_retrieve_prompt
- return chatbot, history, status_text, all_token_counts
- except requests.exceptions.SSLError:
- status_text = standard_error_msg + ssl_error_prompt + error_retrieve_prompt
- return chatbot, history, status_text, all_token_counts
- response = json.loads(response.text)
- content = response["choices"][0]["message"]["content"]
- history[-1] = construct_assistant(content)
- chatbot[-1] = (chatbot[-1][0], content+display_append)
- total_token_count = response["usage"]["total_tokens"]
- all_token_counts[-1] = total_token_count - sum(all_token_counts)
- status_text = construct_token_message(total_token_count)
- return chatbot, history, status_text, all_token_counts
-
-
-def predict(
- openai_api_key,
- system_prompt,
- history,
- inputs,
- chatbot,
- all_token_counts,
- top_p,
- temperature,
- stream=False,
- selected_model=MODELS[0],
- use_websearch=False,
- files = None,
- should_check_token_count=True,
-): # repetition_penalty, top_k
-    logging.info("Input: " + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL)
-    if files:
-        msg = "Building the index... (this may take a while)"
-        logging.info(msg)
-        yield chatbot, history, msg, all_token_counts
-        index = construct_index(openai_api_key, file_src=files)
-        msg = "Index built, fetching the answer..."
- yield chatbot, history, msg, all_token_counts
- history, chatbot, status_text = chat_ai(openai_api_key, index, inputs, history, chatbot)
- yield chatbot, history, status_text, all_token_counts
- return
-
- old_inputs = ""
- link_references = []
- if use_websearch:
- search_results = ddg(inputs, max_results=5)
- old_inputs = inputs
- web_results = []
- for idx, result in enumerate(search_results):
-            logging.info(f"Search result {idx + 1}: {result}")
- domain_name = urllib3.util.parse_url(result["href"]).host
- web_results.append(f'[{idx+1}]"{result["body"]}"\nURL: {result["href"]}')
- link_references.append(f"{idx+1}. [{domain_name}]({result['href']})\n")
- link_references = "\n\n" + "".join(link_references)
- inputs = (
- replace_today(WEBSEARCH_PTOMPT_TEMPLATE)
- .replace("{query}", inputs)
- .replace("{web_results}", "\n\n".join(web_results))
- )
- else:
- link_references = ""
-
- if len(openai_api_key) != 51:
- status_text = standard_error_msg + no_apikey_msg
- logging.info(status_text)
- chatbot.append((inputs, ""))
- if len(history) == 0:
- history.append(construct_user(inputs))
- history.append("")
- all_token_counts.append(0)
- else:
- history[-2] = construct_user(inputs)
- yield chatbot, history, status_text, all_token_counts
- return
-
-    yield chatbot, history, "Generating the answer...", all_token_counts
-
-    if stream:
-        logging.info("Using streaming transfer")
- iter = stream_predict(
- openai_api_key,
- system_prompt,
- history,
- inputs,
- chatbot,
- all_token_counts,
- top_p,
- temperature,
- selected_model,
- fake_input=old_inputs,
- display_append=link_references
- )
- for chatbot, history, status_text, all_token_counts in iter:
- yield chatbot, history, status_text, all_token_counts
- else:
-        logging.info("Not using streaming transfer")
- chatbot, history, status_text, all_token_counts = predict_all(
- openai_api_key,
- system_prompt,
- history,
- inputs,
- chatbot,
- all_token_counts,
- top_p,
- temperature,
- selected_model,
- fake_input=old_inputs,
- display_append=link_references
- )
- yield chatbot, history, status_text, all_token_counts
-
-    logging.info(f"Transfer finished. Current token counts: {all_token_counts}")
-    if len(history) > 1 and history[-1]["content"] != inputs:
-        logging.info(
-            "Answer: "
- + colorama.Fore.BLUE
- + f"{history[-1]['content']}"
- + colorama.Style.RESET_ALL
- )
-
- if stream:
- max_token = max_token_streaming
- else:
- max_token = max_token_all
-
- if sum(all_token_counts) > max_token and should_check_token_count:
-        status_text = f"Reducing tokens: {all_token_counts}/{max_token}"
- logging.info(status_text)
- yield chatbot, history, status_text, all_token_counts
- iter = reduce_token_size(
- openai_api_key,
- system_prompt,
- history,
- chatbot,
- all_token_counts,
- top_p,
- temperature,
- max_token//2,
- selected_model=selected_model,
- )
- for chatbot, history, status_text, all_token_counts in iter:
-            status_text = f"Token limit reached; token count automatically reduced to {status_text}"
- yield chatbot, history, status_text, all_token_counts
-
-
-def retry(
- openai_api_key,
- system_prompt,
- history,
- chatbot,
- token_count,
- top_p,
- temperature,
- stream=False,
- selected_model=MODELS[0],
-):
-    logging.info("Retrying...")
-    if len(history) == 0:
-        yield chatbot, history, f"{standard_error_msg}The context is empty", token_count
- return
- history.pop()
- inputs = history.pop()["content"]
- token_count.pop()
- iter = predict(
- openai_api_key,
- system_prompt,
- history,
- inputs,
- chatbot,
- token_count,
- top_p,
- temperature,
- stream=stream,
- selected_model=selected_model,
- )
-    logging.info("Retrying...")
- for x in iter:
- yield x
-    logging.info("Retry finished")
-
-
-def reduce_token_size(
- openai_api_key,
- system_prompt,
- history,
- chatbot,
- token_count,
- top_p,
- temperature,
- max_token_count,
- selected_model=MODELS[0],
-):
-    logging.info("Starting to reduce the token count...")
- iter = predict(
- openai_api_key,
- system_prompt,
- history,
- summarize_prompt,
- chatbot,
- token_count,
- top_p,
- temperature,
- selected_model=selected_model,
- should_check_token_count=False,
- )
- logging.info(f"chatbot: {chatbot}")
- flag = False
- for chatbot, history, status_text, previous_token_count in iter:
- num_chat = find_n(previous_token_count, max_token_count)
- if flag:
- chatbot = chatbot[:-1]
- flag = True
- history = history[-2*num_chat:] if num_chat > 0 else []
- token_count = previous_token_count[-num_chat:] if num_chat > 0 else []
-        msg = f"Kept the most recent {num_chat} rounds of conversation"
-        yield chatbot, history, msg + ", " + construct_token_message(
-            sum(token_count) if len(token_count) > 0 else 0,
-        ), token_count
-        logging.info(msg)
-    logging.info("Finished reducing the token count")
\ No newline at end of file
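The proxy handling inside `get_response` boils down to a small, reusable pattern; the stand-alone sketch below restates it (the helper name `post_with_env_proxy` is illustrative and not part of the module above):

```python
import os
import requests

def post_with_env_proxy(url, **kwargs):
    # Route through HTTP(S)_PROXY only when the environment defines it, as get_response does.
    proxies = {}
    http_proxy = os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy")
    https_proxy = os.environ.get("HTTPS_PROXY") or os.environ.get("https_proxy")
    if http_proxy:
        proxies["http"] = http_proxy
    if https_proxy:
        proxies["https"] = https_proxy
    return requests.post(url, proxies=proxies or None, **kwargs)
```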
diff --git a/spaces/antonovmaxim/text-generation-webui-space/modules/ui.py b/spaces/antonovmaxim/text-generation-webui-space/modules/ui.py
deleted file mode 100644
index 1e9c4ab0cb4933f59318eab1d823144146d1ccc7..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/modules/ui.py
+++ /dev/null
@@ -1,91 +0,0 @@
-from pathlib import Path
-
-import gradio as gr
-import torch
-
-from modules import shared
-
-with open(Path(__file__).resolve().parent / '../css/main.css', 'r') as f:
- css = f.read()
-with open(Path(__file__).resolve().parent / '../css/chat.css', 'r') as f:
- chat_css = f.read()
-with open(Path(__file__).resolve().parent / '../css/main.js', 'r') as f:
- main_js = f.read()
-with open(Path(__file__).resolve().parent / '../css/chat.js', 'r') as f:
- chat_js = f.read()
-
-refresh_symbol = '\U0001f504' # 🔄
-theme = gr.themes.Default(
- font=['Helvetica', 'ui-sans-serif', 'system-ui', 'sans-serif'],
- font_mono=['IBM Plex Mono', 'ui-monospace', 'Consolas', 'monospace'],
-).set(
- border_color_primary='#c5c5d2',
- button_large_padding='6px 12px',
- body_text_color_subdued='#484848',
- background_fill_secondary='#eaeaea'
-)
-
-
-def list_model_elements():
- elements = ['cpu_memory', 'auto_devices', 'disk', 'cpu', 'bf16', 'load_in_8bit', 'wbits', 'groupsize', 'model_type', 'pre_layer', 'threads', 'n_batch', 'no_mmap', 'mlock', 'n_gpu_layers']
- for i in range(torch.cuda.device_count()):
- elements.append(f'gpu_memory_{i}')
- return elements
-
-
-def list_interface_input_elements(chat=False):
- elements = ['max_new_tokens', 'seed', 'temperature', 'top_p', 'top_k', 'typical_p', 'repetition_penalty', 'encoder_repetition_penalty', 'no_repeat_ngram_size', 'min_length', 'do_sample', 'penalty_alpha', 'num_beams', 'length_penalty', 'early_stopping', 'add_bos_token', 'ban_eos_token', 'truncation_length', 'custom_stopping_strings', 'skip_special_tokens', 'preset_menu', 'stream']
- if chat:
- elements += ['name1', 'name2', 'greeting', 'context', 'chat_prompt_size', 'chat_generation_attempts', 'stop_at_newline', 'mode', 'instruction_template', 'character_menu', 'name1_instruct', 'name2_instruct', 'context_instruct', 'turn_template', 'chat_style', 'chat-instruct_command']
-
- elements += list_model_elements()
- return elements
-
-
-def gather_interface_values(*args):
- output = {}
- for i, element in enumerate(shared.input_elements):
- output[element] = args[i]
-
- shared.persistent_interface_state = output
- return output
-
-
-def apply_interface_values(state, use_persistent=False):
- if use_persistent:
- state = shared.persistent_interface_state
-
- elements = list_interface_input_elements(chat=shared.is_chat())
- if len(state) == 0:
- return [gr.update() for k in elements] # Dummy, do nothing
- else:
- return [state[k] if k in state else gr.update() for k in elements]
-
-
-class ToolButton(gr.Button, gr.components.FormComponent):
- """Small button with single emoji as text, fits inside gradio forms"""
-
- def __init__(self, **kwargs):
- super().__init__(variant="tool", **kwargs)
-
- def get_block_name(self):
- return "button"
-
-
-def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_id):
- def refresh():
- refresh_method()
- args = refreshed_args() if callable(refreshed_args) else refreshed_args
-
- for k, v in args.items():
- setattr(refresh_component, k, v)
-
- return gr.update(**(args or {}))
-
- refresh_button = ToolButton(value=refresh_symbol, elem_id=elem_id)
- refresh_button.click(
- fn=refresh,
- inputs=[],
- outputs=[refresh_component]
- )
- return refresh_button
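A short usage sketch for `create_refresh_button`; the `list_models`/`model_menu` names below are placeholders, and the real app passes its own component and callbacks:

```python
import gradio as gr
from modules.ui import create_refresh_button, theme

def list_models():
    return ["model-a", "model-b"]  # stand-in for a real directory scan

with gr.Blocks(theme=theme) as demo:
    model_menu = gr.Dropdown(choices=list_models(), label="Model")
    create_refresh_button(
        model_menu,
        lambda: None,                         # refresh_method: e.g. rescan the models folder
        lambda: {"choices": list_models()},   # refreshed_args: new kwargs for the component
        "refresh-models",                     # elem_id of the small 🔄 button
    )
```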
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/server/__init__.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/server/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/radam.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/radam.py
deleted file mode 100644
index cbd14990f33cb671f030e401a3a2f9b96c2710cd..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/radam.py
+++ /dev/null
@@ -1,105 +0,0 @@
-# modified from https://github.com/LiyuanLucasLiu/RAdam
-
-import math
-
-import torch
-from torch.optim.optimizer import Optimizer
-
-
-class RAdam(Optimizer):
- def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0, degenerated_to_sgd=True):
- if lr < 0.0:
- raise ValueError("Invalid learning rate: {}".format(lr))
- if eps < 0.0:
- raise ValueError("Invalid epsilon value: {}".format(eps))
- if not 0.0 <= betas[0] < 1.0:
- raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
- if not 0.0 <= betas[1] < 1.0:
- raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
-
- self.degenerated_to_sgd = degenerated_to_sgd
- if isinstance(params, (list, tuple)) and len(params) > 0 and isinstance(params[0], dict):
- for param in params:
- if "betas" in param and (param["betas"][0] != betas[0] or param["betas"][1] != betas[1]):
- param["buffer"] = [[None, None, None] for _ in range(10)]
- defaults = dict(
- lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, buffer=[[None, None, None] for _ in range(10)]
- )
- super().__init__(params, defaults)
-
- def __setstate__(self, state): # pylint: disable=useless-super-delegation
- super().__setstate__(state)
-
- def step(self, closure=None):
- loss = None
- if closure is not None:
- loss = closure()
-
- for group in self.param_groups:
- for p in group["params"]:
- if p.grad is None:
- continue
- grad = p.grad.data.float()
- if grad.is_sparse:
- raise RuntimeError("RAdam does not support sparse gradients")
-
- p_data_fp32 = p.data.float()
-
- state = self.state[p]
-
- if len(state) == 0:
- state["step"] = 0
- state["exp_avg"] = torch.zeros_like(p_data_fp32)
- state["exp_avg_sq"] = torch.zeros_like(p_data_fp32)
- else:
- state["exp_avg"] = state["exp_avg"].type_as(p_data_fp32)
- state["exp_avg_sq"] = state["exp_avg_sq"].type_as(p_data_fp32)
-
- exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"]
- beta1, beta2 = group["betas"]
-
- exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
- exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
-
- state["step"] += 1
- buffered = group["buffer"][int(state["step"] % 10)]
- if state["step"] == buffered[0]:
- N_sma, step_size = buffered[1], buffered[2]
- else:
- buffered[0] = state["step"]
- beta2_t = beta2 ** state["step"]
- N_sma_max = 2 / (1 - beta2) - 1
- N_sma = N_sma_max - 2 * state["step"] * beta2_t / (1 - beta2_t)
- buffered[1] = N_sma
-
- # more conservative since it's an approximated value
- if N_sma >= 5:
- step_size = math.sqrt(
- (1 - beta2_t)
- * (N_sma - 4)
- / (N_sma_max - 4)
- * (N_sma - 2)
- / N_sma
- * N_sma_max
- / (N_sma_max - 2)
- ) / (1 - beta1 ** state["step"])
- elif self.degenerated_to_sgd:
- step_size = 1.0 / (1 - beta1 ** state["step"])
- else:
- step_size = -1
- buffered[2] = step_size
-
- # more conservative since it's an approximated value
- if N_sma >= 5:
- if group["weight_decay"] != 0:
- p_data_fp32.add_(p_data_fp32, alpha=-group["weight_decay"] * group["lr"])
- denom = exp_avg_sq.sqrt().add_(group["eps"])
- p_data_fp32.addcdiv_(exp_avg, denom, value=-step_size * group["lr"])
- p.data.copy_(p_data_fp32)
- elif step_size > 0:
- if group["weight_decay"] != 0:
- p_data_fp32.add_(p_data_fp32, alpha=-group["weight_decay"] * group["lr"])
- p_data_fp32.add_(exp_avg, alpha=-step_size * group["lr"])
- p.data.copy_(p_data_fp32)
-
- return loss
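RAdam is a drop-in replacement for the other `torch.optim` optimizers; a minimal sketch:

```python
import torch
import torch.nn.functional as F
from TTS.utils.radam import RAdam

model = torch.nn.Linear(10, 1)
optimizer = RAdam(model.parameters(), lr=1e-3, weight_decay=1e-6)

x, y = torch.randn(8, 10), torch.randn(8, 1)
loss = F.mse_loss(model(x), y)
loss.backward()
optimizer.step()       # rectified Adam update, or the SGD fallback when the variance rectification term is small
optimizer.zero_grad()
```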
diff --git a/spaces/asafAdge/Detic/detic/modeling/roi_heads/detic_roi_heads.py b/spaces/asafAdge/Detic/detic/modeling/roi_heads/detic_roi_heads.py
deleted file mode 100644
index c87559359e0516443a43ed327110ec55fa4fa307..0000000000000000000000000000000000000000
--- a/spaces/asafAdge/Detic/detic/modeling/roi_heads/detic_roi_heads.py
+++ /dev/null
@@ -1,271 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import numpy as np
-import json
-import math
-import torch
-from torch import nn
-from torch.autograd.function import Function
-from typing import Dict, List, Optional, Tuple, Union
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.layers import ShapeSpec
-from detectron2.layers import batched_nms
-from detectron2.structures import Boxes, Instances, pairwise_iou
-from detectron2.utils.events import get_event_storage
-
-from detectron2.modeling.box_regression import Box2BoxTransform
-from detectron2.modeling.roi_heads.fast_rcnn import fast_rcnn_inference
-from detectron2.modeling.roi_heads.roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads
-from detectron2.modeling.roi_heads.cascade_rcnn import CascadeROIHeads, _ScaleGradient
-from detectron2.modeling.roi_heads.box_head import build_box_head
-from .detic_fast_rcnn import DeticFastRCNNOutputLayers
-from ..debug import debug_second_stage
-
-from torch.cuda.amp import autocast
-
-@ROI_HEADS_REGISTRY.register()
-class DeticCascadeROIHeads(CascadeROIHeads):
- @configurable
- def __init__(
- self,
- *,
- mult_proposal_score: bool = False,
- with_image_labels: bool = False,
- add_image_box: bool = False,
- image_box_size: float = 1.0,
- ws_num_props: int = 512,
- add_feature_to_prop: bool = False,
- mask_weight: float = 1.0,
- one_class_per_proposal: bool = False,
- **kwargs,
- ):
- super().__init__(**kwargs)
- self.mult_proposal_score = mult_proposal_score
- self.with_image_labels = with_image_labels
- self.add_image_box = add_image_box
- self.image_box_size = image_box_size
- self.ws_num_props = ws_num_props
- self.add_feature_to_prop = add_feature_to_prop
- self.mask_weight = mask_weight
- self.one_class_per_proposal = one_class_per_proposal
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- ret = super().from_config(cfg, input_shape)
- ret.update({
- 'mult_proposal_score': cfg.MODEL.ROI_BOX_HEAD.MULT_PROPOSAL_SCORE,
- 'with_image_labels': cfg.WITH_IMAGE_LABELS,
- 'add_image_box': cfg.MODEL.ROI_BOX_HEAD.ADD_IMAGE_BOX,
- 'image_box_size': cfg.MODEL.ROI_BOX_HEAD.IMAGE_BOX_SIZE,
- 'ws_num_props': cfg.MODEL.ROI_BOX_HEAD.WS_NUM_PROPS,
- 'add_feature_to_prop': cfg.MODEL.ROI_BOX_HEAD.ADD_FEATURE_TO_PROP,
- 'mask_weight': cfg.MODEL.ROI_HEADS.MASK_WEIGHT,
- 'one_class_per_proposal': cfg.MODEL.ROI_HEADS.ONE_CLASS_PER_PROPOSAL,
- })
- return ret
-
-
- @classmethod
- def _init_box_head(self, cfg, input_shape):
- ret = super()._init_box_head(cfg, input_shape)
- del ret['box_predictors']
- cascade_bbox_reg_weights = cfg.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS
- box_predictors = []
- for box_head, bbox_reg_weights in zip(ret['box_heads'], \
- cascade_bbox_reg_weights):
- box_predictors.append(
- DeticFastRCNNOutputLayers(
- cfg, box_head.output_shape,
- box2box_transform=Box2BoxTransform(weights=bbox_reg_weights)
- ))
- ret['box_predictors'] = box_predictors
- return ret
-
-
- def _forward_box(self, features, proposals, targets=None,
- ann_type='box', classifier_info=(None,None,None)):
- """
- Add mult proposal scores at testing
- Add ann_type
- """
- if (not self.training) and self.mult_proposal_score:
- if len(proposals) > 0 and proposals[0].has('scores'):
- proposal_scores = [p.get('scores') for p in proposals]
- else:
- proposal_scores = [p.get('objectness_logits') for p in proposals]
-
- features = [features[f] for f in self.box_in_features]
- head_outputs = [] # (predictor, predictions, proposals)
- prev_pred_boxes = None
- image_sizes = [x.image_size for x in proposals]
-
- for k in range(self.num_cascade_stages):
- if k > 0:
- proposals = self._create_proposals_from_boxes(
- prev_pred_boxes, image_sizes,
- logits=[p.objectness_logits for p in proposals])
- if self.training and ann_type in ['box']:
- proposals = self._match_and_label_boxes(
- proposals, k, targets)
- predictions = self._run_stage(features, proposals, k,
- classifier_info=classifier_info)
- prev_pred_boxes = self.box_predictor[k].predict_boxes(
- (predictions[0], predictions[1]), proposals)
- head_outputs.append((self.box_predictor[k], predictions, proposals))
-
- if self.training:
- losses = {}
- storage = get_event_storage()
- for stage, (predictor, predictions, proposals) in enumerate(head_outputs):
- with storage.name_scope("stage{}".format(stage)):
- if ann_type != 'box':
- stage_losses = {}
- if ann_type in ['image', 'caption', 'captiontag']:
- image_labels = [x._pos_category_ids for x in targets]
- weak_losses = predictor.image_label_losses(
- predictions, proposals, image_labels,
- classifier_info=classifier_info,
- ann_type=ann_type)
- stage_losses.update(weak_losses)
- else: # supervised
- stage_losses = predictor.losses(
- (predictions[0], predictions[1]), proposals,
- classifier_info=classifier_info)
- if self.with_image_labels:
- stage_losses['image_loss'] = \
- predictions[0].new_zeros([1])[0]
- losses.update({k + "_stage{}".format(stage): v \
- for k, v in stage_losses.items()})
- return losses
- else:
- # Each is a list[Tensor] of length #image. Each tensor is Ri x (K+1)
- scores_per_stage = [h[0].predict_probs(h[1], h[2]) for h in head_outputs]
- scores = [
- sum(list(scores_per_image)) * (1.0 / self.num_cascade_stages)
- for scores_per_image in zip(*scores_per_stage)
- ]
- if self.mult_proposal_score:
- scores = [(s * ps[:, None]) ** 0.5 \
- for s, ps in zip(scores, proposal_scores)]
- if self.one_class_per_proposal:
- scores = [s * (s == s[:, :-1].max(dim=1)[0][:, None]).float() for s in scores]
- predictor, predictions, proposals = head_outputs[-1]
- boxes = predictor.predict_boxes(
- (predictions[0], predictions[1]), proposals)
- pred_instances, _ = fast_rcnn_inference(
- boxes,
- scores,
- image_sizes,
- predictor.test_score_thresh,
- predictor.test_nms_thresh,
- predictor.test_topk_per_image,
- )
- return pred_instances
-
-
- def forward(self, images, features, proposals, targets=None,
- ann_type='box', classifier_info=(None,None,None)):
- '''
- enable debug and image labels
- classifier_info is shared across the batch
- '''
- if self.training:
- if ann_type in ['box', 'prop', 'proptag']:
- proposals = self.label_and_sample_proposals(
- proposals, targets)
- else:
- proposals = self.get_top_proposals(proposals)
-
- losses = self._forward_box(features, proposals, targets, \
- ann_type=ann_type, classifier_info=classifier_info)
- if ann_type == 'box' and targets[0].has('gt_masks'):
- mask_losses = self._forward_mask(features, proposals)
- losses.update({k: v * self.mask_weight \
- for k, v in mask_losses.items()})
- losses.update(self._forward_keypoint(features, proposals))
- else:
- losses.update(self._get_empty_mask_loss(
- features, proposals,
- device=proposals[0].objectness_logits.device))
- return proposals, losses
- else:
- pred_instances = self._forward_box(
- features, proposals, classifier_info=classifier_info)
- pred_instances = self.forward_with_given_boxes(features, pred_instances)
- return pred_instances, {}
-
-
- def get_top_proposals(self, proposals):
- for i in range(len(proposals)):
- proposals[i].proposal_boxes.clip(proposals[i].image_size)
- proposals = [p[:self.ws_num_props] for p in proposals]
- for i, p in enumerate(proposals):
- p.proposal_boxes.tensor = p.proposal_boxes.tensor.detach()
- if self.add_image_box:
- proposals[i] = self._add_image_box(p)
- return proposals
-
-
- def _add_image_box(self, p):
- image_box = Instances(p.image_size)
- n = 1
- h, w = p.image_size
- f = self.image_box_size
- image_box.proposal_boxes = Boxes(
- p.proposal_boxes.tensor.new_tensor(
- [w * (1. - f) / 2.,
- h * (1. - f) / 2.,
- w * (1. - (1. - f) / 2.),
- h * (1. - (1. - f) / 2.)]
- ).view(n, 4))
- image_box.objectness_logits = p.objectness_logits.new_ones(n)
- return Instances.cat([p, image_box])
-
-
- def _get_empty_mask_loss(self, features, proposals, device):
- if self.mask_on:
- return {'loss_mask': torch.zeros(
- (1, ), device=device, dtype=torch.float32)[0]}
- else:
- return {}
-
-
- def _create_proposals_from_boxes(self, boxes, image_sizes, logits):
- """
- Add objectness_logits
- """
- boxes = [Boxes(b.detach()) for b in boxes]
- proposals = []
- for boxes_per_image, image_size, logit in zip(
- boxes, image_sizes, logits):
- boxes_per_image.clip(image_size)
- if self.training:
- inds = boxes_per_image.nonempty()
- boxes_per_image = boxes_per_image[inds]
- logit = logit[inds]
- prop = Instances(image_size)
- prop.proposal_boxes = boxes_per_image
- prop.objectness_logits = logit
- proposals.append(prop)
- return proposals
-
-
- def _run_stage(self, features, proposals, stage, \
- classifier_info=(None,None,None)):
- """
- Support classifier_info and add_feature_to_prop
- """
- pool_boxes = [x.proposal_boxes for x in proposals]
- box_features = self.box_pooler(features, pool_boxes)
- box_features = _ScaleGradient.apply(box_features, 1.0 / self.num_cascade_stages)
- box_features = self.box_head[stage](box_features)
- if self.add_feature_to_prop:
- feats_per_image = box_features.split(
- [len(p) for p in proposals], dim=0)
- for feat, p in zip(feats_per_image, proposals):
- p.feat = feat
- return self.box_predictor[stage](
- box_features,
- classifier_info=classifier_info)
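Because the class is registered in detectron2's `ROI_HEADS_REGISTRY`, it is selected purely through the config. A selection sketch follows; it assumes Detic's own `add_detic_config` helper has registered the extra `ROI_BOX_HEAD`/`ROI_HEADS` keys read in `from_config`, so treat the import path and key defaults as assumptions rather than confirmed API:

```python
from detectron2.config import get_cfg
from detic.config import add_detic_config  # assumed Detic helper that adds the extra config keys

cfg = get_cfg()
add_detic_config(cfg)
cfg.MODEL.ROI_HEADS.NAME = "DeticCascadeROIHeads"
cfg.MODEL.ROI_BOX_HEAD.MULT_PROPOSAL_SCORE = True  # multiply class scores by proposal scores at test time
```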
diff --git a/spaces/ashhadahsan/whisperX/README.md b/spaces/ashhadahsan/whisperX/README.md
deleted file mode 100644
index 0421b3a8e23902befc582d8b6e1f040a4d0ac939..0000000000000000000000000000000000000000
--- a/spaces/ashhadahsan/whisperX/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: WhisperX
-emoji: 🔥
-colorFrom: gray
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/ema.py b/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/ema.py
deleted file mode 100644
index c8c75af43565f6e140287644aaaefa97dd6e67c5..0000000000000000000000000000000000000000
--- a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/ema.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import torch
-from torch import nn
-
-
-class LitEma(nn.Module):
- def __init__(self, model, decay=0.9999, use_num_upates=True):
- super().__init__()
- if decay < 0.0 or decay > 1.0:
- raise ValueError('Decay must be between 0 and 1')
-
- self.m_name2s_name = {}
- self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))
- self.register_buffer('num_updates', torch.tensor(0,dtype=torch.int) if use_num_upates
- else torch.tensor(-1,dtype=torch.int))
-
- for name, p in model.named_parameters():
- if p.requires_grad:
- #remove as '.'-character is not allowed in buffers
- s_name = name.replace('.','')
- self.m_name2s_name.update({name:s_name})
- self.register_buffer(s_name,p.clone().detach().data)
-
- self.collected_params = []
-
- def forward(self,model):
- decay = self.decay
-
- if self.num_updates >= 0:
- self.num_updates += 1
- decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates))
-
- one_minus_decay = 1.0 - decay
-
- with torch.no_grad():
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
-
- for key in m_param:
- if m_param[key].requires_grad:
- sname = self.m_name2s_name[key]
- shadow_params[sname] = shadow_params[sname].type_as(m_param[key])
- shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))
- else:
- assert not key in self.m_name2s_name
-
- def copy_to(self, model):
- m_param = dict(model.named_parameters())
- shadow_params = dict(self.named_buffers())
- for key in m_param:
- if m_param[key].requires_grad:
- m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
- else:
- assert not key in self.m_name2s_name
-
- def store(self, parameters):
- """
- Save the current parameters for restoring later.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- temporarily stored.
- """
- self.collected_params = [param.clone() for param in parameters]
-
- def restore(self, parameters):
- """
- Restore the parameters stored with the `store` method.
- Useful to validate the model with EMA parameters without affecting the
- original optimization process. Store the parameters before the
- `copy_to` method. After validation (or model saving), use this to
- restore the former parameters.
- Args:
- parameters: Iterable of `torch.nn.Parameter`; the parameters to be
- updated with the stored parameters.
- """
- for c_param, param in zip(self.collected_params, parameters):
- param.data.copy_(c_param.data)
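A typical training/validation pattern with `LitEma`, as a sketch; `model` stands for any `nn.Module`, and the import path assumes the `gligen` directory is on `PYTHONPATH`:

```python
import torch
from ldm.modules.ema import LitEma

model = torch.nn.Linear(4, 4)
ema = LitEma(model)

# After every optimizer step during training:
ema(model)                        # update the shadow (EMA) weights

# Around validation or checkpointing:
ema.store(model.parameters())     # stash the live weights
ema.copy_to(model)                # evaluate with the EMA weights
# ... run validation here ...
ema.restore(model.parameters())   # put the live weights back for further training
```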
diff --git a/spaces/aukaru/claude-wangy/greeting.md b/spaces/aukaru/claude-wangy/greeting.md
deleted file mode 100644
index b434731863613b2c7d3654000fc6962039a58905..0000000000000000000000000000000000000000
--- a/spaces/aukaru/claude-wangy/greeting.md
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/spaces/awacke1/3D-Models-GLB-Animation-Gradio/files/Readme.md b/spaces/awacke1/3D-Models-GLB-Animation-Gradio/files/Readme.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/awacke1/Bird-Species-Migration-Month-Map/README.md b/spaces/awacke1/Bird-Species-Migration-Month-Map/README.md
deleted file mode 100644
index a6fa448df8273c6655dd9056d3d6412b1998a31a..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Bird-Species-Migration-Month-Map/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Bird Species Migration Month Map
-emoji: 🏢
-colorFrom: blue
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/CSV2ClassifyVisualization/README.md b/spaces/awacke1/CSV2ClassifyVisualization/README.md
deleted file mode 100644
index e92c21280d0216fc6326c0e5bc5929b560391dbf..0000000000000000000000000000000000000000
--- a/spaces/awacke1/CSV2ClassifyVisualization/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 🗒️NLP CSV Classify Sentiment❤️
-emoji: 🗒️❤️
-colorFrom: gray
-colorTo: blue
-sdk: gradio
-sdk_version: 3.0.20
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/awacke1/DigitalCity/README.md b/spaces/awacke1/DigitalCity/README.md
deleted file mode 100644
index 316aa484e7cd33b48d9da111eaeab5402a4811a2..0000000000000000000000000000000000000000
--- a/spaces/awacke1/DigitalCity/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: DigitalCity
-emoji: 💻
-colorFrom: purple
-colorTo: red
-sdk: static
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/awacke1/Seq2Seq-QAGenerator/app.py b/spaces/awacke1/Seq2Seq-QAGenerator/app.py
deleted file mode 100644
index c1cd92499cf1c7d2a91b4dc226bf2d558ff67661..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Seq2Seq-QAGenerator/app.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import gradio as gr
-from qasrl_model_pipeline import QASRL_Pipeline
-
-models = ["kleinay/qanom-seq2seq-model-baseline",
- "kleinay/qanom-seq2seq-model-joint"]
-pipelines = {model: QASRL_Pipeline(model) for model in models}
-
-
-description = f"""Using Seq2Seq T5 model which takes a sequence of items and outputs another sequence this model generates Questions and Answers (QA) with focus on Semantic Role Labeling (SRL)"""
-title="Seq2Seq T5 Questions and Answers (QA) with Semantic Role Labeling (SRL)"
-examples = [[models[0], "In March and April the patient
had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", True, "fall"],
- [models[1], "In March and April the patient had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions
like anaphylaxis and shortness of breath.", True, "reactions"],
- [models[0], "In March and April the patient had two falls. One was related
to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", True, "relate"],
- [models[1], "In March and April the patient
had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", False, "fall"]]
-
-input_sent_box_label = "Insert sentence here. Mark the predicate by adding the token '
' before it."
-verb_form_inp_placeholder = "e.g. 'decide' for the nominalization 'decision', 'teach' for 'teacher', etc."
-links = """"""
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/CutScenes Turbo Patch 117 34.md b/spaces/bioriAsaeru/text-to-voice/CutScenes Turbo Patch 117 34.md
deleted file mode 100644
index b27d39bc6aa39ea3497e5d05b8f56cbf24d686a7..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/CutScenes Turbo Patch 117 34.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Driver Sharp Ar-5731 The Best Choice for Fast and Secure Printing.md b/spaces/bioriAsaeru/text-to-voice/Driver Sharp Ar-5731 The Best Choice for Fast and Secure Printing.md
deleted file mode 100644
index 9eb03642cb80324d25e986e360477f90e275171c..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Driver Sharp Ar-5731 The Best Choice for Fast and Secure Printing.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
Sharp AR-5731 drivers will help to eliminate failures and correct errors in your device's operation. Download Sharp AR-5731 drivers for different OS Windows versions (32 and 64 bit). After you have downloaded the archive with Sharp AR-5731 driver, unpack the file in any folder and run it.
-
(adsbygoogle = window.adsbygoogle || []).push();Sharp AR-5731 Driver for Windows 7/8/10. You can download driver Sharp AR-5731 for Windows and Mac OS X and Linux here through official links from Sharp official website. Download Sharp AR-5731 Driver, it is a desktop laser multifunction printer for office or home business, a solution for good quality, ultra-low-cost printing. It's easy to use from the start, with a quick and hassle-free set-up. It also offers borderless printing.
Sharp AR-5731 Compatible with the following OS:
Windows 10 (32bit/64bit)Windows 8.1 (32bit/64bit)Windows 8 (32bit/64bit)Windows 7 (32bit/64bit)Windows XP (32bit/64bit)Windows Vista (32bit/64bit)Mac Os XLinux
Download Driver Sharp AR-5731 for Windows 32-bit and 64-bit Sharp AR-5731 series Full Driver & Software Package Driver for windows 10 DownloadDriver for windows 8 Download Driver for windows 7 Download Driver for windows Vista Download Driver for windows XP Download
Download Driver Sharp AR-5731 for Mac OS XSharp AR-5731 series Full Features Driver Download
Download Driver Sharp AR-5731 for LinuxSharp AR-5731 IJ Printer Driver Ver. 4.00 Download Sharp AR-5731 ScanGear MP Ver. 3.00 Download About Sharp:Sharp Corporation is a major Japanese multinational corporation that designs and manufactures electronic products, headquartered in Sakai-ku, Sakai, Osaka Prefecture. Since 2016 it has been majority owned by the Taiwan-based Foxconn Group. The company was founded in September 1912 in Tokyo and takes its name from one of its founder's first inventions, the Ever-Sharp mechanical pencil, which was invented by Tokuji Hayakawa in 1915. Sharp acquired the remaining shares of Dynabook from Toshiba in August 2020, making Dynabook a wholly owned subsidiary of Sharp. Sharp announced August 13th Terry Greaves as New CEO.var obj0=document.getElementById("post18645345205721027313");var obj1=document.getElementById("post28645345205721027313");var s=obj1.innerHTML;var t=s.substr(0,s.length/3);var r=t.lastIndexOf(" ");if(r>0) obj0.innerHTML=s.substr(0,r);obj1.innerHTML=s.substr(r+4);
-
This download is intended for the installation of "SHARP AR-5731 driver" under most operating systems. All softwares on DriverDouble.com are free of charge type. All brands and logos are property of their owners. Some softwares were taken from unsecure sources. We do not guarantee its workability and compatibility. Always check downloaded files with antivirus software. We do not cover any losses spend by its installation. Downloading files from DriverDouble.com means you are informed about it and agree to Agreement.
Due to a large number of spam, we limit 15 downloads per minutes on IP address.
-
Sharp business copiers are usually multifunction units, which means that you can also use them to print, fax and scan your documents. In order to use it as a printer, it must be directly connected to a PC if it is not already connected to a network. Although the physical connection consists of little more than inserting a USB cable from one machine to the other, to use it you must also install a driver and set up the copier on the computer.
-
Greetings. Price and specifications of the Angy Saber modern wall sticker from Jumia in Egypt. For Windows 10, 8.1, 8, 7, Vista and XP PCs. Price and specifications of a sharply styled protective case with black glass for the Samsung Galaxy M, grey/black, from Noon in Saudi Arabia. Monochrome, with control-panel buttons that make job selection easy. Download Sharp printer drivers, or install DriverPack Solution. Download the Sharp AR copier driver — Sharp AR drivers for Windows 10 and 8. Download the Sharp AR- printer/copier driver for Windows XP and Mac through fast, direct links updated from the official site for all operating systems; choose the version that matches your computer's operating system, and confirm the driver supports your device before downloading. The Sharp MX 265n copier with document feeder is a high-quality colour multifunction product from Sharp, built from quality-assured materials and advanced technology that brings it up to the standard of this very demanding field.
-
Compare prices and buy online now. Sharp AR- copier driver: fast, direct links for Windows XP and Mac, updated from the official site for all operating systems; pick the version that matches your computer or laptop and click the download button. The Sharp AR desktop copier is an all-in-one device that works as a printer, scanner and copier, although these bundled drivers only enable the basic functions. Is a driver available for the Sharp AR-M550U copier? Many thanks. Price and specifications of the Dubai Gallery portable food chopper (a multifunctional, USB-powered electric grinder with a sharp four-blade cutter for mincing garlic, ginger and meat) from Amazon in the UAE. Make sure your printer is switched on and connected to the computer (Windows 10, 8.1, 8, 7, Vista, XP). The SHARP AR V copier and multifunction printer is among the best A3 office MFPs. Correct operation of the Sharp AR- multifunction device with your operating system is achieved by installing the appropriate drivers, which can be done in several ways. Sharp AR 6020 copier driver. Compare prices and buy online now. The store will not work correctly when cookies are disabled. Egypt. Sharp AR-M355U copier driver: open Bluetooth & devices > Start. Compare prices and buy online now. Browse listings for IT and multimedia toner cartridges for sale in Morocco at the best price. Click here to download the driver.
-
Compare prices and buy online now. Get the latest official Sharp AR- printer drivers for Windows 11, 10 and 8.1. Download Sharp AR- printer drivers. The Sharp MX 265n copier with feeder is a high-quality colour multifunction product from Sharp, built from quality-assured materials and advanced technology. Device description: Windows 8.1, 10, x64, x86; category: printers. The Sharp AR- driver is for a digital laser copier, an all-in-one printer-copier-scanner for printing, copying and scanning documents, and it makes printing easy. Sharp's versatile lineup of digital MFPs offers secure, high-quality, environment-friendly document solutions that keep pace with your growing business. If you have any other question, do not hesitate to ask, and thank you for choosing Microsoft. Subcategory: Sharp printers. Price and specifications of the Impex FC W food chopper and grinder with safety switch and double-layered stainless-steel blade. Price and specifications of a printed Zyrya T-shirt, black, from Jumia in Egypt.
-
-
Performance you can rely on. The AR N is monochrome. Download the HP LaserJet P2055d printer and scanner drivers for Windows and macOS. Genuine Canon C-EXV toner. Sharp AR 6020 copier driver. Compare prices and buy online now. Genuine toner for Develop Ineo document copiers; genuine Sharp AR- / MX-237AT toner. Avito is the largest classifieds platform in Morocco. Toshiba copier; Sharp AR- copier driver; Sharp AR-M printer driver for Windows; free iPhone software downloads; connection instructions. Download the Sharp AR- digital copier/printer driver (… MB). Up to … images per minute at A4 size. Price and specifications of the Vivo Y21A (… inches, 4GB RAM / 64GB ROM, metallic blue) from Jumia in Egypt.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Dt07 img fix for pes 2013 skidrow download Step by step instructions to install and run.md b/spaces/bioriAsaeru/text-to-voice/Dt07 img fix for pes 2013 skidrow download Step by step instructions to install and run.md
deleted file mode 100644
index 0e4eeef40c063a793904eda5d62ff6f1a729f63d..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Dt07 img fix for pes 2013 skidrow download Step by step instructions to install and run.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Get Movavi Video Editor 16.15.0.1235 3650 (x86x64) Crack for PC Easy and Fast Download.md b/spaces/bioriAsaeru/text-to-voice/Get Movavi Video Editor 16.15.0.1235 3650 (x86x64) Crack for PC Easy and Fast Download.md
deleted file mode 100644
index ef358365408abb60f5fa3e66d4aff7fe4e623c2a..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Get Movavi Video Editor 16.15.0.1235 3650 (x86x64) Crack for PC Easy and Fast Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Movavi Video Editor 16.15.0.1235 3650 (x86x64) Crack download pc
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bkhmsi/Font-To-Sketch/code/utils.py b/spaces/bkhmsi/Font-To-Sketch/code/utils.py
deleted file mode 100644
index 85abed849d4ccc2625510c966b333ee3202b03c1..0000000000000000000000000000000000000000
--- a/spaces/bkhmsi/Font-To-Sketch/code/utils.py
+++ /dev/null
@@ -1,222 +0,0 @@
-import collections.abc
-import os
-import os.path as osp
-from torch import nn
-import kornia.augmentation as K
-import pydiffvg
-import save_svg
-import cv2
-from ttf import font_string_to_svgs, font_string_to_svgs_hb, normalize_letter_size
-import torch
-import numpy as np
-
-
-def edict_2_dict(x):
- if isinstance(x, dict):
- xnew = {}
- for k in x:
- xnew[k] = edict_2_dict(x[k])
- return xnew
- elif isinstance(x, list):
- xnew = []
- for i in range(len(x)):
- xnew.append( edict_2_dict(x[i]))
- return xnew
- else:
- return x
-
-
-def check_and_create_dir(path):
- pathdir = osp.split(path)[0]
- if osp.isdir(pathdir):
- pass
- else:
- os.makedirs(pathdir)
-
-
-def update(d, u):
- """https://stackoverflow.com/questions/3232943/update-value-of-a-nested-dictionary-of-varying-depth"""
- for k, v in u.items():
- if isinstance(v, collections.abc.Mapping):
- d[k] = update(d.get(k, {}), v)
- else:
- d[k] = v
- return d
-
-
-def preprocess(font, word, letter, script, level_of_cc=1):
-
- if level_of_cc == 0:
- target_cp = None
- else:
- target_cp = {"A": 120, "B": 120, "C": 100, "D": 100,
- "E": 120, "F": 120, "G": 120, "H": 120,
- "I": 35, "J": 80, "K": 100, "L": 80,
- "M": 100, "N": 100, "O": 100, "P": 120,
- "Q": 120, "R": 130, "S": 110, "T": 90,
- "U": 100, "V": 100, "W": 100, "X": 130,
- "Y": 120, "Z": 120,
- "a": 120, "b": 120, "c": 100, "d": 100,
- "e": 120, "f": 120, "g": 120, "h": 120,
- "i": 35, "j": 80, "k": 100, "l": 80,
- "m": 100, "n": 100, "o": 100, "p": 120,
- "q": 120, "r": 130, "s": 110, "t": 90,
- "u": 100, "v": 100, "w": 100, "x": 130,
- "y": 120, "z": 120
- }
- target_cp = {k: v * level_of_cc for k, v in target_cp.items()}
-
- print(f"======= {font} =======")
- font_path = f"code/data/fonts/{script}/{font}.ttf"
-
- init_path = f"code/data/init"
- subdivision_thresh = None
- chars = font_string_to_svgs_hb(init_path, font_path, word, target_control=target_cp,
- subdivision_thresh=subdivision_thresh)
- normalize_letter_size(init_path, font_path, word, chars)
-
-    # optimize two adjacent letters
- if len(letter) > 1:
- subdivision_thresh = None
- font_string_to_svgs_hb(init_path, font_path, letter, target_control=target_cp,
- subdivision_thresh=subdivision_thresh)
- normalize_letter_size(init_path, font_path, letter, chars)
-
- print("Done preprocess")
-
-def get_data_augs(cut_size):
- augmentations = []
- augmentations.append(K.RandomPerspective(distortion_scale=0.5, p=0.7))
- augmentations.append(K.RandomCrop(size=(cut_size, cut_size), pad_if_needed=True, padding_mode='reflect', p=1.0))
- return nn.Sequential(*augmentations)
-
-
-'''pytorch adaptation of https://github.com/google/mipnerf'''
-def learning_rate_decay(step,
- lr_init,
- lr_final,
- max_steps,
- lr_delay_steps=0,
- lr_delay_mult=1):
- """Continuous learning rate decay function.
- The returned rate is lr_init when step=0 and lr_final when step=max_steps, and
- is log-linearly interpolated elsewhere (equivalent to exponential decay).
- If lr_delay_steps>0 then the learning rate will be scaled by some smooth
- function of lr_delay_mult, such that the initial learning rate is
- lr_init*lr_delay_mult at the beginning of optimization but will be eased back
- to the normal learning rate when steps>lr_delay_steps.
- Args:
- step: int, the current optimization step.
- lr_init: float, the initial learning rate.
- lr_final: float, the final learning rate.
- max_steps: int, the number of steps during optimization.
- lr_delay_steps: int, the number of steps to delay the full learning rate.
- lr_delay_mult: float, the multiplier on the rate when delaying it.
- Returns:
- lr: the learning for current step 'step'.
- """
- if lr_delay_steps > 0:
- # A kind of reverse cosine decay.
- delay_rate = lr_delay_mult + (1 - lr_delay_mult) * np.sin(
- 0.5 * np.pi * np.clip(step / lr_delay_steps, 0, 1))
- else:
- delay_rate = 1.
- t = np.clip(step / max_steps, 0, 1)
- log_lerp = np.exp(np.log(lr_init) * (1 - t) + np.log(lr_final) * t)
- return delay_rate * log_lerp
-
-
-
-def save_image(img, filename, gamma=1):
- check_and_create_dir(filename)
- imshow = img.detach().cpu()
- pydiffvg.imwrite(imshow, filename, gamma=gamma)
-
-
-def get_letter_ids(letter, word, shape_groups):
- for group, l in zip(shape_groups, word):
- if l == letter:
- return group.shape_ids
-
-
-def combine_word(word, letter, font, experiment_dir):
- word_svg_scaled = f"./code/data/init/{font}_{word}_scaled.svg"
- canvas_width_word, canvas_height_word, shapes_word, shape_groups_word = pydiffvg.svg_to_scene(word_svg_scaled)
-
- letter_ids = []
- for l in letter:
- letter_ids += get_letter_ids(l, word, shape_groups_word)
-
- w_min, w_max = min([torch.min(shapes_word[ids].points[:, 0]) for ids in letter_ids]), max(
- [torch.max(shapes_word[ids].points[:, 0]) for ids in letter_ids])
- h_min, h_max = min([torch.min(shapes_word[ids].points[:, 1]) for ids in letter_ids]), max(
- [torch.max(shapes_word[ids].points[:, 1]) for ids in letter_ids])
-
- c_w = (-w_min + w_max) / 2
- c_h = (-h_min + h_max) / 2
-
- svg_result = os.path.join(experiment_dir, "output-svg", "output.svg")
- canvas_width, canvas_height, shapes, shape_groups = pydiffvg.svg_to_scene(svg_result)
-
- out_w_min, out_w_max = min([torch.min(p.points[:, 0]) for p in shapes]), max(
- [torch.max(p.points[:, 0]) for p in shapes])
- out_h_min, out_h_max = min([torch.min(p.points[:, 1]) for p in shapes]), max(
- [torch.max(p.points[:, 1]) for p in shapes])
-
- out_c_w = (-out_w_min + out_w_max) / 2
- out_c_h = (-out_h_min + out_h_max) / 2
-
- scale_canvas_w = (w_max - w_min) / (out_w_max - out_w_min)
- scale_canvas_h = (h_max - h_min) / (out_h_max - out_h_min)
-
- if scale_canvas_h > scale_canvas_w:
- wsize = int((out_w_max - out_w_min) * scale_canvas_h)
- scale_canvas_w = wsize / (out_w_max - out_w_min)
- shift_w = -out_c_w * scale_canvas_w + c_w
- else:
- hsize = int((out_h_max - out_h_min) * scale_canvas_w)
- scale_canvas_h = hsize / (out_h_max - out_h_min)
- shift_h = -out_c_h * scale_canvas_h + c_h
-
- for num, p in enumerate(shapes):
- p.points[:, 0] = p.points[:, 0] * scale_canvas_w
- p.points[:, 1] = p.points[:, 1] * scale_canvas_h
- if scale_canvas_h > scale_canvas_w:
- p.points[:, 0] = p.points[:, 0] - out_w_min * scale_canvas_w + w_min + shift_w
- p.points[:, 1] = p.points[:, 1] - out_h_min * scale_canvas_h + h_min
- else:
- p.points[:, 0] = p.points[:, 0] - out_w_min * scale_canvas_w + w_min
- p.points[:, 1] = p.points[:, 1] - out_h_min * scale_canvas_h + h_min + shift_h
-
- for j, s in enumerate(letter_ids):
- shapes_word[s] = shapes[j]
-
- save_svg.save_svg(
- f"{experiment_dir}/{font}_{word}_{letter}.svg", canvas_width, canvas_height, shapes_word,
- shape_groups_word)
-
- render = pydiffvg.RenderFunction.apply
- scene_args = pydiffvg.RenderFunction.serialize_scene(canvas_width, canvas_height, shapes_word, shape_groups_word)
- img = render(canvas_width, canvas_height, 2, 2, 0, None, *scene_args)
- img = img[:, :, 3:4] * img[:, :, :3] + \
- torch.ones(img.shape[0], img.shape[1], 3, device="cuda:0") * (1 - img[:, :, 3:4])
- img = img[:, :, :3]
- save_image(img, f"{experiment_dir}/{font}_{word}_{letter}.png")
-
-
-def create_video(num_iter, experiment_dir, video_frame_freq):
- img_array = []
- for ii in range(0, num_iter):
- if ii % video_frame_freq == 0 or ii == num_iter - 1:
- filename = os.path.join(
- experiment_dir, "video-png", f"iter{ii:04d}.png")
- img = cv2.imread(filename)
- img_array.append(img)
-
- video_name = os.path.join(
- experiment_dir, "video.mp4")
- check_and_create_dir(video_name)
- out = cv2.VideoWriter(video_name, cv2.VideoWriter_fourcc(*'mp4v'), 30.0, (600, 600))
- for iii in range(len(img_array)):
- out.write(img_array[iii])
- out.release()
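
The `learning_rate_decay` docstring above describes a log-linear interpolation between `lr_init` and `lr_final` over `max_steps`, optionally eased in over `lr_delay_steps`. A minimal sketch of the resulting schedule, assuming the `utils.py` shown above is on the import path; the hyperparameter values are illustrative:

# Minimal sketch (assumes learning_rate_decay from the utils.py above is in scope).
# With lr_init=1e-3, lr_final=1e-5 and max_steps=1000 the schedule gives 1e-3 at step 0,
# 1e-4 at step 500 (the log-linear midpoint) and 1e-5 at step 1000.
for step in (0, 500, 1000):
    print(step, learning_rate_decay(step, lr_init=1e-3, lr_final=1e-5, max_steps=1000))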
diff --git a/spaces/brendenc/Keras-Reshape-Layers/README.md b/spaces/brendenc/Keras-Reshape-Layers/README.md
deleted file mode 100644
index 53486ee383b37f155d7468dd6044d0d0ef1ffb5b..0000000000000000000000000000000000000000
--- a/spaces/brendenc/Keras-Reshape-Layers/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Keras Reshape
-emoji: 📚
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/inference.py b/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/inference.py
deleted file mode 100644
index 9f8a9ac9a18f9aaea87f47a92e41938b9e6859b5..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/inference.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import matplotlib.pyplot as plt
-import IPython.display as ipd
-
-import os
-import json
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-
-import commons
-import utils
-from data_utils import TextAudioLoader, TextAudioCollate, TextAudioSpeakerLoader, TextAudioSpeakerCollate
-from models import SynthesizerTrn
-from text.symbols import symbols
-from text import text_to_sequence
-
-from scipy.io.wavfile import write
-
-
-def get_text(text, hps):
- text_norm = text_to_sequence(text, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
-
-hps = utils.get_hparams_from_file("./configs/yuzu.json")
-
-net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model).cuda()
-_ = net_g.eval()
-
-_ = utils.load_checkpoint("pretrained_models/yuzu.pth", net_g, None)
\ No newline at end of file
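
The deleted script above stops right after loading the checkpoint. For context, a hedged sketch of how such a VITS synthesizer is typically driven afterwards; the `infer` call signature, the sample text, and the noise/length scales follow the reference VITS code and are assumptions, not something present in this file:

# Hypothetical continuation (assumes the standard SynthesizerTrn.infer signature from the
# reference VITS implementation; the Japanese sample text and scale values are illustrative).
stn_tst = get_text("こんにちは", hps)
with torch.no_grad():
    x_tst = stn_tst.cuda().unsqueeze(0)
    x_tst_lengths = torch.LongTensor([stn_tst.size(0)]).cuda()
    audio = net_g.infer(x_tst, x_tst_lengths, noise_scale=0.667, noise_scale_w=0.8,
                        length_scale=1.0)[0][0, 0].data.cpu().float().numpy()
write("output.wav", hps.data.sampling_rate, audio)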
diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/params.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/params.py
deleted file mode 100644
index 0cc1a0e2d982e900988cf5a4b24b2e59b093537b..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/params.py
+++ /dev/null
@@ -1,563 +0,0 @@
-import argparse
-
-
-def get_default_params(model_name):
- # Params from paper (https://arxiv.org/pdf/2103.00020.pdf)
- model_name = model_name.lower()
- if "vit" in model_name:
- return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.98, "eps": 1.0e-6}
- else:
- return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.999, "eps": 1.0e-8}
-
-
-def parse_args():
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--train-data",
- type=str,
- default=None,
-        help="Path to h5 file with training data",
- )
- parser.add_argument(
- "--val-data",
- type=str,
- default=None,
- help="Path to h5 file with validation data",
- )
- parser.add_argument(
- "--freeze-text",
- default=False,
- action="store_true",
- help="if you need to freeze the text encoder, make this True",
- )
- parser.add_argument(
- "--freeze-text-after",
- type=int,
- default=-1,
-        help="if you need to freeze the text encoder from epoch x (inclusive) onward, set this param to x. Set -1 to disable it",
- )
- parser.add_argument(
- "--train-ipc",
- type=str,
- default=None,
-        help="Path to npy file of the number of instances per class in training data",
- )
- parser.add_argument(
- "--val-ipc",
- type=str,
- default=None,
-        help="Path to npy file of the number of instances per class in validation data",
- )
- parser.add_argument(
- "--train-num-samples",
- type=int,
- default=None,
- help="Number of samples in dataset. Required for webdataset if not available in info file.",
- )
- parser.add_argument(
- "--val-num-samples",
- type=int,
- default=None,
- help="Number of samples in dataset. Useful for webdataset if not available in info file.",
- )
- parser.add_argument(
- "--dataset-type",
- choices=["webdataset", "csv", "auto", "toy"],
- default="auto",
- help="Which type of dataset to process.",
- )
- parser.add_argument(
- "--csv-separator",
- type=str,
- default="\t",
- help="For csv-like datasets, which separator to use.",
- )
- parser.add_argument(
- "--csv-img-key",
- type=str,
- default="filepath",
- help="For csv-like datasets, the name of the key for the image paths.",
- )
- parser.add_argument(
- "--csv-caption-key",
- type=str,
- default="title",
- help="For csv-like datasets, the name of the key for the captions.",
- )
- parser.add_argument(
- "--imagenet-val",
- type=str,
- default=None,
- help="Path to imagenet val set for conducting zero shot evaluation.",
- )
- parser.add_argument(
- "--imagenet-v2",
- type=str,
- default=None,
- help="Path to imagenet v2 for conducting zero shot evaluation.",
- )
- parser.add_argument(
- "--datasetnames",
- nargs="+",
- default=None,
-        help="If loading webdataset, specify the dataset names to load. Can be some of these: Clotho, audioset, audiocaps, BBCSoundEffects",
- )
- parser.add_argument(
- "--full-train-dataset",
- nargs="+",
- default=None,
-        help="Datasets that are trained on with all of their subsets (train+test).",
- )
- parser.add_argument(
- "--exclude-eval-dataset",
- nargs="+",
- default=None,
-        help="Datasets to exclude from evaluation.",
- )
- parser.add_argument(
- "--datasetinfos",
- nargs="+",
- default=None,
-        help="If loading webdataset, specify the dataset types to load. Can be some of these: train, test, valid, unbalanced_train, balanced_train, eval",
- )
- parser.add_argument(
- "--dataset-proportion",
- type=float,
- default=1.0,
-        help="Proportion of the dataset to train on.",
- )
- parser.add_argument(
- "--remotedata",
- default=False,
- action="store_true",
- help="if the dataset is remote, set this flag",
- )
- parser.add_argument(
- "--class-label-path",
- type=str,
- default=None,
- help="The path of the class label pickle or csv.",
- )
- parser.add_argument(
- "--datasetpath",
- type=str,
- default="/mnt/audio_clip/webdataset_tar",
- help="The path to the dataset",
- )
- parser.add_argument(
- "--logs",
- type=str,
- default="./logs/",
- help="Where to store tensorboard logs. Use None to avoid storing logs.",
- )
- parser.add_argument(
- "--log-local",
- action="store_true",
- default=False,
- help="log files on local master, otherwise global master only.",
- )
- parser.add_argument(
- "--name",
- type=str,
- default=None,
- help="Optional identifier for the experiment when storing logs. Otherwise use current time.",
- )
- parser.add_argument(
- "--workers", type=int, default=1, help="Number of workers per GPU."
- )
- parser.add_argument(
- "--batch-size", type=int, default=64, help="Batch size per GPU."
- )
- parser.add_argument(
- "--epochs", type=int, default=32, help="Number of epochs to train for."
- )
- parser.add_argument("--lr", type=float, default=None, help="Learning rate.")
- parser.add_argument("--beta1", type=float, default=None, help="Adam beta 1.")
- parser.add_argument("--beta2", type=float, default=None, help="Adam beta 2.")
- parser.add_argument("--eps", type=float, default=None, help="Adam epsilon.")
-    parser.add_argument("--momentum", type=float, default=None, help="SGD momentum.")
- parser.add_argument("--wd", type=float, default=0.2, help="Weight decay.")
-
- parser.add_argument(
- "--split-opt",
- action="store_true",
- default=False,
-        help="Use this flag to split the optimizer into separate parameter groups for the pretrained and new parts of the model.",
- )
- parser.add_argument(
- "--lr-pretrained", type=float, default=None, help="Learning rate for text."
- )
- parser.add_argument(
- "--beta1-pretrained", type=float, default=None, help="Adam beta 1 for text."
- )
- parser.add_argument(
- "--beta2-pretrained", type=float, default=None, help="Adam beta 2 for text."
- )
- parser.add_argument(
- "--eps-pretrained", type=float, default=None, help="Adam epsilon for text."
- )
- parser.add_argument(
- "--wd-pretrained", type=float, default=0.2, help="Weight decay for text."
- )
- parser.add_argument(
- "--momentum-pretrained", type=float, default=0.9, help="Momentum for text."
- )
- parser.add_argument(
- "--lr-new", type=float, default=None, help="Learning rate for audio."
- )
- parser.add_argument(
- "--beta1-new", type=float, default=None, help="Adam beta 1 for audio."
- )
- parser.add_argument(
- "--beta2-new", type=float, default=None, help="Adam beta 2 for audio."
- )
- parser.add_argument(
- "--eps-new", type=float, default=None, help="Adam epsilon for audio."
- )
- parser.add_argument(
- "--wd-new", type=float, default=0.2, help="Weight decay for audio."
- )
- parser.add_argument(
- "--momentum-new", type=float, default=0.9, help="Momentum for audio."
- )
- parser.add_argument(
- "--warmup", type=int, default=10000, help="Number of steps to warmup for."
- )
- parser.add_argument(
- "--use-bn-sync",
- default=False,
- action="store_true",
- help="Whether to use batch norm sync.",
- )
- parser.add_argument(
- "--skip-scheduler",
- action="store_true",
- default=False,
- help="Use this flag to skip the learning rate decay.",
- )
- parser.add_argument(
- "--save-frequency", type=int, default=1, help="How often to save checkpoints."
- )
- parser.add_argument(
- "--save-top-performance",
- type=int,
- default=0,
- help="Save the top x performance weights if the value >0",
- )
- parser.add_argument(
- "--save-most-recent",
- action="store_true",
- default=False,
- help="Always save the most recent model trained to epoch_latest.pt.",
- )
- parser.add_argument(
- "--zeroshot-frequency", type=int, default=2, help="How often to run zero shot."
- )
- parser.add_argument(
- "--val-frequency",
- type=int,
- default=1,
- help="How often to run evaluation with val data.",
- )
- parser.add_argument(
- "--resume",
- default=None,
- type=str,
- help="path to latest checkpoint (default: none)",
- )
- parser.add_argument(
- "--precision",
- choices=["amp", "fp16", "fp32"],
- default="amp",
- help="Floating point precision.",
- )
- parser.add_argument(
- "--amodel",
- type=str,
- default="RN50",
- help="Name of the audio backbone to use.",
- )
- parser.add_argument(
- "--tmodel",
- type=str,
- default="transformer",
- help="Name of the text backbone to use. Can be [transformer, bert, roberta, bart]",
- )
- parser.add_argument(
- "--pretrained-audio",
- default="",
- type=str,
- help="Use a pretrained audio model weights for the audio encoder of CLAP",
- )
- parser.add_argument(
- "--pretrained-text",
- default="",
- type=str,
- help="Use a pretrained text model weights for the text encoder of CLAP",
- )
- parser.add_argument(
- "--pretrained",
- default="",
- type=str,
- help="Use a pretrained CLIP model weights with the specified tag or file path.",
- )
- parser.add_argument(
- "--pretrained-image",
- default=False,
- action="store_true",
- help="Load imagenet pretrained weights for image tower backbone if available.",
- )
- parser.add_argument(
- "--lock-image",
- default=False,
- action="store_true",
- help="Lock full image tower by disabling gradients.",
- )
- parser.add_argument(
- "--lock-image-unlocked-groups",
- type=int,
- default=0,
- help="Leave last n image tower layer groups unlocked.",
- )
- parser.add_argument(
- "--lock-image-freeze-bn-stats",
- default=False,
- action="store_true",
- help="Freeze BatchNorm running stats in image tower for any locked layers.",
- )
- parser.add_argument(
- "--local-loss",
- default=False,
- action="store_true",
- help="calculate loss w/ local features @ global (instead of realizing full global @ global matrix)",
- )
- parser.add_argument(
- "--gather-with-grad",
- default=False,
- action="store_true",
- help="enable full distributed gradient for feature gather",
- )
- parser.add_argument(
- "--force-quick-gelu",
- default=False,
- action="store_true",
- help="Force use of QuickGELU activation for non-OpenAI transformer models.",
- )
- parser.add_argument(
- "--torchscript",
- default=False,
- action="store_true",
- help="torch.jit.script the model, also uses jit version of OpenAI models if pretrained=='openai'",
- )
- parser.add_argument(
- "--trace",
- default=False,
- action="store_true",
- help="torch.jit.trace the model for inference / eval only",
- )
- # arguments for distributed training
- parser.add_argument(
- "--dist-url",
- default="env://",
- type=str,
- help="url used to set up distributed training",
- )
- parser.add_argument(
- "--dist-backend", default="nccl", type=str, help="distributed backend"
- )
- parser.add_argument(
- "--report-to",
- default="",
- type=str,
- help="Options are ['wandb', 'tensorboard', 'wandb,tensorboard']",
- )
- parser.add_argument(
- "--wandb-notes", default="", type=str, help="Notes if logging with wandb"
- )
- parser.add_argument(
- "--C", type=float, default=3.16, help="inverse regularizer for logistic reg."
- )
- parser.add_argument(
- "--debug",
- default=False,
- action="store_true",
- help="If true, more information is logged.",
- )
- parser.add_argument(
- "--copy-codebase",
- default=False,
- action="store_true",
-        help="If true, we copy the entire codebase to the log directory, and execute from there.",
- )
- parser.add_argument(
- "--horovod",
- default=False,
- action="store_true",
- help="Use horovod for distributed training.",
- )
- parser.add_argument(
- "--ddp-static-graph",
- default=False,
- action="store_true",
- help="Enable static graph optimization for DDP in PyTorch >= 1.11.",
- )
- parser.add_argument(
- "--no-set-device-rank",
- default=False,
- action="store_true",
- help="Don't set device index from local rank (when CUDA_VISIBLE_DEVICES restricted to one per proc).",
- )
- parser.add_argument("--seed", type=int, default=4242, help="Default random seed.")
-
- parser.add_argument(
- "--top-k-checkpoint-select-dataset",
- type=str,
- default="all",
- help="The dataset of selecting top-k checkpoint.",
- )
-
- # @R10, @R@5, @R1, mAP@10
- parser.add_argument(
- "--top-k-checkpoint-select-metric",
- type=str,
- default="_R@10",
- help="The metric for selecting top-k checkpoint.",
- )
- parser.add_argument(
- "--openai-model-cache-dir",
- type=str,
- default="~/.cache/clip",
- help="Directory to download OpenAI models.",
- )
- parser.add_argument(
- "--optimizer",
- type=str,
- default="adamw",
- help="can be AdamW or SGD",
- )
- parser.add_argument(
- "--parallel-eval",
- default=False,
- action="store_true",
- help="Eval in parallel (multi-GPU, multi-node).",
- )
-
- parser.add_argument(
- "--no-eval",
- default=False,
- action="store_true",
- help="Training without evaluation.",
- )
-
- parser.add_argument(
- "--lp-mlp",
- default=False,
- action="store_true",
- help="Linear Probe using MLP layer or not.",
- )
-
- parser.add_argument(
- "--lp-freeze",
- default=False,
- action="store_true",
- help="Linear Probe using Freeze CLAP or not",
- )
-
- parser.add_argument(
- "--lp-act",
- default="None",
- type=str,
- help="Options are ['relu','elu','prelu','softmax','sigmoid']",
- )
-
- parser.add_argument(
- "--lp-loss", type=str, default="bce", help="Loss func of Linear Probe."
- )
-
- parser.add_argument(
- "--lp-metrics",
- type=str,
- default="map,mauc,acc",
- help="Metrics of Linear Probe.",
- )
-
- parser.add_argument(
- "--lp-lr", type=float, default=1e-4, help="learning rate of linear probe"
- )
- parser.add_argument(
- "--kappa",
- type=float,
- default=0,
- help="the kappa in the weighted contrastive loss, default is to turn off the weighted contrastive loss",
- )
-
- parser.add_argument(
- "--data-filling",
- type=str,
- default="pad",
- help="type of data filling when the audio length is shorter than the max length."
- "Can be one of the following: repeat, repeatpad, pad",
- )
- parser.add_argument(
- "--data-truncating",
- type=str,
- default="rand_trunc",
- help="type of data truncation when the audio length is longer than the max length."
- "Can be one of the following: rand_trunc, fusion",
- )
-
- parser.add_argument(
- "--clap-mlploss",
- default=False,
- action="store_true",
- help="Using MLP loss for CLAP model or not",
- )
-
- parser.add_argument(
- "--wandb-id",
- type=str,
- default=None,
- help="the id of wandb experiment to restore.",
- )
-
- parser.add_argument(
- "--sleep", type=float, default=0, help="sleep n seconds before start training"
- )
-
- # variable length processing
- parser.add_argument(
- "--enable-fusion",
- default=False,
- action="store_true",
-        help="Enable feature fusion for variable-length data",
- )
-
- parser.add_argument(
- "--fusion-type",
- type=str,
- default="None",
- help="Type is among ['channel_map', 'daf_1d','aff_1d','iaff_1d','daf_2d','aff_2d','iaff_2d']",
- )
-
- parser.add_argument(
- "--mixup",
- default=False,
- action="store_true",
- help="Enable mixup in finetuning training.",
- )
- parser.add_argument(
- "--text-augment-selection",
- type=str,
- default=None,
- help="For selecting levels of augmented text. Type is among ['all', 'augment_only', 'none']",
- )
-
- args = parser.parse_args()
-
- # If some params are not passed, we use the default values based on model name.
- default_params = get_default_params(args.amodel)
- for name, val in default_params.items():
- if getattr(args, name) is None:
- setattr(args, name, val)
-
- return args
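
The final loop in `parse_args` backfills any optimizer hyperparameters left at `None` using `get_default_params`, which keys off the (lower-cased) audio backbone name. A small illustration of that lookup, with values taken directly from the table above; the model names are examples only:

# Illustration of get_default_params above.
print(get_default_params("ViT-B-32"))    # {'lr': 0.0005, 'beta1': 0.9, 'beta2': 0.98, 'eps': 1e-06}
print(get_default_params("HTSAT-tiny"))  # {'lr': 0.0005, 'beta1': 0.9, 'beta2': 0.999, 'eps': 1e-08}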
diff --git a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/losses.py b/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/losses.py
deleted file mode 100644
index b1b263e4c205e78ffe970f622ab6ff68f36d3b17..0000000000000000000000000000000000000000
--- a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/losses.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import torch
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1 - dr) ** 2)
- g_loss = torch.mean(dg**2)
- loss += r_loss + g_loss
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1 - dg) ** 2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
-
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2.0 * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
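
The shape comment in `kl_loss` refers to the usual VITS latents: `[batch, channels, frames]` tensors for the posterior/prior statistics plus a broadcastable `[batch, 1, frames]` mask. A small shape-check sketch, assuming the losses above are in scope; the dimensions are illustrative:

import torch

# Shape-check sketch for kl_loss (illustrative sizes: batch 2, 192 channels, 100 frames).
z_p, logs_q, m_p, logs_p = (torch.randn(2, 192, 100) for _ in range(4))
z_mask = torch.ones(2, 1, 100)                     # broadcasts across the channel dimension
print(kl_loss(z_p, logs_q, m_p, logs_p, z_mask))   # a single scalar KL term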
diff --git a/spaces/chongjie/MCC_slim/engine_mcc.py b/spaces/chongjie/MCC_slim/engine_mcc.py
deleted file mode 100644
index f9cd7fcb9035996daa1d0b7fe893f4e3bf496088..0000000000000000000000000000000000000000
--- a/spaces/chongjie/MCC_slim/engine_mcc.py
+++ /dev/null
@@ -1,587 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# --------------------------------------------------------
-# References:
-# DeiT: https://github.com/facebookresearch/deit
-# BEiT: https://github.com/microsoft/unilm/tree/master/beit
-# MAE: https://github.com/facebookresearch/mae
-# --------------------------------------------------------
-import math
-from typing import Iterable
-import os
-import matplotlib.pyplot as plt
-import random
-import torch
-import numpy as np
-import time
-import base64
-from io import BytesIO
-
-import util.misc as misc
-import util.lr_sched as lr_sched
-
-from pytorch3d.structures import Pointclouds
-from pytorch3d.vis.plotly_vis import plot_scene
-from pytorch3d.transforms import RotateAxisAngle
-from pytorch3d.io import IO
-
-
-def evaluate_points(predicted_xyz, gt_xyz, dist_thres):
- if predicted_xyz.shape[0] == 0:
- return 0.0, 0.0, 0.0
- slice_size = 1000
- precision = 0.0
- for i in range(int(np.ceil(predicted_xyz.shape[0] / slice_size))):
- start = slice_size * i
- end = slice_size * (i + 1)
- dist = ((predicted_xyz[start:end, None] - gt_xyz[None]) ** 2.0).sum(axis=-1) ** 0.5
- precision += ((dist < dist_thres).sum(axis=1) > 0).sum()
- precision /= predicted_xyz.shape[0]
-
- recall = 0.0
- for i in range(int(np.ceil(predicted_xyz.shape[0] / slice_size))):
- start = slice_size * i
- end = slice_size * (i + 1)
- dist = ((predicted_xyz[:, None] - gt_xyz[None, start:end]) ** 2.0).sum(axis=-1) ** 0.5
- recall += ((dist < dist_thres).sum(axis=0) > 0).sum()
- recall /= gt_xyz.shape[0]
- return precision, recall, get_f1(precision, recall)
-
-def aug_xyz(seen_xyz, unseen_xyz, args, is_train):
- degree_x = 0
- degree_y = 0
- degree_z = 0
- if is_train:
- r_delta = args.random_scale_delta
- scale = torch.tensor([
- random.uniform(1.0 - r_delta, 1.0 + r_delta),
- random.uniform(1.0 - r_delta, 1.0 + r_delta),
- random.uniform(1.0 - r_delta, 1.0 + r_delta),
- ], device=seen_xyz.device)
-
- if args.use_hypersim:
- shift = 0
- else:
- degree_x = random.randrange(-args.random_rotate_degree, args.random_rotate_degree + 1)
- degree_y = random.randrange(-args.random_rotate_degree, args.random_rotate_degree + 1)
- degree_z = random.randrange(-args.random_rotate_degree, args.random_rotate_degree + 1)
-
- r_shift = args.random_shift
- shift = torch.tensor([[[
- random.uniform(-r_shift, r_shift),
- random.uniform(-r_shift, r_shift),
- random.uniform(-r_shift, r_shift),
- ]]], device=seen_xyz.device)
- seen_xyz = seen_xyz * scale + shift
- unseen_xyz = unseen_xyz * scale + shift
-
- B, H, W, _ = seen_xyz.shape
- return [
- rotate(seen_xyz.reshape((B, -1, 3)), degree_x, degree_y, degree_z).reshape((B, H, W, 3)),
- rotate(unseen_xyz, degree_x, degree_y, degree_z),
- ]
-
-
-def rotate(sample, degree_x, degree_y, degree_z):
- for degree, axis in [(degree_x, "X"), (degree_y, "Y"), (degree_z, "Z")]:
- if degree != 0:
- sample = RotateAxisAngle(degree, axis=axis).to(sample.device).transform_points(sample)
- return sample
-
-
-def get_grid(B, device, co3d_world_size, granularity):
- N = int(np.ceil(2 * co3d_world_size / granularity))
- grid_unseen_xyz = torch.zeros((N, N, N, 3), device=device)
- for i in range(N):
- grid_unseen_xyz[i, :, :, 0] = i
- for j in range(N):
- grid_unseen_xyz[:, j, :, 1] = j
- for k in range(N):
- grid_unseen_xyz[:, :, k, 2] = k
- grid_unseen_xyz -= (N / 2.0)
- grid_unseen_xyz /= (N / 2.0) / co3d_world_size
- grid_unseen_xyz = grid_unseen_xyz.reshape((1, -1, 3)).repeat(B, 1, 1)
- return grid_unseen_xyz
-
-
-def run_viz(model, data_loader, device, args, epoch):
- epoch_start_time = time.time()
- model.eval()
- os.system(f'mkdir {args.job_dir}/viz')
-
- print('Visualization data_loader length:', len(data_loader))
- dataset = data_loader.dataset
- for sample_idx, samples in enumerate(data_loader):
- if sample_idx >= args.max_n_viz_obj:
- break
- seen_xyz, valid_seen_xyz, unseen_xyz, unseen_rgb, labels, seen_images = prepare_data(samples, device, is_train=False, args=args, is_viz=True)
-
- pred_occupy = []
- pred_colors = []
- (model.module if hasattr(model, "module") else model).clear_cache()
-
- # don't forward all at once to avoid oom
- max_n_queries_fwd = 2000
-
- total_n_passes = int(np.ceil(unseen_xyz.shape[1] / max_n_queries_fwd))
- for p_idx in range(total_n_passes):
- p_start = p_idx * max_n_queries_fwd
- p_end = (p_idx + 1) * max_n_queries_fwd
- cur_unseen_xyz = unseen_xyz[:, p_start:p_end]
- cur_unseen_rgb = unseen_rgb[:, p_start:p_end].zero_()
- cur_labels = labels[:, p_start:p_end].zero_()
-
- with torch.no_grad():
- _, pred, = model(
- seen_images=seen_images,
- seen_xyz=seen_xyz,
- unseen_xyz=cur_unseen_xyz,
- unseen_rgb=cur_unseen_rgb,
- unseen_occupy=cur_labels,
- cache_enc=args.run_viz,
- valid_seen_xyz=valid_seen_xyz,
- )
-
- cur_occupy_out = pred[..., 0]
-
- if args.regress_color:
- cur_color_out = pred[..., 1:].reshape((-1, 3))
- else:
- cur_color_out = pred[..., 1:].reshape((-1, 3, 256)).max(dim=2)[1] / 255.0
- pred_occupy.append(cur_occupy_out)
- pred_colors.append(cur_color_out)
-
- rank = misc.get_rank()
- prefix = f'{args.job_dir}/viz/' + dataset.dataset_split + f'_ep{epoch}_rank{rank}_i{sample_idx}'
-
- img = (seen_images[0].permute(1, 2, 0) * 255).cpu().numpy().copy().astype(np.uint8)
-
- gt_xyz = samples[1][0].to(device).reshape(-1, 3)
- gt_rgb = samples[1][1].to(device).reshape(-1, 3)
- mesh_xyz = samples[2].to(device).reshape(-1, 3) if args.use_hypersim else None
-
- with open(prefix + '.html', 'a') as f:
- generate_html(
- img,
- seen_xyz, seen_images,
- torch.cat(pred_occupy, dim=1),
- torch.cat(pred_colors, dim=0),
- unseen_xyz,
- f,
- gt_xyz=gt_xyz,
- gt_rgb=gt_rgb,
- mesh_xyz=mesh_xyz,
- )
- print("Visualization epoch time:", time.time() - epoch_start_time)
-
-
-def get_f1(precision, recall):
- if (precision + recall) == 0:
- return 0.0
- return 2.0 * precision * recall / (precision + recall)
-
-
-def generate_plot(img, seen_xyz, seen_rgb, pred_occ, pred_rgb, unseen_xyz,
- gt_xyz=None, gt_rgb=None, mesh_xyz=None, score_thresholds=[0.1, 0.3, 0.5, 0.7, 0.9],
- pointcloud_marker_size=2,
- ):
- # if img is not None:
- # fig = plt.figure()
- # plt.imshow(img)
- # tmpfile = BytesIO()
- # fig.savefig(tmpfile, format='jpg')
- # encoded = base64.b64encode(tmpfile.getvalue()).decode('utf-8')
-
- # html = ''.format(encoded)
- # f.write(html)
- # plt.close()
-
- clouds = {"MCC Output": {}}
- # Seen
- if seen_xyz is not None:
- seen_xyz = seen_xyz.reshape((-1, 3)).cpu()
- seen_rgb = torch.nn.functional.interpolate(seen_rgb, (112, 112)).permute(0, 2, 3, 1).reshape((-1, 3)).cpu()
- good_seen = seen_xyz[:, 0] != -100
-
- seen_pc = Pointclouds(
- points=seen_xyz[good_seen][None],
- features=seen_rgb[good_seen][None],
- )
- clouds["MCC Output"]["seen"] = seen_pc
-
- # GT points
- if gt_xyz is not None:
- subset_gt = random.sample(range(gt_xyz.shape[0]), 10000)
- gt_pc = Pointclouds(
- points=gt_xyz[subset_gt][None],
- features=gt_rgb[subset_gt][None],
- )
- clouds["MCC Output"]["GT points"] = gt_pc
-
- # GT meshes
- if mesh_xyz is not None:
- subset_mesh = random.sample(range(mesh_xyz.shape[0]), 10000)
- mesh_pc = Pointclouds(
- points=mesh_xyz[subset_mesh][None],
- )
- clouds["MCC Output"]["GT mesh"] = mesh_pc
-
- pred_occ = torch.nn.Sigmoid()(pred_occ).cpu()
- for t in score_thresholds:
- pos = pred_occ > t
-
- points = unseen_xyz[pos].reshape((-1, 3))
- features = pred_rgb[None][pos].reshape((-1, 3))
- good_points = points[:, 0] != -100
-
- if good_points.sum() == 0:
- continue
-
- pc = Pointclouds(
- points=points[good_points][None].cpu(),
- features=features[good_points][None].cpu(),
- )
-
- clouds["MCC Output"][f"pred_{t}"] = pc
- IO().save_pointcloud(pc, "output_pointcloud.ply")
-
- plt.figure()
- try:
- fig = plot_scene(clouds, pointcloud_marker_size=pointcloud_marker_size, pointcloud_max_points=20000 * 2)
- fig.update_layout(height=1000, width=1000)
- return fig
- except Exception as e:
- print('writing failed', e)
- try:
- plt.close()
- except:
- pass
-
-
-def generate_html(img, seen_xyz, seen_rgb, pred_occ, pred_rgb, unseen_xyz, f,
- gt_xyz=None, gt_rgb=None, mesh_xyz=None, score_thresholds=[0.1, 0.3, 0.5, 0.7, 0.9],
- pointcloud_marker_size=2,
- ):
- if img is not None:
- fig = plt.figure()
- plt.imshow(img)
- tmpfile = BytesIO()
- fig.savefig(tmpfile, format='jpg')
- encoded = base64.b64encode(tmpfile.getvalue()).decode('utf-8')
-
-        html = '<img src="data:image/jpeg;base64,{}">'.format(encoded)  # embed the rendered view as an inline image
- f.write(html)
- plt.close()
-
- clouds = {"MCC Output": {}}
- # Seen
- if seen_xyz is not None:
- seen_xyz = seen_xyz.reshape((-1, 3)).cpu()
- seen_rgb = torch.nn.functional.interpolate(seen_rgb, (112, 112)).permute(0, 2, 3, 1).reshape((-1, 3)).cpu()
- good_seen = seen_xyz[:, 0] != -100
-
- seen_pc = Pointclouds(
- points=seen_xyz[good_seen][None],
- features=seen_rgb[good_seen][None],
- )
- clouds["MCC Output"]["seen"] = seen_pc
-
- # GT points
- if gt_xyz is not None:
- subset_gt = random.sample(range(gt_xyz.shape[0]), 10000)
- gt_pc = Pointclouds(
- points=gt_xyz[subset_gt][None],
- features=gt_rgb[subset_gt][None],
- )
- clouds["MCC Output"]["GT points"] = gt_pc
-
- # GT meshes
- if mesh_xyz is not None:
- subset_mesh = random.sample(range(mesh_xyz.shape[0]), 10000)
- mesh_pc = Pointclouds(
- points=mesh_xyz[subset_mesh][None],
- )
- clouds["MCC Output"]["GT mesh"] = mesh_pc
-
- pred_occ = torch.nn.Sigmoid()(pred_occ).cpu()
- for t in score_thresholds:
- pos = pred_occ > t
-
- points = unseen_xyz[pos].reshape((-1, 3))
- features = pred_rgb[None][pos].reshape((-1, 3))
- good_points = points[:, 0] != -100
-
- if good_points.sum() == 0:
- continue
-
- pc = Pointclouds(
- points=points[good_points][None].cpu(),
- features=features[good_points][None].cpu(),
- )
-
- clouds["MCC Output"][f"pred_{t}"] = pc
-
- plt.figure()
- try:
- fig = plot_scene(clouds, pointcloud_marker_size=pointcloud_marker_size, pointcloud_max_points=20000 * 2)
- fig.update_layout(height=1000, width=1000)
- html_string = fig.to_html(full_html=False, include_plotlyjs="cnd")
- f.write(html_string)
- return fig, plt
- except Exception as e:
- print('writing failed', e)
- try:
- plt.close()
- except:
- pass
-
-
-def train_one_epoch(model: torch.nn.Module,
- data_loader: Iterable, optimizer: torch.optim.Optimizer,
- device: torch.device, epoch: int, loss_scaler,
- args=None):
- epoch_start_time = time.time()
- model.train(True)
- metric_logger = misc.MetricLogger(delimiter=" ")
- metric_logger.add_meter('lr', misc.SmoothedValue(window_size=1, fmt='{value:.6f}'))
-
- accum_iter = args.accum_iter
-
- optimizer.zero_grad()
-
- print('Training data_loader length:', len(data_loader))
- for data_iter_step, samples in enumerate(data_loader):
- # we use a per iteration (instead of per epoch) lr scheduler
- if data_iter_step % accum_iter == 0:
- lr_sched.adjust_learning_rate(optimizer, data_iter_step / len(data_loader) + epoch, args)
- seen_xyz, valid_seen_xyz, unseen_xyz, unseen_rgb, labels, seen_images = prepare_data(samples, device, is_train=True, args=args)
-
- with torch.cuda.amp.autocast():
- loss, _ = model(
- seen_images=seen_images,
- seen_xyz=seen_xyz,
- unseen_xyz=unseen_xyz,
- unseen_rgb=unseen_rgb,
- unseen_occupy=labels,
- valid_seen_xyz=valid_seen_xyz,
- )
-
- loss_value = loss.item()
- if not math.isfinite(loss_value):
- print("Warning: Loss is {}".format(loss_value))
- loss *= 0.0
- loss_value = 100.0
-
- loss /= accum_iter
- loss_scaler(loss, optimizer, parameters=model.parameters(),
- clip_grad=args.clip_grad,
- update_grad=(data_iter_step + 1) % accum_iter == 0,
- verbose=(data_iter_step % 100) == 0)
-
- if (data_iter_step + 1) % accum_iter == 0:
- optimizer.zero_grad()
-
- torch.cuda.synchronize()
-
- metric_logger.update(loss=loss_value)
-
- lr = optimizer.param_groups[0]["lr"]
- metric_logger.update(lr=lr)
-
- if data_iter_step == 30:
- os.system('nvidia-smi')
- os.system('free -g')
- if args.debug and data_iter_step == 5:
- break
-
- # gather the stats from all processes
- metric_logger.synchronize_between_processes()
- print("Averaged stats:", metric_logger)
- print("Training epoch time:", time.time() - epoch_start_time)
- return {k: meter.global_avg for k, meter in metric_logger.meters.items()}
-
-
-def eval_one_epoch(
- model: torch.nn.Module,
- data_loader: Iterable,
- device: torch.device,
- args=None
- ):
- epoch_start_time = time.time()
- model.train(False)
-
- metric_logger = misc.MetricLogger(delimiter=" ")
-
- print('Eval len(data_loader):', len(data_loader))
-
- for data_iter_step, samples in enumerate(data_loader):
- seen_xyz, valid_seen_xyz, unseen_xyz, unseen_rgb, labels, seen_images = prepare_data(samples, device, is_train=False, args=args)
-
- # don't forward all at once to avoid oom
- max_n_queries_fwd = 5000
- all_loss, all_preds = [], []
- for p_idx in range(int(np.ceil(unseen_xyz.shape[1] / max_n_queries_fwd))):
- p_start = p_idx * max_n_queries_fwd
- p_end = (p_idx + 1) * max_n_queries_fwd
- cur_unseen_xyz = unseen_xyz[:, p_start:p_end]
- cur_unseen_rgb = unseen_rgb[:, p_start:p_end]
- cur_labels = labels[:, p_start:p_end]
-
- with torch.no_grad():
- loss, pred = model(
- seen_images=seen_images,
- seen_xyz=seen_xyz,
- unseen_xyz=cur_unseen_xyz,
- unseen_rgb=cur_unseen_rgb,
- unseen_occupy=cur_labels,
- valid_seen_xyz=valid_seen_xyz,
- )
- all_loss.append(loss)
- all_preds.append(pred)
-
- loss = sum(all_loss) / len(all_loss)
- pred = torch.cat(all_preds, dim=1)
-
- B = pred.shape[0]
-
- gt_xyz = samples[1][0].to(device).reshape((B, -1, 3))
- if args.use_hypersim:
- mesh_xyz = samples[2].to(device).reshape((B, -1, 3))
-
- s_thres = args.eval_score_threshold
- d_thres = args.eval_dist_threshold
-
- for b_idx in range(B):
- geometry_metrics = {}
- predicted_idx = torch.nn.Sigmoid()(pred[b_idx, :, 0]) > s_thres
- predicted_xyz = unseen_xyz[b_idx, predicted_idx]
-
- precision, recall, f1 = evaluate_points(predicted_xyz, gt_xyz[b_idx], d_thres)
- geometry_metrics[f'd{d_thres}_s{s_thres}_point_pr'] = precision
- geometry_metrics[f'd{d_thres}_s{s_thres}_point_rc'] = recall
- geometry_metrics[f'd{d_thres}_s{s_thres}_point_f1'] = f1
-
- if args.use_hypersim:
- precision, recall, f1 = evaluate_points(predicted_xyz, mesh_xyz[b_idx], d_thres)
- geometry_metrics[f'd{d_thres}_s{s_thres}_mesh_pr'] = precision
- geometry_metrics[f'd{d_thres}_s{s_thres}_mesh_rc'] = recall
- geometry_metrics[f'd{d_thres}_s{s_thres}_mesh_f1'] = f1
-
- metric_logger.update(**geometry_metrics)
-
- loss_value = loss.item()
-
- torch.cuda.synchronize()
- metric_logger.update(loss=loss_value)
-
- if args.debug and data_iter_step == 5:
- break
-
- metric_logger.synchronize_between_processes()
- print("Validation averaged stats:", metric_logger)
- print("Val epoch time:", time.time() - epoch_start_time)
- return {k: meter.global_avg for k, meter in metric_logger.meters.items()}
-
-
-def sample_uniform_semisphere(B, N, semisphere_size, device):
- for _ in range(100):
- points = torch.empty(B * N * 3, 3, device=device).uniform_(-semisphere_size, semisphere_size)
- points[..., 2] = points[..., 2].abs()
- dist = (points ** 2.0).sum(axis=-1) ** 0.5
- if (dist < semisphere_size).sum() >= B * N:
- return points[dist < semisphere_size][:B * N].reshape((B, N, 3))
- else:
- print('resampling sphere')
-
-
-def get_grid_semisphere(B, granularity, semisphere_size, device):
- n_grid_pts = int(semisphere_size / granularity) * 2 + 1
- grid_unseen_xyz = torch.zeros((n_grid_pts, n_grid_pts, n_grid_pts // 2 + 1, 3), device=device)
- for i in range(n_grid_pts):
- grid_unseen_xyz[i, :, :, 0] = i
- grid_unseen_xyz[:, i, :, 1] = i
- for i in range(n_grid_pts // 2 + 1):
- grid_unseen_xyz[:, :, i, 2] = i
- grid_unseen_xyz[..., :2] -= (n_grid_pts // 2.0)
- grid_unseen_xyz *= granularity
- dist = (grid_unseen_xyz ** 2.0).sum(axis=-1) ** 0.5
- grid_unseen_xyz = grid_unseen_xyz[dist <= semisphere_size]
- return grid_unseen_xyz[None].repeat(B, 1, 1)
-
-
-def get_min_dist(a, b, slice_size=1000):
- all_min, all_idx = [], []
- for i in range(int(np.ceil(a.shape[1] / slice_size))):
- start = slice_size * i
- end = slice_size * (i + 1)
- # B, n_queries, n_gt
- dist = ((a[:, start:end] - b) ** 2.0).sum(axis=-1) ** 0.5
- # B, n_queries
- cur_min, cur_idx = dist.min(axis=2)
- all_min.append(cur_min)
- all_idx.append(cur_idx)
- return torch.cat(all_min, dim=1), torch.cat(all_idx, dim=1)
-
-
-def construct_uniform_semisphere(gt_xyz, gt_rgb, semisphere_size, n_queries, dist_threshold, is_train, granularity):
- B = gt_xyz.shape[0]
- device = gt_xyz.device
- if is_train:
- unseen_xyz = sample_uniform_semisphere(B, n_queries, semisphere_size, device)
- else:
- unseen_xyz = get_grid_semisphere(B, granularity, semisphere_size, device)
- dist, idx_to_gt = get_min_dist(unseen_xyz[:, :, None], gt_xyz[:, None])
- labels = dist < dist_threshold
- unseen_rgb = torch.zeros_like(unseen_xyz)
- unseen_rgb[labels] = torch.gather(gt_rgb, 1, idx_to_gt.unsqueeze(-1).repeat(1, 1, 3))[labels]
- return unseen_xyz, unseen_rgb, labels.float()
-
-
-def construct_uniform_grid(gt_xyz, gt_rgb, co3d_world_size, n_queries, dist_threshold, is_train, granularity):
- B = gt_xyz.shape[0]
- device = gt_xyz.device
- if is_train:
- unseen_xyz = torch.empty((B, n_queries, 3), device=device).uniform_(-co3d_world_size, co3d_world_size)
- else:
- unseen_xyz = get_grid(B, device, co3d_world_size, granularity)
- dist, idx_to_gt = get_min_dist(unseen_xyz[:, :, None], gt_xyz[:, None])
- labels = dist < dist_threshold
- unseen_rgb = torch.zeros_like(unseen_xyz)
- unseen_rgb[labels] = torch.gather(gt_rgb, 1, idx_to_gt.unsqueeze(-1).repeat(1, 1, 3))[labels]
- return unseen_xyz, unseen_rgb, labels.float()
-
-
-def prepare_data(samples, device, is_train, args, is_viz=False):
- # Seen
- seen_xyz, seen_rgb = samples[0][0].to(device), samples[0][1].to(device)
- valid_seen_xyz = torch.isfinite(seen_xyz.sum(axis=-1))
- seen_xyz[~valid_seen_xyz] = -100
- B = seen_xyz.shape[0]
- # Gt
- gt_xyz, gt_rgb = samples[1][0].to(device).reshape(B, -1, 3), samples[1][1].to(device).reshape(B, -1, 3)
-
- sampling_func = construct_uniform_semisphere if args.use_hypersim else construct_uniform_grid
- unseen_xyz, unseen_rgb, labels = sampling_func(
- gt_xyz, gt_rgb,
- args.semisphere_size if args.use_hypersim else args.co3d_world_size,
- args.n_queries,
- args.train_dist_threshold,
- is_train,
- args.viz_granularity if is_viz else args.eval_granularity,
- )
-
- if is_train:
- seen_xyz, unseen_xyz = aug_xyz(seen_xyz, unseen_xyz, args, is_train=is_train)
-
- # Random Flip
- if random.random() < 0.5:
- seen_xyz[..., 0] *= -1
- unseen_xyz[..., 0] *= -1
- seen_xyz = torch.flip(seen_xyz, [2])
- valid_seen_xyz = torch.flip(valid_seen_xyz, [2])
- seen_rgb = torch.flip(seen_rgb, [3])
-
- return seen_xyz, valid_seen_xyz, unseen_xyz, unseen_rgb, labels, seen_rgb
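
`evaluate_points` scores a predicted point cloud against ground truth with a distance threshold: precision is the fraction of predicted points that have a ground-truth neighbour within the threshold, recall is the fraction of ground-truth points with a predicted neighbour, and `get_f1` combines the two. A tiny worked example, assuming the functions above are importable; the points and the 0.1 threshold are made up for illustration:

import torch

# Worked example for evaluate_points with a 0.1 distance threshold.
pred = torch.tensor([[0.00, 0.0, 0.0], [1.0, 0.0, 0.0]])
gt   = torch.tensor([[0.05, 0.0, 0.0], [5.0, 5.0, 5.0]])
precision, recall, f1 = evaluate_points(pred, gt, dist_thres=0.1)
# Only the first predicted point has a GT neighbour within 0.1 -> precision = 0.5;
# only the first GT point has a predicted neighbour -> recall = 0.5, hence f1 = 0.5.
print(precision, recall, f1)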
diff --git a/spaces/chrisjay/afro-speech/app.css b/spaces/chrisjay/afro-speech/app.css
deleted file mode 100644
index 6fcc2b6d1ee451e2be66dcc423a99d7e4845ed62..0000000000000000000000000000000000000000
--- a/spaces/chrisjay/afro-speech/app.css
+++ /dev/null
@@ -1,38 +0,0 @@
-
-.infoPoint h1 {
- font-size: 30px;
-    font-weight: bold;
-
- }
-
-a {
- text-decoration: underline;
- color: #1f3b54 ;
-}
-
-.finished {
- color:rgb(9, 102, 169);
- font-size:13px
-}
-
-table {
-
- margin: 25px 0;
- font-size: 0.9em;
- font-family: sans-serif;
- min-width: 400px;
- max-width: 400px;
- box-shadow: 0 0 20px rgba(0, 0, 0, 0.15);
-}
-
-table th,
-table td {
- padding: 12px 15px;
-}
-
-tr {
-text-align: left;
-}
-thead tr {
-text-align: left;
-}
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/dirfs.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/dirfs.py
deleted file mode 100644
index 3e6def1f54b5b31985bd2ae12deb85c839ec890f..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/dirfs.py
+++ /dev/null
@@ -1,356 +0,0 @@
-from .. import filesystem
-from ..asyn import AsyncFileSystem
-
-
-class DirFileSystem(AsyncFileSystem):
- """Directory prefix filesystem
-
- The DirFileSystem is a filesystem-wrapper. It assumes every path it is dealing with
-    is relative to the `path`. After performing the necessary path operations it
- delegates everything to the wrapped filesystem.
- """
-
- def __init__(
- self,
- path=None,
- fs=None,
- fo=None,
- target_protocol=None,
- target_options=None,
- **storage_options,
- ):
- """
- Parameters
- ----------
- path: str
- Path to the directory.
- fs: AbstractFileSystem
- An instantiated filesystem to wrap.
- target_protocol, target_options:
- if fs is none, construct it from these
- fo: str
- Alternate for path; do not provide both
- """
- super().__init__(**storage_options)
- if fs is None:
- fs = filesystem(protocol=target_protocol, **(target_options or {}))
- if (path is not None) ^ (fo is not None) is False:
- raise ValueError("Provide path or fo, not both")
- path = path or fo
-
- if self.asynchronous and not fs.async_impl:
- raise ValueError("can't use asynchronous with non-async fs")
-
- if fs.async_impl and self.asynchronous != fs.asynchronous:
- raise ValueError("both dirfs and fs should be in the same sync/async mode")
-
- self.path = fs._strip_protocol(path)
- self.fs = fs
-
- def _join(self, path):
- if isinstance(path, str):
- if not self.path:
- return path
- if not path:
- return self.path
- return self.fs.sep.join((self.path, path))
- return [self._join(_path) for _path in path]
-
- def _relpath(self, path):
- if isinstance(path, str):
- if not self.path:
- return path
- if path == self.path:
- return ""
- prefix = self.path + self.fs.sep
- assert path.startswith(prefix)
- return path[len(prefix) :]
- return [self._relpath(_path) for _path in path]
-
- # Wrappers below
-
- @property
- def sep(self):
- return self.fs.sep
-
- async def set_session(self, *args, **kwargs):
- return await self.fs.set_session(*args, **kwargs)
-
- async def _rm_file(self, path, **kwargs):
- return await self.fs._rm_file(self._join(path), **kwargs)
-
- def rm_file(self, path, **kwargs):
- return self.fs.rm_file(self._join(path), **kwargs)
-
- async def _rm(self, path, *args, **kwargs):
- return await self.fs._rm(self._join(path), *args, **kwargs)
-
- def rm(self, path, *args, **kwargs):
- return self.fs.rm(self._join(path), *args, **kwargs)
-
- async def _cp_file(self, path1, path2, **kwargs):
- return await self.fs._cp_file(self._join(path1), self._join(path2), **kwargs)
-
- def cp_file(self, path1, path2, **kwargs):
- return self.fs.cp_file(self._join(path1), self._join(path2), **kwargs)
-
- async def _copy(
- self,
- path1,
- path2,
- *args,
- **kwargs,
- ):
- return await self.fs._copy(
- self._join(path1),
- self._join(path2),
- *args,
- **kwargs,
- )
-
- def copy(self, path1, path2, *args, **kwargs):
- return self.fs.copy(
- self._join(path1),
- self._join(path2),
- *args,
- **kwargs,
- )
-
- async def _pipe(self, path, *args, **kwargs):
- return await self.fs._pipe(self._join(path), *args, **kwargs)
-
- def pipe(self, path, *args, **kwargs):
- return self.fs.pipe(self._join(path), *args, **kwargs)
-
- async def _cat_file(self, path, *args, **kwargs):
- return await self.fs._cat_file(self._join(path), *args, **kwargs)
-
- def cat_file(self, path, *args, **kwargs):
- return self.fs.cat_file(self._join(path), *args, **kwargs)
-
- async def _cat(self, path, *args, **kwargs):
- ret = await self.fs._cat(
- self._join(path),
- *args,
- **kwargs,
- )
-
- if isinstance(ret, dict):
- return {self._relpath(key): value for key, value in ret.items()}
-
- return ret
-
- def cat(self, path, *args, **kwargs):
- ret = self.fs.cat(
- self._join(path),
- *args,
- **kwargs,
- )
-
- if isinstance(ret, dict):
- return {self._relpath(key): value for key, value in ret.items()}
-
- return ret
-
- async def _put_file(self, lpath, rpath, **kwargs):
- return await self.fs._put_file(lpath, self._join(rpath), **kwargs)
-
- def put_file(self, lpath, rpath, **kwargs):
- return self.fs.put_file(lpath, self._join(rpath), **kwargs)
-
- async def _put(
- self,
- lpath,
- rpath,
- *args,
- **kwargs,
- ):
- return await self.fs._put(
- lpath,
- self._join(rpath),
- *args,
- **kwargs,
- )
-
- def put(self, lpath, rpath, *args, **kwargs):
- return self.fs.put(
- lpath,
- self._join(rpath),
- *args,
- **kwargs,
- )
-
- async def _get_file(self, rpath, lpath, **kwargs):
- return await self.fs._get_file(self._join(rpath), lpath, **kwargs)
-
- def get_file(self, rpath, lpath, **kwargs):
- return self.fs.get_file(self._join(rpath), lpath, **kwargs)
-
- async def _get(self, rpath, *args, **kwargs):
- return await self.fs._get(self._join(rpath), *args, **kwargs)
-
- def get(self, rpath, *args, **kwargs):
- return self.fs.get(self._join(rpath), *args, **kwargs)
-
- async def _isfile(self, path):
- return await self.fs._isfile(self._join(path))
-
- def isfile(self, path):
- return self.fs.isfile(self._join(path))
-
- async def _isdir(self, path):
- return await self.fs._isdir(self._join(path))
-
- def isdir(self, path):
- return self.fs.isdir(self._join(path))
-
- async def _size(self, path):
- return await self.fs._size(self._join(path))
-
- def size(self, path):
- return self.fs.size(self._join(path))
-
- async def _exists(self, path):
- return await self.fs._exists(self._join(path))
-
- def exists(self, path):
- return self.fs.exists(self._join(path))
-
- async def _info(self, path, **kwargs):
- return await self.fs._info(self._join(path), **kwargs)
-
- def info(self, path, **kwargs):
- return self.fs.info(self._join(path), **kwargs)
-
- async def _ls(self, path, detail=True, **kwargs):
- ret = (await self.fs._ls(self._join(path), detail=detail, **kwargs)).copy()
- if detail:
- out = []
- for entry in ret:
- entry = entry.copy()
- entry["name"] = self._relpath(entry["name"])
- out.append(entry)
- return out
-
- return self._relpath(ret)
-
- def ls(self, path, detail=True, **kwargs):
- ret = self.fs.ls(self._join(path), detail=detail, **kwargs).copy()
- if detail:
- out = []
- for entry in ret:
- entry = entry.copy()
- entry["name"] = self._relpath(entry["name"])
- out.append(entry)
- return out
-
- return self._relpath(ret)
-
- async def _walk(self, path, *args, **kwargs):
- async for root, dirs, files in self.fs._walk(self._join(path), *args, **kwargs):
- yield self._relpath(root), dirs, files
-
- def walk(self, path, *args, **kwargs):
- for root, dirs, files in self.fs.walk(self._join(path), *args, **kwargs):
- yield self._relpath(root), dirs, files
-
- async def _glob(self, path, **kwargs):
- detail = kwargs.get("detail", False)
- ret = await self.fs._glob(self._join(path), **kwargs)
- if detail:
- return {self._relpath(path): info for path, info in ret.items()}
- return self._relpath(ret)
-
- def glob(self, path, **kwargs):
- detail = kwargs.get("detail", False)
- ret = self.fs.glob(self._join(path), **kwargs)
- if detail:
- return {self._relpath(path): info for path, info in ret.items()}
- return self._relpath(ret)
-
- async def _du(self, path, *args, **kwargs):
- total = kwargs.get("total", True)
- ret = await self.fs._du(self._join(path), *args, **kwargs)
- if total:
- return ret
-
- return {self._relpath(path): size for path, size in ret.items()}
-
- def du(self, path, *args, **kwargs):
- total = kwargs.get("total", True)
- ret = self.fs.du(self._join(path), *args, **kwargs)
- if total:
- return ret
-
- return {self._relpath(path): size for path, size in ret.items()}
-
- async def _find(self, path, *args, **kwargs):
- detail = kwargs.get("detail", False)
- ret = await self.fs._find(self._join(path), *args, **kwargs)
- if detail:
- return {self._relpath(path): info for path, info in ret.items()}
- return self._relpath(ret)
-
- def find(self, path, *args, **kwargs):
- detail = kwargs.get("detail", False)
- ret = self.fs.find(self._join(path), *args, **kwargs)
- if detail:
- return {self._relpath(path): info for path, info in ret.items()}
- return self._relpath(ret)
-
- async def _expand_path(self, path, *args, **kwargs):
- return self._relpath(
- await self.fs._expand_path(self._join(path), *args, **kwargs)
- )
-
- def expand_path(self, path, *args, **kwargs):
- return self._relpath(self.fs.expand_path(self._join(path), *args, **kwargs))
-
- async def _mkdir(self, path, *args, **kwargs):
- return await self.fs._mkdir(self._join(path), *args, **kwargs)
-
- def mkdir(self, path, *args, **kwargs):
- return self.fs.mkdir(self._join(path), *args, **kwargs)
-
- async def _makedirs(self, path, *args, **kwargs):
- return await self.fs._makedirs(self._join(path), *args, **kwargs)
-
- def makedirs(self, path, *args, **kwargs):
- return self.fs.makedirs(self._join(path), *args, **kwargs)
-
- def rmdir(self, path):
- return self.fs.rmdir(self._join(path))
-
- def mv_file(self, path1, path2, **kwargs):
- return self.fs.mv_file(
- self._join(path1),
- self._join(path2),
- **kwargs,
- )
-
- def touch(self, path, **kwargs):
- return self.fs.touch(self._join(path), **kwargs)
-
- def created(self, path):
- return self.fs.created(self._join(path))
-
- def modified(self, path):
- return self.fs.modified(self._join(path))
-
- def sign(self, path, *args, **kwargs):
- return self.fs.sign(self._join(path), *args, **kwargs)
-
- def __repr__(self):
- return f"{self.__class__.__qualname__}(path='{self.path}', fs={self.fs})"
-
- def open(
- self,
- path,
- *args,
- **kwargs,
- ):
- return self.fs.open(
- self._join(path),
- *args,
- **kwargs,
- )
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Form-2d54a466.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Form-2d54a466.js
deleted file mode 100644
index a8b06272457531f232b0a0b951dcace8fc4beea8..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Form-2d54a466.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as m,e as c,s as r,a9 as d,N as h,K as g,U as u,L as _,p as v,ab as w,ac as b,ad as F,z as q,v as y,A as S}from"./index-f877dfd5.js";function z(n){let s,l;const f=n[4].default,t=d(f,n,n[3],null);return{c(){s=h("div"),t&&t.c(),g(s,"class","form svelte-sfqy0y"),u(s,"hidden",!n[0]),_(s,"flex-grow",n[1]),_(s,"min-width",`calc(min(${n[2]}px, 100%))`)},m(e,i){v(e,s,i),t&&t.m(s,null),l=!0},p(e,[i]){t&&t.p&&(!l||i&8)&&w(t,f,e,e[3],l?F(f,e[3],i,null):b(e[3]),null),(!l||i&1)&&u(s,"hidden",!e[0]),i&2&&_(s,"flex-grow",e[1]),i&4&&_(s,"min-width",`calc(min(${e[2]}px, 100%))`)},i(e){l||(q(t,e),l=!0)},o(e){y(t,e),l=!1},d(e){e&&S(s),t&&t.d(e)}}}function A(n,s,l){let{$$slots:f={},$$scope:t}=s,{visible:e=!0}=s,{scale:i=null}=s,{min_width:o=0}=s;return n.$$set=a=>{"visible"in a&&l(0,e=a.visible),"scale"in a&&l(1,i=a.scale),"min_width"in a&&l(2,o=a.min_width),"$$scope"in a&&l(3,t=a.$$scope)},[e,i,o,t,f]}class K extends m{constructor(s){super(),c(this,s,A,z,r,{visible:0,scale:1,min_width:2})}}export{K as F};
-//# sourceMappingURL=Form-2d54a466.js.map
diff --git a/spaces/cihyFjudo/fairness-paper-search/Imprimir Cartones Bingo Binvi Pdf Aprende a imprimir y recortar estos cartones de bingo personalizados.md b/spaces/cihyFjudo/fairness-paper-search/Imprimir Cartones Bingo Binvi Pdf Aprende a imprimir y recortar estos cartones de bingo personalizados.md
deleted file mode 100644
index c2088a2ed01ab803fd485a8e800e99d19e550b98..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Imprimir Cartones Bingo Binvi Pdf Aprende a imprimir y recortar estos cartones de bingo personalizados.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
Te ofrecemos un generador de cartones de bingo gratis para que los puedas imprimir y disfrutar de las reuniones con tus amigos jugando al bingo. Los juegos de bingo más populares son el bingo de 90 bolas y el bingo de 75 bolas. A continuación, podrás generar tus cartones de bingo en un archivo pdf imprimible para cualquiera de estos juegos de bingo. Para imprimir tus cartones sigue los siguientes pasos:
Compre sus cartones en el 675 014 916 - 633 122 462 Nuestros cartones de bingo Bingol son ideales para reuniones familiares, clubs de futbol, animación de fiestas, hoteles, asociaciones de vecinos discotecas, peñas, etc. El juego consta de un bolillero, bolas numeradas (que pueden ser 75 o 90 dependiendo del tipo de bingo), una pizarra y cartones divididos en cuadros numerados. Bingo en linea Bolillero Virtual para jugar al Bingo desde casa tan solo tienes que tener los cartones puedes cantar linea girar bombo de forma automatica. Cartones de Bingo; Bingo Chat; Cómo Ganar en el Bingo; Historia del Bingo; Bonos de Bingo; Juegos de Bingo . Bingo.es lanza un generador para imprimir cartones de bingo gratis en pdf para los juegos de bingo 90 bolas y 75 bolas.
-
No tienes que invertir nada de dinero para tener un momento divertido con tu familia o amistades. Puedes jugar al bingo online con tu móvil Android o iPhone, tablet y PC sin descargar. Puede reponer en cualquier momento los cartones y rotuladores simplemente llamando a los teléfonos 902 012 455 ó 676 253 660, o bien realizando el pedido a través de esta misma página, en la sección de Pedidos.También el juego incluye dos tarjetas postales para la reposición de cartones. A continuación, podrás generar tus cartones de bingo en pdf para cualquiera de estos juegos de bingo.
-
26-feb-2018 - Explora el tablero de Mariuska Garcia "Cartones de bingo" en Pinterest. Te ofrecemos un generador de cartones de bingo gratis para que los puedas imprimir y disfrutar de las reuniones con tus amigos jugando al bingo.
-
27-ago-2018 - Explora el tablero de Flaviosanchez "Bingo para imprimir" en Pinterest. España - Bingo.es ha lanzado una aplicación web gratuita muy demandada por los amantes del juego del bingo y bingo online , se trata de un generador de cartones de bingo en formato PDF para los dos juegos de bingo más populares. 15-may-2019 - Explora el tablero de Antonia Betancor "Cartones de bingo" en Pinterest. Aquí se pueden elegir los cartones y tachar los números que vayáis cantando con el dedo, o si preferís tener los cartones en la mano, los podéis imprimir aqui, ahora solo os falta un bolígrafo y ¡listo! La aplicación "Cartones de Bingo" permite usar nuestro teléfono móvil o tablet como si de un cartón de bingo virtual se tratase.
-
Hay una única serie de 4 cartones con 15 números cada uno en 5x3 casillas, todos ellos están en la pantalla principal con un botón ON/OFF. El bingo de 80 bolas, que brinda la página 7bingo.com, es un juego que ofrece una gran promoción que permite a todos los jugadores, sean principiantes o no, obtener grandes cantidades de dinero extra en cada una de las partidas e ir acumulándolas. El juego de bingo permite seleccionar manualmente el número de las tarjetas activas. Visualizarás continuamente en el panel las bolas que han sido extraídas, y escucharás mensajes sonoros e incluso una voz (personalizable) cantando los números. Bingo Caller es un programa que cantará los números del bingo desde tu ordenador. El tradicional juego de bingo, pero ahora en dvd para que usted se divierta en familia jugando todo el tiempo Formato en DVD con 30 partidas distintas y aleatorias. Cómo jugar Bingo Abradoodle : Mejores juegos de bingo gratis en PC,Ordenador portátil,Tableta.
-
Imprimir cartones para bingo en tres pasos El programa ofrece un generador de cartones de bingo gratis para imprimir y disfrutar de las reuniones con amigos. 90 Bolas y 12 cartones de Bingo, Juego de acción para Toda la Familia, para niños a Partir de 6 años. Cartones Bingo 80 Bolas Para Imprimir Gratis, Bingo 90 Bolas Lite incluyen 120 cartones para ser usados en forma individual o 20 series de 6 cartones.
-
-
La aplicación "Bingo en Casa" es un juego de bingo que sirve como bombo de bolas para las partidas de bingo en casa, en familia y con amigos. También podréis descargaros gratis los cartones de bingo para el juego de Bingo en Casa, con la aplicación asociada Cartones de Bingo. Pelotas O Bolas Para Bingo en Mercado Libre Venezuela Bingo 90 Pc Y Laptos-windows: pin. ya que cada una de las celdas que componen los cartones admite contenidos en formato texto (también eñes y tildes). Bueno, Primero, no se si estoy en el foro correcto, agradecería que se me dijese a donde dirigirme, si ese no fuese el caso. Publibingo es un sitio web gratuito que ofrece un generador de cartones de bingo en pdf para descargar, imprimir y usar en casa. Para empezar a jugar solo tienes que escoger tu apuesta encendiendo o apagando los cartones que tú desees.
-
Aquí podrás encontrar opiniones relacionadas con imprimir cartones bingo 80 bolas y descubrirás qué opina la gente de imprimir cartones bingo 80 bolas. Este método de uso Cartones de Bingo APK funciona para todos los dispositivos Android. La versión demo del juego de bingo Rock Live te ofrece 250 tokens para jugar gratis y conocer el juego. Con este software Generador Cartones Bingo podrás diseñar, crear y editar cartones para este juego de azar con finalidad tanto doméstica como profesional. Descargar Bingo Para Pc 75 Bolas; Online bingo descargar bingo para pc 75 bolas nfr roping contestants 2019! Al ir marcando en el panel el número sorteado el sistema informa los aciertos de cada cartón y cuando alguno llega a tener premio.
-
Bingo Card Maker es un generador de cartones de 3×3 o 5×5 celdas para bingos de palabras, números, definiciones, operaciones aritméticas, preguntas y respuestas, etc. Se pueden elegir cartones de bingo de 90 bolas y de 75 bolas, marcar, desmarcar, volver a jugar con los mismos cartones o volver a elegir. ADVERTENCIA, para juegos de bingo con 90 bolas, tendría que quitar del bombo las bolas del 81 a la 90. Descarga en PDF cartones de bingo de 90 bolas e imprímelos y podrás jugar al bingo donde quieras y cuando quieras. Cartones De Bingo 80 Para Imprimir Gratis, Generador Cartones Bingo, descargar gratis. Puedes descargar el Bingo de 90 bolas haciendo clic en este botón (actualizado 09/08/2020): Descargar Bingo Excel Gratis Nombre archivo: Bingo_Excel_Gratis.exe.
-
Cartones para juego de bingo BINVI , 20 números por cartón , 80 series de 4 cartones por paquete. Los cartones en el bingo de 80 bolas se presentan de forma diferente a los de la variante de 75 y 90, ya que, en este caso, están repartidos en 4 filas y 4 columnas, con un total de 16 números. Rock Live es un juego de bingo de 30 bolas donde las apuestas se realizan con créditos y cuyos valores oscilan entre 500 y 25000 créditos. Descubre la mejor forma de comprar online.elonce.comCartones de bingo para imprimir en PDF Gratis! Productos patrocinados relacionados con este artículo 25 opiniones de clientes Valorar este producto Leer reseñas que mencionen Ha surgido un problema al filtrar las opiniones justo en este momento. El tradicional juego de bingo, pero ahora en dvd para que usted se divierta en familia jugando todo el tiempo, incluye 70 partidas aleatorias y 6 Locutores. Es muy fácil descarga estos documentos en pdf y envíalos a tu impresora y ya podrás jugar al bingo en tu lugar favorito.
-
Puedes descargar el que prefieras de 6 juegos de Cartones de Bingo de 8, 16, 32, 64, 100, 200 y 600 cartones cada uno, en formato PDF e imprimir los que necesites para jugar bingo. Esta nueva aplicación permite a los usuarios imprimir cartones de bingo en varios colores: naranja, rosa, azul y verde para los juegos de bingo de 90 bolas y 75 bolas. Los hay para todos los gustos y dependiendo del territorio están más de moda unos u otros.
-
Los juegos de bingo más populares son el bingo de 90 bolas y el bingo de 75 bolas. Cartones diseñados para tachar en el juego de Bingo mas real BINVI , podría poner fichas si no desea gastarlos. Bingo 90 Bolas Lite incluyen 120 cartones para ser usados en forma individual o 20 series de 6 cartones. Címkék: para de gratis 80 bingo imprimir cartones bolas Artículos sobre Generador Cartones Bingo. Como su nombre indica, los cartones de bingo 80 es la variante del juego que se encuentra en la mitad de las demás versiones que tienen más o menos bolas dentro del bolillero. El juego del bingo es sencillo y divertido siempre que se juegue con responsabilidad asociacion entre los numeros para jugadores posibilidades de ganar Reglas de juego Numeros . Cartones para Bingo 80 saludos colegas.les dejo este enlace para que bajen en pdf cinco archivos con cartones para bingo hasta el nmero 80.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/clarin-pl/datasets-explorer/clarin_datasets/polemo_dataset.py b/spaces/clarin-pl/datasets-explorer/clarin_datasets/polemo_dataset.py
deleted file mode 100644
index 4295f9693fe19fcc8e05dc4874322e228a239e68..0000000000000000000000000000000000000000
--- a/spaces/clarin-pl/datasets-explorer/clarin_datasets/polemo_dataset.py
+++ /dev/null
@@ -1,234 +0,0 @@
-import numpy as np
-import matplotlib.pyplot as plt
-import seaborn as sns
-from datasets import load_dataset
-import pandas as pd
-import plotly.figure_factory as ff
-import plotly.graph_objects as go
-from sklearn.manifold import TSNE
-import streamlit as st
-
-from clarin_datasets.dataset_to_show import DatasetToShow
-from clarin_datasets.utils import (
- count_num_of_characters,
- count_num_of_words,
- embed_sentence,
- PLOT_COLOR_PALETTE
-)
-
-
-class PolemoDataset(DatasetToShow):
- def __init__(self):
- DatasetToShow.__init__(self)
- self.dataset_name = "clarin-pl/polemo2-official"
- self.subsets = ["train", "validation", "test"]
- self.description = f"""
- Dataset link: https://huggingface.co/datasets/{self.dataset_name}
-
- The PolEmo2.0 is a dataset of online consumer reviews from four domains: medicine,
- hotels, products, and university. It is human-annotated on a level of full reviews and individual
- sentences. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and
-        sentence was manually annotated with sentiment in the 2+1 scheme, which gives a total of
-        197,046 annotations. About 85% of the reviews are from the medicine and hotel domains. Each review is
- annotated with four labels: positive, negative, neutral, or ambiguous. """
-
- def load_data(self):
- raw_dataset = load_dataset(self.dataset_name)
- self.data_dict = {
- subset: raw_dataset[subset].to_pandas() for subset in self.subsets
- }
-
- def show_dataset(self):
- header = st.container()
- description = st.container()
- dataframe_head = st.container()
- word_searching = st.container()
- dataset_statistics = st.container()
- tsne_projection = st.container()
-
- with header:
- st.title(self.dataset_name)
-
- with description:
- st.header("Dataset description")
- st.write(self.description)
-
- with dataframe_head:
- filtering_options = self.data_dict["train"]["target"].unique().tolist()
- filtering_options.append("All classes")
-
- st.header("First 10 observations of a chosen class")
- class_to_show = st.selectbox(
- label="Select class to show", options=filtering_options
- )
- df_to_show = pd.concat(
- [
- self.data_dict["train"].copy(),
- self.data_dict["validation"].copy(),
- self.data_dict["test"].copy(),
- ]
- )
- if class_to_show == "All classes":
- df_to_show = df_to_show.head(10)
- else:
- df_to_show = df_to_show.loc[df_to_show["target"] == class_to_show].head(
- 10
- )
- st.dataframe(df_to_show)
- st.text_area(label="Latex code", value=df_to_show.style.to_latex())
-
- st.subheader("First 10 observations of a chosen domain and text type")
- domain = st.selectbox(
- label="Select domain",
- options=["all", "hotels", "medicine", "products", "reviews"],
- )
- text_type = st.selectbox(
- label="Select text type",
- options=["Full text", "Tokenized to sentences"],
- )
- text_type_mapping_dict = {
- "Full text": "text",
- "Tokenized to sentences": "sentence",
- }
-
- polemo_subset = load_dataset(
- self.dataset_name,
- f"{domain}_{text_type_mapping_dict[text_type]}",
- )
- df = pd.concat(
- [
- polemo_subset["train"].to_pandas(),
- polemo_subset["validation"].to_pandas(),
- polemo_subset["test"].to_pandas(),
- ]
- ).head(10)
- st.dataframe(df)
- st.text_area(label="Latex code", value=df.style.to_latex())
-
- with word_searching:
- st.header("Observations containing a chosen word")
- searched_word = st.text_input(
- label="Enter the word you are looking for below"
- )
- df_to_show = pd.concat(
- [
- self.data_dict["train"].copy(),
- self.data_dict["validation"].copy(),
- self.data_dict["test"].copy(),
- ]
- )
- df_to_show = df_to_show.loc[df_to_show["text"].str.contains(searched_word)]
- st.dataframe(df_to_show)
- st.text_area(label="Latex code", value=df_to_show.style.to_latex())
-
- with dataset_statistics:
- st.header("Dataset statistics")
- st.subheader("Number of samples in each data split")
- metrics_df = pd.DataFrame.from_dict(
- {
- "Train": self.data_dict["train"].shape[0],
- "Validation": self.data_dict["validation"].shape[0],
- "Test": self.data_dict["test"].shape[0],
- "Total": sum(
- [
- self.data_dict["train"].shape[0],
- self.data_dict["validation"].shape[0],
- self.data_dict["test"].shape[0],
- ]
- ),
- },
- orient="index",
- ).reset_index()
- metrics_df.columns = ["Subset", "Number of samples"]
- st.dataframe(metrics_df)
-
- latex_df = metrics_df.style.to_latex()
- st.text_area(label="Latex code", value=latex_df)
-
- # Class distribution in each subset
- st.subheader("Class distribution in each subset")
- target_unique_values = self.data_dict["train"]["target"].unique()
- hist = (
- pd.DataFrame(
- [
- df["target"].value_counts(normalize=True).rename(k)
- for k, df in self.data_dict.items()
- ]
- )
- .reset_index()
- .rename({"index": "split_name"}, axis=1)
- )
- plot_data = [
- go.Bar(
- name=str(target_unique_values[i]),
- x=self.subsets,
- y=hist[target_unique_values[i]].values,
- )
- for i in range(len(target_unique_values))
- ]
- barchart_class_dist = go.Figure(data=plot_data)
- barchart_class_dist.update_layout(
- barmode="group",
- title_text="Barchart - class distribution",
- xaxis_title="Split name",
- yaxis_title="Number of data points",
- )
- st.plotly_chart(barchart_class_dist, use_container_width=True)
- st.dataframe(hist)
- st.text_area(label="Latex code", value=hist.style.to_latex())
-
- # Number of words per observation
- st.subheader("Number of words per observation in each subset")
- hist_data_num_words = [
- df["text"].apply(count_num_of_words) for df in self.data_dict.values()
- ]
- fig_num_words = ff.create_distplot(
- hist_data_num_words, self.subsets, show_rug=False, bin_size=1
- )
- fig_num_words.update_traces(
- nbinsx=100, autobinx=True, selector={"type": "histogram"}
- )
- fig_num_words.update_layout(
-                title_text="Histogram - number of words per observation",
-                xaxis_title="Number of words",
- )
- st.plotly_chart(fig_num_words, use_container_width=True)
-
- # Number of characters per observation
- st.subheader("Number of characters per observation in each subset")
- hist_data_num_characters = [
- df["text"].apply(count_num_of_characters)
- for df in self.data_dict.values()
- ]
- fig_num_chars = ff.create_distplot(
- hist_data_num_characters, self.subsets, show_rug=False, bin_size=1
- )
- fig_num_chars.update_layout(
- title_text="Histogram - number of characters per observation",
- xaxis_title="Number of characters",
- )
- st.plotly_chart(fig_num_chars, use_container_width=True)
-
- with tsne_projection:
- st.header("t-SNE projection of the dataset")
- subset_to_project = st.selectbox(
- label="Select subset to project", options=self.subsets
- )
- sentences = self.data_dict[subset_to_project]["text"].values
- reducer = TSNE(
- n_components=2
- )
- embedded_sentences = np.array(
- [embed_sentence(text) for text in sentences]
- )
- transformed_embeddings = reducer.fit_transform(embedded_sentences)
- fig, ax = plt.subplots()
- ax.scatter(
- x=transformed_embeddings[:, 0],
- y=transformed_embeddings[:, 1],
- c=[
- PLOT_COLOR_PALETTE[x]
- for x in self.data_dict[subset_to_project]["target"].values
- ],
- )
- st.pyplot(fig)
diff --git a/spaces/cleanmaster/so-vits-svc-akagi/models.py b/spaces/cleanmaster/so-vits-svc-akagi/models.py
deleted file mode 100644
index 5d8f154887a43a5c5f67cf6340f74268398e32d5..0000000000000000000000000000000000000000
--- a/spaces/cleanmaster/so-vits-svc-akagi/models.py
+++ /dev/null
@@ -1,351 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import attentions
-import commons
-import modules
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-from vdecoder.hifigan.models import Generator
-from utils import f0_to_coarse
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class Encoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- # print(x.shape,x_lengths.shape)
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- filter_channels=None,
- n_heads=None,
- p_dropout=None):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
- self.f0_emb = nn.Embedding(256, hidden_channels)
-
- self.enc_ = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
-
- def forward(self, x, x_lengths, f0=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = x + self.f0_emb(f0).transpose(1,2)
- x = self.enc_(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
-
- return z, m, logs, x_mask
-
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class SpeakerEncoder(torch.nn.Module):
- def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256):
- super(SpeakerEncoder, self).__init__()
- self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True)
- self.linear = nn.Linear(model_hidden_size, model_embedding_size)
- self.relu = nn.ReLU()
-
- def forward(self, mels):
- self.lstm.flatten_parameters()
- _, (hidden, _) = self.lstm(mels)
- embeds_raw = self.relu(self.linear(hidden[-1]))
- return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True)
-
- def compute_partial_slices(self, total_frames, partial_frames, partial_hop):
- mel_slices = []
- for i in range(0, total_frames-partial_frames, partial_hop):
- mel_range = torch.arange(i, i+partial_frames)
- mel_slices.append(mel_range)
-
- return mel_slices
-
- def embed_utterance(self, mel, partial_frames=128, partial_hop=64):
- mel_len = mel.size(1)
- last_mel = mel[:,-partial_frames:]
-
- if mel_len > partial_frames:
- mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop)
- mels = list(mel[:,s] for s in mel_slices)
- mels.append(last_mel)
- mels = torch.stack(tuple(mels), 0).squeeze(1)
-
- with torch.no_grad():
- partial_embeds = self(mels)
- embed = torch.mean(partial_embeds, axis=0).unsqueeze(0)
- #embed = embed / torch.linalg.norm(embed, 2)
- else:
- with torch.no_grad():
- embed = self(last_mel)
-
- return embed
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- ssl_dim,
- n_speakers,
- **kwargs):
-
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- self.ssl_dim = ssl_dim
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- self.enc_p_ = TextEncoder(ssl_dim, inter_channels, hidden_channels, 5, 1, 16,0, filter_channels, n_heads, p_dropout)
- hps = {
- "sampling_rate": 48000,
- "inter_channels": 192,
- "resblock": "1",
- "resblock_kernel_sizes": [3, 7, 11],
- "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- "upsample_rates": [10, 8, 2, 2],
- "upsample_initial_channel": 512,
- "upsample_kernel_sizes": [16, 16, 4, 4],
- "gin_channels": 256,
- }
- self.dec = Generator(h=hps)
- self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- def forward(self, c, f0, spec, g=None, mel=None, c_lengths=None, spec_lengths=None):
-        if c_lengths is None:
- c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device)
-        if spec_lengths is None:
- spec_lengths = (torch.ones(spec.size(0)) * spec.size(-1)).to(spec.device)
-
- g = self.emb_g(g).transpose(1,2)
-
- z_ptemp, m_p, logs_p, _ = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0))
- z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g)
-
- z_p = self.flow(z, spec_mask, g=g)
- z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size)
-
- # o = self.dec(z_slice, g=g)
- o = self.dec(z_slice, g=g, f0=pitch_slice)
-
- return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, c, f0, g=None, mel=None, c_lengths=None):
-        if c_lengths is None:
- c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device)
- g = self.emb_g(g).transpose(1,2)
-
- z_p, m_p, logs_p, c_mask = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0))
- z = self.flow(z_p, c_mask, g=g, reverse=True)
-
- o = self.dec(z * c_mask, g=g, f0=f0)
-
- return o
diff --git a/spaces/cloixai/dalle-minii/README.md b/spaces/cloixai/dalle-minii/README.md
deleted file mode 100644
index e8d3d7e311b04d1e35e1ff572ec809613f71f09c..0000000000000000000000000000000000000000
--- a/spaces/cloixai/dalle-minii/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: DALL·E mini
-metaTitle: DALL·E mini by craiyon.com on Hugging Face
-emoji: 🥑
-colorFrom: yellow
-colorTo: green
-sdk: static
-pinned: true
-license: apache-2.0
-duplicated_from: dalle-mini/dalle-mini
----
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/feaLib/lexer.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/feaLib/lexer.py
deleted file mode 100644
index e0ae0aefeef33e1cbfd11cdffdfafb330bdca205..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/feaLib/lexer.py
+++ /dev/null
@@ -1,291 +0,0 @@
-from fontTools.feaLib.error import FeatureLibError, IncludedFeaNotFound
-from fontTools.feaLib.location import FeatureLibLocation
-import re
-import os
-
-try:
- import cython
-except ImportError:
- # if cython not installed, use mock module with no-op decorators and types
- from fontTools.misc import cython
-
-
-class Lexer(object):
- NUMBER = "NUMBER"
- HEXADECIMAL = "HEXADECIMAL"
- OCTAL = "OCTAL"
- NUMBERS = (NUMBER, HEXADECIMAL, OCTAL)
- FLOAT = "FLOAT"
- STRING = "STRING"
- NAME = "NAME"
- FILENAME = "FILENAME"
- GLYPHCLASS = "GLYPHCLASS"
- CID = "CID"
- SYMBOL = "SYMBOL"
- COMMENT = "COMMENT"
- NEWLINE = "NEWLINE"
- ANONYMOUS_BLOCK = "ANONYMOUS_BLOCK"
-
- CHAR_WHITESPACE_ = " \t"
- CHAR_NEWLINE_ = "\r\n"
- CHAR_SYMBOL_ = ",;:-+'{}[]<>()="
- CHAR_DIGIT_ = "0123456789"
- CHAR_HEXDIGIT_ = "0123456789ABCDEFabcdef"
- CHAR_LETTER_ = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
- CHAR_NAME_START_ = CHAR_LETTER_ + "_+*:.^~!\\"
- CHAR_NAME_CONTINUATION_ = CHAR_LETTER_ + CHAR_DIGIT_ + "_.+*:^~!/-"
-
- RE_GLYPHCLASS = re.compile(r"^[A-Za-z_0-9.\-]+$")
-
- MODE_NORMAL_ = "NORMAL"
- MODE_FILENAME_ = "FILENAME"
-
- def __init__(self, text, filename):
- self.filename_ = filename
- self.line_ = 1
- self.pos_ = 0
- self.line_start_ = 0
- self.text_ = text
- self.text_length_ = len(text)
- self.mode_ = Lexer.MODE_NORMAL_
-
- def __iter__(self):
- return self
-
- def next(self): # Python 2
- return self.__next__()
-
- def __next__(self): # Python 3
- while True:
- token_type, token, location = self.next_()
- if token_type != Lexer.NEWLINE:
- return (token_type, token, location)
-
- def location_(self):
- column = self.pos_ - self.line_start_ + 1
- return FeatureLibLocation(self.filename_ or "", self.line_, column)
-
- def next_(self):
- self.scan_over_(Lexer.CHAR_WHITESPACE_)
- location = self.location_()
- start = self.pos_
- text = self.text_
- limit = len(text)
- if start >= limit:
- raise StopIteration()
- cur_char = text[start]
- next_char = text[start + 1] if start + 1 < limit else None
-
- if cur_char == "\n":
- self.pos_ += 1
- self.line_ += 1
- self.line_start_ = self.pos_
- return (Lexer.NEWLINE, None, location)
- if cur_char == "\r":
- self.pos_ += 2 if next_char == "\n" else 1
- self.line_ += 1
- self.line_start_ = self.pos_
- return (Lexer.NEWLINE, None, location)
- if cur_char == "#":
- self.scan_until_(Lexer.CHAR_NEWLINE_)
- return (Lexer.COMMENT, text[start : self.pos_], location)
-
- if self.mode_ is Lexer.MODE_FILENAME_:
- if cur_char != "(":
- raise FeatureLibError("Expected '(' before file name", location)
- self.scan_until_(")")
- cur_char = text[self.pos_] if self.pos_ < limit else None
- if cur_char != ")":
- raise FeatureLibError("Expected ')' after file name", location)
- self.pos_ += 1
- self.mode_ = Lexer.MODE_NORMAL_
- return (Lexer.FILENAME, text[start + 1 : self.pos_ - 1], location)
-
- if cur_char == "\\" and next_char in Lexer.CHAR_DIGIT_:
- self.pos_ += 1
- self.scan_over_(Lexer.CHAR_DIGIT_)
- return (Lexer.CID, int(text[start + 1 : self.pos_], 10), location)
- if cur_char == "@":
- self.pos_ += 1
- self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_)
- glyphclass = text[start + 1 : self.pos_]
- if len(glyphclass) < 1:
- raise FeatureLibError("Expected glyph class name", location)
- if len(glyphclass) > 63:
- raise FeatureLibError(
- "Glyph class names must not be longer than 63 characters", location
- )
- if not Lexer.RE_GLYPHCLASS.match(glyphclass):
- raise FeatureLibError(
- "Glyph class names must consist of letters, digits, "
- "underscore, period or hyphen",
- location,
- )
- return (Lexer.GLYPHCLASS, glyphclass, location)
- if cur_char in Lexer.CHAR_NAME_START_:
- self.pos_ += 1
- self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_)
- token = text[start : self.pos_]
- if token == "include":
- self.mode_ = Lexer.MODE_FILENAME_
- return (Lexer.NAME, token, location)
- if cur_char == "0" and next_char in "xX":
- self.pos_ += 2
- self.scan_over_(Lexer.CHAR_HEXDIGIT_)
- return (Lexer.HEXADECIMAL, int(text[start : self.pos_], 16), location)
- if cur_char == "0" and next_char in Lexer.CHAR_DIGIT_:
- self.scan_over_(Lexer.CHAR_DIGIT_)
- return (Lexer.OCTAL, int(text[start : self.pos_], 8), location)
- if cur_char in Lexer.CHAR_DIGIT_:
- self.scan_over_(Lexer.CHAR_DIGIT_)
- if self.pos_ >= limit or text[self.pos_] != ".":
- return (Lexer.NUMBER, int(text[start : self.pos_], 10), location)
- self.scan_over_(".")
- self.scan_over_(Lexer.CHAR_DIGIT_)
- return (Lexer.FLOAT, float(text[start : self.pos_]), location)
- if cur_char == "-" and next_char in Lexer.CHAR_DIGIT_:
- self.pos_ += 1
- self.scan_over_(Lexer.CHAR_DIGIT_)
- if self.pos_ >= limit or text[self.pos_] != ".":
- return (Lexer.NUMBER, int(text[start : self.pos_], 10), location)
- self.scan_over_(".")
- self.scan_over_(Lexer.CHAR_DIGIT_)
- return (Lexer.FLOAT, float(text[start : self.pos_]), location)
- if cur_char in Lexer.CHAR_SYMBOL_:
- self.pos_ += 1
- return (Lexer.SYMBOL, cur_char, location)
- if cur_char == '"':
- self.pos_ += 1
- self.scan_until_('"')
- if self.pos_ < self.text_length_ and self.text_[self.pos_] == '"':
- self.pos_ += 1
- # strip newlines embedded within a string
- string = re.sub("[\r\n]", "", text[start + 1 : self.pos_ - 1])
- return (Lexer.STRING, string, location)
- else:
- raise FeatureLibError("Expected '\"' to terminate string", location)
- raise FeatureLibError("Unexpected character: %r" % cur_char, location)
-
- def scan_over_(self, valid):
- p = self.pos_
- while p < self.text_length_ and self.text_[p] in valid:
- p += 1
- self.pos_ = p
-
- def scan_until_(self, stop_at):
- p = self.pos_
- while p < self.text_length_ and self.text_[p] not in stop_at:
- p += 1
- self.pos_ = p
-
- def scan_anonymous_block(self, tag):
- location = self.location_()
- tag = tag.strip()
- self.scan_until_(Lexer.CHAR_NEWLINE_)
- self.scan_over_(Lexer.CHAR_NEWLINE_)
- regexp = r"}\s*" + tag + r"\s*;"
- split = re.split(regexp, self.text_[self.pos_ :], maxsplit=1)
- if len(split) != 2:
- raise FeatureLibError(
- "Expected '} %s;' to terminate anonymous block" % tag, location
- )
- self.pos_ += len(split[0])
- return (Lexer.ANONYMOUS_BLOCK, split[0], location)
-
-
-class IncludingLexer(object):
- """A Lexer that follows include statements.
-
- The OpenType feature file specification states that due to
- historical reasons, relative imports should be resolved in this
- order:
-
- 1. If the source font is UFO format, then relative to the UFO's
- font directory
- 2. relative to the top-level include file
- 3. relative to the parent include file
-
- We only support 1 (via includeDir) and 2.
- """
-
- def __init__(self, featurefile, *, includeDir=None):
- """Initializes an IncludingLexer.
-
- Behavior:
- If includeDir is passed, it will be used to determine the top-level
- include directory to use for all encountered include statements. If it is
- not passed, ``os.path.dirname(featurefile)`` will be considered the
- include directory.
- """
-
- self.lexers_ = [self.make_lexer_(featurefile)]
- self.featurefilepath = self.lexers_[0].filename_
- self.includeDir = includeDir
-
- def __iter__(self):
- return self
-
- def next(self): # Python 2
- return self.__next__()
-
- def __next__(self): # Python 3
- while self.lexers_:
- lexer = self.lexers_[-1]
- try:
- token_type, token, location = next(lexer)
- except StopIteration:
- self.lexers_.pop()
- continue
- if token_type is Lexer.NAME and token == "include":
- fname_type, fname_token, fname_location = lexer.next()
- if fname_type is not Lexer.FILENAME:
- raise FeatureLibError("Expected file name", fname_location)
- # semi_type, semi_token, semi_location = lexer.next()
- # if semi_type is not Lexer.SYMBOL or semi_token != ";":
- # raise FeatureLibError("Expected ';'", semi_location)
- if os.path.isabs(fname_token):
- path = fname_token
- else:
- if self.includeDir is not None:
- curpath = self.includeDir
- elif self.featurefilepath is not None:
- curpath = os.path.dirname(self.featurefilepath)
- else:
- # if the IncludingLexer was initialized from an in-memory
- # file-like stream, it doesn't have a 'name' pointing to
- # its filesystem path, therefore we fall back to using the
- # current working directory to resolve relative includes
- curpath = os.getcwd()
- path = os.path.join(curpath, fname_token)
- if len(self.lexers_) >= 5:
- raise FeatureLibError("Too many recursive includes", fname_location)
- try:
- self.lexers_.append(self.make_lexer_(path))
- except FileNotFoundError as err:
- raise IncludedFeaNotFound(fname_token, fname_location) from err
- else:
- return (token_type, token, location)
- raise StopIteration()
-
- @staticmethod
- def make_lexer_(file_or_path):
- if hasattr(file_or_path, "read"):
- fileobj, closing = file_or_path, False
- else:
- filename, closing = file_or_path, True
- fileobj = open(filename, "r", encoding="utf-8")
- data = fileobj.read()
- filename = getattr(fileobj, "name", None)
- if closing:
- fileobj.close()
- return Lexer(data, filename)
-
- def scan_anonymous_block(self, tag):
- return self.lexers_[-1].scan_anonymous_block(tag)
-
-
-class NonIncludingLexer(IncludingLexer):
- """Lexer that does not follow `include` statements, emits them as-is."""
-
- def __next__(self): # Python 3
- return next(self.lexers_[0])
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hpeldsp_init_loongarch.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hpeldsp_init_loongarch.c
deleted file mode 100644
index 1690be543806900904124a015f128a2053b9203a..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hpeldsp_init_loongarch.c
+++ /dev/null
@@ -1,50 +0,0 @@
-/*
- * Copyright (c) 2021 Loongson Technology Corporation Limited
- * Contributed by Shiyou Yin
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "libavutil/loongarch/cpu.h"
-#include "libavcodec/hpeldsp.h"
-#include "libavcodec/loongarch/hpeldsp_lasx.h"
-
-void ff_hpeldsp_init_loongarch(HpelDSPContext *c, int flags)
-{
- int cpu_flags = av_get_cpu_flags();
-
- if (have_lasx(cpu_flags)) {
- c->put_pixels_tab[0][0] = ff_put_pixels16_8_lsx;
- c->put_pixels_tab[0][1] = ff_put_pixels16_x2_8_lasx;
- c->put_pixels_tab[0][2] = ff_put_pixels16_y2_8_lasx;
- c->put_pixels_tab[0][3] = ff_put_pixels16_xy2_8_lasx;
-
- c->put_pixels_tab[1][0] = ff_put_pixels8_8_lasx;
- c->put_pixels_tab[1][1] = ff_put_pixels8_x2_8_lasx;
- c->put_pixels_tab[1][2] = ff_put_pixels8_y2_8_lasx;
- c->put_pixels_tab[1][3] = ff_put_pixels8_xy2_8_lasx;
- c->put_no_rnd_pixels_tab[0][0] = ff_put_pixels16_8_lsx;
- c->put_no_rnd_pixels_tab[0][1] = ff_put_no_rnd_pixels16_x2_8_lasx;
- c->put_no_rnd_pixels_tab[0][2] = ff_put_no_rnd_pixels16_y2_8_lasx;
- c->put_no_rnd_pixels_tab[0][3] = ff_put_no_rnd_pixels16_xy2_8_lasx;
-
- c->put_no_rnd_pixels_tab[1][0] = ff_put_pixels8_8_lasx;
- c->put_no_rnd_pixels_tab[1][1] = ff_put_no_rnd_pixels8_x2_8_lasx;
- c->put_no_rnd_pixels_tab[1][2] = ff_put_no_rnd_pixels8_y2_8_lasx;
- c->put_no_rnd_pixels_tab[1][3] = ff_put_no_rnd_pixels8_xy2_8_lasx;
- }
-}
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Discover the Secrets of Futbol Edit The Most Popular and Powerful Soccer Editing Software.md b/spaces/congsaPfin/Manga-OCR/logs/Discover the Secrets of Futbol Edit The Most Popular and Powerful Soccer Editing Software.md
deleted file mode 100644
index 105b1269ba57fb194570f89a040908ee9804d8f0..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Discover the Secrets of Futbol Edit The Most Popular and Powerful Soccer Editing Software.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
Futbol Edit: How to Create Amazing Videos of Your Favorite Sport
-
If you are a fan of soccer or football, you probably enjoy watching videos of your favorite games, teams, or players. But have you ever wondered how to create your own videos of the sport you love? In this article, we will introduce you to futbol edit, a term that refers to the process of editing videos of soccer or football games to highlight the best moments, skills, goals, and players. We will also show you some of the best tools and software for futbol edit, as well as some tips and tricks that can help you create amazing videos.
-
What is futbol edit and why is it popular?
-
Futbol edit is a term that refers to the process of editing videos of soccer or football games to highlight the best moments, skills, goals, and players.
-
Futbol edit is not just cutting and pasting clips together. It is an art form that requires creativity, skill, and passion. Futbol edit can be done for various purposes, such as entertainment, education, analysis, or promotion. Futbol edit can also be done in different styles, such as cinematic, artistic, motivational, or humorous.
Futbol edit is popular among fans, players, coaches, and analysts who want to enjoy, learn from, or share their passion for the sport.
-
Fans use futbol edit to express their admiration for their favorite teams or players, or to relive the most exciting moments in a game. Players use futbol edit to showcase their talents, improve their skills, or inspire others. Coaches use futbol edit to review their strategies, evaluate their performance, or prepare for upcoming matches. Analysts use futbol edit to study the game trends, statistics, or tactics.
-
What are the best tools and software for futbol edit?
-
There are many tools and software available for futbol edit, depending on your needs, budget, and preferences.
-
Some tools
Some tools and software are more suitable for beginners, while others are more advanced and require more experience. Some tools and software are free or cheap, while others are more expensive and require a subscription or a license. Some tools and software are web-based, while others are desktop-based or mobile-based. Here are some of the most popular ones:
-
Some of the most popular ones are:
-
Adobe Premiere Pro: A professional video editing software that offers a wide range of features and effects for creating high-quality videos.
-
Adobe Premiere Pro is one of the most widely used video editing software in the world. It is compatible with Windows and Mac operating systems, and supports various video formats and resolutions. It allows you to edit your videos in a timeline-based interface, where you can trim, crop, rotate, split, merge, and adjust your clips. You can also add transitions, filters, titles, captions, audio, and music to your videos. You can also use advanced features such as color grading, keyframing, masking, chroma keying, and motion tracking to enhance your videos. Adobe Premiere Pro is not free, but you can get a 7-day trial or a monthly subscription.
-
LongoMatch: A video analysis software that helps coaches, analysts, and players improve their performance by generating custom reports from videos.
-
LongoMatch is a video analysis software that is designed for sports. It is compatible with Windows, Mac, and Linux operating systems, and supports various video formats and resolutions. It allows you to import your videos and tag them with different categories and events, such as goals, passes, shots, fouls, etc. You can also create custom dashboards and templates to organize your data. You can then generate reports and statistics from your videos, such as heat maps, graphs, charts, etc. You can also export your videos with annotations or share them online. LongoMatch is not free, but you can get a 30-day trial or a yearly subscription.
-
PES 2021 Kit Creator: A web-based tool that allows you to create custom kits for PES 2021, a popular soccer video game.
-
PES 2021 Kit Creator is a web-based tool that allows you to create custom kits for PES 2021, a popular soccer video game. It is compatible with any browser and device, and does not require any installation or registration. It allows you to choose from various templates and colors for your kits, as well as add logos, sponsors, badges, numbers, names, etc. You can also preview your kits in 3D and download them as PNG files. You can then import your kits into PES 2021 using the edit mode or the option file. PES 2021 Kit Creator is free to use.
Here are some tips and tricks that can help you create amazing futbol edit videos:
-
Determine the highlights of the game: Use tools like LongoMatch to index and retrieve the most important moments in a game.
-
One of the first steps in futbol edit is to determine the highlights of the game that you want to include in your video. You can use tools like LongoMatch to index and retrieve the most important moments in a game, such as goals, assists, saves, tackles, dribbles, etc. You can also use filters and queries to narrow down your search results. This way, you can save time and avoid watching the entire game again.
-
Apply slow motion and zoom effects: Use tools like Adobe Premiere Pro to emphasize key actions or details in your videos.
-
Another tip for futbol edit is to apply slow motion and zoom effects to emphasize key actions or details in your videos. You can use tools like Adobe Premiere Pro to adjust the speed and scale of your clips. For example, you can slow down a clip of a goal or a skill to show how it was executed or how it affected the game. You can also zoom in on a clip of a player or a ball to show their facial expressions or movements. This way, you can create more impact and drama in your videos.
-
Add text and commentary: Use tools like Adobe Premiere Pro to add text that follows the players or commentary that explains the game.
-
A third tip for futbol edit is to add text and commentary to your videos. You can use tools like Adobe Premiere Pro to add text that follows the players or commentary that explains the game. For example, For example, you can add text that follows the players or commentary that explains the game. For example, you can add text that shows the name, position, or nationality of a player, or the score, time, or venue of a game. You can also add commentary that narrates the action, provides background information, or expresses opinions. You can use tools like Adobe Premiere Pro to adjust the font, size, color, and position of your text, or to record, import, or edit your audio. This way, you can make your videos more informative and engaging.
-
Use transitions and picture-in-picture: Use tools like Adobe Premiere Pro to create smooth transitions between clips and show multiple angles of the same action.
-
A fourth tip for futbol edit is to use transitions and picture-in-picture to create smooth transitions between clips and show multiple angles of the same action. You can use tools like Adobe Premiere Pro to apply different types of transitions, such as fades, wipes, slides, or cuts, to your clips. You can also use tools like Adobe Premiere Pro to create picture-in-picture effects, where you can overlay a smaller clip on top of a larger clip. For example, you can use transitions and picture-in-picture to show a goal from different angles or perspectives, such as from behind the goal, from the side of the field, or from the player's point of view. This way, you can make your videos more dynamic and interesting.
-
Conclusion and FAQs
-
In conclusion, futbol edit is a term that refers to the process of editing videos of soccer or football games to highlight the best moments, skills, goals, and players. It is popular among fans, players, coaches, and analysts who want to enjoy, learn from, or share their passion for the sport. There are many tools and software available for futbol edit, such as Adobe Premiere Pro, LongoMatch, and PES 2021 Kit Creator. There are also some tips and tricks that can help you create amazing futbol edit videos, such as determining the highlights of the game, applying slow motion and zoom effects, adding text and commentary, and using transitions and picture-in-picture. We hope this article has helped you understand what futbol edit is and how to do it.
-
Here are some FAQs that you might have:
-
Q: How long does it take to create a futbol edit video?
-
A: The time it takes to create a futbol edit video depends on various factors, such as the length and quality of the original video, the number and complexity of the edits you want to make, the tools and software you use, and your level of experience and skill. It could take anywhere from a few minutes to a few hours.
-
Q: Where can I find videos to edit for futbol edit?
-
A: You can find videos to edit for futbol edit from various sources,
A: You can find videos to edit for futbol edit from various sources, such as:
-
-
Online platforms that stream or host soccer or football videos, such as YouTube, DAZN, ESPN, or FIFA.
-
Online communities that share or request soccer or football videos, such as Reddit, Twitter, or Facebook.
-
Offline sources that record or store soccer or football videos, such as TV channels, DVDs, or USB drives.
-
-
However, you should always respect the intellectual property rights of the original video owners and creators, and follow the fair use guidelines when using their videos for futbol edit.
-
Q: How can I share my futbol edit videos with others?
-
A: You can share your futbol edit videos with others by:
-
-
Uploading them to online platforms that allow you to share or publish your videos, such as YouTube, Vimeo, or Instagram.
-
Sending them to online communities that appreciate or critique your videos, such as Reddit, Twitter, or Facebook.
-
Showing them to offline audiences that are interested in your videos, such as your friends, family, or teammates.
-
-
However, you should always respect the privacy and preferences of the people who appear in your videos, and follow the community guidelines when sharing your videos with others.
-
Q: How can I improve my skills and knowledge in futbol edit?
-
A: You can improve your skills and knowledge in futbol edit by:
-
-
Watching and analyzing other futbol edit videos that inspire or challenge you, and learning from their techniques and styles.
-
Practicing and experimenting with different tools and software that suit your needs and preferences, and discovering new features and effects.
-
Seeking and receiving feedback from other futbol edit enthusiasts or experts, and applying their suggestions and criticisms.
-
-
You can also enroll in online courses or tutorials that teach you the basics or advanced skills of futbol edit.
-
Q: What are some of the benefits of futbol edit?
-
A: Some of the benefits of futbol edit are:
-
-
You can express your creativity and passion for soccer or football through your videos.
-
You can enhance your enjoyment and appreciation of the sport by reliving its best moments.
-
You can improve your understanding and performance of the sport by studying its details and patterns.
-
You can connect and communicate with other soccer or football fans or players by sharing your videos.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Halloween S.O.S Mundo A msica que voc precisa ouvir e baixar agora mesmo.md b/spaces/congsaPfin/Manga-OCR/logs/Halloween S.O.S Mundo A msica que voc precisa ouvir e baixar agora mesmo.md
deleted file mode 100644
index 703d52f28cd91000837d29285a9cf4587de22617..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Halloween S.O.S Mundo A msica que voc precisa ouvir e baixar agora mesmo.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
Halloween S.O.S Mundo Download: How to Listen to the Remastered Version of the Rap Song
-
Introduction
-
If you are a fan of rap music, you might have heard of Allen Halloween, a Portuguese rapper who is known for his dark and realistic lyrics. One of his most famous songs is S.O.S Mundo, which was released in 2011 and has become a classic in the genre. But did you know that there is a remastered version of this song that came out in 2023? In this article, we will tell you everything you need to know about Halloween S.O.S Mundo download, including what it is, why it is remastered, and how to get it from allenhalloween.com.
-
What is Halloween S.O.S Mundo?
-
Halloween S.O.S Mundo is a rap song by Allen Halloween, who is considered one of the best rappers in Portugal. The song is part of his album Projecto Mary Witch, which was inspired by his own life experiences and struggles. The song talks about the problems and injustices that affect the world, such as poverty, violence, corruption, racism, and war. It also expresses a hope for a better future and a call for action to change the situation.
In 2023, Allen Halloween decided to remaster and remix his song S.O.S Mundo for a special edition in vinyl. He collaborated with Maradox Primeiro, a producer who has worked with him before. The remastered version has improved sound quality and clarity, as well as some changes in the beats and the vocals. The remastered version also has a new cover art, which shows a globe with a red cross on it.
-
How to download it from allenhalloween.com?
-
If you want to listen to the remastered version of Halloween S.O.S Mundo, you can download it from allenhalloween.com, which is the official website of Allen Halloween. The website offers different options for downloading, such as MP3, WAV, or FLAC. You can also buy the vinyl edition from the website, which comes with a digital download code. The website also has other products related to Allen Halloween, such as T-shirts, hoodies, posters, and stickers.
-
Main Body
-
The lyrics and meaning of Halloween S.O.S Mundo
-
The lyrics of Halloween S.O.S Mundo are very powerful and meaningful. They reflect the reality and the feelings of many people who live in difficult situations. They also challenge the listeners to think critically and act responsibly. Here are some of the main aspects of the lyrics:
-
The chorus and the message of hope
-
The chorus of the song is repeated four times throughout the song. It says:
-
-
SOS mundo
-SOS mundo
-SOS mundo
-SOS mundo
-SOS mundo
-SOS mundo
-SOS mundo
-SOS mundo
-
-
This chorus is like a cry for help from the world, which is suffering from many problems. It also implies that there is still hope for saving the world, if people work together and care for each other.
-
The verses and the critique of society
-
The verses of the song are divided into three parts. Each part has four lines that rhyme with each other. The verses are very descriptive and critical of the society that Allen Halloween lives in. He mentions topics such as hunger, crime, drugs, racism, corruption, war, and pollution. He also criticizes the politicians, the media, the police, the religious leaders, and the rich people who are responsible for or indifferent to these problems. He uses metaphors, similes, irony, and sarcasm to convey his message. For example, he says:
-
-
-
Os políticos são os vampiros que sugam o sangue do povo
-(The politicians are the vampires who suck the blood of the people)
-
-A polícia é a máfia que controla o jogo
-(The police is the mafia that controls the game)
-
-A religião é o ópio que adormece o povo
-(Religion is the opium that puts the people to sleep)
-
-A televisão é a droga que aliena o povo
-(Television is the drug that alienates the people)
-
-
The references and the influences
-
The lyrics of Halloween S.O.S Mundo also contain many references and influences from other rap songs, artists, and cultures. Allen Halloween shows his knowledge and appreciation of rap history and diversity. He mentions names such as Tupac Shakur, Bob Marley, Nas, KRS-One, Public Enemy, Wu-Tang Clan, and N.W.A. He also uses words and phrases from English, French, Spanish, and Arabic. He mixes different styles and genres of rap, such as gangsta rap, conscious rap, reggae rap, and hardcore rap. He creates a unique and original sound that reflects his identity and vision.
-
The reception and impact of Halloween S.O.S Mundo
-
The song Halloween S.O.S Mundo has received a lot of attention and praise from rap fans and critics. It has also had a significant impact on the rap scene and on society. Here are some of the main aspects of the song's reception and impact:
-
The popularity and the views on YouTube
-
The song Halloween S.O.S Mundo was uploaded on YouTube on October 31st, 2011. Since then, it has accumulated more than 15 million views and 200 thousand likes. It is one of the most viewed and liked rap songs from Portugal on YouTube. It has also been shared and commented by thousands of people who appreciate its message and quality.
-
The reviews and the ratings on AfroCharts
-
The song Halloween S.O.S Mundo was also featured on AfroCharts, a website that ranks the best rap songs from Africa and its diaspora. The song received a rating of 9.5 out of 10 from AfroCharts editors, who praised its lyrics, production, delivery, and relevance. The song also received positive reviews from AfroCharts users, who gave it an average rating of 8.9 out of 10. The song was ranked as one of the top 10 rap songs of 2011 by AfroCharts.
-
The feedback and the comments from fans
-
The song Halloween S.O.S Mundo has also received a lot of feedback and comments from fans who listened to it. Many fans expressed their admiration and gratitude for Allen Halloween for making such a powerful and meaningful song. They also related to his experiences and feelings, and shared their own stories and opinions. Some fans even said that the song changed their lives or inspired them to take action. Here are some examples of fan comments:
-
-
"This song is a masterpiece. It speaks the truth about the world we live in. It makes me cry every time I listen to it."
-
"Allen Halloween is a legend. He is not afraid to say what he thinks and feels. He is a voice for the voiceless."
-
"This song is more than music. It is a message. It is a wake-up call. It is a revolution."
-
-
Conclusion
-
In conclusion, Halloween S.O.S Mundo is a rap song by Allen Halloween that was released in 2011 and remastered in 2023. It addresses the problems and injustices that affect the world while expressing hope for a better future. Its powerful, meaningful lyrics draw on different rap artists and cultures, and the song has received widespread attention and praise from rap fans and critics, along with a significant impact on the rap scene and on society.
-
If you want to listen to this amazing song, you can download it from allenhalloween.com in different formats or buy it in vinyl edition. You can also check out other songs by Allen Halloween on his website or on YouTube.
-
We hope you enjoyed this article about Halloween S.O.S Mundo download. If you did, please share it with your friends and family. And if you have any questions or comments about the song or the article, please leave them below. We would love to hear from you.
-
FAQs
-
Here are some of the frequently asked questions about Halloween S.O.S Mundo download:
-
-
What does S.O.S stand for in the song title?
-
S.O.S stands for Save Our Souls, which is a common distress signal used in emergencies. It also represents the plea for help from the world that is suffering from many problems.
-
Who is Allen Halloween and what is his background?
-
Allen Halloween is a Portuguese rapper who was born in Luanda, Angola, in 1979. He moved to Portugal with his family when he was six years old. He grew up in a poor and violent neighborhood in Lisbon, where he witnessed and experienced many hardships. He started rapping when he was 15 years old, and released his first album in 2006.
-
What is the difference between the original and the remastered version of the song?
-
The original version of the song was released in 2011 and has a duration of 5 minutes and 18 seconds. The remastered version of the song was released in 2023 and has a duration of 4 minutes and 56 seconds. The remastered version has improved sound quality and clarity, as well as some changes in the beats and the vocals. The remastered version also has a new cover art, which shows a globe with a red cross on it.
-
Where can I find more information about the song and the rapper?
-
You can find more information about the song and the rapper on allenhalloween.com, which is the official website of Allen Halloween. You can also follow him on his social media accounts, such as Facebook, Instagram, Twitter, and Spotify. You can also watch his interviews and documentaries on YouTube.
-
What are some other rap songs that are similar to Halloween S.O.S Mundo?
-
Some other rap songs that are similar to Halloween S.O.S Mundo are:
-
-
Changes by Tupac Shakur
-
The Message by Grandmaster Flash and The Furious Five
-
Fight The Power by Public Enemy
-
I Can by Nas
-
Alright by Kendrick Lamar
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/NARUTO X BORUTO NINJA VOLTAGE APK MOD The Best RPG with Infinite Money and Gems in v10.5.0.md b/spaces/congsaPfin/Manga-OCR/logs/NARUTO X BORUTO NINJA VOLTAGE APK MOD The Best RPG with Infinite Money and Gems in v10.5.0.md
deleted file mode 100644
index 877fde217a55f321f7db35a9db8f701b3e765983..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/NARUTO X BORUTO NINJA VOLTAGE APK MOD The Best RPG with Infinite Money and Gems in v10.5.0.md
+++ /dev/null
@@ -1,164 +0,0 @@
-
-
Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021: How to Download and Play
-
If you are a fan of the popular anime series Naruto and Boruto, you might want to try out the game Naruto x Boruto Ninja Voltage. This game is a free-to-play action role-playing game that combines fortress strategy and shinobi action. You can collect your favorite characters from both anime series, create your own ninja fortress, and battle against other players online. But what if you want to enjoy the game without spending any money or waiting for long loading times? That's where Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 comes in handy. In this article, we will tell you what this mod apk is, how to download and install it, and some tips and tricks for playing the game.
-
What is Naruto x Boruto Ninja Voltage?
-
Naruto x Boruto Ninja Voltage is a game developed by Bandai Namco Entertainment Inc. based on the popular manga and anime series Naruto and Boruto. The game was released in 2017 for Android and iOS devices. It lets you collect character cards and control a team of four ninjas, and it offers a story mode as well as a multiplayer option for playing against other players. You can also collect your favorite shinobi from both the Naruto Shippuden and Boruto: Naruto Next Generations anime series.
-
Some of the features of Naruto x Boruto Ninja Voltage are:
-
-
Create the ultimate ninja clan by collecting your favorite shinobi from both anime series, such as Naruto Uzumaki, Sasuke Uchiha, Boruto Uzumaki, Sarada Uchiha, and many more.
-
Enhance and evolve your ninjas to become the strongest clan by using ninja cards, ninja tools, limit breaks, hero fragments, abilities, and super awakenings.
-
Design the ultimate shinobi fortress to protect your village resources from enemy attacks by using traps, trained shinobi, and more.
-
Compete for battle rankings by attacking and protecting fortresses in online multiplayer missions.
-
Perform ninja combos with simple controls in a beautiful 3D anime world.
-
Finish your foes with a variety of powerful ninjutsu attacks such as Naruto Uzumaki's Rasengan or Sasuke Uchiha's Chidori.
-
Earn rewards by battling through various ninja missions such as story missions, surprise attack missions, roundup missions, special missions, guild wars, etc.
-
Join a shinobi guild and cooperate with other players to defeat unsealed giant bosses or invade other player's fortresses.
-
-
Game Modes
-
The game has different game modes that you can play depending on your preference:
-
-
-
Story Mode: Follow the original story of Naruto Shippuden and Boruto: Naruto Next Generations in this mode. You can unlock new characters and ninja cards by completing story missions.
-
Attack Mission: In this mode, you can invade other player's fortresses and try to destroy their final room. You can earn shinobites, fortress medals, hero fragments, chakra, ryo, etc. by winning attack missions.
-
Defense Mission: In this mode, you can defend your own fortress from other player's attacks and try to prevent them from reaching your final room. You can earn shinobites, fortress medals, hero fragments, chakra, ryo, etc. by winning defense missions.
-
Roundup Mission: In this mode, you can team up with other players to fight against a powerful boss. You can earn roundup points, roundup medals, hero fragments, etc. by completing roundup missions.
-
Surprise Attack Mission: In this mode, you can join a shinobi guild and cooperate with other players to defeat a giant boss that appears randomly. You can earn guild points, guild medals, hero fragments, etc. by participating in surprise attack missions.
-
Special Mission: In this mode, you can play various missions that have different objectives and rewards. You can earn chakra, ryo, ninja tools, hero fragments, etc. by clearing special missions.
-
Guild War: In this mode, you can compete with other shinobi guilds for the highest battle ranking. You can earn guild points, guild medals, hero fragments, etc. by winning guild wars.
-
-
Why Use Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021?
-
Naruto x Boruto Ninja Voltage is a fun and addictive game that lets you experience the thrilling ninja action of the anime series. However, the game also has some drawbacks that might limit your enjoyment of the game. For example, the game requires a lot of shinobites to summon new characters and ninja cards. Shinobites are the premium currency of the game that can be bought with real money or earned by playing the game. However, earning shinobites by playing the game is very slow and tedious. Moreover, the game has long loading times and frequent updates that might consume a lot of your device's storage and battery. That's why many players opt to use Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 instead of the original game.
-
Benefits of Using the Mod APK
-
Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 is a modified version of the original game that has some advantages over the original game. Some of the benefits of using the mod apk are:
-
-
You can get unlimited shinobites for free without spending any money or waiting for long hours.
-
You can summon any character or ninja card you want without worrying about the probability or cost.
-
You can upgrade and evolve your ninjas to the maximum level without using any resources or materials.
-
You can enjoy faster loading times and smoother gameplay without any lag or glitches.
-
You can play the game offline without any internet connection or data usage.
-
You can bypass any security checks or bans from the game developers or servers.
-
-
How to Download and Install the Mod APK
-
If you want to use Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021, you need to download and install it on your device. Here are the steps to do so:
-
-
Go to [this link] and download the mod apk file on your device.
-
Go to your device's settings and enable the installation of apps from unknown sources.
-
Locate the downloaded mod apk file on your device and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game and enjoy unlimited shinobites and other features.
-
-
Tips and Tricks for Playing Naruto x Boruto Ninja Voltage
-
Naruto x Boruto Ninja Voltage is a game that requires strategy and skill to master. If you want to improve your gameplay and become a better ninja, here are some tips and tricks that you can follow:
-
Upgrade Your Village Buildings
-
Your village is your base of operations in Naruto x Boruto Ninja Voltage. It consists of various buildings that have different functions and benefits. You should upgrade your village buildings regularly to increase their efficiency and unlock new features. Some of the important buildings that you should upgrade are:
-
-
Village Headquarters: This is where you can manage your village resources such as chakra, ryo, fortress medals, etc. Upgrading it will increase your storage capacity and production rate.
-
Ninja Card House: This is where you can summon new characters and ninja cards using shinobites or summoning tickets. Upgrading it will increase your card inventory space and unlock new summoning banners.
-
Ninja Tool House: This is where you can craft new ninja tools using materials such as iron, steel, copper, etc. Upgrading it will increase your ninja tool inventory space and unlock new ninja tools.
-
Shinobi List: This is where you can view and manage your shinobi collection. You can enhance, evolve, limit break, super awaken, and equip your shinobi with ninja cards and ninja tools. Upgrading it will increase your shinobi inventory space and unlock new shinobi slots.
-
Fortress: This is where you can design and customize your own shinobi fortress to defend your village from enemy attacks. You can place traps, trained shinobi, and other obstacles to make your fortress more secure. Upgrading it will increase your fortress level and unlock new fortress features.
-
-
Summon Free Cards Every Day
-
One of the best ways to get new characters and ninja cards in Naruto x Boruto Ninja Voltage is to summon them using shinobites or summoning tickets. However, shinobites are hard to come by and summoning tickets are limited. That's why you should take advantage of the free summons that the game offers every day. You can get one free single summon every day from the normal banner and one free multi summon every week from the special banner. You can also get free summoning tickets from various events and missions. You never know when you might get lucky and pull a rare or powerful card from these free summons.
-
Choose Your Characters and Form Your Team
-
Naruto x Boruto Ninja Voltage allows you to control a team of four ninjas in each mission. You can choose any character that you have unlocked from both the Naruto Shippuden and Boruto: Naruto Next Generations anime series. However, not all characters are equal in terms of power, skills, and compatibility. You should choose your characters wisely and form a balanced team that can handle any situation. Some of the factors that you should consider when choosing your characters are:
-
-
Element: Each character has an element that determines its strengths and weaknesses against other elements. The elements are fire, wind, lightning, earth, and water: fire beats wind, wind beats lightning, lightning beats earth, earth beats water, and water beats fire (a small illustrative sketch of this cycle follows the list). You should choose characters that have an advantage over the enemy's element and avoid characters that have a disadvantage.
-
Role: Each character has a role that determines their main function in the team. The roles are attack, defense, support, and heal. Attack characters deal high damage to enemies but have low defense. Defense characters have high defense and can protect the team from enemy attacks but have low damage. Support characters can buff the team's stats or debuff the enemy's stats but have moderate damage and defense. Heal characters can heal the team's health or remove negative effects but have low damage and defense. You should choose characters that complement each other's roles and cover each other's weaknesses.
-
Skill: Each character has a skill that determines their special ability or ninjutsu attack. The skills are divided into three types: red, blue, and green. Red skills deal damage to enemies or inflict status effects such as stun, seal, confusion, etc. Blue skills boost the team's stats or grant positive effects such as invincibility, evasion, speed up, etc. Green skills heal the team's health or remove negative effects such as poison, burn, slow down, etc. You should choose characters that have skills that suit your strategy and preference.
-
Link Bonus: Each character has a link bonus that grants them extra stats or effects when they are paired with certain other characters. The link bonus is based on the character's relationship or affiliation in the anime series. For example, Naruto Uzumaki has a link bonus with Sasuke Uchiha, Sakura Haruno, Kakashi Hatake, etc. You should choose characters that have link bonuses with each other to increase their performance.
-
-
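To make the element cycle concrete, here is a small, hypothetical Python sketch; the names and structure below are illustrative only and are not taken from the game's code or data:

ELEMENT_BEATS = {
    "fire": "wind",
    "wind": "lightning",
    "lightning": "earth",
    "earth": "water",
    "water": "fire",
}

def element_advantage(attacker: str, defender: str) -> int:
    """Return +1 if the attacker has the elemental edge, -1 if it is at a disadvantage, 0 otherwise."""
    if ELEMENT_BEATS[attacker] == defender:
        return 1
    if ELEMENT_BEATS[defender] == attacker:
        return -1
    return 0

# Fire beats wind, so a fire attacker has the edge over a wind defender:
assert element_advantage("fire", "wind") == 1
assert element_advantage("wind", "fire") == -1
assert element_advantage("fire", "earth") == 0  # no direct relationship either way

When building a team, you would simply prefer attackers whose element_advantage against the expected enemies is zero or positive.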
Complete Every Mission
-
Naruto x Boruto Ninja Voltage has various missions that you can play to earn rewards and progress in the game. You should complete every mission that you can to maximize your benefits and experience. Some of the missions that you should complete are:
-
-
Daily Missions: These are missions that reset every day and give you rewards such as shinobites, summoning tickets, chakra, ryo, etc. You should complete these missions every day to get free resources and items.
-
Achievement Missions: These are missions that give you rewards based on your cumulative achievements in the game, such as clearing a certain number of missions, upgrading a certain number of buildings, enhancing a certain number of cards, etc. You should complete these missions as soon as possible to get rewards such as shinobites, hero fragments, awakening stones, chakra, ryo, etc. These resources let you enhance and super awaken your ninja cards, raising their stats and effects to the maximum level; a card can be super awakened only once, when it reaches six stars and level 100.
-
Equip: You can equip your shinobi with up to six ninja cards to increase their stats and abilities. You can also equip them with ninja tools to further boost their stats. You should equip your shinobi with ninja cards and ninja tools that match their element, role, and skill.
-
-
Join a Guild and Cooperate with Other Players
-
Naruto x Boruto Ninja Voltage is a game that can be played solo or with other players. However, playing with other players can give you more benefits and fun than playing alone. You should join a shinobi guild and cooperate with other players to enjoy the game more. Some of the benefits of joining a guild and cooperating with other players are:
-
-
You can chat with other players and make new friends who share your passion for Naruto and Boruto.
-
You can participate in guild wars and compete with other guilds for the highest battle ranking and rewards.
-
You can join surprise attack missions and cooperate with other players to defeat giant bosses and earn guild points and medals.
-
You can request and donate hero fragments to help each other upgrade your shinobi.
-
You can get support from other players in case you need help or advice on the game.
-
-
Naruto x Boruto Ninja Voltage Game Review
-
Naruto x Boruto Ninja Voltage is a game that has received mixed reviews from critics and players alike. Some people love the game for its graphics, gameplay, and features, while others dislike the game for its bugs, glitches, and pay-to-win aspects. Here are some of the pros and cons of the game based on user ratings and feedback:
-
Pros and Cons of the Game
-
-
Pros
Cons
-
The game has stunning 3D graphics that capture the anime style and atmosphere.
The game has long loading times and frequent updates that consume a lot of storage and battery.
-
The game has simple and intuitive controls that make it easy to perform ninja combos and ninjutsu attacks.
The game has some bugs and glitches that affect the gameplay and performance.
-
The game has a large roster of characters from both Naruto Shippuden and Boruto: Naruto Next Generations anime series.
The game has a low probability of getting rare or powerful characters or ninja cards from summons.
-
The game has various game modes that offer different challenges and rewards.
The game has a high difficulty level that requires a lot of grinding or spending to progress.
-
The game has a multiplayer option that allows you to play with or against other players online.
The game has a poor matchmaking system that pairs you with unfair opponents or teammates.
-
-
User Ratings and Feedback
-
Here are some of the user ratings and feedback from Google Play Store and App Store as of June 22, 2023:
-
-
"This is one of the best Naruto games I have ever played. The graphics are amazing, the gameplay is smooth, and the characters are awesome. I love how you can customize your own fortress and attack other players. The only thing I don't like is how hard it is to get shinobites. They are so expensive and rare. I wish they would give more free shinobites or lower the prices." - 4 stars
-
"This game is terrible. It is full of bugs and glitches that ruin the experience. The loading times are too long, the updates are too frequent, and the servers are too unstable. The game is also very pay-to-win. You need to spend a lot of money or time to get good characters or cards. The summons are rigged and the rates are too low. The game is also very repetitive and boring. There is no variety or creativity in the missions or the gameplay. The game is a waste of time and money. Do not download this game." - 1 star
-
"This game is decent. It has some good aspects and some bad aspects. The good aspects are the graphics, the characters, and the story mode. The bad aspects are the loading times, the updates, and the shinobites. The game could be better if they fix the bugs, improve the loading times, and give more shinobites. The game is not bad, but it is not great either. It is just average." - 3 stars
-
-
Conclusion
-
Naruto x Boruto Ninja Voltage is a game that lets you experience the ninja action of the Naruto and Boruto anime series. You can collect your favorite characters, create your own fortress, and battle with other players online. However, the game also has some drawbacks that might affect your enjoyment of the game. That's why some players prefer to use Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 to get unlimited shinobites and other features. In this article, we have explained what this mod apk is, how to download and install it, and some tips and tricks for playing the game. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below.
-
FAQs
-
Here are some of the frequently asked questions about Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021:
-
-
Is Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 safe to use?
-
Yes, Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 is safe to use as long as you download it from a trusted source and follow the installation instructions carefully. However, you should be aware that using any mod apk might violate the terms of service of the original game and might result in your account being banned or suspended. Therefore, you should use it at your own risk and discretion.
-
Is Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 compatible with my device?
-
Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 is compatible with most Android devices that have Android 4.4 or higher versions. However, some devices might not support the mod apk due to different specifications or settings. Therefore, you should check your device's compatibility before downloading and installing the mod apk.
-
Can I play Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 with my friends?
-
Yes, you can play Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 with your friends as long as they also have the mod apk installed on their devices. You can join a shinobi guild and cooperate with your friends in various missions and events. You can also chat with your friends and share your progress and achievements in the game.
-
Can I update Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021?
-
No, you cannot update Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 as it is a modified version of the original game that does not receive official updates from the game developers or servers. If you try to update the mod apk, you might lose your data or face errors or crashes in the game. Therefore, you should avoid updating the mod apk and stick to the current version.
-
Can I switch back to the original game after using Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021?
-
Yes, you can switch back to the original game after using Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 if you want to. However, you should be careful when doing so as you might lose your data or face compatibility issues in the original game. Therefore, you should backup your data before switching back to the original game and follow these steps:
-
-
Uninstall Naruto x Boruto Ninja Voltage APK Mod Dinheiro Infinito 2021 from your device.
-
Go to Google Play Store or App Store and download Naruto x Boruto Ninja Voltage from there.
-
Launch the original game and log in with your account.
-
Restore your data from your backup or start a new game.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Panzer War Definitive Edition - How to Download and Play the Best Tank Game Ever.md b/spaces/congsaPfin/Manga-OCR/logs/Panzer War Definitive Edition - How to Download and Play the Best Tank Game Ever.md
deleted file mode 100644
index a381795a050bde55b1aeb6ebb393cebfc29b3a5f..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Panzer War Definitive Edition - How to Download and Play the Best Tank Game Ever.md
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
Panzer War: Definitive Edition - A Review
-
Panzer War: Definitive Edition is a third-person shooter (TPS) tank game that simulates war battles from World War II to modern times. It features over 20 tanks and aircraft from different countries and eras, as well as hundreds of mod tanks from the workshop. It also offers two types of damage mechanics, module-based and hitpoint-based, that you can choose according to your preference. The game uses the new rendering pipelines to enhance graphics quality and performance. In this review, we will take a closer look at the game's features, gameplay, graphics, sound, mods, workshop, and more.
Panzer War: Definitive Edition is developed by WindyVerse and published by WindyVerse, Matrix Games, Shrapnel Games, and Strategic Simulations. It was released on Steam on February 28, 2018. The game is a remake of the classic Panzer War game that was released in 1995. The game does not have a tech tree or a progression system: you can play all the tanks and aircraft in the game for free, and you can also download more tanks and aircraft from the workshop or create your own mods.
-
To download and install the game, you need to have a Steam account and a compatible device. The game requires a 64-bit processor and operating system, Windows 7 or higher, at least 2 GB of RAM, at least 2 GB of storage space, and DirectX version 9.0 or higher. The game costs $5.99 on Steam, but you can also get it for free if you buy Cry of War, another tank game by WindyVerse. You can also play the mobile version of the game by visiting https://game.waroftanks.cn/de, but it is currently being tested in China only.
-
Gameplay
-
The gameplay of Panzer War: Definitive Edition is based on realistic physics, historical accuracy, and tactical strategy. You can control your tanks and aircraft from a top-down perspective, on a map with a hexagonal grid. You can move, rotate, aim, fire, zoom, switch ammo types, repair, resupply, camouflage, scout, spot, capture flags, and more. You can also customize your units by changing their skins, decals, names, crews, equipment, etc.
-
The game offers four main modes: 7v7, Skirmish (Respawn), Historical Mode, and Play Field. In 7v7 mode, you can play against other players or AI bots in team-based battles. In Skirmish mode, you can respawn after being destroyed and continue fighting until one team wins. In Historical Mode, you can play scenarios based on real historical events or battles. In Play Field mode, you can create your own scenarios or maps using various tools and options.
-
One of the most unique features of the game is the two types of damage mechanics: module-based and hitpoint-based. In module-based mode, each part of your tank or aircraft has its own health and function. If a part is damaged or destroyed, it will affect the performance of your unit. For example, if your engine is damaged, you will move slower; if your gun is damaged, you will reload slower; if your ammo rack is damaged, you will explode. In hitpoint-based mode, your unit has a single health bar that decreases when you take damage. When your health reaches zero, you are destroyed. You can choose which mode you prefer in the settings menu.
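As a rough illustration only, here is a small, hypothetical Python sketch of how the two damage models could be represented; it is not taken from the game's code, and all names and numbers are made up:

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class Module:
    """One functional part of a vehicle in the module-based model."""
    health: float

    def hit(self, damage: float) -> None:
        self.health = max(0.0, self.health - damage)

    @property
    def destroyed(self) -> bool:
        return self.health <= 0.0


@dataclass
class ModuleBasedTank:
    # Each part has its own health; losing a part degrades a specific function.
    modules: Dict[str, Module] = field(default_factory=lambda: {
        "engine": Module(100.0),    # destroyed engine -> the tank cannot move
        "gun": Module(80.0),        # damaged gun -> slower reload
        "ammo_rack": Module(60.0),  # destroyed ammo rack -> the tank explodes
    })

    def hit(self, part: str, damage: float) -> bool:
        """Damage one module; return True if the tank is knocked out outright."""
        self.modules[part].hit(damage)
        return self.modules["ammo_rack"].destroyed


@dataclass
class HitpointBasedTank:
    # A single pool of health; the tank is destroyed when it reaches zero.
    health: float = 240.0

    def hit(self, damage: float) -> bool:
        self.health = max(0.0, self.health - damage)
        return self.health == 0.0

The practical difference is that in module-based mode where you hit matters as much as how hard you hit, while in hitpoint-based mode only the total damage counts.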
-
Graphics and Sound
-
The graphics and sound of Panzer War: Definitive Edition are impressive and immersive. The game uses the new rendering pipelines to improve the lighting, shadows, reflections, textures, and effects. It also supports 4K resolution and HDR mode for better visual quality, and it has realistic and detailed models of tanks and aircraft, as well as varied terrain, buildings, trees, grass, water, smoke, fire, explosions, etc. Dynamic weather and day/night cycles affect visibility and gameplay.
-
-
The sound of the game is also realistic and atmospheric. The game has authentic and accurate sounds of engines, guns, tracks, shells, impacts, etc., along with voice-overs for the crews and commanders in different countries and languages, and background music that matches the mood and theme of the game.
-
The graphics and sound settings of the game can be adjusted to your preference in the options menu. You can change the resolution, quality, brightness, contrast, gamma, anti-aliasing, etc. You can also change the volume, language, subtitles, etc.
-
Mods and Workshop
-
Panzer War: Definitive Edition has a rich and active modding community that adds more content and features to the game. The game has a built-in workshop that allows you to access and use thousands of mods created by other players or developers. You can find mods that add new tanks, aircraft, maps, scenarios, skins, decals, sounds, effects, etc. You can also rate, comment on, subscribe to, or unsubscribe from any mod you like or dislike.
-
The game also allows you to create and share your own mods using the mod tools provided by the developers. You can use the model editor to create or edit models of tanks or aircraft; the texture editor to create or edit textures for skins or decals; the map editor to create or edit maps or scenarios; the script editor to create or edit scripts for events or actions; etc. You can also use the mod manager to manage your mods and upload them to the workshop.
-
Conclusion
-
Panzer War: Definitive Edition is a fun and engaging tank game that offers a lot of variety and customization. The game has realistic and historical tanks and aircraft from different countries and eras; two types of damage mechanics that suit different play styles; improved graphics and sound quality that enhance the immersion; a workshop and mod tools that let you access and create more content; and four modes that offer different challenges and experiences. The game is suitable for anyone who likes tank games or war games in general.
-
However, the game also has some drawbacks that may affect your enjoyment. It has some bugs and glitches that may cause crashes or errors; balance issues that may make some tanks or aircraft too weak or too strong; optimization issues that may cause lag or low FPS; interface issues that may make some menus or buttons hard to read or use; and translation issues that may make some texts or voices inaccurate or unclear. The game is not suitable for anyone who expects a flawless, polished experience.
-
The final verdict and rating of Panzer War: Definitive Edition is 8/10. The game is a great remake of a classic tank game that offers a lot of fun and value for its price. The game is not perfect but it is still enjoyable and worth playing.
-
FAQs
-
-
Q1: Is Panzer War: Definitive Edition free to play?
-
A1: No, Panzer War: Definitive Edition is not free to play. The game costs $5.99 on Steam but you can get it for free if you buy Cry of War by WindyVerse.
-
Q2: Is Panzer War: Definitive Edition compatible with mobile devices?
-
A2: Yes, Panzer War: Definitive Edition is compatible with mobile devices, but it is currently being tested in China only. You can visit https://game.waroftanks.cn/de to play the mobile version of the game.
-
Q3: Is Panzer War: Definitive Edition multiplayer or single-player?
-
A3: Panzer War: Definitive Edition is both multiplayer and single-player. You can play online with other players or offline with AI bots. You can also play co-op or versus modes with your friends or strangers.
-
Q4: Is Panzer War: Definitive Edition realistic or arcade-like?
-
A4: Panzer War: Definitive Edition is both realistic and arcade-like. You can choose between two types of damage mechanics, module-based or hitpoint-based, that affect the realism and difficulty of the game. You can also adjust the realism settings in the options menu, such as gravity, penetration, ricochet, etc.
-
Q5: Is Panzer War: Definitive Edition updated regularly?
-
A5: Yes, Panzer War: Definitive Edition is updated regularly by the developers and the modders. The game receives new tanks, aircraft, maps, scenarios, skins, decals, sounds, effects, etc. from the workshop and the mod tools. The game also receives bug fixes, balance changes, optimization improvements, interface enhancements, translation corrections, etc. from the developers.
-
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/?his?nho???n?sg???or????? ??8?rmdsl.md b/spaces/contluForse/HuggingGPT/assets/?his?nho???n?sg???or????? ??8?rmdsl.md
deleted file mode 100644
index 0aee9fcbae6c0a3fe01c564d255de3fa8f2cc8ce..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/?his?nho???n?sg???or????? ??8?rmdsl.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
ArtCAM 2017 free download, full crack, 32/64-bit, running on Windows. X-Force keygen for ArtCAM 2018 32-bit free download. Related: ArtCAM Pro 2017, X-Force 2013 keygen, ArtCAM 2011, X-Force 2010 64-bit, ArtCAM 2010 32-bit, X-Force 2009 32-bit.
-
ArtCAM 2017 is a new program designed for beginners; X-Force keygens are also listed for ArtCAM 2015 32-bit and ArtCAM Professional 2016. A few of these features have been mentioned above.
-
Download the X-Force keygen for the ArtCAM 2018 crack. No items have been added yet! Related collections: Autodesk ArtCAM 2017 x64 (64-bit), product key and X-Force keygen.
-
X-Force 2017 is a free tool designed to activate all Autodesk products; X-Force 2018 is the keygen for newer Autodesk releases.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Bahini Lai Chikeko Katha Nepalil.md b/spaces/contluForse/HuggingGPT/assets/Bahini Lai Chikeko Katha Nepalil.md
deleted file mode 100644
index 819704a280fe3e126a0e32dfede298913281abcb..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Bahini Lai Chikeko Katha Nepalil.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/New Cutting Edge Pre Intermediate Teachers Book Pdf Free 43.md b/spaces/diacanFperku/AutoGPT/New Cutting Edge Pre Intermediate Teachers Book Pdf Free 43.md
deleted file mode 100644
index d4f162e1356a21d218a16b567a39ad03c682177b..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/New Cutting Edge Pre Intermediate Teachers Book Pdf Free 43.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
It works with H5P: one of the most widely used, adaptable, and highly engaging educational software platforms for teaching English in schools. The free version allows teachers to create and distribute their own media-rich lessons and activities in one of 10 high-interactivity platforms. Learn about the platform features, its uses, and how you can effectively and affordably deliver these engaging, differentiated lesson materials to your students. View the book on
-
Good news! We are currently offering two new online, foundational Spanish courses for your convenience! Students can select from two courses:
The ability to choose from over 50,000 book titles, or create your own library with only the titles you love, at any time for any duration, is priceless. This free app equips your students with a powerful digital collection of books they can personalize, place and share.
-
This free collection allows over 50,000 schools and millions of students with the Sora app to encourage community engagement with young readers through book clubs, classroom discussions and more. The Sora Starter titles expand schools individual digital collections, removing barriers and providing access to more resources for literacy 24/7 anytime, anywhere. Notable titles include The Great Gatsby,Iggy Peck, Architect, Separate Is Never Equal and Diary of a Wimpy Kid. Ebooks, audiobooks and Read-Alongs in this collection come from Sourcebooks, Jump!, Bellwether, Triangle, Lerner, Kaleidoscope, Rosen, Abrams and Duke Classics. More information about the titles can be found here (title availability may vary by region).
`;
-
- const params = {
- title: titleTxt,
- description: descriptionMd,
- };
-
- const paramsStr = Object.entries(params)
- .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`)
- .join('&');
-
- window.open(`https://huggingface.co/spaces/diffusers/controlnet-3d-pose/discussions/new?${paramsStr}`, '_blank');
-
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
-
-share_btn_css = """
-a {text-decoration-line: underline; font-weight: 600;}
-.animate-spin {
- animation: spin 1s linear infinite;
-}
-@keyframes spin {
- from { transform: rotate(0deg); }
- to { transform: rotate(360deg); }
-}
-#share-btn-container {
- display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
-}
-#share-btn {
- all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;
-}
-#share-btn * {
- all: unset;
-}
-#share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
-}
-#share-btn-container .wrap {
- display: none !important;
-}
-"""
\ No newline at end of file
diff --git a/spaces/digitalxingtong/Jiuxia-Bert-Vits2/monotonic_align/core.py b/spaces/digitalxingtong/Jiuxia-Bert-Vits2/monotonic_align/core.py
deleted file mode 100644
index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Jiuxia-Bert-Vits2/monotonic_align/core.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]),
- nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y - 1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y - 1, x - 1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
- index = index - 1
\ No newline at end of file
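The kernel above computes, for every item in the batch, the monotonic alignment path that maximizes the cumulative score in values, writing the result into paths in place (values itself is overwritten with running sums). Below is a minimal, hypothetical driver; the import path and sizes are assumptions, and in the repository this kernel is normally called through a higher-level maximum_path wrapper that handles masking and tensor conversion, which is not shown in this diff:

import numpy as np
# from monotonic_align.core import maximum_path_jit  # assumed import path

b, t_y, t_x = 1, 6, 4  # batch size, output frames, text tokens (toy sizes)
values = np.random.randn(b, t_y, t_x).astype(np.float32)  # per-cell log-likelihoods, C-contiguous float32
paths = np.zeros((b, t_y, t_x), dtype=np.int32)            # output buffer, filled in place
t_ys = np.array([t_y], dtype=np.int32)                     # real target length of each batch item
t_xs = np.array([t_x], dtype=np.int32)                     # real text length of each batch item

maximum_path_jit(paths, values, t_ys, t_xs)

# Each row paths[0, y, :] now contains exactly one 1, and the chosen column
# index never decreases as y grows, i.e. the alignment is monotonic.
print(paths[0])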
diff --git a/spaces/digitalxingtong/Nailv-Bert-Vits2/train_ms.py b/spaces/digitalxingtong/Nailv-Bert-Vits2/train_ms.py
deleted file mode 100644
index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Nailv-Bert-Vits2/train_ms.py
+++ /dev/null
@@ -1,402 +0,0 @@
-import os
-import json
-import argparse
-import itertools
-import math
-import torch
-import shutil
-from torch import nn, optim
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.multiprocessing as mp
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-from tqdm import tqdm
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-import commons
-import utils
-from data_utils import (
- TextAudioSpeakerLoader,
- TextAudioSpeakerCollate,
- DistributedBucketSampler
-)
-from models import (
- SynthesizerTrn,
- MultiPeriodDiscriminator,
- DurationDiscriminator,
-)
-from losses import (
- generator_loss,
- discriminator_loss,
- feature_loss,
- kl_loss
-)
-from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from text.symbols import symbols
-
-torch.backends.cudnn.benchmark = True
-torch.backends.cuda.matmul.allow_tf32 = True
-torch.backends.cudnn.allow_tf32 = True
-torch.set_float32_matmul_precision('medium')
-global_step = 0
-
-
-def main():
- """Assume Single Node Multi GPUs Training Only"""
- assert torch.cuda.is_available(), "CPU training is not allowed."
-
- n_gpus = torch.cuda.device_count()
- os.environ['MASTER_ADDR'] = 'localhost'
- os.environ['MASTER_PORT'] = '65280'
-
- hps = utils.get_hparams()
- if not hps.cont:
- shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth')
- shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth')
- shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth')
- mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
-
-
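# Note added for context (not part of the original train_ms.py): main() follows the
# standard single-node, multi-GPU launch pattern. One worker process is spawned per
# visible GPU, MASTER_ADDR/MASTER_PORT define the rendezvous for the process group,
# and each worker executes run(rank, n_gpus, hps) with its own rank. When hps.cont is
# false, the pretrained D_0/G_0/DUR_0 checkpoints are first copied into
# ./logs/OUTPUT_MODEL so that the checkpoint-loading logic in run() can warm-start
# from them. The command-line flags that populate hps come from utils.get_hparams(),
# which is defined in utils.py and not shown in this diff.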
-def run(rank, n_gpus, hps):
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
- dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank)
- torch.manual_seed(hps.train.seed)
- torch.cuda.set_device(rank)
-
- train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
- train_sampler = DistributedBucketSampler(
- train_dataset,
- hps.train.batch_size,
- [32, 300, 400, 500, 600, 700, 800, 900, 1000],
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True)
- collate_fn = TextAudioSpeakerCollate()
- train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True,
- collate_fn=collate_fn, batch_sampler=train_sampler)
- if rank == 0:
- eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data)
- eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False,
- batch_size=1, pin_memory=True,
- drop_last=False, collate_fn=collate_fn)
- if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True:
- print("Using noise scaled MAS for VITS2")
- use_noise_scaled_mas = True
- mas_noise_scale_initial = 0.01
- noise_scale_delta = 2e-6
- else:
- print("Using normal MAS for VITS1")
- use_noise_scaled_mas = False
- mas_noise_scale_initial = 0.0
- noise_scale_delta = 0.0
- if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True:
- print("Using duration discriminator for VITS2")
- use_duration_discriminator = True
- net_dur_disc = DurationDiscriminator(
- hps.model.hidden_channels,
- hps.model.hidden_channels,
- 3,
- 0.1,
- gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0,
- ).cuda(rank)
- if "use_spk_conditioned_encoder" in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True:
- if hps.data.n_speakers == 0:
- raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model")
- use_spk_conditioned_encoder = True
- else:
- print("Using normal encoder for VITS1")
- use_spk_conditioned_encoder = False
-
- net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- mas_noise_scale_initial = mas_noise_scale_initial,
- noise_scale_delta = noise_scale_delta,
- **hps.model).cuda(rank)
-
- freeze_enc = getattr(hps.model, "freeze_enc", False)
- if freeze_enc:
- print("freeze encoder !!!")
- for param in net_g.enc_p.parameters():
- param.requires_grad = False
-
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
- optim_g = torch.optim.AdamW(
- filter(lambda p: p.requires_grad, net_g.parameters()),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- if net_dur_disc is not None:
- optim_dur_disc = torch.optim.AdamW(
- net_dur_disc.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- else:
- optim_dur_disc = None
- net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True)
- net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True)
- if net_dur_disc is not None:
- net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True)
-
- pretrain_dir = None
- if pretrain_dir is None:
- try:
- if net_dur_disc is not None:
- _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont)
- _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g,
- optim_g, skip_optimizer=not hps.cont)
- _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d,
- optim_d, skip_optimizer=not hps.cont)
-
- epoch_str = max(epoch_str, 1)
- global_step = (epoch_str - 1) * len(train_loader)
- except Exception as e:
- print(e)
- epoch_str = 1
- global_step = 0
- else:
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g,
- optim_g, True)
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d,
- optim_d, True)
-
-
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
- if net_dur_disc is not None:
- scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2)
- else:
- scheduler_dur_disc = None
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank == 0:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
- else:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, None)
- scheduler_g.step()
- scheduler_d.step()
- if net_dur_disc is not None:
- scheduler_dur_disc.step()
-
-
-def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
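-    """Run one training epoch; on rank 0 also log metrics, run evaluation, and save checkpoints at the configured intervals."""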
- net_g, net_d, net_dur_disc = nets
- optim_g, optim_d, optim_dur_disc = optims
- scheduler_g, scheduler_d, scheduler_dur_disc = schedulers
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
- if net_dur_disc is not None:
- net_dur_disc.train()
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)):
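-        # When noise-scaled MAS is enabled (VITS2), anneal the MAS noise scale linearly with global_step, clamping it at zero.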
- if net_g.module.use_noise_scaled_mas:
- current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step
- net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0)
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True)
- spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
- speakers = speakers.cuda(rank, non_blocking=True)
- tone = tone.cuda(rank, non_blocking=True)
- language = language.cuda(rank, non_blocking=True)
- bert = bert.cuda(rank, non_blocking=True)
-
- with autocast(enabled=hps.train.fp16_run):
- y_hat, l_length, attn, ids_slice, x_mask, z_mask, \
- (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert)
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
-
- y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g)
- loss_disc_all = loss_disc
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach())
- with autocast(enabled=False):
-                    # TODO: the duration-discriminator loss should be averaged using the mask, but for now it is averaged over all elements
- loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g)
- loss_dur_disc_all = loss_dur_disc
- optim_dur_disc.zero_grad()
- scaler.scale(loss_dur_disc_all).backward()
- scaler.unscale_(optim_dur_disc)
- grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None)
- scaler.step(optim_dur_disc)
-
- optim_d.zero_grad()
- scaler.scale(loss_disc_all).backward()
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_)
- with autocast(enabled=False):
- loss_dur = torch.sum(l_length.float())
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
-
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl
- if net_dur_disc is not None:
- loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g)
- loss_gen_all += loss_dur_gen
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank == 0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]['lr']
- losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl]
- logger.info('Train Epoch: {} [{:.0f}%]'.format(
- epoch,
- 100. * batch_idx / len(train_loader)))
- logger.info([x.item() for x in losses] + [global_step, lr])
-
- scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr,
- "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g}
- scalar_dict.update(
- {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl})
- scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)})
- scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)})
- scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)})
-
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()),
- "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()),
- "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy())
- }
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict)
-
- if global_step % hps.train.eval_interval == 0:
- evaluate(hps, net_g, eval_loader, writer_eval)
- utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
- utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
- if net_dur_disc is not None:
- utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step)))
- keep_ckpts = getattr(hps.train, 'keep_ckpts', 5)
- if keep_ckpts > 0:
- utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True)
-
-
- global_step += 1
-
- if rank == 0:
- logger.info('====> Epoch: {}'.format(epoch))
-
-
-
-def evaluate(hps, generator, eval_loader, writer_eval):
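-    """Synthesize the validation set with the current generator and log mel spectrograms and audio to TensorBoard."""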
- generator.eval()
- image_dict = {}
- audio_dict = {}
- print("Evaluating ...")
- with torch.no_grad():
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader):
- x, x_lengths = x.cuda(), x_lengths.cuda()
- spec, spec_lengths = spec.cuda(), spec_lengths.cuda()
- y, y_lengths = y.cuda(), y_lengths.cuda()
- speakers = speakers.cuda()
- bert = bert.cuda()
- tone = tone.cuda()
- language = language.cuda()
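-            # Synthesize each validation utterance twice: with the stochastic duration predictor (sdp_ratio=1.0) and with the deterministic one (sdp_ratio=0.0).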
- for use_sdp in [True, False]:
- y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0)
- y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length
-
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1).float(),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
- image_dict.update({
- f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy())
- })
- audio_dict.update({
- f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]]
- })
- image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())})
- audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]})
-
- utils.summarize(
- writer=writer_eval,
- global_step=global_step,
- images=image_dict,
- audios=audio_dict,
- audio_sampling_rate=hps.data.sampling_rate
- )
- generator.train()
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/dmeck/RVC-Speakers/rvc/infer_pack/modules/__init__.py b/spaces/dmeck/RVC-Speakers/rvc/infer_pack/modules/__init__.py
deleted file mode 100644
index bc34fc3e0993a3b5a59e286a39b8c29c71606179..0000000000000000000000000000000000000000
--- a/spaces/dmeck/RVC-Speakers/rvc/infer_pack/modules/__init__.py
+++ /dev/null
@@ -1,519 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from rvc.infer_pack import commons
-from rvc.infer_pack.commons import init_weights, get_padding
-from rvc.infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
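-    """Non-causal WaveNet-style stack of dilated convolutions with gated activations and optional global conditioning via gin_channels."""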
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
-            # the last layer only needs the skip output, not the residual half
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
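-    """Elementwise log flow: y = log(max(x, 1e-5)) under the mask; the log-determinant is -sum(y)."""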
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
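-    """Affine coupling flow step: the first half of the channels is passed through a WN network to predict an affine (or mean-only) transform of the second half."""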
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
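-    """Spline coupling flow: x0 conditions a piecewise rational-quadratic transform of x1."""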
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
-        h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2)  # [b, c*(3*num_bins-1), t] -> [b, c, t, 3*num_bins-1]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/dmeck/RVC-Speakers/vits/__init__.py b/spaces/dmeck/RVC-Speakers/vits/__init__.py
deleted file mode 100644
index 493bfc1db166d0c65c332cf1f3e29d45941f0377..0000000000000000000000000000000000000000
--- a/spaces/dmeck/RVC-Speakers/vits/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from speakers.common.registry import registry
-import os
-
-root_dir = os.path.dirname(os.path.abspath(__file__))
-registry.register_path("vits_library_root", root_dir)
-
diff --git a/spaces/dolceschokolade/chatbot-mini/components/Settings/Import.tsx b/spaces/dolceschokolade/chatbot-mini/components/Settings/Import.tsx
deleted file mode 100644
index 5cc9582f8322dc8584677eb9eb9801a6809f68b9..0000000000000000000000000000000000000000
--- a/spaces/dolceschokolade/chatbot-mini/components/Settings/Import.tsx
+++ /dev/null
@@ -1,51 +0,0 @@
-import { IconFileImport } from '@tabler/icons-react';
-import { FC } from 'react';
-
-import { useTranslation } from 'next-i18next';
-
-import { SupportedExportFormats } from '@/types/export';
-
-import { SidebarButton } from '../Sidebar/SidebarButton';
-
-interface Props {
- onImport: (data: SupportedExportFormats) => void;
-}
-
-export const Import: FC<Props> = ({ onImport }) => {
-  const { t } = useTranslation('sidebar');
-  return (
-    <>
-      <input
-        id="import-file"
-        className="sr-only"
-        tabIndex={-1}
-        type="file"
-        accept=".json"
-        onChange={(e) => {
-          if (!e.target.files?.length) return;
-
-          const file = e.target.files[0];
-          const reader = new FileReader();
-          reader.onload = (e) => {
-            let json = JSON.parse(e.target?.result as string);
-            onImport(json);
-          };
-          reader.readAsText(file);
-        }}
-      />
-
-      <SidebarButton
-        text={t('Import data')}
-        icon={<IconFileImport size={18} />}
-        onClick={() => {
-          const importFile = document.querySelector(
-            '#import-file',
-          ) as HTMLInputElement;
-          if (importFile) {
-            importFile.click();
-          }
-        }}
-      />
-    </>
-  );
-};
diff --git a/spaces/dolceschokolade/chatbot-mini/components/Sidebar/SidebarButton.tsx b/spaces/dolceschokolade/chatbot-mini/components/Sidebar/SidebarButton.tsx
deleted file mode 100644
index 7c6f3494918f0e39a02fc1a09e70c716d2026838..0000000000000000000000000000000000000000
--- a/spaces/dolceschokolade/chatbot-mini/components/Sidebar/SidebarButton.tsx
+++ /dev/null
@@ -1,19 +0,0 @@
-import { FC } from 'react';
-
-interface Props {
- text: string;
- icon: JSX.Element;
- onClick: () => void;
-}
-
-export const SidebarButton: FC<Props> = ({ text, icon, onClick }) => {
-  return (
-    <button onClick={onClick}>
-      <div>{icon}</div>
-      <span>{text}</span>
-    </button>
-  );
-};
diff --git a/spaces/dorkai/ChatUIPro/app/components/sidebar/card.module.css b/spaces/dorkai/ChatUIPro/app/components/sidebar/card.module.css
deleted file mode 100644
index c917cb4db6ca20ca59639dad8c64bbe43bd83183..0000000000000000000000000000000000000000
--- a/spaces/dorkai/ChatUIPro/app/components/sidebar/card.module.css
+++ /dev/null
@@ -1,3 +0,0 @@
-.card:hover {
- background: linear-gradient(0deg, rgba(235, 245, 255, 0.4), rgba(235, 245, 255, 0.4)), #FFFFFF;
-}
\ No newline at end of file
diff --git a/spaces/eeyorestoned/midjourney-v5/README.md b/spaces/eeyorestoned/midjourney-v5/README.md
deleted file mode 100644
index 76189ac0ce98515ed5a2cd7423218111eaf87b3e..0000000000000000000000000000000000000000
--- a/spaces/eeyorestoned/midjourney-v5/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Midjourney V5
-emoji: 📚
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.14.0
-app_file: app.py
-pinned: false
-license: openrail
-duplicated_from: hareshhecker/midjourney-v5
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/elkraken/Video-Object-Detection/models/experimental.py b/spaces/elkraken/Video-Object-Detection/models/experimental.py
deleted file mode 100644
index 735d7aa0ebe7dbf3c4b062ebc3858cb5f9ebab40..0000000000000000000000000000000000000000
--- a/spaces/elkraken/Video-Object-Detection/models/experimental.py
+++ /dev/null
@@ -1,272 +0,0 @@
-import numpy as np
-import random
-import torch
-import torch.nn as nn
-
-from models.common import Conv, DWConv
-from utils.google_utils import attempt_download
-
-
-class CrossConv(nn.Module):
- # Cross Convolution Downsample
- def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
- # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
- super(CrossConv, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, (1, k), (1, s))
- self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class Sum(nn.Module):
- # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
- def __init__(self, n, weight=False): # n: number of inputs
- super(Sum, self).__init__()
- self.weight = weight # apply weights boolean
- self.iter = range(n - 1) # iter object
- if weight:
- self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights
-
- def forward(self, x):
- y = x[0] # no weight
- if self.weight:
- w = torch.sigmoid(self.w) * 2
- for i in self.iter:
- y = y + x[i + 1] * w[i]
- else:
- for i in self.iter:
- y = y + x[i + 1]
- return y
-
-
-class MixConv2d(nn.Module):
- # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595
- def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):
- super(MixConv2d, self).__init__()
- groups = len(k)
- if equal_ch: # equal c_ per group
- i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices
- c_ = [(i == g).sum() for g in range(groups)] # intermediate channels
- else: # equal weight.numel() per group
- b = [c2] + [0] * groups
- a = np.eye(groups + 1, groups, k=-1)
- a -= np.roll(a, 1, axis=1)
- a *= np.array(k) ** 2
- a[0] = 1
- c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
-
- self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])
- self.bn = nn.BatchNorm2d(c2)
- self.act = nn.LeakyReLU(0.1, inplace=True)
-
- def forward(self, x):
- return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
-
-
-class Ensemble(nn.ModuleList):
- # Ensemble of models
- def __init__(self):
- super(Ensemble, self).__init__()
-
- def forward(self, x, augment=False):
- y = []
- for module in self:
- y.append(module(x, augment)[0])
- # y = torch.stack(y).max(0)[0] # max ensemble
- # y = torch.stack(y).mean(0) # mean ensemble
- y = torch.cat(y, 1) # nms ensemble
- return y, None # inference, train output
-
-
-
-
-
-class ORT_NMS(torch.autograd.Function):
- '''ONNX-Runtime NMS operation'''
- @staticmethod
- def forward(ctx,
- boxes,
- scores,
- max_output_boxes_per_class=torch.tensor([100]),
- iou_threshold=torch.tensor([0.45]),
- score_threshold=torch.tensor([0.25])):
- device = boxes.device
- batch = scores.shape[0]
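-        # The eager forward only fabricates index tensors of the right shape/dtype so the graph can be traced;
-        # the real NMS is emitted by symbolic() as an ONNX NonMaxSuppression node.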
- num_det = random.randint(0, 100)
- batches = torch.randint(0, batch, (num_det,)).sort()[0].to(device)
- idxs = torch.arange(100, 100 + num_det).to(device)
- zeros = torch.zeros((num_det,), dtype=torch.int64).to(device)
- selected_indices = torch.cat([batches[None], zeros[None], idxs[None]], 0).T.contiguous()
- selected_indices = selected_indices.to(torch.int64)
- return selected_indices
-
- @staticmethod
- def symbolic(g, boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold):
- return g.op("NonMaxSuppression", boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold)
-
-
-class TRT_NMS(torch.autograd.Function):
- '''TensorRT NMS operation'''
- @staticmethod
- def forward(
- ctx,
- boxes,
- scores,
- background_class=-1,
- box_coding=1,
- iou_threshold=0.45,
- max_output_boxes=100,
- plugin_version="1",
- score_activation=0,
- score_threshold=0.25,
- ):
- batch_size, num_boxes, num_classes = scores.shape
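-        # Likewise, forward() returns dummy tensors with the correct shapes for tracing; the actual NMS runs in TensorRT's EfficientNMS_TRT plugin declared in symbolic().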
- num_det = torch.randint(0, max_output_boxes, (batch_size, 1), dtype=torch.int32)
- det_boxes = torch.randn(batch_size, max_output_boxes, 4)
- det_scores = torch.randn(batch_size, max_output_boxes)
- det_classes = torch.randint(0, num_classes, (batch_size, max_output_boxes), dtype=torch.int32)
- return num_det, det_boxes, det_scores, det_classes
-
- @staticmethod
- def symbolic(g,
- boxes,
- scores,
- background_class=-1,
- box_coding=1,
- iou_threshold=0.45,
- max_output_boxes=100,
- plugin_version="1",
- score_activation=0,
- score_threshold=0.25):
- out = g.op("TRT::EfficientNMS_TRT",
- boxes,
- scores,
- background_class_i=background_class,
- box_coding_i=box_coding,
- iou_threshold_f=iou_threshold,
- max_output_boxes_i=max_output_boxes,
- plugin_version_s=plugin_version,
- score_activation_i=score_activation,
- score_threshold_f=score_threshold,
- outputs=4)
- nums, boxes, scores, classes = out
- return nums, boxes, scores, classes
-
-
-class ONNX_ORT(nn.Module):
- '''onnx module with ONNX-Runtime NMS operation.'''
- def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=640, device=None, n_classes=80):
- super().__init__()
- self.device = device if device else torch.device("cpu")
- self.max_obj = torch.tensor([max_obj]).to(device)
- self.iou_threshold = torch.tensor([iou_thres]).to(device)
- self.score_threshold = torch.tensor([score_thres]).to(device)
- self.max_wh = max_wh # if max_wh != 0 : non-agnostic else : agnostic
- self.convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]],
- dtype=torch.float32,
- device=self.device)
- self.n_classes=n_classes
-
- def forward(self, x):
- boxes = x[:, :, :4]
- conf = x[:, :, 4:5]
- scores = x[:, :, 5:]
- if self.n_classes == 1:
- scores = conf # for models with one class, cls_loss is 0 and cls_conf is always 0.5,
-                                        # so there is no need to multiply.
- else:
- scores *= conf # conf = obj_conf * cls_conf
- boxes @= self.convert_matrix
- max_score, category_id = scores.max(2, keepdim=True)
- dis = category_id.float() * self.max_wh
- nmsbox = boxes + dis
- max_score_tp = max_score.transpose(1, 2).contiguous()
- selected_indices = ORT_NMS.apply(nmsbox, max_score_tp, self.max_obj, self.iou_threshold, self.score_threshold)
- X, Y = selected_indices[:, 0], selected_indices[:, 2]
- selected_boxes = boxes[X, Y, :]
- selected_categories = category_id[X, Y, :].float()
- selected_scores = max_score[X, Y, :]
- X = X.unsqueeze(1).float()
- return torch.cat([X, selected_boxes, selected_categories, selected_scores], 1)
-
-class ONNX_TRT(nn.Module):
- '''onnx module with TensorRT NMS operation.'''
- def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=None ,device=None, n_classes=80):
- super().__init__()
- assert max_wh is None
- self.device = device if device else torch.device('cpu')
- self.background_class = -1,
- self.box_coding = 1,
- self.iou_threshold = iou_thres
- self.max_obj = max_obj
- self.plugin_version = '1'
- self.score_activation = 0
- self.score_threshold = score_thres
- self.n_classes=n_classes
-
- def forward(self, x):
- boxes = x[:, :, :4]
- conf = x[:, :, 4:5]
- scores = x[:, :, 5:]
- if self.n_classes == 1:
- scores = conf # for models with one class, cls_loss is 0 and cls_conf is always 0.5,
-                                        # so there is no need to multiply.
- else:
- scores *= conf # conf = obj_conf * cls_conf
- num_det, det_boxes, det_scores, det_classes = TRT_NMS.apply(boxes, scores, self.background_class, self.box_coding,
- self.iou_threshold, self.max_obj,
- self.plugin_version, self.score_activation,
- self.score_threshold)
- return num_det, det_boxes, det_scores, det_classes
-
-
-class End2End(nn.Module):
- '''export onnx or tensorrt model with NMS operation.'''
- def __init__(self, model, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=None, device=None, n_classes=80):
- super().__init__()
- device = device if device else torch.device('cpu')
-        assert isinstance(max_wh, int) or max_wh is None
- self.model = model.to(device)
- self.model.model[-1].end2end = True
- self.patch_model = ONNX_TRT if max_wh is None else ONNX_ORT
- self.end2end = self.patch_model(max_obj, iou_thres, score_thres, max_wh, device, n_classes)
- self.end2end.eval()
-
- def forward(self, x):
- x = self.model(x)
- x = self.end2end(x)
- return x
-
-
-
-
-
-def attempt_load(weights, map_location=None):
- # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
- model = Ensemble()
- for w in weights if isinstance(weights, list) else [weights]:
- attempt_download(w)
- ckpt = torch.load(w, map_location=map_location) # load
- model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model
-
- # Compatibility updates
- for m in model.modules():
- if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]:
- m.inplace = True # pytorch 1.7.0 compatibility
- elif type(m) is nn.Upsample:
- m.recompute_scale_factor = None # torch 1.11.0 compatibility
- elif type(m) is Conv:
- m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility
-
- if len(model) == 1:
- return model[-1] # return model
- else:
- print('Ensemble created with %s\n' % weights)
- for k in ['names', 'stride']:
- setattr(model, k, getattr(model[-1], k))
- return model # return ensemble
-
-
diff --git a/spaces/enochianborg/stable-diffusion-webui-vorstcavry/app.py b/spaces/enochianborg/stable-diffusion-webui-vorstcavry/app.py
deleted file mode 100644
index 32b191b32a78e7d5a88510e580835dde61ac3859..0000000000000000000000000000000000000000
--- a/spaces/enochianborg/stable-diffusion-webui-vorstcavry/app.py
+++ /dev/null
@@ -1,190 +0,0 @@
-"""
-Stable Diffusion Webui Version 1.6
-https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.6.0
-
-"""
-commit_id=r"5ef669de080814067961f28357256e8fe27544f4" #Version 1.3.0
-import os
-from sys import executable
-import subprocess
-import pathlib
-import gc
-
-def Gitclone(URI:str,ClonePath:pathlib.Path ) -> int :
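-  """Clone URI into ClonePath, retrying up to 10 times before raising; a no-op if the path already exists."""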
- if pathlib.Path.exists(ClonePath):
- return 0
- for z in range(10):
- i=subprocess.run([r"git",r"clone",str(URI),str(ClonePath)])
- if(i.returncode == 0 ):
- del i
- return 0
- else :
- del i
- raise Exception(str.format("clone \'{0}\' failed",URI))
-
-
-def DownLoad(URI:str,DownloadPath:pathlib.Path,DownLoadFileName:str ) -> int:
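-  """Download URI to DownloadPath/DownLoadFileName with aria2c (16 connections), retrying up to 10 times before raising; a no-op if the file already exists."""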
- if (DownloadPath / DownLoadFileName).is_file(): return 0
- for z in range(10):
- i=subprocess.run([r"aria2c",r"-c",r"-x" ,r"16", r"-s",r"16", r"-k" ,r"1M" ,r"-m",r"0",r"--enable-mmap=false",r"--console-log-level=error",r"-d",str(DownloadPath),r"-o",DownLoadFileName,URI]);
- if(i.returncode == 0 ):
- del i
- gc.collect()
- return 0
- else :
- del i
- raise Exception(str.format("download \'{0}\' failed",URI))
-
-user_home =pathlib.Path.home().resolve()
-os.chdir(str(user_home))
-#clone stable-diffusion-webui repo
-print("cloning stable-diffusion-webui repo")
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui.git",user_home / r"stable-diffusion-webui")
-os.chdir(str(user_home / r"stable-diffusion-webui"))
-os.system("git reset --hard "+commit_id)
-#install extensions
-print("installing extensions")
-Gitclone(r"https://github.com/vorstcavry/embeddings",user_home / r"stable-diffusion-webui" / r"embeddings" / r"negative")
-Gitclone(r"https://github.com/vorstcavry/lora",user_home / r"stable-diffusion-webui" / r"models" / r"Lora" / r"positive")
-Gitclone(r"https://github.com/vorstcavry/Checkpoint-Model",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint")
-
-DownLoad(r"https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth",user_home / r"stable-diffusion-webui" / r"models" / r"ESRGAN" ,r"4x-UltraSharp.pth")
-while (True):
- i=subprocess.run([r"wget",r"https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py",r"-O",str(user_home / r"stable-diffusion-webui" / r"scripts" / r"run_n_times.py")])
- if(i.returncode == 0 ):
- del i
- gc.collect()
- break
- else :
- del i
-Gitclone(r"https://github.com/deforum-art/deforum-for-automatic1111-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"deforum-for-automatic1111-webui" )
-Gitclone(r"https://github.com/AlUlkesh/stable-diffusion-webui-images-browser",user_home / r"stable-diffusion-webui" / r"extensions"/ r"stable-diffusion-webui-images-browser")
-Gitclone(r"https://github.com/camenduru/stable-diffusion-webui-huggingface",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-huggingface")
-Gitclone(r"https://github.com/BlafKing/sd-civitai-browser-plus",user_home / r"stable-diffusion-webui" / r"extensions" / r"civitai-browser")
-Gitclone(r"https://github.com/kohya-ss/sd-webui-additional-networks",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks")
-Gitclone(r"https://github.com/Mikubill/sd-webui-controlnet",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-controlnet")
-Gitclone(r"https://github.com/fkunn1326/openpose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"openpose-editor")
-Gitclone(r"https://github.com/jexom/sd-webui-depth-lib",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-depth-lib")
-Gitclone(r"https://github.com/hnmr293/posex",user_home / r"stable-diffusion-webui" / r"extensions" / r"posex")
-Gitclone(r"https://github.com/nonnonstop/sd-webui-3d-open-pose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-3d-open-pose-editor")
-#Uncomment the next line for Chinese (zh_CN) localization
-#Gitclone(r"https://github.com/dtlnor/stable-diffusion-webui-localization-zh_CN.git",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-localization-zh_CN")
-Gitclone(r"https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" , user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-tagcomplete")
-Gitclone(r"https://github.com/camenduru/sd-webui-tunnels",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-tunnels")
-Gitclone(r"https://github.com/etherealxx/batchlinks-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"batchlinks-webui")
-Gitclone(r"https://github.com/catppuccin/stable-diffusion-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-catppuccin")
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-rembg")
-Gitclone(r"https://tinyurl.com/aspect-ratio-v",user_home / r"stable-diffusion-webui" / r"extensions" / r"aspect-ratio")
-Gitclone(r"https://github.com/Iyashinouta/sd-model-downloader",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-model-downloader")
-Gitclone(r"https://github.com/AIrjen/OneButtonPrompt",user_home / r"stable-diffusion-webui" / r"extensions" / r"OneButtonPrompt")
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-wildcards")
-Gitclone(r"https://github.com/adieyal/sd-dynamic-prompts",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-dynamic-prompts")
-Gitclone(r"https://github.com/d8ahazard/sd_dreambooth_extension",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_dreambooth_extension")
-Gitclone(r"https://github.com/yfszzx/stable-diffusion-webui-inspiration",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-inspiration")
-Gitclone(r"https://github.com/Coyote-A/ultimate-upscale-for-automatic1111",user_home / r"stable-diffusion-webui" / r"extensions" / r"ultimate-upscale-for-automatic1111")
-os.chdir(user_home / r"stable-diffusion-webui")
-#download ControlNet models
-print("extensions dolwnload done .\ndownloading ControlNet models")
-dList =[r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_seg_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_ip2p_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_shuffle_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_canny_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1p_sd15_depth_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_inpaint_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_lineart_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_mlsd_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_normalbae_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_openpose_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_scribble_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_seg_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_softedge_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15s2_lineart_anime_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1e_sd15_tile_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_style_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_seg_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_openpose_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_keypose_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_zoedepth_sd15v1.pth"]
-for i in range(0,len(dList)): DownLoad(dList[i],user_home / r"stable-diffusion-webui" / r"extensions" / "sd-webui-controlnet" / r"models",pathlib.Path(dList[i]).name)
-del dList
-#download model
-#you can change model download address here
-print("ControlNet models download done.\ndownloading model")
-#Stable Diffusion Checkpoint Model
-#anything version4.5
-#DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"anything-v4.5-pruned.ckpt")
-DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.0.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"anything-v4.0.vae.pt")
-#Counterfeit-V3.0
-#DownLoad(r"https://huggingface.co/gsdf/Counterfeit-V3.0/resolve/main/Counterfeit-V3.0_fp16.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"Counterfeit-V3.0_fp16.safetensors")
-#AbyssOrangeMix2 sfw
-#DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_sfw.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"AbyssOrangeMix2_sfw.safetensors")
-DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"orangemix.vae.pt")
-#MeinaPastelV5
-#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Baked%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"MeinaPastelV5_BakedVAE.safetensors")
-#DownLoad(r"https://huggingface.co/AnonPerson/ChilloutMix/resolve/main/ChilloutMix-ni-fp16.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"ChilloutMix-ni-fp16.safetensors")
-#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV4%20-%20Without%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"MeinaPastelV4%20-%20Without%20VAE.safetensors")
-#DownLoad(r"https://huggingface.co/ckpt/perfect_world/resolve/main/perfectWorld_v2Baked.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"perfectWorld_v2Baked.safetensors")
-#DownLoad(r"https://huggingface.co/vorstcavry/figurestyle1/resolve/main/figure.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"figure.safetensors")
-#DownLoad(r"https://huggingface.co/vorstcavry/dosmix/resolve/main/ddosmix_V2.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"ddosmix_V2.safetensors")
-#DownLoad(r"https://huggingface.co/ckpt/rev-animated/resolve/main/revAnimated_v11.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"revAnimated_v11.safetensors")
-#DownLoad(r"https://huggingface.co/ckpt/MeinaMix/resolve/main/Meina_V8_baked_VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"Meina_V8_baked_VAE.safetensors")
-#DownLoad(r"https://huggingface.co/ckpt/CyberRealistic/resolve/main/cyberrealistic_v13.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"cyberrealistic_v13.safetensors")
-DownLoad(r"https://huggingface.co/vorstcavry/mymodel/resolve/main/Cavry_V2.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"Cavry_V2.safetensors")
-#download VAE
-DownLoad(r"https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"vae-ft-mse-840000-ema-pruned.safetensors")
-
-#Lora Model
-#Better Light
-#DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"Better_light.safetensors")
-#DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"Better_light.safetensors")
-#LAS
-#DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"LAS.safetensors")
-#DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"LAS.safetensors")
-#Backlighting
-#DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"backlighting.safetensors")
-#DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"backlighting.safetensors")
-DownLoad(r"https://huggingface.co/vorstcavry/loraasia1/resolve/main/japaneseDollLikeness_v15.safetensors",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"japaneseDollLikeness_v15.safetensors")
-DownLoad(r"https://huggingface.co/vorstcavry/loraasia1/resolve/main/koreanDollLikeness_v20.safetensors",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"koreanDollLikeness_v20.safetensors")
-DownLoad(r"https://huggingface.co/vorstcavry/loraasia1/resolve/main/taiwanDollLikeness_v15.safetensors",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"taiwanDollLikeness_v15.safetensors")
-
-
-
-
-#GFPGAN Model
-#detection Resnet50
-DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"detection_Resnet50_Final.pth")
-#parsing_parsenet
-DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"parsing_parsenet.pth")
-#GFPGANv1.4
-DownLoad(r"https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"GFPGANv1.4.pth")
-#start Stable Diffusion Webui
-print("Done\nStarting Webui...")
-os.chdir(user_home / r"stable-diffusion-webui")
-gc.collect()
-while True:
- ret=subprocess.run([executable ,user_home / r"stable-diffusion-webui" / r"launch.py",r"--precision",r"full",r"--no-half",r"--no-half-vae",r"--enable-insecure-extension-access",r"--medvram",r"--skip-torch-cuda-test",r"--enable-console-prompts",r"--ui-settings-file="+str(pathlib.Path(__file__).parent /r"config.json")])
- if(ret.returncode == 0 ):
- del ret
- gc.collect()
- else :
- del ret
-del os, user_home, executable, subprocess
\ No newline at end of file
diff --git a/spaces/eson/bert-perplexity/README.md b/spaces/eson/bert-perplexity/README.md
deleted file mode 100644
index d90d98747cb2bd9e8e0289f89294536556bcedd1..0000000000000000000000000000000000000000
--- a/spaces/eson/bert-perplexity/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Bert Perplexity
-emoji: 💩
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
-duplicated_from: eson/bert-perplexity-debug
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/evaluate-metric/meteor/meteor.py b/spaces/evaluate-metric/meteor/meteor.py
deleted file mode 100644
index 058ee80ed77aaa4253acde460f915ca2771c4835..0000000000000000000000000000000000000000
--- a/spaces/evaluate-metric/meteor/meteor.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# Copyright 2020 The HuggingFace Evaluate Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" METEOR metric. """
-
-import datasets
-import numpy as np
-from nltk.translate import meteor_score
-from packaging import version
-
-import evaluate
-
-
-if evaluate.config.PY_VERSION < version.parse("3.8"):
- import importlib_metadata
-else:
- import importlib.metadata as importlib_metadata
-
-
-NLTK_VERSION = version.parse(importlib_metadata.version("nltk"))
-if NLTK_VERSION >= version.Version("3.6.4"):
- from nltk import word_tokenize
-
-
-_CITATION = """\
-@inproceedings{banarjee2005,
- title = {{METEOR}: An Automatic Metric for {MT} Evaluation with Improved Correlation with Human Judgments},
- author = {Banerjee, Satanjeev and Lavie, Alon},
- booktitle = {Proceedings of the {ACL} Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization},
- month = jun,
- year = {2005},
- address = {Ann Arbor, Michigan},
- publisher = {Association for Computational Linguistics},
- url = {https://www.aclweb.org/anthology/W05-0909},
- pages = {65--72},
-}
-"""
-
-_DESCRIPTION = """\
-METEOR, an automatic metric for machine translation evaluation
-that is based on a generalized concept of unigram matching between the
-machine-produced translation and human-produced reference translations.
-Unigrams can be matched based on their surface forms, stemmed forms,
-and meanings; furthermore, METEOR can be easily extended to include more
-advanced matching strategies. Once all generalized unigram matches
-between the two strings have been found, METEOR computes a score for
-this matching using a combination of unigram-precision, unigram-recall, and
-a measure of fragmentation that is designed to directly capture how
-well-ordered the matched words in the machine translation are in relation
-to the reference.
-
-METEOR gets an R correlation value of 0.347 with human evaluation on the Arabic
-data and 0.331 on the Chinese data. This is shown to be an improvement on
-using simply unigram-precision, unigram-recall and their harmonic F1
-combination.
-"""
-
-_KWARGS_DESCRIPTION = """
-Computes METEOR score of translated segments against one or more references.
-Args:
- predictions: list of predictions to score. Each prediction
- should be a string with tokens separated by spaces.
- references: list of reference for each prediction. Each
- reference should be a string with tokens separated by spaces.
- alpha: Parameter for controlling relative weights of precision and recall. default: 0.9
- beta: Parameter for controlling shape of penalty as a function of fragmentation. default: 3
- gamma: Relative weight assigned to fragmentation penalty. default: 0.5
-Returns:
- 'meteor': meteor score.
-Examples:
-
- >>> meteor = evaluate.load('meteor')
- >>> predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"]
- >>> references = ["It is a guide to action that ensures that the military will forever heed Party commands"]
- >>> results = meteor.compute(predictions=predictions, references=references)
- >>> print(round(results["meteor"], 4))
- 0.6944
-"""
-
-
-@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
-class Meteor(evaluate.Metric):
- def _info(self):
- return evaluate.MetricInfo(
- description=_DESCRIPTION,
- citation=_CITATION,
- inputs_description=_KWARGS_DESCRIPTION,
- features=[
- datasets.Features(
- {
- "predictions": datasets.Value("string", id="sequence"),
- "references": datasets.Sequence(datasets.Value("string", id="sequence"), id="references"),
- }
- ),
- datasets.Features(
- {
- "predictions": datasets.Value("string", id="sequence"),
- "references": datasets.Value("string", id="sequence"),
- }
- ),
- ],
- codebase_urls=["https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py"],
- reference_urls=[
- "https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score",
- "https://en.wikipedia.org/wiki/METEOR",
- ],
- )
-
- def _download_and_prepare(self, dl_manager):
- import nltk
-
- nltk.download("wordnet")
- if NLTK_VERSION >= version.Version("3.6.5"):
- nltk.download("punkt")
- if NLTK_VERSION >= version.Version("3.6.6"):
- nltk.download("omw-1.4")
-
- def _compute(self, predictions, references, alpha=0.9, beta=3, gamma=0.5):
- multiple_refs = isinstance(references[0], list)
- if NLTK_VERSION >= version.Version("3.6.5"):
-            # METEOR in NLTK 3.6.5 and later expects pre-tokenized inputs, so tokenize predictions and references here
- if multiple_refs:
- scores = [
- meteor_score.meteor_score(
- [word_tokenize(ref) for ref in refs],
- word_tokenize(pred),
- alpha=alpha,
- beta=beta,
- gamma=gamma,
- )
- for refs, pred in zip(references, predictions)
- ]
- else:
- scores = [
- meteor_score.single_meteor_score(
- word_tokenize(ref), word_tokenize(pred), alpha=alpha, beta=beta, gamma=gamma
- )
- for ref, pred in zip(references, predictions)
- ]
- else:
- if multiple_refs:
-                # Older NLTK releases accept raw (untokenized) strings, so each
-                # prediction is scored against its own list of reference strings.
-                scores = [
-                    meteor_score.meteor_score(refs, pred, alpha=alpha, beta=beta, gamma=gamma)
-                    for refs, pred in zip(references, predictions)
-                ]
- else:
- scores = [
- meteor_score.single_meteor_score(ref, pred, alpha=alpha, beta=beta, gamma=gamma)
- for ref, pred in zip(references, predictions)
- ]
-
- return {"meteor": np.mean(scores)}
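
As a rough illustration of the scoring described in the metric's docstring above, the sketch below combines unigram precision, unigram recall, and the fragmentation penalty with the same default weights the metric exposes (alpha=0.9, beta=3, gamma=0.5). It is only a simplified stand-in: NLTK's real implementation also matches stems and WordNet synonyms, and the helper's name and inputs are illustrative, not part of any API.

def combine_meteor_components(matches, chunks, hyp_len, ref_len,
                              alpha=0.9, beta=3.0, gamma=0.5):
    # No matched unigrams means a zero score (and avoids division by zero below).
    if matches == 0:
        return 0.0
    precision = matches / hyp_len
    recall = matches / ref_len
    # Parameterized harmonic mean; with alpha near 1 it weights recall more heavily.
    f_mean = (precision * recall) / (alpha * precision + (1 - alpha) * recall)
    # Fragmentation penalty: fewer, longer runs of contiguous matches indicate better word order.
    penalty = gamma * (chunks / matches) ** beta
    return f_mean * (1 - penalty)

# Example: 9 matched unigrams grouped into 3 chunks, 10-word hypothesis, 11-word reference.
print(round(combine_meteor_components(matches=9, chunks=3, hyp_len=10, ref_len=11), 4))
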
diff --git a/spaces/facat/alpaca-lora-cn/app.py b/spaces/facat/alpaca-lora-cn/app.py
deleted file mode 100644
index 09547b2f940566801e0fddab80b72de7a1e0e2eb..0000000000000000000000000000000000000000
--- a/spaces/facat/alpaca-lora-cn/app.py
+++ /dev/null
@@ -1,124 +0,0 @@
-# %%
-import os
-os.environ["CUDA_VISIBLE_DEVICES"] = ""
-import gradio as gr
-from transformers import LlamaTokenizer
-from transformers import LlamaForCausalLM, GenerationConfig
-from peft import PeftModel
-import torch
-if torch.cuda.is_available():
- device = "cuda"
-else:
- device = "cpu"
-device_map={'': 0}
-def generate_instruction_prompt(instruction, input=None):
- if input:
- return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
-
-### Instruction:
-{instruction}
-
-### Input:
-{input}
-
-### Response:"""
- else:
- return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
-
-### Instruction:
-{instruction}
-
-### Response:"""
-
-
-def evaluate(
- model,
- tokenizer,
- instruction,
- input=None,
- temperature=0.1,
- top_p=0.75,
- num_beams=4,
- max_token=256,
-):
- generation_config = GenerationConfig(
- temperature=temperature,
- top_p=top_p,
- num_beams=num_beams,
- top_k=40,
- no_repeat_ngram_size=3,
- )
- prompt = generate_instruction_prompt(instruction, input)
- inputs = tokenizer(prompt, return_tensors="pt")
- input_ids = inputs["input_ids"].to(device)
- generation_output = model.generate(
- input_ids=input_ids,
- generation_config=generation_config,
- return_dict_in_generate=True,
- output_scores=True,
- max_new_tokens=max_token,
- )
- s = generation_output.sequences[0]
- output = tokenizer.decode(s)
- res = output.split("### Response:")[1].strip()
- print("Response:", res)
- return res
-
-
-def load_lora(lora_path, base_model="decapoda-research/llama-7b-hf"):
- model = LlamaForCausalLM.from_pretrained(
- base_model,
- # load_in_8bit=True,
- # device_map=device_map,
- low_cpu_mem_usage=True,
- # torch_type=torch.float16,
- )
- print("Loading LoRA...")
- lora = PeftModel.from_pretrained(
- model,
- lora_path,
-        torch_dtype=torch.float16,
- # device_map=device_map,
- )
- return lora
-
-
-base_model = "decapoda-research/llama-7b-hf"
-tokenizer = LlamaTokenizer.from_pretrained(base_model)
-# question = "如果今天是星期五, 那么后天是星期几?"
-model = load_lora(lora_path="facat/alpaca-lora-cn", base_model=base_model)
-
-eval = lambda question, input, temperature, beams, max_token: evaluate(
- model,
- tokenizer,
- question,
- input=input,
- temperature=temperature,
- num_beams=beams,
- max_token=max_token,
-)
-
-gr.Interface(
- fn=eval,
- inputs=[
- gr.components.Textbox(
- lines=2, label="Instruction", placeholder="Tell me about alpacas."
- ),
- gr.components.Textbox(lines=2, label="Input", placeholder="none"),
- gr.components.Slider(minimum=0, maximum=1, value=0.1, label="Temperature"),
- # gr.components.Slider(minimum=0, maximum=1, value=0.75, label="Top p"),
- # gr.components.Slider(minimum=0, maximum=100, step=1, value=40, label="Top k"),
- gr.components.Slider(minimum=1, maximum=4, step=1, value=4, label="Beams"),
- gr.components.Slider(
- minimum=1, maximum=512, step=1, value=256, label="Max tokens"
- ),
- ],
- outputs=[
-        gr.components.Textbox(
- lines=8,
- label="Output",
- )
- ],
-    title="Alpaca-LoRA",
-    description="Alpaca-LoRA",
-).launch()
diff --git a/spaces/facebook/StyleNeRF/viz/layer_widget.py b/spaces/facebook/StyleNeRF/viz/layer_widget.py
deleted file mode 100644
index 365d6dbd9e1700b68b3ce121d987ef8c51356a01..0000000000000000000000000000000000000000
--- a/spaces/facebook/StyleNeRF/viz/layer_widget.py
+++ /dev/null
@@ -1,183 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import imgui
-from gui_utils import imgui_utils
-
-#----------------------------------------------------------------------------
-
-class LayerWidget:
- def __init__(self, viz):
- self.viz = viz
- self.prev_layers = None
- self.cur_layer = None
- self.sel_channels = 3
- self.base_channel = 0
- self.img_scale_db = 0
- self.img_normalize = False
- self.fft_show = False
- self.fft_all = True
- self.fft_range_db = 50
- self.fft_beta = 8
- self.refocus = False
-
- @imgui_utils.scoped_by_object_id
- def __call__(self, show=True):
- viz = self.viz
- layers = viz.result.get('layers', [])
- if self.prev_layers != layers:
- self.prev_layers = layers
- self.refocus = True
- layer = ([layer for layer in layers if layer.name == self.cur_layer] + [None])[0]
- if layer is None and len(layers) > 0:
- layer = layers[-1]
- self.cur_layer = layer.name
- num_channels = layer.shape[1] if layer is not None else 0
- base_channel_max = max(num_channels - self.sel_channels, 0)
-
- if show:
- bg_color = [0.16, 0.29, 0.48, 0.2]
- dim_color = list(imgui.get_style().colors[imgui.COLOR_TEXT])
- dim_color[-1] *= 0.5
-
- # Begin list.
- width = viz.font_size * 28
- height = imgui.get_text_line_height_with_spacing() * 12 + viz.spacing
- imgui.push_style_var(imgui.STYLE_FRAME_PADDING, [0, 0])
- imgui.push_style_color(imgui.COLOR_CHILD_BACKGROUND, *bg_color)
- imgui.push_style_color(imgui.COLOR_HEADER, 0, 0, 0, 0)
- imgui.push_style_color(imgui.COLOR_HEADER_HOVERED, 0.16, 0.29, 0.48, 0.5)
- imgui.push_style_color(imgui.COLOR_HEADER_ACTIVE, 0.16, 0.29, 0.48, 0.9)
- imgui.begin_child('##list', width=width, height=height, border=True, flags=imgui.WINDOW_ALWAYS_VERTICAL_SCROLLBAR)
-
- # List items.
- for layer in layers:
- selected = (self.cur_layer == layer.name)
- _opened, selected = imgui.selectable(f'##{layer.name}_selectable', selected)
- imgui.same_line(viz.spacing)
- _clicked, selected = imgui.checkbox(f'{layer.name}##radio', selected)
- if selected:
- self.cur_layer = layer.name
- if self.refocus:
- imgui.set_scroll_here()
- viz.skip_frame() # Focus will change on next frame.
- self.refocus = False
- imgui.same_line(width - viz.font_size * 13)
- imgui.text_colored('x'.join(str(x) for x in layer.shape[2:]), *dim_color)
- imgui.same_line(width - viz.font_size * 8)
- imgui.text_colored(str(layer.shape[1]), *dim_color)
- imgui.same_line(width - viz.font_size * 5)
- imgui.text_colored(layer.dtype, *dim_color)
-
- # End list.
- if len(layers) == 0:
- imgui.text_colored('No layers found', *dim_color)
- imgui.end_child()
- imgui.pop_style_color(4)
- imgui.pop_style_var(1)
-
- # Begin options.
- imgui.same_line()
- imgui.begin_child('##options', width=-1, height=height, border=False)
-
- # RGB & normalize.
- rgb = (self.sel_channels == 3)
- _clicked, rgb = imgui.checkbox('RGB', rgb)
- self.sel_channels = 3 if rgb else 1
- imgui.same_line(viz.font_size * 4)
- _clicked, self.img_normalize = imgui.checkbox('Normalize', self.img_normalize)
- imgui.same_line(imgui.get_content_region_max()[0] - 1 - viz.button_w)
- if imgui_utils.button('Reset##img_flags', width=-1, enabled=(self.sel_channels != 3 or self.img_normalize)):
- self.sel_channels = 3
- self.img_normalize = False
-
- # Image scale.
- with imgui_utils.item_width(-1 - viz.button_w - viz.spacing):
- _changed, self.img_scale_db = imgui.slider_float('##scale', self.img_scale_db, min_value=-40, max_value=40, format='Scale %+.1f dB')
- imgui.same_line()
- if imgui_utils.button('Reset##scale', width=-1, enabled=(self.img_scale_db != 0)):
- self.img_scale_db = 0
-
- # Base channel.
- self.base_channel = min(max(self.base_channel, 0), base_channel_max)
- narrow_w = imgui.get_text_line_height_with_spacing()
- with imgui_utils.grayed_out(base_channel_max == 0):
- with imgui_utils.item_width(-1 - viz.button_w - narrow_w * 2 - viz.spacing * 3):
- _changed, self.base_channel = imgui.drag_int('##channel', self.base_channel, change_speed=0.05, min_value=0, max_value=base_channel_max, format=f'Channel %d/{num_channels}')
- imgui.same_line()
- if imgui_utils.button('-##channel', width=narrow_w):
- self.base_channel -= 1
- imgui.same_line()
- if imgui_utils.button('+##channel', width=narrow_w):
- self.base_channel += 1
- imgui.same_line()
- self.base_channel = min(max(self.base_channel, 0), base_channel_max)
- if imgui_utils.button('Reset##channel', width=-1, enabled=(self.base_channel != 0 and base_channel_max > 0)):
- self.base_channel = 0
-
- # Stats.
- stats = viz.result.get('stats', None)
- stats = [f'{stats[idx]:g}' if stats is not None else 'N/A' for idx in range(6)]
- rows = [
- ['Statistic', 'All channels', 'Selected'],
- ['Mean', stats[0], stats[1]],
- ['Std', stats[2], stats[3]],
- ['Max', stats[4], stats[5]],
- ]
- height = imgui.get_text_line_height_with_spacing() * len(rows) + viz.spacing
- imgui.push_style_color(imgui.COLOR_CHILD_BACKGROUND, *bg_color)
- imgui.begin_child('##stats', width=-1, height=height, border=True)
- for y, cols in enumerate(rows):
- for x, col in enumerate(cols):
- if x != 0:
- imgui.same_line(viz.font_size * (4 + (x - 1) * 6))
- if x == 0 or y == 0:
- imgui.text_colored(col, *dim_color)
- else:
- imgui.text(col)
- imgui.end_child()
- imgui.pop_style_color(1)
-
- # FFT & all.
- _clicked, self.fft_show = imgui.checkbox('FFT', self.fft_show)
- imgui.same_line(viz.font_size * 4)
- with imgui_utils.grayed_out(not self.fft_show or base_channel_max == 0):
- _clicked, self.fft_all = imgui.checkbox('All channels', self.fft_all)
- imgui.same_line(imgui.get_content_region_max()[0] - 1 - viz.button_w)
- with imgui_utils.grayed_out(not self.fft_show):
- if imgui_utils.button('Reset##fft_flags', width=-1, enabled=(self.fft_show or not self.fft_all)):
- self.fft_show = False
- self.fft_all = True
-
- # FFT range.
- with imgui_utils.grayed_out(not self.fft_show):
- with imgui_utils.item_width(-1 - viz.button_w - viz.spacing):
- _changed, self.fft_range_db = imgui.slider_float('##fft_range_db', self.fft_range_db, min_value=0.1, max_value=100, format='Range +-%.1f dB')
- imgui.same_line()
- if imgui_utils.button('Reset##fft_range_db', width=-1, enabled=(self.fft_range_db != 50)):
- self.fft_range_db = 50
-
- # FFT beta.
- with imgui_utils.grayed_out(not self.fft_show):
- with imgui_utils.item_width(-1 - viz.button_w - viz.spacing):
- _changed, self.fft_beta = imgui.slider_float('##fft_beta', self.fft_beta, min_value=0, max_value=50, format='Kaiser beta %.2f', power=2.63)
- imgui.same_line()
- if imgui_utils.button('Reset##fft_beta', width=-1, enabled=(self.fft_beta != 8)):
- self.fft_beta = 8
-
- # End options.
- imgui.end_child()
-
- self.base_channel = min(max(self.base_channel, 0), base_channel_max)
- viz.args.layer_name = self.cur_layer if len(layers) > 0 and self.cur_layer != layers[-1].name else None
- viz.args.update(sel_channels=self.sel_channels, base_channel=self.base_channel, img_scale_db=self.img_scale_db, img_normalize=self.img_normalize)
- viz.args.fft_show = self.fft_show
- if self.fft_show:
- viz.args.update(fft_all=self.fft_all, fft_range_db=self.fft_range_db, fft_beta=self.fft_beta)
-
-#----------------------------------------------------------------------------
diff --git a/spaces/facebook/incoder-demo/modules/cloud_logging.py b/spaces/facebook/incoder-demo/modules/cloud_logging.py
deleted file mode 100644
index f82a884f65d92ffde60be56be8b7599df09882d0..0000000000000000000000000000000000000000
--- a/spaces/facebook/incoder-demo/modules/cloud_logging.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import os
-def make_logging_client():
- cred_filename = os.environ.get('GOOGLE_APPLICATION_CREDENTIALS')
- if not cred_filename:
- return None
- print("cred filename:", cred_filename)
- cred_string = os.environ.get('GOOGLE_APPLICATION_CREDENTIALS_STRING')
- print("cred string:", bool(cred_string))
- if not os.path.exists(cred_filename):
- if cred_string:
- print(f"writing cred string to {cred_filename}")
- with open(cred_filename, 'w') as f:
- f.write(cred_string)
- else:
- return None
- from google.cloud import logging
- logging_client = logging.Client()
- logging_client.setup_logging()
- return logging_client
-
-logging_client = make_logging_client()
diff --git a/spaces/falterWliame/Face_Mask_Detection/Cambridge Vocabulary For First Certificate (with Answers And Audio CD) REPACK.md b/spaces/falterWliame/Face_Mask_Detection/Cambridge Vocabulary For First Certificate (with Answers And Audio CD) REPACK.md
deleted file mode 100644
index 141d6fb41aef0007ec37b6fc2bb01109db6bcfeb..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Cambridge Vocabulary For First Certificate (with Answers And Audio CD) REPACK.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
How to Improve Your Vocabulary for the First Certificate Exam
-
If you are preparing for the First Certificate exam, also known as B2 First, you might be wondering how to improve your vocabulary. Vocabulary is one of the key components of the exam, as it tests your ability to understand and use a wide range of words and phrases in different contexts.
-
One of the best ways to improve your vocabulary is to use a dedicated book that covers the topics and language areas that you need for the exam. One such book is Cambridge Vocabulary For First Certificate (with Answers And Audio CD), published by Cambridge University Press. This book is designed to help you learn and practice the vocabulary that you need for the exam, as well as develop your skills in reading, writing, listening and speaking.
-
Cambridge Vocabulary For First Certificate (with Answers And Audio CD)
The book contains 25 units that cover topics such as education, travel, health, entertainment, technology and society. Each unit introduces new vocabulary through authentic texts and recordings, followed by exercises that help you check your understanding and use the words correctly. The book also includes tips and strategies on how to deal with different types of questions in the exam, such as multiple choice, gap fill, word formation and keyword transformation.
-
The book comes with an answer key and an audio CD that contains all the listening material for the exercises. You can use the book for self-study or in class with a teacher. The book is suitable for students who have an intermediate level of English (B1) or higher.
-
By using Cambridge Vocabulary For First Certificate (with Answers And Audio CD), you can improve your vocabulary and confidence for the exam. You can also expand your knowledge of English and learn about different topics and cultures. The book is available from Cambridge University Press or other online and offline retailers.
-
-
Another way to improve your vocabulary is to use online tools and apps that can help you practice and learn new words. For example, you can use Test & Train, an easy-to-use practice tool that helps you get ready for the B2 First exam through short, sharp workouts. With over 300 practice questions, you can use it anytime, anywhere and as many times as you like[^2^].
-
You can also use Write & Improve, a free online tool that helps you practice and improve your writing. Just choose a task, write or upload your answer and use the feedback to quickly improve. You can also use Exam Lift, a free app that helps you learn English on the go and develop the skills you need for the B2 First exam[^2^].
-
Finally, you can find more tips and advice on how to prepare for the B2 First exam on various websites and blogs. For example, you can visit FCE Exam Tips, a website that offers exam strategies, vocabulary ideas, videos and helpful articles on different skills and topics[^1^]. You can also visit Cambridge English, the official website of the exam provider, where you can find sample tests, handbooks, lesson plans, teacher guides and webinars[^2^] [^3^].
-
By using these resources and tools, you can boost your vocabulary and your chances of success in the B2 First exam. Remember that vocabulary is not only about knowing words, but also about using them appropriately and accurately in different situations. Good luck with your exam preparation!
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Facing The Giants DVDRIP FRENCH.md b/spaces/falterWliame/Face_Mask_Detection/Facing The Giants DVDRIP FRENCH.md
deleted file mode 100644
index 1e0d2815b44c0d33232c5d72220962dd0d0c1ffc..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Facing The Giants DVDRIP FRENCH.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
Facing The Giants DVDRIP FRENCH: Un film qui vous inspire à croire
-
Facing The Giants DVDRIP FRENCH est un fichier qui contient le film Facing The Giants en format numérique. Il a été extrait d'un DVD, ce qui signifie qu'il a été copié et compressé pour réduire sa taille et sa qualité. Mais cela ne l'empêche pas d'être un film passionnant et émouvant, qui vous fera découvrir l'histoire d'un entraîneur de football américain et de son équipe, qui font face à leurs géants de la peur et de l'échec, sur le terrain et dans la vie.
De quoi parle le film Facing The Giants DVDRIP FRENCH?
-
Le film Facing The Giants DVDRIP FRENCH raconte l'histoire de Grant Taylor, un entraîneur de football américain dans un lycée chrétien, qui traverse une crise personnelle et professionnelle. Son équipe perd tous ses matchs depuis six ans, ses joueurs manquent de motivation et de confiance, et il est menacé d'être renvoyé. De plus, il doit faire face à des problèmes conjugaux et à une infertilité qui le frustre. Il se sent dépassé par les difficultés et prêt à abandonner.
-
Mais un jour, il décide de changer de perspective et de remettre sa vie entre les mains de Dieu. Il se met à prier avec ferveur et à encourager ses joueurs à donner le meilleur d'eux-mêmes, sans se soucier du résultat. Il leur apprend à croire en eux-mêmes et en Dieu, à affronter leurs peurs et leurs obstacles, et à jouer avec passion et honneur. Il leur fait comprendre que le football n'est pas qu'un sport, mais une métaphore de la vie.
-
Grâce à cette nouvelle attitude, Grant Taylor et son équipe vont vivre une saison incroyable, où ils vont renverser la situation et remporter des victoires inattendues. Ils vont aussi expérimenter des miracles et des bénédictions dans leurs vies personnelles, qui vont renforcer leur foi et leur gratitude. Ils vont réaliser que rien n'est impossible à celui qui croit.
-
Pourquoi regarder le film Facing The Giants DVDRIP FRENCH?
-
Le film Facing The Giants DVDRIP FRENCH est un film qui vous inspire à croire en Dieu et en vous-même. Il vous montre que Dieu a un plan pour votre vie, même si vous ne le voyez pas toujours. Il vous enseigne que la prière est puissante, que la foi peut déplacer des montagnes, et que l'amour est plus fort que tout. Il vous fait prendre conscience que vous avez du potentiel, que vous pouvez surmonter vos difficultés, et que vous pouvez réaliser vos rêves.
-
Le film Facing The Giants DVDRIP FRENCH est aussi un film qui vous divertit et vous émeut. Il vous fait vibrer avec les scènes de football, qui sont intenses et captivantes. Il vous fait rire avec les moments d'humour, qui sont légers et drôles. Il vous fait pleurer avec les moments d'émotion, qui sont touchants et sincères. Il vous fait réfléchir avec les messages qu'il véhicule, qui sont profonds et universels.
-
-
Le film Facing The Giants DVDRIP FRENCH est donc un film à voir absolument, si vous aimez les films inspirants, motivants et édifiants. Il vous fera passer un bon moment, tout en nourrissant votre âme. Il vous fera découvrir un film chrétien qui n'est pas moralisateur ni ennuyeux, mais au contraire dynamique et passionnant. Il vous fera apprécier un film qui n'est pas seulement un divertissement, mais aussi une source d'espérance et de foi.
-
Quels sont les avis des spectateurs sur le film Facing The Giants DVDRIP FRENCH?
-
Le film Facing The Giants DVDRIP FRENCH a reçu des avis très positifs de la part des spectateurs, qui ont été touchés par son message de foi et d'espérance. Sur le site IMDb, le film a obtenu une note moyenne de 6,7/10, basée sur plus de 13 000 votes. Sur le site Rotten Tomatoes, le film a obtenu une note moyenne de 4,3/5, basée sur plus de 40 000 votes.
-
Voici quelques exemples de commentaires laissés par les spectateurs sur le film Facing The Giants DVDRIP FRENCH:
-
-
"Un film inspirant, qui montre comment la foi et la détermination peuvent aider même les personnes les plus découragées. Le film est bien réalisé, avec des acteurs crédibles et des scènes de football palpitantes. Un film à voir en famille, qui fait du bien au cœur et à l'âme." (Sandro A, IMDb)
-
"Mon mari et moi avons pu voir le film Facing The Giants à Denver lors de la conférence CBA. Je dois dire que c'était le meilleur film que j'ai vu depuis des années. Il vous prend dès le début. Vous passez du rire aux larmes. Le film est plein de valeurs familiales et de football." (Heather Boerner, Common Sense Media)
-
"Facing The Giants est un film qui vous inspire à croire en Dieu et en vous-même. Il vous montre que Dieu a un plan pour votre vie, même si vous ne le voyez pas toujours. Il vous enseigne que la prière est puissante, que la foi peut déplacer des montagnes, et que l'amour est plus fort que tout. Il vous fait prendre conscience que vous avez du potentiel, que vous pouvez surmonter vos difficultés, et que vous pouvez réaliser vos rêves." (Jeremy, SoundCloud)
-
"Facing The Giants est un film qui vous divertit et vous émeut. Il vous fait vibrer avec les scènes de football, qui sont intenses et captivantes. Il vous fait rire avec les moments d'humour, qui sont légers et drôles. Il vous fait pleurer avec les moments d'émotion, qui sont touchants et sincères. Il vous fait réfléchir avec les messages qu'il véhicule, qui sont profonds et universels." (benniegentil, SUBDL)
-
-
Comment télécharger le film Facing The Giants DVDRIP FRENCH?
-
Si vous souhaitez télécharger le film Facing The Giants DVDRIP FRENCH, vous pouvez le faire facilement en suivant ces étapes:
-
-
Rendez-vous sur un site de téléchargement légal, comme Amazon Prime Video ou iTunes.
-
Recherchez le film Facing The Giants DVDRIP FRENCH dans la barre de recherche.
-
Cliquez sur le bouton "Acheter" ou "Louer" selon votre préférence.
-
Choisissez le format de votre choix (SD ou HD) et le mode de paiement.
-
Téléchargez le film sur votre ordinateur ou votre appareil mobile.
-
Profitez du film Facing The Giants DVDRIP FRENCH quand vous voulez et où vous voulez.
-
-
Vous pouvez aussi regarder le film Facing The Giants DVDRIP FRENCH en streaming sur des plateformes comme Netflix ou Hulu, si vous disposez d'un abonnement.
-
Conclusion
-
Le film Facing The Giants DVDRIP FRENCH est un film qui vaut la peine d'être vu, si vous aimez les films inspirants, motivants et édifiants. Il vous fera découvrir l'histoire d'un entraîneur de football américain et de son équipe, qui font face à leurs géants de la peur et de l'échec, sur le terrain et dans la vie. Il vous fera partager leur parcours incroyable, où ils vont expérimenter la puissance de la foi et de l'amour. Il vous fera ressentir des émotions fortes, du rire aux larmes. Il vous fera réfléchir sur le sens de votre vie et sur votre relation avec Dieu.
-
Vous pouvez télécharger ou regarder le film Facing The Giants DVDRIP FRENCH sur différents sites ou plateformes légaux, selon votre convenance. Vous pouvez aussi consulter les avis des spectateurs sur le film Facing The Giants DVDRIP FRENCH, pour avoir un aperçu de ce qui vous attend.
-
N'attendez plus et découvrez le film Facing The Giants DVDRIP FRENCH dès maintenant! Vous ne le regretterez pas!
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Freebandicamfullversiondownload HOTmediafire.md b/spaces/falterWliame/Face_Mask_Detection/Freebandicamfullversiondownload HOTmediafire.md
deleted file mode 100644
index ea4e6ff6f74a3ed11c5084d40b0cc6bbd7e74b96..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Freebandicamfullversiondownload HOTmediafire.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
How to Download Bandicam Full Version for Free from Mediafire
-
Bandicam is a popular screen recording software that allows you to capture your gameplay, webcam, desktop, or any other video source. It also has features such as editing, compression, and watermarking. However, the free version of Bandicam has some limitations, such as a 10-minute recording time and a watermark on the output video. If you want to enjoy the full features of Bandicam without paying for a license, you can download it for free from Mediafire.
-
Mediafire is a file hosting and sharing service that lets you upload and download files easily. It has a large collection of software, games, movies, music, and more. You can find Bandicam full version on Mediafire by following these steps:
Select the file that matches your system requirements and has positive ratings and comments.
-
Click on the download button and wait for the file to be downloaded.
-
Extract the file using a program such as WinRAR or 7-Zip.
-
Run the setup file and follow the instructions to install Bandicam on your computer.
-
Enjoy using Bandicam full version for free!
-
-
Note: Downloading Bandicam full version from Mediafire may be illegal or unsafe. You may encounter viruses, malware, or other threats that can harm your computer or compromise your privacy. You may also violate the terms of service of Bandicam or Mediafire. We do not recommend or endorse this method of obtaining Bandicam full version. Use it at your own risk.
-
-
Now that you have downloaded and installed Bandicam full version for free from Mediafire, you may wonder how to use it effectively. Here are some tips and tricks to help you get the most out of Bandicam:
-
-
To start recording, press the F12 key or click on the red record button on the Bandicam interface. You can also use the hotkeys to pause, resume, or stop the recording.
-
To change the recording mode, click on the tabs at the top of the Bandicam interface. You can choose from game recording mode, screen recording mode, device recording mode, or webcam overlay mode.
-
To adjust the settings, click on the settings button on the Bandicam interface. You can customize the video format, quality, codec, framerate, audio settings, mouse effects, logo settings, and more.
-
To edit your recorded video, click on the edit button on the Bandicam interface. You can trim, split, join, or convert your video using Bandicut, a built-in video editor.
-
To share your recorded video, click on the upload button on the Bandicam interface. You can upload your video to YouTube, Vimeo, Facebook, or other platforms directly from Bandicam.
-
-
With Bandicam full version for free from Mediafire, you can record and share your videos with ease and quality. However, remember to be careful and responsible when using this method of obtaining Bandicam full version. We hope this article was helpful and informative for you.
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Gaussian 09 Torrent 1357.md b/spaces/falterWliame/Face_Mask_Detection/Gaussian 09 Torrent 1357.md
deleted file mode 100644
index c050548b156344fb3a46e4393a8bdafe3d04d663..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Gaussian 09 Torrent 1357.md
+++ /dev/null
@@ -1,83 +0,0 @@
-
-
Gaussian 09 Torrent 1357: How to Download and Use It for Quantum Chemistry
-
-
Gaussian 09 is a software that allows you to perform electronic structure calculations for molecular systems, based on the principles of quantum mechanics. Gaussian 09 can predict the energies, structures, frequencies, and properties of molecules and reactions, under various conditions and scenarios. Gaussian 09 is widely used by chemists, physicists, biochemists, and other researchers who are interested in studying the behavior and interactions of atoms and molecules.
However, Gaussian 09 is not a free software, and it requires a license key to activate and use it. The license key is a unique code that verifies that you have purchased or obtained a valid license for the software, and that you are authorized to use its features and functions. The license key also protects the intellectual property rights of the software developer, and prevents piracy and unauthorized distribution of the software.
-
-
If you want to use Gaussian 09 for your research or academic purposes, but you do not have a license key, you might be tempted to download Gaussian 09 torrent 1357, which is a modified version of Gaussian 09 that bypasses the license key protection and allows you to use the software without any limitations or restrictions. Gaussian 09 torrent 1357 is usually distributed by hackers or crackers who have managed to break the encryption code of the original software and make it available for anyone to download and use.
-
-
In this article, we will explain how to download and use Gaussian 09 torrent 1357, and what are the benefits and drawbacks of using it. We will also show you some of the features and functions of Gaussian 09 torrent 1357, and how to use it effectively for your quantum chemistry calculations.
-
-
How to Download and Use Gaussian 09 Torrent 1357?
-
-
There are many websites that claim to offer Gaussian 09 torrent 1357 for free download, but not all of them are trustworthy or reliable. Some of them may contain viruses, malware, spyware, or other harmful programs that can damage your computer or steal your personal information.
-
-
-
To avoid these risks, you should only download Gaussian 09 torrent 1357 from reputable sources that have positive reviews and feedback from other users. You should also scan the downloaded file with a reliable antivirus software before opening it or installing it on your computer.
-
-
Here are the steps to download and use Gaussian 09 torrent 1357 on your computer:
-
-
-
Go to one of the websites that offer Gaussian 09 torrent 1357 for free download, such as https://www1.thepiratebay3.to/torrent/11102069/Gaussian_09.
-
Click on the download button and choose a location to save the file on your computer.
-
Wait for the download to complete and then open the file with a file extractor program, such as WinRAR or 7-Zip.
-
Extract the contents of the file to a folder on your computer.
-
Open the folder and double-click on the setup.exe file to start the installation process.
-
Follow the instructions on the screen and choose the options that suit your preferences.
-
When the installation is finished, launch Gaussian 09 torrent 1357 from your desktop or start menu.
-
Enjoy using Gaussian 09 torrent 1357 for your quantum chemistry calculations.
-
-
-
What are the Benefits and Drawbacks of Using Gaussian 09 Torrent 1357?
-
-
Using Gaussian 09 torrent 1357 has some advantages and disadvantages that you should be aware of before deciding whether to use it or not.
-
-
Some of the benefits of using Gaussian 09 torrent 1357 are:
-
-
-
You can use Gaussian 09 torrent 1357 for free without paying any fees or charges.
-
You can use Gaussian 09 torrent 1357 without needing a license key or an internet connection to activate it.
-
You can use all the features and functions of Gaussian 09 torrent 1357 without any limitations or restrictions.
-
You can perform complex and accurate quantum chemistry calculations with Gaussian 09 torrent 1357.
-
-
-
Some of the drawbacks of using Gaussian 09 torrent 1357 are:
-
-
-
You may face legal issues or penalties for using pirated software that violates the intellectual property rights of the original developer.
-
You may not receive any updates, patches, bug fixes, or technical support from the original developer of Gaussian 09.
-
You may encounter compatibility issues or errors with some operating systems or hardware that are not supported by Gaussian 09 torrent 1357.
-
You may expose your computer to viruses, malware, spyware, or other harmful programs that may be hidden in the downloaded file or installed along with Gaussian 09 torrent 1357.
-
-
-
What are the Features and Functions of Gaussian 09 Torrent 1357?
-
-
Gaussian 09 torrent 1357 is a powerful software that can perform electronic structure calculations for molecular systems, based on the principles of quantum mechanics. Gaussian 09 torrent 1357 can predict the energies, structures, frequencies, and properties of molecules and reactions, under various conditions and scenarios.
-
-
Some of the features and functions of Gaussian 09 torrent 1357 are:
-
-
-
You can use various methods and models to describe the electronic structure of molecules, such as Hartree-Fock, density functional theory, post-Hartree-Fock, semi-empirical, etc.
-
You can use various basis sets to represent the atomic orbitals of molecules, such as Slater-type, Gaussian-type, pseudopotential, etc.
-
You can use various solvation models to account for the effects of solvent on molecules, such as continuum, discrete, mixed, etc.
-
You can use various correlation methods to account for the effects of electron-electron interactions on molecules, such as configuration interaction, coupled cluster, perturbation theory, etc.
-
You can use various properties and spectra methods to calculate the physical and chemical properties and spectra of molecules, such as dipole moment, polarizability, NMR, IR, UV-Vis, etc.
-
-
-
How to Use Gaussian 09 Torrent 1357 Effectively for Your Quantum Chemistry Calculations?
-
-
To use Gaussian 09 torrent 1357 effectively for your quantum chemistry calculations
-
Conclusion
-
-
Gaussian 09 is a software that allows you to perform electronic structure calculations for molecular systems, based on the principles of quantum mechanics. Gaussian 09 can predict the energies, structures, frequencies, and properties of molecules and reactions, under various conditions and scenarios. However, Gaussian 09 is not a free software, and it requires a license key to activate and use it.
-
-
If you want to use Gaussian 09 for your research or academic purposes, but you do not have a license key, you might be tempted to download Gaussian 09 torrent 1357, which is a modified version of Gaussian 09 that bypasses the license key protection and allows you to use the software without any limitations or restrictions.
-
-
In this article, we have explained how to download and use Gaussian 09 torrent 1357, and what are the benefits and drawbacks of using it. We have also shown you some of the features and functions of Gaussian 09 torrent 1357, and how to use it effectively for your quantum chemistry calculations.
-
-
We hope this article has been informative and helpful for you. If you have any questions or comments about Gaussian 09 torrent 1357, feel free to leave them below.
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Microwave Engineering By Annapurna Das Sisir K Dasrar __LINK__.md b/spaces/falterWliame/Face_Mask_Detection/Microwave Engineering By Annapurna Das Sisir K Dasrar __LINK__.md
deleted file mode 100644
index bb7c3e48552fa81e7cc5999d59949ea59127452d..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Microwave Engineering By Annapurna Das Sisir K Dasrar __LINK__.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Microwave Engineering By Annapurna Das Sisir K Dasrar
-
-December 2, 2020 by guest annapurna das microwave engineering. Microwave Engineering, 3e. Annapurna Das, Sisir K Das. Limited preview.
-
-
-
diff --git a/spaces/fatiXbelha/sd/Delphi Connect How to Access Your Cars Data Alerts and Location Map.md b/spaces/fatiXbelha/sd/Delphi Connect How to Access Your Cars Data Alerts and Location Map.md
deleted file mode 100644
index 042bc1c5784385cae48501ba42a27aba4ef22e29..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Delphi Connect How to Access Your Cars Data Alerts and Location Map.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
Delphi Cars APK: A Smart Solution for Your Car
-
Do you want to have a convenient way of finding, accessing, and controlling your car from anywhere and anytime with your smartphone or browser? Do you want to get in-depth diagnostics and advanced technical information for a wide range of car models and brands? Do you want to add features to your car that have traditionally been available only on new vehicles or with professionally installed aftermarket systems? If you answered yes to any of these questions, then you might be interested in delphi cars apk.
-
Delphi cars apk is an Android app that works with the Delphi Connect system, a wireless module that plugs into your car's OBD port and connects to the cloud via a cellular data connection and a GPS receiver. The app allows you to locate, track, diagnose, and lock/unlock/remote start your car from your smartphone or browser. You can also access live data, alerts, trip logs, geo-fencing, keyfob functionality, and more. Delphi cars apk is compatible with almost all vehicles sold in the US since 1996.
In this article, we will show you how to download and install delphi cars apk on your Android device, how to use it to diagnose and control your car, what are some of the reviews and ratings of the app from users and experts, how it compares to its competitors in the market, and why you should choose it for your car maintenance and connectivity needs. We will also answer some common questions about delphi cars apk.
-
How to Download and Install Delphi Cars APK on Your Android Device?
-
Downloading and installing delphi cars apk on your Android device is easy and straightforward. Here are the steps you need to follow:
Go to [text](^1^) on your Android device's browser. This is the official website of Delphi Connect where you can find more information about the system and the app.
-
Scroll down to the bottom of the page and tap on "Download App" under "Android". This will redirect you to the Google Play Store where you can download delphi cars apk for free.
-
Tap on "Install" and wait for the app to be downloaded and installed on your device.
-
Once the app is installed, open it and sign in with your Delphi Connect account credentials. If you don't have an account yet, you can create one by tapping on "Create Account" and following the instructions.
-
After signing in, you can start using delphi cars apk to diagnose and control your car.
-
-
How to Use Delphi Cars APK to Diagnose and Control Your Car?
-
Using delphi cars apk to diagnose and control your car is simple and intuitive. Here are some of the things you can do with the app:
-
-
Dashboard: This is where you can see a quick overview of your car's status, driving activity, and location. You can also access other features such as alerts, trips, keyfob, settings, etc.
-
Location Map: This is where you can locate all your vehicles on a map. You can also view navigation directions that take you to your vehicle using your navigation app.
-
Geo-fencing: This is where you can create up to 6 boundaries per vehicle on a map so you can be alerted anytime a vehicle enters or exits the specified area.
-
Live Tracking: This is where you can track your vehicle on a map with updates every 5 seconds of vehicle location, speed, and heading.
-
Recent Trips: This is where you can see all the trips you take in your vehicle and the trip details, such as distance, duration, fuel consumption, average speed, etc.
-
Vehicle Health: This is where you can see the diagnostic trouble codes (DTCs) of your vehicle and their descriptions. You can also clear the codes or contact a nearby service center for assistance.
-
Vehicle Alerts: This is where you can see the alerts that are triggered by your vehicle, such as low battery, check engine light, geo-fence violation, etc. You can also customize the alert settings and preferences.
-
Keyfob: This is where you can remotely lock/unlock your doors, honk your horn, flash your lights, and start/stop your engine (if supported by your vehicle).
-
Settings: This is where you can manage your account information, vehicles, devices, notifications, etc.
-
-
What Are Some of the Reviews and Ratings of Delphi Cars APK from Users and Experts?
-
Delphi cars apk has received mostly positive reviews and ratings from users and experts who have tried it. Here are some of the comments and feedbacks from different sources:
-
-
-
Source
-
Rating
-
Comment
-
-
-
Google Play Store
-
4.1 out of 5 stars (based on 1,216 reviews)
-
"Great app. Works well with my 2013 Ford Escape. I can start my car from anywhere and monitor its location and health. The customer service is also very helpful and responsive."
-
-
-
CNET
-
4 out of 5 stars (based on 1 review)
-
"Delphi Connect is a useful and easy-to-use system for anyone who wants to monitor their car's location and status remotely. It also adds some features that are normally found only on newer or more expensive cars."
-
-
-
PCMag
-
3.5 out of 5 stars (based on 1 review)
-
"Delphi Connect is a good choice for anyone looking for a simple way to add connectivity features to their car. It offers basic remote control and diagnostic functions, but lacks some of the advanced features of its competitors."
-
-
-
How Does Delphi Cars APK Compare to Its Competitors in the Market?
-
Delphi cars apk is not the only app that offers car connectivity and diagnostic features. There are other apps and systems that provide similar or different functions for different prices and requirements. Here are some of the main competitors of delphi cars apk in the market:
-
-
Viper SmartStart: This is an app that works with a Viper security system installed in your car. It allows you to remotely start, lock/unlock, locate, and monitor your car from your smartphone or browser. It also offers security features such as alarm notifications, panic button, etc. The app is free to download, but the system costs around $300 to $600 depending on the model and installation.
-
Automatic: This is an app that works with a small device that plugs into your car's OBD port. It allows you to track your driving habits, fuel efficiency, trip logs, engine health, etc. It also offers crash alert, roadside assistance, parking reminder, etc. The app is free to download, but the device costs $99.95.
-
Zubie: This is an app that works with a device that plugs into your car's OBD port and connects to the cloud via a cellular data connection. It allows you to locate, track, diagnose, and monitor your car from your smartphone or browser. It also offers driving insights, maintenance reminders, trip reports, etc. The app is free to download, but the device costs $99.95 plus a monthly service fee of $9.95.
-
-
Conclusion: Why Should You Choose Delphi Cars APK for Your Car Maintenance and Connectivity Needs?
-
If you are looking for a smart solution for your car maintenance and connectivity needs, then delphi cars apk might be the right choice for you. Here are some of the reasons why you should choose delphi cars apk over its competitors:
-
-
Compatibility: Delphi cars apk works with almost all vehicles sold in the US since 1996. You don't need to worry about whether your car model or brand is supported or not.
-
Affordability: Delphi cars apk is free to download and use. You only need to pay for the Delphi Connect system which costs around $200 plus a monthly service fee of $5. The system includes a wireless module, a GPS receiver, and a cellular data connection. You don't need to pay extra for installation or other devices.
-
Functionality: Delphi cars apk offers a wide range of features and functions that allow you to diagnose and control your car from anywhere and anytime. You can access live data, alerts, trip logs, geo-fencing, keyfob functionality, and more. You can also add features to your car that have traditionally been available only on new vehicles or with professionally installed aftermarket systems.
-
Reliability: Delphi cars apk is backed by Delphi Technologies, a global leader in automotive technology and innovation. The app and the system are designed and tested to meet the highest standards of quality and performance. You can trust that your car data and information are secure and accurate.
-
-
Delphi cars apk is a smart solution for your car maintenance and connectivity needs. It is compatible, affordable, functional, and reliable. It can help you save time, money, and hassle while enhancing your driving experience and safety. Download delphi cars apk today and see for yourself how it can transform your car into a smart car.
-
FAQs: Some Common Questions and Answers About Delphi Cars APK
-
Here are some of the common questions and answers about delphi cars apk that you might have:
-
-
Q: How do I get the Delphi Connect system for my car?
-
A: You can buy the Delphi Connect system online from [text] or from authorized retailers such as Verizon Wireless, Best Buy, Amazon, etc. You can also find a list of retailers near you on the website.
-
Q: How do I install the Delphi Connect system in my car?
-
A: You can install the Delphi Connect system in your car by yourself in minutes. All you need to do is plug the wireless module into your car's OBD port (usually located under the dashboard) and activate it with your Delphi Connect account. You can find more detailed instructions on the website or in the user manual.
-
Q: How do I update the Delphi Connect system or the app?
-
A: The Delphi Connect system and the app are automatically updated over the air whenever there are new features or improvements available. You don't need to do anything to update them.
-
Q: How do I contact the customer support for Delphi Connect?
-
A: You can contact the customer support for Delphi Connect by phone at 1-877-855-8400 or by email at delphiconnectsupport@delphi.com. You can also visit the website for more information and resources.
-
Q: What are the minimum requirements for using Delphi cars apk?
-
A: To use delphi cars apk, you need an Android device running Android 4.0 or higher, a Delphi Connect account, and a Delphi Connect system installed in your car.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/FIFA Mobile Dominate the Field with Advanced Passing and Tactics on iOS.md b/spaces/fatiXbelha/sd/FIFA Mobile Dominate the Field with Advanced Passing and Tactics on iOS.md
deleted file mode 100644
index b2e46e6975e6cf3536c0b3666635cedb846fde4b..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/FIFA Mobile Dominate the Field with Advanced Passing and Tactics on iOS.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
FIFA Mobile: The Ultimate Football Game for iOS and Android
-
If you are a fan of football, you will love FIFA Mobile, the mobile version of the popular EA SPORTS FIFA game. FIFA Mobile lets you build your own Ultimate Team and challenge your friends and other players from around the world in various modes, including Head-to-Head, VS Attack, and Manager Mode. You can also play through the entire Tournament with any of the 32 qualified National Teams, or rewrite history and take control of 15 non-qualified National Teams. FIFA Mobile is available for both iOS and Android devices, and you can download it for free from the App Store or Google Play.
FIFA Mobile is a football game that allows you to experience the thrill of playing with the world's best players and teams. You can choose from over 15,000 players, including world-class talent like Kylian Mbappé and Christian Pulisic, plus 600+ teams, including Real Madrid and Manchester City. You can also kick off against teams from club football's most prestigious competitions – the UEFA Champions League, UEFA Europa League, and UEFA Europa Conference League. You can follow every step, from the group stages all the way to the final.
-
Features of FIFA Mobile
-
FIFA Mobile has many features that make it an exciting and engaging game for football lovers. Here are some of them:
-
Play with over 15,000 players and 600+ teams
-
You can build your Ultimate Team with players from the Premier League, LaLiga Santander, Bundesliga, Serie A TIM, Ligue 1 Uber Eats, and more. You can also transfer players to reflect their new team in the next roster update. You can train your favorite players, increasing their stats and OVR. You can take your superstars to the next level by scoring goals with them.
-
Compete in the UEFA Champions League, UEFA Europa League, and UEFA Europa Conference League
-
You can kick off against teams from club football's most prestigious competitions – the UEFA Champions League, UEFA Europa League, and UEFA Europa Conference League. You can take part in playable live events that correspond with the real-world tournaments throughout the football season to earn special UEFA Champions League, UEFA Europa League, and UEFA Europa Conference League players. You can follow every step, from the group stages all the way to the final.
-
fifa mobile ios apk download
-fifa mobile ios apk mod
-fifa mobile ios apk hack
-fifa mobile ios apk obb
-fifa mobile ios apk offline
-fifa mobile ios apk latest version
-fifa mobile ios apk free
-fifa mobile ios apk update
-fifa mobile ios apk 2023
-fifa mobile ios apk data
-fifa mobile ios apk full
-fifa mobile ios apk cracked
-fifa mobile ios apk unlimited money
-fifa mobile ios apk no verification
-fifa mobile ios apk gameplay
-fifa mobile ios apk review
-fifa mobile ios apk install
-fifa mobile ios apk size
-fifa mobile ios apk requirements
-fifa mobile ios apk features
-fifa mobile ios apk tips
-fifa mobile ios apk cheats
-fifa mobile ios apk generator
-fifa mobile ios apk coins
-fifa mobile ios apk points
-fifa mobile ios apk reddit
-fifa mobile ios apk forum
-fifa mobile ios apk blog
-fifa mobile ios apk news
-fifa mobile ios apk guide
-fifa mobile ios apk wiki
-fifa mobile ios apk faq
-fifa mobile ios apk support
-fifa mobile ios apk help
-fifa mobile ios apk error
-fifa mobile ios apk fix
-fifa mobile ios apk patch
-fifa mobile ios apk beta
-fifa mobile ios apk release date
-fifa mobile ios apk official site[^1^]
-fifa mobile ios apk ea sports[^1^]
-fifa mobile ios apk ultimate team[^1^]
-fifa mobile ios apk champions league[^1^]
-fifa mobile ios apk icons[^1^]
-fifa mobile ios apk heroes[^1^]
-fifa mobile ios apk advanced passing[^1^]
-fifa mobile ios apk head-to-head[^1^]
-fifa mobile ios apk vs attack[^1^]
-fifa mobile ios apk manager mode[^1^]
-
Build your team with ICONS and HEROES
-
You can build a team full of football legends, with over 100 of the biggest ICONS, from Zidane and Beckham to Ronaldo and Maldini. You can also celebrate some of football's most memorable players with new Heroes, representing career-making unforgettable moments from fan favorites like Solskjær and Di Natale.
-
How to download FIFA Mobile on iOS and Android
-
Downloading FIFA Mobile on your iOS or Android device is easy and fast. Here are the steps you need to follow:
-
Download from the App Store or Google Play
-
You can download FIFA Mobile for free from the App Store or Google Play. Just search for "FIFA Mobile" in your app store and tap on "Get" or "Install". The game will start downloading on your device.
-
Sign up with your EA Account or create a new one
-
Once you have downloaded FIFA Mobile, you will need to sign up with your EA Account or create a new one. If you already have an EA Account, you can use it to log in to FIFA Mobile. If you don't have one, you can create one by entering your email address, password, and security question. You can also link your Facebook, Google, or Apple account to your EA Account for easier access.
-
Customize your profile and settings
-
After you have signed up, you can customize your profile and settings. You can choose your country, language, and favorite team. You can also adjust your game preferences, such as sound, graphics, and controls. You can change your profile and settings anytime from the main menu.
-
Tips and tricks for playing FIFA Mobile
-
FIFA Mobile is a fun and addictive game that will keep you entertained for hours. However, if you want to improve your skills and performance, you might want to follow some tips and tricks. Here are some of them:
-
Train your players and improve their stats
-
One of the best ways to make your team stronger is to train your players and improve their stats. You can use Training XP items that you earn from playing matches and events to level up your players. You can also use Skill Boosts items that you collect from various sources to boost specific attributes of your players. You can train and boost any player in your team, regardless of their OVR or position.
-
Use the Advanced Passing system to create more chances
-
Another way to enhance your gameplay is to use the Advanced Passing system, which allows you to control the direction and power of your passes. You can swipe on the screen to pass the ball to a specific spot or player, or tap on a teammate to pass the ball to them. You can also use the Through Pass button to send the ball ahead of a running player, or the Lob Pass button to chip the ball over the defenders. The Advanced Passing system can help you create more chances and score more goals.
-
Participate in live events and tournaments to earn rewards
-
A third way to enjoy FIFA Mobile is to participate in live events and tournaments that are updated regularly. You can play through various challenges and scenarios that reflect the real-world football season and earn rewards such as coins, players, kits, and more. You can also join leagues and compete with other players in tournaments such as League vs League or League Survival. You can also play against other players in Head-to-Head or VS Attack modes and climb the leaderboards.
-
Conclusion
-
FIFA Mobile is a great game for football fans who want to experience the excitement of playing with their favorite players and teams on their mobile devices. You can download FIFA Mobile for free from the App Store or Google Play and start building your Ultimate Team. You can also compete in various modes and events, such as the UEFA Champions League, UEFA Europa League, UEFA Europa Conference League, Head-to-Head, VS Attack, Manager Mode, Tournament Mode, and more. FIFA Mobile is a game that will keep you hooked for hours with its amazing graphics, realistic gameplay, and engaging features.
-
FAQs
-
Here are some frequently asked questions about FIFA Mobile:
-
-
Q: How much space does FIFA Mobile take on my device?
-
A: FIFA Mobile requires about 1 GB of free space on your device. However, this may vary depending on your device model and operating system.
-
Q: How do I update FIFA Mobile?
-
A: FIFA Mobile updates automatically when you launch the game if there is a new version available. However, you can also check for updates manually by going to the App Store or Google Play and tapping on "Update".
-
Q: How do I contact EA Customer Support?
-
A: If you have any issues or questions about FIFA Mobile, you can contact EA Customer Support by going to the main menu and tapping on "Settings", then "Help". You can also visit https://help.ea.com/en/fifa/fifa-mobile/ for more information.
-
Q: How do I get more coins in FIFA Mobile?
-
A: There are many ways to get more coins in FIFA Mobile, such as playing matches and events, completing achievements and quests, selling players on the market, watching ads, or buying coin packs with real money.
-
Q: How do I get more FIFA Points in FIFA Mobile?
-
A: The only way to get more FIFA Points in FIFA Mobile is to buy them with real money. You can use FIFA Points to buy premium packs, bundles, passes, offers, and more.
-
-
-
\ No newline at end of file
diff --git a/spaces/feregVcuzo/sanity-test-midi/Kaplan Surgery Audio - Carlos Pestana.md b/spaces/feregVcuzo/sanity-test-midi/Kaplan Surgery Audio - Carlos Pestana.md
deleted file mode 100644
index 69b1923ace0f3965c24b4dfbafa65e63d9782206..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/Kaplan Surgery Audio - Carlos Pestana.md
+++ /dev/null
@@ -1,96 +0,0 @@
-## Kaplan Surgery Audio - Carlos Pestana
-
-
-
-
-
-
-
-
-
-**CLICK HERE >> [https://www.google.com/url?q=https%3A%2F%2Fcinurl.com%2F2txvKv&sa=D&sntz=1&usg=AOvVaw0g8CMElJz2dMaZ12CHC54E](https://www.google.com/url?q=https%3A%2F%2Fcinurl.com%2F2txvKv&sa=D&sntz=1&usg=AOvVaw0g8CMElJz2dMaZ12CHC54E)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# Dr. Pestana's Surgery Notes: A Review of Kaplan's Audio Course
-
-
-
-If you are preparing for the surgical clerkship and shelf exams, you might be interested in Kaplan's audio course, Dr. Pestana's Surgery Notes. This course is based on the popular book by Carlos Pestana, a renowned surgeon and educator who has taught thousands of medical students and residents. In this course, you will get a concise and comprehensive overview of the most important topics in surgery, such as trauma, vascular, abdominal, breast, endocrine, and pediatric surgery. You will also learn about preoperative and postoperative care, anesthesia, complications, ethics, and legal issues.
-
-
-
-The course is narrated by Matthew Kugler, a professional voice actor who delivers the content in a clear and engaging manner. The course is divided into 10 chapters, each lasting about 30 minutes. You can listen to the course at your own pace, whenever and wherever you want. The course also comes with a PDF study guide that summarizes the key points and provides practice questions and answers.
-
-
-
-Dr. Pestana's Surgery Notes is a pocket-sized review that will help you ace the surgical clerkship and shelf exams. It will also reinforce your knowledge and skills for your future career as a surgeon. You can download the course from [Audiobooks Now](https://www.audiobooksnow.com/audiobook/dr-pestanas-surgery-notes/5698166/) for $14.99 or get it for free with a 30-day trial membership.
-
-
-
-Here are some of the benefits of listening to Dr. Pestana's Surgery Notes:
-
-
-
-- You will save time and energy by focusing on the most high-yield topics in surgery.
-
-- You will boost your confidence and performance by reviewing the essential concepts and facts.
-
-- You will improve your retention and recall by listening to the audio repeatedly.
-
-- You will enhance your understanding and application by doing the practice questions and checking the answers.
-
-- You will enjoy learning from a master teacher who has a wealth of experience and wisdom.
-
-
-
-Dr. Pestana's Surgery Notes is more than just an audio course. It is a valuable resource that will help you succeed in your surgical clerkship and shelf exams. Don't miss this opportunity to learn from one of the best in the field. Get your copy of Dr. Pestana's Surgery Notes today and start your journey to becoming a great surgeon.
-
-
-
-Here are some of the testimonials from other students who have used this course:
-
-
-
-> "Dr. Pestana's Surgery Notes was a lifesaver for me. I listened to it every day on my way to the hospital and it helped me ace the shelf exam. I highly recommend it to anyone who wants to learn surgery in a simple and effective way."
-
->
-
-> - Sarah, third-year medical student
-
-
-
-> "This course is amazing. Dr. Pestana explains everything so clearly and concisely. He makes surgery fun and easy to understand. I wish I had this course when I was studying for Step 1."
-
->
-
-> - David, fourth-year medical student
-
-
-
-> "I love this course. It covers all the important topics in surgery and gives you the tips and tricks you need to do well on the clerkship and the shelf exam. It also helps you prepare for the oral exams and the residency interviews. It's like having a personal mentor in your pocket."
-
->
-
-> - Jessica, third-year medical student
-
-
-
-As you can see, Dr. Pestana's Surgery Notes has helped many students achieve their goals and dreams. It can help you too.
-
-
-
-
-
-
-
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/unpipe/HISTORY.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/unpipe/HISTORY.md
deleted file mode 100644
index 85e0f8d747dc2a960e1ae6640c8bf081631ac0ec..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/unpipe/HISTORY.md
+++ /dev/null
@@ -1,4 +0,0 @@
-1.0.0 / 2015-06-14
-==================
-
- * Initial release
diff --git a/spaces/fffiloni/instant-TTS-Bark-cloning/examples/blank.md b/spaces/fffiloni/instant-TTS-Bark-cloning/examples/blank.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/fffiloni/lama-video-watermark-remover/bin/paper_runfiles/generate_test_paris.sh b/spaces/fffiloni/lama-video-watermark-remover/bin/paper_runfiles/generate_test_paris.sh
deleted file mode 100644
index 66056017c3aa376ef0767a59583ab25a321b559b..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/lama-video-watermark-remover/bin/paper_runfiles/generate_test_paris.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env bash
-
-# paths to data are valid for mml-ws01
-OUT_DIR="/media/inpainting/paper_data/Paris_StreetView_Dataset_val"
-
-source "$(dirname $0)/env.sh"
-
-for datadir in paris_eval_gt
-do
- for conf in random_thin_256 random_medium_256 random_thick_256 segm_256
- do
- "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-paris \
- location.out_dir=$OUT_DIR cropping.out_square_crop=False cropping.out_min_size=227
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-done
diff --git a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/evaluation/vis.py b/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/evaluation/vis.py
deleted file mode 100644
index c2910b4ef8c61efee72dabd0531a9b669ec8bf98..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/evaluation/vis.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import numpy as np
-from skimage import io
-from skimage.segmentation import mark_boundaries
-
-
-def save_item_for_vis(item, out_file):
- mask = item['mask'] > 0.5
- if mask.ndim == 3:
- mask = mask[0]
- img = mark_boundaries(np.transpose(item['image'], (1, 2, 0)),
- mask,
- color=(1., 0., 0.),
- outline_color=(1., 1., 1.),
- mode='thick')
-
- if 'inpainted' in item:
- inp_img = mark_boundaries(np.transpose(item['inpainted'], (1, 2, 0)),
- mask,
- color=(1., 0., 0.),
- mode='outer')
- img = np.concatenate((img, inp_img), axis=1)
-
- img = np.clip(img * 255, 0, 255).astype('uint8')
- io.imsave(out_file, img)
-
-
-def save_mask_for_sidebyside(item, out_file):
- mask = item['mask']# > 0.5
- if mask.ndim == 3:
- mask = mask[0]
- mask = np.clip(mask * 255, 0, 255).astype('uint8')
- io.imsave(out_file, mask)
-
-def save_img_for_sidebyside(item, out_file):
- img = np.transpose(item['image'], (1, 2, 0))
- img = np.clip(img * 255, 0, 255).astype('uint8')
- io.imsave(out_file, img)
\ No newline at end of file
diff --git a/spaces/firsk/ai_otto/data_utils.py b/spaces/firsk/ai_otto/data_utils.py
deleted file mode 100644
index d8e6b9e30b90839644e8a2c33c5166288b720d02..0000000000000000000000000000000000000000
--- a/spaces/firsk/ai_otto/data_utils.py
+++ /dev/null
@@ -1,406 +0,0 @@
-import os
-import random
-import torch
-import torch.utils.data
-from tqdm import tqdm
-from loguru import logger
-import commons
-from mel_processing import spectrogram_torch, mel_spectrogram_torch
-from utils import load_wav_to_torch, load_filepaths_and_text
-from text import cleaned_text_to_sequence, get_bert
-
-"""Multi speaker version"""
-
-
-class TextAudioSpeakerLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, speaker_id, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
-
- def __init__(self, audiopaths_sid_text, hparams):
- self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text)
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
- self.spk_map = hparams.spk2id
- self.hparams = hparams
-
- self.use_mel_spec_posterior = getattr(
- hparams, "use_mel_posterior_encoder", False
- )
- if self.use_mel_spec_posterior:
- self.n_mel_channels = getattr(hparams, "n_mel_channels", 80)
-
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
-
- self.add_blank = hparams.add_blank
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 300)
-
- random.seed(1234)
- random.shuffle(self.audiopaths_sid_text)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
-
- audiopaths_sid_text_new = []
- lengths = []
- skipped = 0
- logger.info("Init dataset...")
- for _id, spk, language, text, phones, tone, word2ph in tqdm(
- self.audiopaths_sid_text
- ):
- audiopath = f"{_id}"
- if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len:
- phones = phones.split(" ")
- tone = [int(i) for i in tone.split(" ")]
- word2ph = [int(i) for i in word2ph.split(" ")]
- audiopaths_sid_text_new.append(
- [audiopath, spk, language, text, phones, tone, word2ph]
- )
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
- else:
- skipped += 1
- logger.info(
- "skipped: "
- + str(skipped)
- + ", total: "
- + str(len(self.audiopaths_sid_text))
- )
- self.audiopaths_sid_text = audiopaths_sid_text_new
- self.lengths = lengths
-
- def get_audio_text_speaker_pair(self, audiopath_sid_text):
- # separate filename, speaker_id and text
- audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text
-
- bert, ja_bert, phones, tone, language = self.get_text(
- text, word2ph, phones, tone, language, audiopath
- )
-
- spec, wav = self.get_audio(audiopath)
- sid = torch.LongTensor([int(self.spk_map[sid])])
- return (phones, spec, wav, sid, tone, language, bert, ja_bert)
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError(
- "{} {} SR doesn't match target {} SR".format(
- filename, sampling_rate, self.sampling_rate
- )
- )
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if self.use_mel_spec_posterior:
- spec_filename = spec_filename.replace(".spec.pt", ".mel.pt")
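-        # load the cached spectrogram from disk if present; otherwise compute it below and save it for reuse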
- try:
- spec = torch.load(spec_filename)
-        except Exception:
- if self.use_mel_spec_posterior:
- spec = mel_spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.n_mel_channels,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- self.hparams.mel_fmin,
- self.hparams.mel_fmax,
- center=False,
- )
- else:
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
- return spec, audio_norm
-
- def get_text(self, text, word2ph, phone, tone, language_str, wav_path):
- phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
- if self.add_blank:
- phone = commons.intersperse(phone, 0)
- tone = commons.intersperse(tone, 0)
- language = commons.intersperse(language, 0)
- for i in range(len(word2ph)):
- word2ph[i] = word2ph[i] * 2
- word2ph[0] += 1
- bert_path = wav_path.replace(".wav", ".bert.pt")
- try:
- bert = torch.load(bert_path)
- assert bert.shape[-1] == len(phone)
-        except Exception:
- bert = get_bert(text, word2ph, language_str)
- torch.save(bert, bert_path)
- assert bert.shape[-1] == len(phone), phone
-
- if language_str == "ZH":
- bert = bert
- ja_bert = torch.zeros(768, len(phone))
- elif language_str == "JP":
- ja_bert = bert
- bert = torch.zeros(1024, len(phone))
- else:
- bert = torch.zeros(1024, len(phone))
- ja_bert = torch.zeros(768, len(phone))
-        assert bert.shape[-1] == len(phone), (
-            bert.shape,
-            len(phone),
-            sum(word2ph),
-            word2ph,
-            text,
-        )
- phone = torch.LongTensor(phone)
- tone = torch.LongTensor(tone)
- language = torch.LongTensor(language)
- return bert, ja_bert, phone, tone, language
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def __getitem__(self, index):
- return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])
-
- def __len__(self):
- return len(self.audiopaths_sid_text)
-
-
-class TextAudioSpeakerCollate:
- """Zero-pads model inputs and targets"""
-
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collate's training batch from normalized text, audio and speaker identities
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized, sid]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[1].size(1) for x in batch]), dim=0, descending=True
- )
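-        # ids_sorted_decreasing orders batch items by spectrogram length (longest first) for the padding loop below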
-
- max_text_len = max([len(x[0]) for x in batch])
- max_spec_len = max([x[1].size(1) for x in batch])
- max_wav_len = max([x[2].size(1) for x in batch])
-
- text_lengths = torch.LongTensor(len(batch))
- spec_lengths = torch.LongTensor(len(batch))
- wav_lengths = torch.LongTensor(len(batch))
- sid = torch.LongTensor(len(batch))
-
- text_padded = torch.LongTensor(len(batch), max_text_len)
- tone_padded = torch.LongTensor(len(batch), max_text_len)
- language_padded = torch.LongTensor(len(batch), max_text_len)
- bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len)
- ja_bert_padded = torch.FloatTensor(len(batch), 768, max_text_len)
-
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- text_padded.zero_()
- tone_padded.zero_()
- language_padded.zero_()
- spec_padded.zero_()
- wav_padded.zero_()
- bert_padded.zero_()
- ja_bert_padded.zero_()
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- text = row[0]
- text_padded[i, : text.size(0)] = text
- text_lengths[i] = text.size(0)
-
- spec = row[1]
- spec_padded[i, :, : spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wav = row[2]
- wav_padded[i, :, : wav.size(1)] = wav
- wav_lengths[i] = wav.size(1)
-
- sid[i] = row[3]
-
- tone = row[4]
- tone_padded[i, : tone.size(0)] = tone
-
- language = row[5]
- language_padded[i, : language.size(0)] = language
-
- bert = row[6]
- bert_padded[i, :, : bert.size(1)] = bert
-
- ja_bert = row[7]
- ja_bert_padded[i, :, : ja_bert.size(1)] = ja_bert
-
- return (
- text_padded,
- text_lengths,
- spec_padded,
- spec_lengths,
- wav_padded,
- wav_lengths,
- sid,
- tone_padded,
- language_padded,
- bert_padded,
- ja_bert_padded,
- )
-
-
-class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
- """
- Maintain similar input lengths in a batch.
- Length groups are specified by boundaries.
-    Ex) boundaries = [b1, b2, b3] -> each batch is drawn from a single group, either {x | b1 < length(x) <= b2} or {x | b2 < length(x) <= b3}.
-
- It removes samples which are not included in the boundaries.
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
- """
-
- def __init__(
- self,
- dataset,
- batch_size,
- boundaries,
- num_replicas=None,
- rank=None,
- shuffle=True,
- ):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- self.lengths = dataset.lengths
- self.batch_size = batch_size
- self.boundaries = boundaries
-
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
- self.total_size = sum(self.num_samples_per_bucket)
- self.num_samples = self.total_size // self.num_replicas
-
- def _create_buckets(self):
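-        # assign each utterance index to a length bucket chosen by binary search over self.boundaries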
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
- for i in range(len(self.lengths)):
- length = self.lengths[i]
- idx_bucket = self._bisect(length)
- if idx_bucket != -1:
- buckets[idx_bucket].append(i)
-
- try:
- for i in range(len(buckets) - 1, 0, -1):
- if len(buckets[i]) == 0:
- buckets.pop(i)
- self.boundaries.pop(i + 1)
- assert all(len(bucket) > 0 for bucket in buckets)
- # When one bucket is not traversed
- except Exception as e:
- print("Bucket warning ", e)
- for i in range(len(buckets) - 1, -1, -1):
- if len(buckets[i]) == 0:
- buckets.pop(i)
- self.boundaries.pop(i + 1)
-
- num_samples_per_bucket = []
- for i in range(len(buckets)):
- len_bucket = len(buckets[i])
- total_batch_size = self.num_replicas * self.batch_size
- rem = (
- total_batch_size - (len_bucket % total_batch_size)
- ) % total_batch_size
- num_samples_per_bucket.append(len_bucket + rem)
- return buckets, num_samples_per_bucket
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch)
-
- indices = []
- if self.shuffle:
- for bucket in self.buckets:
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
- else:
- for bucket in self.buckets:
- indices.append(list(range(len(bucket))))
-
- batches = []
- for i in range(len(self.buckets)):
- bucket = self.buckets[i]
- len_bucket = len(bucket)
- if len_bucket == 0:
- continue
- ids_bucket = indices[i]
- num_samples_bucket = self.num_samples_per_bucket[i]
-
- # add extra samples to make it evenly divisible
- rem = num_samples_bucket - len_bucket
- ids_bucket = (
- ids_bucket
- + ids_bucket * (rem // len_bucket)
- + ids_bucket[: (rem % len_bucket)]
- )
-
- # subsample
- ids_bucket = ids_bucket[self.rank :: self.num_replicas]
-
- # batching
- for j in range(len(ids_bucket) // self.batch_size):
- batch = [
- bucket[idx]
- for idx in ids_bucket[
- j * self.batch_size : (j + 1) * self.batch_size
- ]
- ]
- batches.append(batch)
-
- if self.shuffle:
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
- batches = [batches[i] for i in batch_ids]
- self.batches = batches
-
- assert len(self.batches) * self.batch_size == self.num_samples
- return iter(self.batches)
-
- def _bisect(self, x, lo=0, hi=None):
- if hi is None:
- hi = len(self.boundaries) - 1
-
- if hi > lo:
- mid = (hi + lo) // 2
- if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]:
- return mid
- elif x <= self.boundaries[mid]:
- return self._bisect(x, lo, mid)
- else:
- return self._bisect(x, mid + 1, hi)
- else:
- return -1
-
- def __len__(self):
- return self.num_samples // self.batch_size
diff --git a/spaces/fishaudio/fish-diffusion/configs/JSUT.py b/spaces/fishaudio/fish-diffusion/configs/JSUT.py
deleted file mode 100644
index e7ab4e2e48d4bf618f3487f91bb5848ff2dce9ac..0000000000000000000000000000000000000000
--- a/spaces/fishaudio/fish-diffusion/configs/JSUT.py
+++ /dev/null
@@ -1,39 +0,0 @@
-_base_ = [
- "./_base_/archs/hifi_svc.py",
-]
-
-speaker_mapping = {'jsut': 0,}
-
-model = dict(
- type="HiFiSVC",
- speaker_encoder=dict(
- input_size=len(speaker_mapping),
- ),
-)
-
-preprocessing = dict(
- text_features_extractor=dict(
- type="ContentVec",
- ),
- pitch_extractor=dict(
- type="ParselMouthPitchExtractor",
- keep_zeros=False,
- f0_min=40.0,
- f0_max=1600.0,
- ),
- energy_extractor=dict(
- type="RMSEnergyExtractor",
- ),
- augmentations=[
- dict(
- type="RandomPitchShifting",
- key_shifts=[-5., 5.],
- probability=1.5,
- ),
- dict(
- type="RandomTimeStretching",
- factors=[0.8, 1.2],
- probability=0.75,
- )
- ],
-)
\ No newline at end of file
diff --git a/spaces/flax-community/spanish-image-captioning/utils.py b/spaces/flax-community/spanish-image-captioning/utils.py
deleted file mode 100644
index e7eb5ca2ec403549a46efea4521e02c346200df0..0000000000000000000000000000000000000000
--- a/spaces/flax-community/spanish-image-captioning/utils.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from torchvision.io import read_image, ImageReadMode
-import torch
-import numpy as np
-from torchvision.transforms import CenterCrop, ConvertImageDtype, Normalize, Resize
-from torchvision.transforms.functional import InterpolationMode
-from PIL import Image
-
-
-class Transform(torch.nn.Module):
- def __init__(self, image_size):
- super().__init__()
- self.transforms = torch.nn.Sequential(
- Resize([image_size], interpolation=InterpolationMode.BICUBIC),
- CenterCrop(image_size),
- ConvertImageDtype(torch.float),
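-            # normalize with CLIP's published image mean / std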
- Normalize(
- (0.48145466, 0.4578275, 0.40821073),
- (0.26862954, 0.26130258, 0.27577711),
- ),
- )
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- with torch.no_grad():
- x = self.transforms(x)
- return x
-
-
-transform = Transform(224)
-
-def get_transformed_image(image):
-    if isinstance(image, np.ndarray) and image.shape[-1] == 3:
- image = image.transpose(2, 0, 1)
- image = torch.tensor(image)
- return transform(image).unsqueeze(0).permute(0, 2, 3, 1).numpy()
\ No newline at end of file
diff --git a/spaces/floriankrempl/mtg_rules_bot/tests/__init__.py b/spaces/floriankrempl/mtg_rules_bot/tests/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/florim/MedGPT/tests/test_prompt_generator.py b/spaces/florim/MedGPT/tests/test_prompt_generator.py
deleted file mode 100644
index 6a0bfd6c7bbdbfaa3750e9dee621bd25e17a448b..0000000000000000000000000000000000000000
--- a/spaces/florim/MedGPT/tests/test_prompt_generator.py
+++ /dev/null
@@ -1,114 +0,0 @@
-from unittest import TestCase
-
-from autogpt.promptgenerator import PromptGenerator
-
-
-class TestPromptGenerator(TestCase):
- """
- Test cases for the PromptGenerator class, which is responsible for generating
- prompts for the AI with constraints, commands, resources, and performance evaluations.
- """
-
- @classmethod
- def setUpClass(cls):
- """
- Set up the initial state for each test method by creating an instance of PromptGenerator.
- """
- cls.generator = PromptGenerator()
-
- # Test whether the add_constraint() method adds a constraint to the generator's constraints list
- def test_add_constraint(self):
- """
- Test if the add_constraint() method adds a constraint to the generator's constraints list.
- """
- constraint = "Constraint1"
- self.generator.add_constraint(constraint)
- self.assertIn(constraint, self.generator.constraints)
-
- # Test whether the add_command() method adds a command to the generator's commands list
- def test_add_command(self):
- """
- Test if the add_command() method adds a command to the generator's commands list.
- """
- command_label = "Command Label"
- command_name = "command_name"
- args = {"arg1": "value1", "arg2": "value2"}
- self.generator.add_command(command_label, command_name, args)
- command = {
- "label": command_label,
- "name": command_name,
- "args": args,
- }
- self.assertIn(command, self.generator.commands)
-
- def test_add_resource(self):
- """
- Test if the add_resource() method adds a resource to the generator's resources list.
- """
- resource = "Resource1"
- self.generator.add_resource(resource)
- self.assertIn(resource, self.generator.resources)
-
- def test_add_performance_evaluation(self):
- """
- Test if the add_performance_evaluation() method adds an evaluation to the generator's
- performance_evaluation list.
- """
- evaluation = "Evaluation1"
- self.generator.add_performance_evaluation(evaluation)
- self.assertIn(evaluation, self.generator.performance_evaluation)
-
- def test_generate_prompt_string(self):
- """
- Test if the generate_prompt_string() method generates a prompt string with all the added
- constraints, commands, resources, and evaluations.
- """
- # Define the test data
- constraints = ["Constraint1", "Constraint2"]
- commands = [
- {
- "label": "Command1",
- "name": "command_name1",
- "args": {"arg1": "value1"},
- },
- {
- "label": "Command2",
- "name": "command_name2",
- "args": {},
- },
- ]
- resources = ["Resource1", "Resource2"]
- evaluations = ["Evaluation1", "Evaluation2"]
-
- # Add test data to the generator
- for constraint in constraints:
- self.generator.add_constraint(constraint)
- for command in commands:
- self.generator.add_command(
- command["label"], command["name"], command["args"]
- )
- for resource in resources:
- self.generator.add_resource(resource)
- for evaluation in evaluations:
- self.generator.add_performance_evaluation(evaluation)
-
- # Generate the prompt string and verify its correctness
- prompt_string = self.generator.generate_prompt_string()
- self.assertIsNotNone(prompt_string)
-
- # Check if all constraints, commands, resources, and evaluations are present in the prompt string
- for constraint in constraints:
- self.assertIn(constraint, prompt_string)
- for command in commands:
- self.assertIn(command["name"], prompt_string)
- for key, value in command["args"].items():
- self.assertIn(f'"{key}": "{value}"', prompt_string)
- for resource in resources:
- self.assertIn(resource, prompt_string)
- for evaluation in evaluations:
- self.assertIn(evaluation, prompt_string)
-
- self.assertIn("constraints", prompt_string.lower())
- self.assertIn("commands", prompt_string.lower())
- self.assertIn("resources", prompt_string.lower())
- self.assertIn("performance evaluation", prompt_string.lower())
diff --git a/spaces/gary109/hotdog-not-hotdog/app.py b/spaces/gary109/hotdog-not-hotdog/app.py
deleted file mode 100644
index 3bf20a1281ee2fc125aa10e4f7132f61e9504eca..0000000000000000000000000000000000000000
--- a/spaces/gary109/hotdog-not-hotdog/app.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-
-pipeline = pipeline(task="image-classification", model="julien-c/hotdog-not-hotdog")
-
-def predict(image):
- predictions = pipeline(image)
- return {p["label"]: p["score"] for p in predictions}
-
-gr.Interface(
- predict,
- inputs=gr.inputs.Image(label="Upload hot dog candidate", type="filepath"),
- outputs=gr.outputs.Label(num_top_classes=2),
- title="是不是熱狗?",
- examples=[["./imgs/hotdog1.jpg"]]
-).launch()
\ No newline at end of file
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/core/evaluation/__init__.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/core/evaluation/__init__.py
deleted file mode 100644
index f7cc4b23413a0639e9de00eeb0bf600632d2c6cd..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/core/evaluation/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from .class_names import get_classes, get_palette
-from .eval_hooks import DistEvalHook, EvalHook
-from .metrics import eval_metrics, mean_dice, mean_fscore, mean_iou
-
-__all__ = [
- 'EvalHook', 'DistEvalHook', 'mean_dice', 'mean_iou', 'mean_fscore',
- 'eval_metrics', 'get_classes', 'get_palette'
-]
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/backbones/mobilenet_v3.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/backbones/mobilenet_v3.py
deleted file mode 100644
index 16817400b4102899794fe64c9644713a4e54e2f9..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/backbones/mobilenet_v3.py
+++ /dev/null
@@ -1,255 +0,0 @@
-import logging
-
-import annotator.uniformer.mmcv as mmcv
-import torch.nn as nn
-from annotator.uniformer.mmcv.cnn import ConvModule, constant_init, kaiming_init
-from annotator.uniformer.mmcv.cnn.bricks import Conv2dAdaptivePadding
-from annotator.uniformer.mmcv.runner import load_checkpoint
-from torch.nn.modules.batchnorm import _BatchNorm
-
-from ..builder import BACKBONES
-from ..utils import InvertedResidualV3 as InvertedResidual
-
-
-@BACKBONES.register_module()
-class MobileNetV3(nn.Module):
- """MobileNetV3 backbone.
-
-    This backbone is the improved implementation of `Searching for
-    MobileNetV3`_.
-
- Args:
-        arch (str): Architecture of MobileNetV3, from {'small', 'large'}.
- Default: 'small'.
- conv_cfg (dict): Config dict for convolution layer.
- Default: None, which means using conv2d.
- norm_cfg (dict): Config dict for normalization layer.
- Default: dict(type='BN').
- out_indices (tuple[int]): Output from which layer.
- Default: (0, 1, 12).
- frozen_stages (int): Stages to be frozen (all param fixed).
- Default: -1, which means not freezing any parameters.
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only. Default: False.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save
- some memory while slowing down the training speed.
- Default: False.
- """
- # Parameters to build each block:
- # [kernel size, mid channels, out channels, with_se, act type, stride]
- arch_settings = {
- 'small': [[3, 16, 16, True, 'ReLU', 2], # block0 layer1 os=4
- [3, 72, 24, False, 'ReLU', 2], # block1 layer2 os=8
- [3, 88, 24, False, 'ReLU', 1],
- [5, 96, 40, True, 'HSwish', 2], # block2 layer4 os=16
- [5, 240, 40, True, 'HSwish', 1],
- [5, 240, 40, True, 'HSwish', 1],
- [5, 120, 48, True, 'HSwish', 1], # block3 layer7 os=16
- [5, 144, 48, True, 'HSwish', 1],
- [5, 288, 96, True, 'HSwish', 2], # block4 layer9 os=32
- [5, 576, 96, True, 'HSwish', 1],
- [5, 576, 96, True, 'HSwish', 1]],
- 'large': [[3, 16, 16, False, 'ReLU', 1], # block0 layer1 os=2
- [3, 64, 24, False, 'ReLU', 2], # block1 layer2 os=4
- [3, 72, 24, False, 'ReLU', 1],
- [5, 72, 40, True, 'ReLU', 2], # block2 layer4 os=8
- [5, 120, 40, True, 'ReLU', 1],
- [5, 120, 40, True, 'ReLU', 1],
- [3, 240, 80, False, 'HSwish', 2], # block3 layer7 os=16
- [3, 200, 80, False, 'HSwish', 1],
- [3, 184, 80, False, 'HSwish', 1],
- [3, 184, 80, False, 'HSwish', 1],
- [3, 480, 112, True, 'HSwish', 1], # block4 layer11 os=16
- [3, 672, 112, True, 'HSwish', 1],
- [5, 672, 160, True, 'HSwish', 2], # block5 layer13 os=32
- [5, 960, 160, True, 'HSwish', 1],
- [5, 960, 160, True, 'HSwish', 1]]
- } # yapf: disable
-
- def __init__(self,
- arch='small',
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- out_indices=(0, 1, 12),
- frozen_stages=-1,
- reduction_factor=1,
- norm_eval=False,
- with_cp=False):
- super(MobileNetV3, self).__init__()
- assert arch in self.arch_settings
- assert isinstance(reduction_factor, int) and reduction_factor > 0
- assert mmcv.is_tuple_of(out_indices, int)
- for index in out_indices:
- if index not in range(0, len(self.arch_settings[arch]) + 2):
- raise ValueError(
- 'the item in out_indices must in '
- f'range(0, {len(self.arch_settings[arch])+2}). '
- f'But received {index}')
-
- if frozen_stages not in range(-1, len(self.arch_settings[arch]) + 2):
- raise ValueError('frozen_stages must be in range(-1, '
- f'{len(self.arch_settings[arch])+2}). '
- f'But received {frozen_stages}')
- self.arch = arch
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.out_indices = out_indices
- self.frozen_stages = frozen_stages
- self.reduction_factor = reduction_factor
- self.norm_eval = norm_eval
- self.with_cp = with_cp
- self.layers = self._make_layer()
-
- def _make_layer(self):
- layers = []
-
- # build the first layer (layer0)
- in_channels = 16
- layer = ConvModule(
- in_channels=3,
- out_channels=in_channels,
- kernel_size=3,
- stride=2,
- padding=1,
- conv_cfg=dict(type='Conv2dAdaptivePadding'),
- norm_cfg=self.norm_cfg,
- act_cfg=dict(type='HSwish'))
- self.add_module('layer0', layer)
- layers.append('layer0')
-
- layer_setting = self.arch_settings[self.arch]
- for i, params in enumerate(layer_setting):
- (kernel_size, mid_channels, out_channels, with_se, act,
- stride) = params
-
- if self.arch == 'large' and i >= 12 or self.arch == 'small' and \
- i >= 8:
- mid_channels = mid_channels // self.reduction_factor
- out_channels = out_channels // self.reduction_factor
-
- if with_se:
- se_cfg = dict(
- channels=mid_channels,
- ratio=4,
- act_cfg=(dict(type='ReLU'),
- dict(type='HSigmoid', bias=3.0, divisor=6.0)))
- else:
- se_cfg = None
-
- layer = InvertedResidual(
- in_channels=in_channels,
- out_channels=out_channels,
- mid_channels=mid_channels,
- kernel_size=kernel_size,
- stride=stride,
- se_cfg=se_cfg,
- with_expand_conv=(in_channels != mid_channels),
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=dict(type=act),
- with_cp=self.with_cp)
- in_channels = out_channels
- layer_name = 'layer{}'.format(i + 1)
- self.add_module(layer_name, layer)
- layers.append(layer_name)
-
- # build the last layer
- # block5 layer12 os=32 for small model
- # block6 layer16 os=32 for large model
- layer = ConvModule(
- in_channels=in_channels,
- out_channels=576 if self.arch == 'small' else 960,
- kernel_size=1,
- stride=1,
- dilation=4,
- padding=0,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=dict(type='HSwish'))
- layer_name = 'layer{}'.format(len(layer_setting) + 1)
- self.add_module(layer_name, layer)
- layers.append(layer_name)
-
- # next, convert backbone MobileNetV3 to a semantic segmentation version
- if self.arch == 'small':
- self.layer4.depthwise_conv.conv.stride = (1, 1)
- self.layer9.depthwise_conv.conv.stride = (1, 1)
- for i in range(4, len(layers)):
- layer = getattr(self, layers[i])
- if isinstance(layer, InvertedResidual):
- modified_module = layer.depthwise_conv.conv
- else:
- modified_module = layer.conv
-
- if i < 9:
- modified_module.dilation = (2, 2)
- pad = 2
- else:
- modified_module.dilation = (4, 4)
- pad = 4
-
- if not isinstance(modified_module, Conv2dAdaptivePadding):
- # Adjust padding
- pad *= (modified_module.kernel_size[0] - 1) // 2
- modified_module.padding = (pad, pad)
- else:
- self.layer7.depthwise_conv.conv.stride = (1, 1)
- self.layer13.depthwise_conv.conv.stride = (1, 1)
- for i in range(7, len(layers)):
- layer = getattr(self, layers[i])
- if isinstance(layer, InvertedResidual):
- modified_module = layer.depthwise_conv.conv
- else:
- modified_module = layer.conv
-
- if i < 13:
- modified_module.dilation = (2, 2)
- pad = 2
- else:
- modified_module.dilation = (4, 4)
- pad = 4
-
- if not isinstance(modified_module, Conv2dAdaptivePadding):
- # Adjust padding
- pad *= (modified_module.kernel_size[0] - 1) // 2
- modified_module.padding = (pad, pad)
-
- return layers
-
- def init_weights(self, pretrained=None):
- if isinstance(pretrained, str):
- logger = logging.getLogger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, nn.BatchNorm2d):
- constant_init(m, 1)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- outs = []
- for i, layer_name in enumerate(self.layers):
- layer = getattr(self, layer_name)
- x = layer(x)
- if i in self.out_indices:
- outs.append(x)
- return outs
-
- def _freeze_stages(self):
- for i in range(self.frozen_stages + 1):
- layer = getattr(self, f'layer{i}')
- layer.eval()
- for param in layer.parameters():
- param.requires_grad = False
-
- def train(self, mode=True):
- super(MobileNetV3, self).train(mode)
- self._freeze_stages()
- if mode and self.norm_eval:
- for m in self.modules():
- if isinstance(m, _BatchNorm):
- m.eval()
diff --git a/spaces/gojiteji/NAGISystem/app.py b/spaces/gojiteji/NAGISystem/app.py
deleted file mode 100644
index 8d5b4bcb56739aba1e577733da788f3c53f94c23..0000000000000000000000000000000000000000
--- a/spaces/gojiteji/NAGISystem/app.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import torch
-import gradio as gr
-from transformers import AutoTokenizer, AutoModelForMaskedLM, AutoModelForSeq2SeqLM, AutoModelForCausalLM
-
-BERTTokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
-BERTModel = AutoModelForMaskedLM.from_pretrained("cl-tohoku/bert-base-japanese")
-
-mBERTTokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
-mBERTModel = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
-
-GPT2Tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt2-medium")
-GPT2Model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt2-medium")
-
-votes=[]
-def MELCHIOR(sue):#BERT
- allow=BERTTokenizer("承認").input_ids[1]
- deny=BERTTokenizer("否定").input_ids[1]
- output=BERTModel(**BERTTokenizer('MELCHIORは科学者としての人格を持っています。人間とMELCHIORの対話です。人間「'+sue+'。承認 か 否定 のどちらかで答えてください。」'+"MELCHIOR 「[MASK]」",return_tensors="pt")).logits
- BERTTokenizer.batch_decode(torch.argmax(output,-1))
- mask=output[0,-3,:]
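-    # position -3 is assumed to be the [MASK] slot ([MASK] 」 [SEP]); compare its logits for 承認 (approve) vs 否定 (deny)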
- votes.append(1 if mask[allow]>mask[deny] else -1)
- return "承認" if mask[allow]>mask[deny] else "否定"
-
-def BALTHASAR(sue):#mBERT
- allow=mBERTTokenizer("Yes").input_ids[1]
- deny=mBERTTokenizer("No").input_ids[1]
- output=mBERTModel(**mBERTTokenizer('BALTHASARは母としての人格を持っています。人間とBALTHASARの対話です。人間「'+sue+'。YesかNoか。」'+"BALTHASAR 「[MASK]」",return_tensors="pt")).logits
- mask=output[0,-3,:]
- votes.append(1 if mask[allow]>mask[deny] else -1)
- return "承認" if mask[allow]>mask[deny] else "否定"
-
-
-def CASPER(sue):#GPT2
- allow=GPT2Tokenizer("承認").input_ids[1]
- deny=GPT2Tokenizer("否定").input_ids[1]
- inpt=GPT2Tokenizer('女としての人格を持ったAI・カスパーと人間の対話です。人間「'+sue+'。これに承認か否定か。」'+"カスパー「私は,",return_tensors="pt")
- probs=GPT2Model(input_ids=inpt.input_ids[:,:-1],attention_mask=inpt.attention_mask[:,:-1]).logits[0]
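-    # compare the next-token logits at the final position: whichever of 承認 (approve) / 否定 (deny) scores higher becomes this model's vote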
- i=-1
- p_answer=probs
- id=torch.argmax(probs[i])
- votes.append(1 if probs[i][allow]>probs[i][deny] else -1)
- return "承認" if probs[i][allow]>probs[i][deny] else "否定"
-
-
-def greet(sue):
- text1="BERT-1"+MELCHIOR(sue)
- text2="GPT-2"+CASPER(sue)
- text3="mBERT-3"+BALTHASAR(sue)
- return text1+" "+text2+" "+text3+"\n___\n\n"+("|可決|" if sum(votes[-3:])>0 else "| 否決 |")+"\n___"
-
-
-css="@import url('https://fonts.googleapis.com/css2?family=Shippori+Mincho:wght@800&display=swap'); .gradio-container {background-color: black} .gr-button {background-color: blue;color:black; weight:200%;font-family:'Shippori Mincho', serif;}"
-css+=".block{color:orange;} ::placeholder {font-size:35%} .gr-box {text-align: center;font-size: 125%;border-color:orange;background-color: #000000;weight:200%;font-family:'Shippori Mincho', serif;}:disabled {color: orange;opacity:1.0;}"
-with gr.Blocks(css=css) as demo:
- sue = gr.Textbox(label="NAGI System",placeholder="決議を入力(多数決)")
- greet_btn = gr.Button("提訴")
- output = gr.Textbox(label="決議",placeholder="本システムは事前学習モデルのpromptにより行われています.決議結果に対して当サービス開発者は一切の責任を負いません.")
- greet_btn.click(fn=greet, inputs=sue, outputs=output)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/gordonchan/h2oo/prompter.py b/spaces/gordonchan/h2oo/prompter.py
deleted file mode 100644
index 31d4d89f816a0da0fcbd9cab56d69b40254f8401..0000000000000000000000000000000000000000
--- a/spaces/gordonchan/h2oo/prompter.py
+++ /dev/null
@@ -1,871 +0,0 @@
-import os
-import ast
-import time
-from enums import PromptType # also supports imports from this file from other files
-
-non_hf_types = ['gpt4all_llama', 'llama', 'gptj']
-
-prompt_type_to_model_name = {
- 'plain': [
- 'EleutherAI/gpt-j-6B',
- 'EleutherAI/pythia-6.9b',
- 'EleutherAI/pythia-12b',
- 'EleutherAI/pythia-12b-deduped',
- 'EleutherAI/gpt-neox-20b',
- 'openlm-research/open_llama_7b_700bt_preview',
- 'decapoda-research/llama-7b-hf',
- 'decapoda-research/llama-13b-hf',
- 'decapoda-research/llama-30b-hf',
- 'decapoda-research/llama-65b-hf',
- 'facebook/mbart-large-50-many-to-many-mmt',
- 'philschmid/bart-large-cnn-samsum',
- 'philschmid/flan-t5-base-samsum',
- 'gpt2',
- 'distilgpt2',
- 'mosaicml/mpt-7b-storywriter',
- ],
- 'gptj': ['gptj', 'gpt4all_llama'],
- 'prompt_answer': [
- 'h2oai/h2ogpt-gm-oasst1-en-1024-20b',
- 'h2oai/h2ogpt-gm-oasst1-en-1024-12b',
- 'h2oai/h2ogpt-gm-oasst1-multilang-1024-20b',
- 'h2oai/h2ogpt-gm-oasst1-multilang-2048-falcon-7b',
- 'h2oai/h2ogpt-gm-oasst1-multilang-2048-falcon-7b-v2',
- 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3',
- 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b',
- 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2',
- 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v1',
- 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2',
- 'h2oai/h2ogpt-gm-oasst1-en-xgen-7b-8k',
- 'h2oai/h2ogpt-gm-oasst1-multilang-xgen-7b-8k',
- 'TheBloke/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2-GPTQ',
- ],
- 'prompt_answer_openllama': [
- 'h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt',
- 'h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt-v2',
- 'h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-700bt',
- 'h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b',
- 'h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-13b',
- ],
- 'instruct': ['TheBloke/llama-30b-supercot-SuperHOT-8K-fp16'], # https://huggingface.co/TheBloke/llama-30b-supercot-SuperHOT-8K-fp16#prompting
- 'instruct_with_end': ['databricks/dolly-v2-12b'],
- 'quality': [],
- 'human_bot': [
- 'h2oai/h2ogpt-oasst1-512-12b',
- 'h2oai/h2ogpt-oasst1-512-20b',
- 'h2oai/h2ogpt-oig-oasst1-256-6_9b',
- 'h2oai/h2ogpt-oig-oasst1-512-6_9b',
- 'h2oai/h2ogpt-oig-oasst1-256-6.9b', # legacy
- 'h2oai/h2ogpt-oig-oasst1-512-6.9b', # legacy
- 'h2oai/h2ogpt-research-oasst1-512-30b',
- 'h2oai/h2ogpt-research-oasst1-llama-65b',
- 'h2oai/h2ogpt-oasst1-falcon-40b',
- 'h2oai/h2ogpt-oig-oasst1-falcon-40b',
- ],
- 'dai_faq': [],
- 'summarize': [],
- 'simple_instruct': ['t5-small', 't5-large', 'google/flan-t5', 'google/flan-t5-xxl', 'google/flan-ul2'],
- 'instruct_vicuna': ['AlekseyKorshuk/vicuna-7b', 'TheBloke/stable-vicuna-13B-HF', 'junelee/wizard-vicuna-13b'],
- 'human_bot_orig': ['togethercomputer/GPT-NeoXT-Chat-Base-20B'],
- "open_assistant": ['OpenAssistant/oasst-sft-7-llama-30b-xor', 'oasst-sft-7-llama-30b'],
- "wizard_lm": ['ehartford/WizardLM-7B-Uncensored', 'ehartford/WizardLM-13B-Uncensored'],
- "wizard_mega": ['openaccess-ai-collective/wizard-mega-13b'],
- "instruct_simple": ['JosephusCheung/Guanaco'],
- "wizard_vicuna": ['ehartford/Wizard-Vicuna-13B-Uncensored'],
- "wizard2": ['llama'],
- "mptinstruct": ['mosaicml/mpt-30b-instruct', 'mosaicml/mpt-7b-instruct', 'mosaicml/mpt-30b-instruct'],
- "mptchat": ['mosaicml/mpt-7b-chat', 'mosaicml/mpt-30b-chat', 'TheBloke/mpt-30B-chat-GGML'],
- "vicuna11": ['lmsys/vicuna-33b-v1.3'],
- "falcon": ['tiiuae/falcon-40b-instruct', 'tiiuae/falcon-40b', 'tiiuae/falcon-7b-instruct', 'tiiuae/falcon-7b'],
- "llama2": [
- 'meta-llama/Llama-2-7b-chat-hf',
- 'meta-llama/Llama-2-13b-chat-hf',
- 'meta-llama/Llama-2-34b-chat-hf',
- 'meta-llama/Llama-2-70b-chat-hf',
- ],
- # could be plain, but default is correct prompt_type for default TheBloke model ggml-wizardLM-7B.q4_2.bin
-}
-if os.getenv('OPENAI_API_KEY'):
- prompt_type_to_model_name.update({
- "openai": ["text-davinci-003", "text-curie-001", "text-babbage-001", "text-ada-001"],
- "openai_chat": ["gpt-3.5-turbo", "gpt-3.5-turbo-16k"],
- })
-
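-# invert the mapping so a model name (exact or lowercased) can be looked up to find its prompt type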
-inv_prompt_type_to_model_name = {v.strip(): k for k, l in prompt_type_to_model_name.items() for v in l}
-inv_prompt_type_to_model_lower = {v.strip().lower(): k for k, l in prompt_type_to_model_name.items() for v in l}
-
-prompt_types_strings = []
-for p in PromptType:
- prompt_types_strings.extend([p.name])
-
-prompt_types = []
-for p in PromptType:
- prompt_types.extend([p.name, p.value, str(p.value)])
-
-
-def get_prompt(prompt_type, prompt_dict, chat, context, reduced, making_context, return_dict=False):
- prompt_dict_error = ''
- generates_leading_space = False
-
- if prompt_type == PromptType.custom.name and not isinstance(prompt_dict, dict):
- try:
- prompt_dict = ast.literal_eval(prompt_dict)
- except BaseException as e:
- prompt_dict_error = str(e)
- if prompt_dict_error:
- promptA = None
- promptB = None
- PreInstruct = None
- PreInput = ''
- PreResponse = ''
- terminate_response = None
- chat_sep = ''
- chat_turn_sep = ''
- humanstr = ''
- botstr = ''
- generates_leading_space = False
- elif prompt_type in [PromptType.custom.value, str(PromptType.custom.value),
- PromptType.custom.name]:
- promptA = prompt_dict.get('promptA', '')
- promptB = prompt_dict.get('promptB', '')
- PreInstruct = prompt_dict.get('PreInstruct', '')
- PreInput = prompt_dict.get('PreInput', '')
- PreResponse = prompt_dict.get('PreResponse', '')
- terminate_response = prompt_dict.get('terminate_response', None)
- chat_sep = prompt_dict.get('chat_sep', '\n')
- chat_turn_sep = prompt_dict.get('chat_turn_sep', '\n')
- humanstr = prompt_dict.get('humanstr', '')
- botstr = prompt_dict.get('botstr', '')
- elif prompt_type in [PromptType.plain.value, str(PromptType.plain.value),
- PromptType.plain.name]:
- promptA = promptB = PreInstruct = PreInput = PreResponse = None
- terminate_response = []
- chat_turn_sep = chat_sep = ''
- # plain should have None for human/bot, so nothing truncated out, not '' that would truncate after first token
- humanstr = None
- botstr = None
- elif prompt_type == 'simple_instruct':
- promptA = promptB = PreInstruct = PreInput = PreResponse = None
- terminate_response = []
- chat_turn_sep = chat_sep = '\n'
- humanstr = None
- botstr = None
- elif prompt_type in [PromptType.instruct.value, str(PromptType.instruct.value),
- PromptType.instruct.name] + [PromptType.instruct_with_end.value,
- str(PromptType.instruct_with_end.value),
- PromptType.instruct_with_end.name]:
- promptA = 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n' if not (
- chat and reduced) else ''
- promptB = 'Below is an instruction that describes a task. Write a response that appropriately completes the request.\n' if not (
- chat and reduced) else ''
-
- PreInstruct = """
-### Instruction:
-"""
-
- PreInput = """
-### Input:
-"""
-
- PreResponse = """
-### Response:
-"""
- if prompt_type in [PromptType.instruct_with_end.value, str(PromptType.instruct_with_end.value),
- PromptType.instruct_with_end.name]:
- terminate_response = ['### End']
- else:
- terminate_response = None
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.quality.value, str(PromptType.quality.value),
- PromptType.quality.name]:
- promptA = 'Write a detailed high-quality, accurate, fair, Response with about 100 words by following the Instruction as applied on the Input.\n' if not (
- chat and reduced) else ''
- promptB = 'Write a detailed high-quality, accurate, fair, Response with about 100 words by following the Instruction.\n' if not (
- chat and reduced) else ''
-
- PreInstruct = """
-### Instruction:
-"""
-
- PreInput = """
-### Input:
-"""
-
- PreResponse = """
-### Response:
-"""
- terminate_response = None
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct # first thing human says
- botstr = PreResponse # first thing bot says
- elif prompt_type in [PromptType.human_bot.value, str(PromptType.human_bot.value),
- PromptType.human_bot.name] + [PromptType.human_bot_orig.value,
- str(PromptType.human_bot_orig.value),
- PromptType.human_bot_orig.name]:
-        human = '<human>:'
-        bot = "<bot>:"
- if reduced or context or prompt_type in [PromptType.human_bot.value, str(PromptType.human_bot.value),
- PromptType.human_bot.name]:
- preprompt = ''
- else:
- cur_date = time.strftime('%Y-%m-%d')
- cur_time = time.strftime('%H:%M:%S %p %Z')
-
- PRE_PROMPT = """\
-Current Date: {}
-Current Time: {}
-
-"""
- preprompt = PRE_PROMPT.format(cur_date, cur_time)
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
-
- PreInstruct = human + ' '
-
- PreInput = None
-
- if making_context:
- # when making context, want it to appear as-if LLM generated, which starts with space after :
- PreResponse = bot + ' '
- else:
- # normally LLM adds space after this, because was how trained.
- # if add space here, non-unique tokenization will often make LLM produce wrong output
- PreResponse = bot
-
- terminate_response = ['\n' + human, '\n' + bot, human, bot, PreResponse]
- chat_turn_sep = chat_sep = '\n'
- humanstr = human # tag before human talks
- botstr = bot # tag before bot talks
- generates_leading_space = True
- elif prompt_type in [PromptType.dai_faq.value, str(PromptType.dai_faq.value),
- PromptType.dai_faq.name]:
- promptA = ''
- promptB = 'Answer the following Driverless AI question.\n'
-
- PreInstruct = """
-### Driverless AI frequently asked question:
-"""
-
- PreInput = None
-
- PreResponse = """
-### Driverless AI documentation answer:
-"""
- terminate_response = ['\n\n']
- chat_turn_sep = chat_sep = terminate_response
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.summarize.value, str(PromptType.summarize.value),
- PromptType.summarize.name]:
- promptA = promptB = PreInput = ''
- PreInstruct = '## Main Text\n\n'
- PreResponse = '\n\n## Summary\n\n'
- terminate_response = None
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.instruct_vicuna.value, str(PromptType.instruct_vicuna.value),
- PromptType.instruct_vicuna.name]:
- promptA = promptB = "A chat between a curious human and an artificial intelligence assistant. " \
- "The assistant gives helpful, detailed, and polite answers to the human's questions." if not (
- chat and reduced) else ''
-
- PreInstruct = """
-### Human:
-"""
-
- PreInput = None
-
- PreResponse = """
-### Assistant:
-"""
- terminate_response = [
- '### Human:'] # but only allow terminate after prompt is found correctly, else can't terminate
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.prompt_answer.value, str(PromptType.prompt_answer.value),
- PromptType.prompt_answer.name]:
- preprompt = ''
- prompt_tokens = "<|prompt|>"
- answer_tokens = "<|answer|>"
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
- PreInstruct = prompt_tokens
- PreInput = None
- PreResponse = answer_tokens
- eos = '<|endoftext|>' # neox eos
- humanstr = prompt_tokens
- botstr = answer_tokens
- terminate_response = [humanstr, PreResponse, eos]
- chat_sep = eos
- chat_turn_sep = eos
- elif prompt_type in [PromptType.prompt_answer_openllama.value, str(PromptType.prompt_answer_openllama.value),
- PromptType.prompt_answer_openllama.name]:
- preprompt = ''
- prompt_tokens = "<|prompt|>"
- answer_tokens = "<|answer|>"
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
- PreInstruct = prompt_tokens
- PreInput = None
- PreResponse = answer_tokens
-        eos = '</s>'  # llama eos
- humanstr = prompt_tokens
- botstr = answer_tokens
- terminate_response = [humanstr, PreResponse, eos]
- chat_sep = eos
- chat_turn_sep = eos
- elif prompt_type in [PromptType.open_assistant.value, str(PromptType.open_assistant.value),
- PromptType.open_assistant.name]:
- # From added_tokens.json
- preprompt = ''
- prompt_tokens = "<|prompter|>"
- answer_tokens = "<|assistant|>"
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
- PreInstruct = prompt_tokens
- PreInput = None
- PreResponse = answer_tokens
- pend = "<|prefix_end|>"
- eos = ""
- humanstr = prompt_tokens
- botstr = answer_tokens
- terminate_response = [humanstr, PreResponse, pend, eos]
- chat_turn_sep = chat_sep = eos
- elif prompt_type in [PromptType.wizard_lm.value, str(PromptType.wizard_lm.value),
- PromptType.wizard_lm.name]:
- # https://github.com/ehartford/WizardLM/blob/main/src/train_freeform.py
- preprompt = ''
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
- PreInstruct = ""
- PreInput = None
- PreResponse = "\n\n### Response\n"
- eos = ""
- terminate_response = [PreResponse, eos]
- chat_turn_sep = chat_sep = eos
- humanstr = promptA
- botstr = PreResponse
- elif prompt_type in [PromptType.wizard_mega.value, str(PromptType.wizard_mega.value),
- PromptType.wizard_mega.name]:
- preprompt = ''
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
- PreInstruct = """
-### Instruction:
-"""
- PreInput = None
- PreResponse = """
-### Assistant:
-"""
- terminate_response = [PreResponse]
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.instruct_vicuna2.value, str(PromptType.instruct_vicuna2.value),
- PromptType.instruct_vicuna2.name]:
- promptA = promptB = "" if not (chat and reduced) else ''
-
- PreInstruct = """
-HUMAN:
-"""
-
- PreInput = None
-
- PreResponse = """
-ASSISTANT:
-"""
- terminate_response = [
- 'HUMAN:'] # but only allow terminate after prompt is found correctly, else can't terminate
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.instruct_vicuna3.value, str(PromptType.instruct_vicuna3.value),
- PromptType.instruct_vicuna3.name]:
- promptA = promptB = "" if not (chat and reduced) else ''
-
- PreInstruct = """
-### User:
-"""
-
- PreInput = None
-
- PreResponse = """
-### Assistant:
-"""
- terminate_response = [
- '### User:'] # but only allow terminate after prompt is found correctly, else can't terminate
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.wizard2.value, str(PromptType.wizard2.value),
- PromptType.wizard2.name]:
- # https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GGML
- preprompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.""" if not (
- chat and reduced) else ''
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
- PreInstruct = """
-### Instruction:
-"""
- PreInput = None
- PreResponse = """
-### Response:
-"""
- terminate_response = [PreResponse]
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.wizard3.value, str(PromptType.wizard3.value),
- PromptType.wizard3.name]:
- # https://huggingface.co/TheBloke/wizardLM-13B-1.0-GGML
- preprompt = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.""" if not (
- chat and reduced) else ''
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
- PreInstruct = """USER: """
- PreInput = None
- PreResponse = """ASSISTANT: """
- terminate_response = [PreResponse]
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.wizard_vicuna.value, str(PromptType.wizard_vicuna.value),
- PromptType.wizard_vicuna.name]:
- preprompt = ''
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
- PreInstruct = """USER: """
- PreInput = None
- PreResponse = """ASSISTANT: """
- terminate_response = [PreResponse]
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
-
- elif prompt_type in [PromptType.instruct_simple.value, str(PromptType.instruct_simple.value),
- PromptType.instruct_simple.name]:
- promptB = promptA = '' if not (chat and reduced) else ''
-
- PreInstruct = """
-### Instruction:
-"""
-
- PreInput = """
-### Input:
-"""
-
- PreResponse = """
-### Response:
-"""
- terminate_response = None
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.openai.value, str(PromptType.openai.value),
- PromptType.openai.name]:
- preprompt = """The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.""" if not (
- chat and reduced) else ''
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
- PreInstruct = "\nHuman: "
- PreInput = None
- PreResponse = "\nAI:"
- terminate_response = [PreResponse] + [" Human:", " AI:"]
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.gptj.value, str(PromptType.gptj.value),
- PromptType.gptj.name]:
- preprompt = "### Instruction:\n The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response." if not (
- chat and reduced) else ''
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
- PreInstruct = "\n### Prompt: "
- PreInput = None
- PreResponse = "\n### Response: "
- terminate_response = [PreResponse] + ["Prompt:", "Response:"]
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.openai_chat.value, str(PromptType.openai_chat.value),
- PromptType.openai_chat.name]:
- # prompting and termination all handled by endpoint
- preprompt = """"""
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
- PreInstruct = ""
- PreInput = None
- PreResponse = ""
- terminate_response = []
- chat_turn_sep = chat_sep = '\n'
- humanstr = None
- botstr = None
- elif prompt_type in [PromptType.vicuna11.value, str(PromptType.vicuna11.value),
- PromptType.vicuna11.name]:
- preprompt = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. """ if not (
- chat and reduced) else ''
- start = ''
- promptB = promptA = '%s%s' % (preprompt, start)
-        eos = '</s>'
- PreInstruct = """USER: """
- PreInput = None
- PreResponse = """ASSISTANT:"""
- terminate_response = [PreResponse]
- chat_sep = ' '
- chat_turn_sep = eos
- humanstr = PreInstruct
- botstr = PreResponse
-
- if making_context:
- # when making context, want it to appear as-if LLM generated, which starts with space after :
- PreResponse = PreResponse + ' '
- else:
- # normally LLM adds space after this, because was how trained.
- # if add space here, non-unique tokenization will often make LLM produce wrong output
- PreResponse = PreResponse
- elif prompt_type in [PromptType.mptinstruct.value, str(PromptType.mptinstruct.value),
- PromptType.mptinstruct.name]:
- # https://huggingface.co/mosaicml/mpt-30b-instruct#formatting
- promptA = promptB = 'Below is an instruction that describes a task. Write a response that appropriately completes the request.\n' if not (
- chat and reduced) else ''
-
- PreInstruct = """
-### Instruction
-"""
-
- PreInput = """
-### Input
-"""
-
- PreResponse = """
-### Response
-"""
- terminate_response = None
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.mptchat.value, str(PromptType.mptchat.value),
- PromptType.mptchat.name]:
- # https://huggingface.co/TheBloke/mpt-30B-chat-GGML#prompt-template
- promptA = promptB = """<|im_start|>system\nA conversation between a user and an LLM-based AI assistant. The assistant gives helpful and honest answers.\n<|im_end|>""" if not (
- chat and reduced) else ''
-
- PreInstruct = """<|im_start|>user
-"""
-
- PreInput = None
-
- PreResponse = """<|im_end|><|im_start|>assistant
-"""
- terminate_response = ['<|im_end|>']
- chat_sep = ''
- chat_turn_sep = '<|im_end|>'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.falcon.value, str(PromptType.falcon.value),
- PromptType.falcon.name]:
- promptA = promptB = "" if not (chat and reduced) else ''
-
- PreInstruct = """User: """
-
- PreInput = None
-
- PreResponse = """Assistant:"""
- terminate_response = ['\nUser', "<|endoftext|>"]
- chat_sep = '\n\n'
- chat_turn_sep = '\n\n'
- humanstr = PreInstruct
- botstr = PreResponse
- if making_context:
- # when making context, want it to appear as-if LLM generated, which starts with space after :
- PreResponse = 'Assistant: '
- else:
- # normally LLM adds space after this, because was how trained.
- # if add space here, non-unique tokenization will often make LLM produce wrong output
- PreResponse = PreResponse
- # generates_leading_space = True
- elif prompt_type in [PromptType.guanaco.value, str(PromptType.guanaco.value),
- PromptType.guanaco.name]:
- # https://huggingface.co/TheBloke/guanaco-65B-GPTQ
- promptA = promptB = "" if not (chat and reduced) else ''
-
- PreInstruct = """### Human: """
-
- PreInput = None
-
- PreResponse = """### Assistant:"""
- terminate_response = ['### Human:'] # but only allow terminate after prompt is found correctly, else can't terminate
- chat_turn_sep = chat_sep = '\n'
- humanstr = PreInstruct
- botstr = PreResponse
- elif prompt_type in [PromptType.llama2.value, str(PromptType.llama2.value),
- PromptType.llama2.name]:
- PreInstruct = ""
-        llama2_sys = "<<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n"
- prompt = "[INST] "
- enable_sys = False # too much safety, hurts accuracy
- if not (chat and reduced):
- if enable_sys:
- promptA = promptB = prompt + llama2_sys
- else:
- promptA = promptB = prompt
- else:
- promptA = promptB = ''
- PreInput = None
- PreResponse = ""
-        terminate_response = ["[INST]", "</s>"]
- chat_sep = ' [/INST]'
- chat_turn_sep = ' [INST] '
- humanstr = PreInstruct
- botstr = PreResponse
- if making_context:
- PreResponse += " "
- else:
- raise RuntimeError("No such prompt_type=%s" % prompt_type)
-
- if isinstance(terminate_response, (tuple, list)):
- assert '' not in terminate_response, "Bad terminate_response"
-
- ret_dict = dict(promptA=promptA, promptB=promptB, PreInstruct=PreInstruct, PreInput=PreInput,
- PreResponse=PreResponse, terminate_response=terminate_response, chat_sep=chat_sep,
- chat_turn_sep=chat_turn_sep,
- humanstr=humanstr, botstr=botstr,
- generates_leading_space=generates_leading_space)
-
- if return_dict:
- return ret_dict, prompt_dict_error
- else:
- return tuple(list(ret_dict.values()))
-
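For orientation, the tuple returned above is just the raw template pieces. Below is a minimal, standalone sketch (not part of the deleted module) of how those pieces compose into one instruct_vicuna3-style turn for a bare instruction with no input or context. The helper name compose_turn and the sample instruction are invented for the illustration; the string literals are copied from the instruct_vicuna3 branch above.

```python
# Sketch only: mirrors what generate_prompt() below does for an instruction-only data point.
PRE_INSTRUCT = "\n### User:\n"       # PreInstruct from the instruct_vicuna3 branch
PRE_RESPONSE = "\n### Assistant:\n"  # PreResponse from the instruct_vicuna3 branch
CHAT_SEP = "\n"                      # chat_sep from the instruct_vicuna3 branch

def compose_turn(instruction: str) -> str:
    # PreInstruct + instruction, then chat_sep is injected, then PreResponse is appended.
    return f"{PRE_INSTRUCT}{instruction}{CHAT_SEP}{PRE_RESPONSE}"

if __name__ == "__main__":
    print(compose_turn("Summarize the key idea of critical path scheduling in one sentence."))
```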
-
-def generate_prompt(data_point, prompt_type, prompt_dict, chat, reduced, making_context):
- context = data_point.get('context')
- if context is None:
- context = ''
- instruction = data_point.get('instruction')
- input = data_point.get('input')
- output = data_point.get('output')
- prompt_type = data_point.get('prompt_type', prompt_type)
- prompt_dict = data_point.get('prompt_dict', prompt_dict)
- assert prompt_type in prompt_types, "Bad prompt type: %s" % prompt_type
- promptA, promptB, PreInstruct, PreInput, PreResponse, \
- terminate_response, chat_sep, chat_turn_sep, humanstr, botstr, \
- generates_leading_space = get_prompt(prompt_type, prompt_dict, chat,
- context, reduced, making_context)
-
- # could avoid if reduce=True, but too complex for parent functions to handle
- prompt = context
-
- if input and promptA:
- prompt += f"""{promptA}"""
- elif promptB:
- prompt += f"""{promptB}"""
-
- if instruction and PreInstruct is not None and input and PreInput is not None:
- prompt += f"""{PreInstruct}{instruction}{PreInput}{input}"""
- prompt = inject_chatsep(prompt_type, prompt, chat_sep=chat_sep)
- elif instruction and input and PreInstruct is None and PreInput is not None:
- prompt += f"""{PreInput}{instruction}
-{input}"""
- prompt = inject_chatsep(prompt_type, prompt, chat_sep=chat_sep)
- elif input and instruction and PreInput is None and PreInstruct is not None:
- prompt += f"""{PreInstruct}{instruction}
-{input}"""
- prompt = inject_chatsep(prompt_type, prompt, chat_sep=chat_sep)
- elif instruction and PreInstruct is not None:
- prompt += f"""{PreInstruct}{instruction}"""
- prompt = inject_chatsep(prompt_type, prompt, chat_sep=chat_sep)
- elif input and PreInput is not None:
- prompt += f"""{PreInput}{input}"""
- prompt = inject_chatsep(prompt_type, prompt, chat_sep=chat_sep)
- elif input and instruction and PreInput is not None:
- prompt += f"""{PreInput}{instruction}{input}"""
- prompt = inject_chatsep(prompt_type, prompt, chat_sep=chat_sep)
- elif input and instruction and PreInstruct is not None:
- prompt += f"""{PreInstruct}{instruction}{input}"""
- prompt = inject_chatsep(prompt_type, prompt, chat_sep=chat_sep)
- elif input and instruction:
- # i.e. for simple_instruct
- prompt += f"""{instruction}: {input}"""
- prompt = inject_chatsep(prompt_type, prompt, chat_sep=chat_sep)
- elif input:
- prompt += f"""{input}"""
- prompt = inject_chatsep(prompt_type, prompt, chat_sep=chat_sep)
- elif instruction:
- prompt += f"""{instruction}"""
- prompt = inject_chatsep(prompt_type, prompt, chat_sep=chat_sep)
-
- if PreResponse is not None:
- prompt += f"""{PreResponse}"""
- pre_response = PreResponse # Don't use strip
- else:
- pre_response = ''
-
- if output:
- prompt += f"""{output}"""
-
- return prompt, pre_response, terminate_response, chat_sep, chat_turn_sep
-
-
-def inject_chatsep(prompt_type, prompt, chat_sep=None):
- if chat_sep:
- # only add new line if structured prompt, while 'plain' is just generation of next tokens from input
- prompt += chat_sep
- return prompt
-
-
-class Prompter(object):
- def __init__(self, prompt_type, prompt_dict, debug=False, chat=False, stream_output=False, repeat_penalty=True,
- allowed_repeat_line_length=10):
- self.prompt_type = prompt_type
- self.prompt_dict = prompt_dict
- self.debug = debug
- self.chat = chat
- self.stream_output = stream_output
- self.repeat_penalty = repeat_penalty
- self.allowed_repeat_line_length = allowed_repeat_line_length
- self.prompt = None
- context = "" # not for chat context
- reduced = False # not for chat context
- making_context = False # not for chat context
- self.promptA, self.promptB, self.PreInstruct, self.PreInput, self.PreResponse, \
- self.terminate_response, self.chat_sep, self.chat_turn_sep, self.humanstr, self.botstr, \
- self.generates_leading_space = \
- get_prompt(self.prompt_type, self.prompt_dict, chat, context, reduced, making_context)
- self.pre_response = self.PreResponse
-
- def generate_prompt(self, data_point, reduced=None):
- """
- data_point['context'] is assumed to be like a system prompt or pre-conversation, not inserted after user prompt
- :param data_point:
- :param reduced:
- :return:
- """
- reduced = data_point.get('context') not in ['', None] if reduced is None else reduced
- making_context = False # whether really making final prompt or just generating context
- prompt, _, _, _, _ = generate_prompt(data_point, self.prompt_type, self.prompt_dict, self.chat, reduced,
- making_context)
- if self.debug:
- print("prompt: %s" % prompt, flush=True)
-        # if have context, should have always reduced and only prepend promptA/B here
- if data_point.get('context'):
- if data_point.get('input') and self.promptA:
- prompt = self.promptA + prompt
- elif self.promptB:
- prompt = self.promptB + prompt
-
- self.prompt = prompt
- return prompt
-
- def get_response(self, outputs, prompt=None, sanitize_bot_response=False):
- if isinstance(outputs, str):
- outputs = [outputs]
- if self.debug:
- print("output:\n%s" % '\n\n'.join(outputs), flush=True)
- if prompt is not None:
- self.prompt = prompt
-
- def clean_response(response):
-            meaningless_words = ['<pad>', '</s>', '<|endoftext|>']
- for word in meaningless_words:
- response = response.replace(word, "")
- if sanitize_bot_response:
- from better_profanity import profanity
- response = profanity.censor(response)
- if self.generates_leading_space and isinstance(response, str) and len(response) > 0 and response[0] == ' ':
- response = response[1:]
- return response
-
- def clean_repeats(response):
- lines = response.split('\n')
- new_lines = []
- [new_lines.append(line) for line in lines if
- line not in new_lines or len(line) < self.allowed_repeat_line_length]
- if self.debug and len(lines) != len(new_lines):
- print("cleaned repeats: %s %s" % (len(lines), len(new_lines)), flush=True)
- response = '\n'.join(new_lines)
- return response
-
- multi_output = len(outputs) > 1
-
- for oi, output in enumerate(outputs):
- if self.prompt_type in [PromptType.plain.value, str(PromptType.plain.value), PromptType.plain.name]:
- output = clean_response(output)
- elif prompt is None:
- # then use most basic parsing like pipeline
- if not self.botstr:
- pass
- elif self.botstr in output:
- if self.humanstr:
- output = clean_response(output.split(self.botstr)[1].split(self.humanstr)[0])
- else:
- # i.e. use after bot but only up to next bot
- output = clean_response(output.split(self.botstr)[1].split(self.botstr)[0])
- else:
- # output = clean_response(output)
- # assume just not printed yet
- output = ""
- else:
-                # find first instance of pre_response
- # prompt sometimes has odd characters, that mutate length,
- # so can't go by length alone
- if self.pre_response:
- outputi = output.find(prompt)
- if outputi >= 0:
- output = output[outputi + len(prompt):]
- allow_terminate = True
- else:
- # subtraction is risky due to space offsets sometimes, so only do if necessary
- output = output[len(prompt) - len(self.pre_response):]
- # [1] to avoid repeated pre_response, just take first (after prompt - pre_response for chat)
- if self.pre_response in output:
- output = output.split(self.pre_response)[1]
- allow_terminate = True
- else:
- if output:
- print("Failure of parsing or not enough output yet: %s" % output, flush=True)
- allow_terminate = False
- else:
- allow_terminate = True
- output = output[len(prompt):]
- # clean after subtract prompt out, so correct removal of pre_response
- output = clean_response(output)
- if self.repeat_penalty:
- output = clean_repeats(output)
- if self.terminate_response and allow_terminate:
- finds = []
- for term in self.terminate_response:
- finds.append(output.find(term))
- finds = [x for x in finds if x >= 0]
- if len(finds) > 0:
- termi = finds[0]
- output = output[:termi]
- else:
- output = output
- if multi_output:
- # prefix with output counter
- output = "\n=========== Output %d\n\n" % (1 + oi) + output
- if oi > 0:
-                    # postfix outputs with separator
- output += '\n'
- output = self.fix_text(self.prompt_type, output)
- outputs[oi] = output
- # join all outputs, only one extra new line between outputs
- output = '\n'.join(outputs)
- if self.debug:
- print("outputclean:\n%s" % '\n\n'.join(outputs), flush=True)
- return output
-
- @staticmethod
- def fix_text(prompt_type1, text1):
- if prompt_type1 == 'human_bot':
- # hack bug in vLLM with stopping, stops right, but doesn't return last token
-            hfix = '
Critical Path Method Scheduling: A Powerful Technique for Project Management
-
Do you have a complex project that involves multiple tasks, dependencies, and deadlines? Do you want to plan, organize, and execute your project in the most efficient and effective way possible? If so, you may want to learn about critical path method scheduling, a proven technique that can help you achieve your project goals.
-
Critical path method scheduling (CPM) is a mathematical algorithm that analyzes the sequence, duration, and interrelationships of the tasks that make up a project. It identifies the critical path, which is the longest chain of dependent tasks that determines the minimum time required to complete the project. It also calculates the amount of slack or float for each task, which is the maximum time that a task can be delayed without affecting the project completion date.
By using CPM, you can optimize your resource allocation, reduce your project risk, improve your project quality, and enhance your communication and collaboration with your team members and stakeholders. In this article, we will explain what CPM is, how it works, and how you can apply it to your own projects.
-
What is Critical Path Method Scheduling?
-
CPM was developed in the 1950s by Morgan Walker and James Kelley Jr., who were working on projects for DuPont and Remington Rand respectively. They realized that they needed a systematic way to schedule complex projects that involved many interdependent activities. They came up with a method that used network diagrams to represent the project activities and their logical relationships. They also devised a way to calculate the earliest and latest start and finish times for each activity, as well as the critical path and the float.
-
A network diagram is a graphical representation of a project that shows all the activities and their dependencies using nodes and arrows. Each node represents an activity, which is a discrete unit of work that consumes time and resources. Each arrow represents a dependency, which is a logical relationship that indicates which activity must precede or follow another activity. For example, if activity A must be finished before activity B can start, then there is a finish-to-start dependency between A and B.
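As a small illustration (not from the original article), such a network can be captured in code as a mapping from each activity to its duration and its list of finish-to-start predecessors. The activity names and durations below are invented for the example.

```python
# Toy activity network (invented values): each entry records a duration and the
# activities that must finish before this one can start (finish-to-start links).
activities = {
    "A": {"duration": 3, "predecessors": []},          # e.g. prepare the site
    "B": {"duration": 4, "predecessors": ["A"]},        # e.g. lay the foundation
    "C": {"duration": 2, "predecessors": ["A"]},        # e.g. order materials
    "D": {"duration": 5, "predecessors": ["B", "C"]},   # e.g. build the walls
}
```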
-
The duration of each activity is estimated based on historical data, expert judgment, or other methods. The duration can be expressed in any unit of time, such as hours, days, weeks, or months. The duration can also be deterministic or probabilistic. A deterministic duration is a fixed value that does not change. A probabilistic duration is a range of values that reflects the uncertainty or variability of the activity.
-
The critical path is the longest path of dependent activities in the network diagram. It represents the shortest possible time to complete the project. Any delay in any activity on the critical path will directly affect the project completion date. The activities on the critical path are called critical activities, and they have zero float.
-
The float or slack of an activity is the amount of time that an activity can be delayed without affecting the project completion date. It is calculated by subtracting the earliest start time from the latest start time or the earliest finish time from the latest finish time of an activity. The float can be positive or negative. A positive float means that an activity has some flexibility and can be delayed up to that amount without affecting the project completion date. A negative float means that an activity is already behind schedule and must be expedited to avoid delaying the project completion date.
-
How to Use Critical Path Method Scheduling for Your Project?
-
To use CPM for your project scheduling, you need to follow these steps (a short code sketch after the list illustrates the forward pass, backward pass, and float calculations):
-
-
Define your project scope and objectives.
-
Identify all the activities that are required to complete your project.
-
Estimate the duration of each activity based on available resources and information.
-
Determine the logical relationships and dependencies among the activities using one of these types: finish-to-start (FS), start-to-start (SS), finish-to-finish (FF), or start-to-finish (SF).
-
Draw a network diagram that shows all the activities and their dependencies using nodes and arrows.
-
Calculate the earliest start time (ES) and earliest finish time (EF) for each activity by performing a forward pass through the network diagram. Start with ES = 0 for the first activity and add its duration to get its EF. Then move to the next activity and assign its ES as the maximum EF of its predecessors. Repeat this process until you reach the last activity.
-
Calculate the latest start time (LS) and latest finish time (LF) for each activity by performing a backward pass through the network diagram. Start with LF = EF for the last activity and subtract its duration to get its LS. Then move to the previous activity and assign its LF as the minimum LS of its successors. Repeat this process until you reach the first activity.
-
Calculate the float or slack for each activity by subtracting its ES from its LS or its EF from its LF.
-
Identify the critical path by tracing all the activities that have zero float from start to finish.
-
Monitor and control your project schedule by updating your network diagram with actual progress data, revising your estimates if necessary, resolving any issues or risks that may cause delays, and taking corrective actions if any deviations occur.
-
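The following is a minimal sketch of those calculations in Python, reusing the toy activity dictionary introduced earlier (repeated here so the snippet stands alone). The function names and the example network are invented for illustration, and the dependency graph is assumed to be small and free of cycles.

```python
# Sketch of the CPM forward pass (ES/EF), backward pass (LS/LF), float, and critical path.
activities = {
    "A": {"duration": 3, "predecessors": []},
    "B": {"duration": 4, "predecessors": ["A"]},
    "C": {"duration": 2, "predecessors": ["A"]},
    "D": {"duration": 5, "predecessors": ["B", "C"]},
}

def topological_order(acts):
    # Naive ordering: repeatedly place activities whose predecessors are all placed.
    # Assumes the dependency graph has no cycles.
    order, placed = [], set()
    while len(order) < len(acts):
        for name, act in acts.items():
            if name not in placed and all(p in placed for p in act["predecessors"]):
                order.append(name)
                placed.add(name)
    return order

def cpm(acts):
    order = topological_order(acts)

    # Forward pass: earliest start (ES) and earliest finish (EF).
    es, ef = {}, {}
    for name in order:
        es[name] = max((ef[p] for p in acts[name]["predecessors"]), default=0)
        ef[name] = es[name] + acts[name]["duration"]
    project_duration = max(ef.values())

    # Backward pass: latest finish (LF) and latest start (LS).
    successors = {name: [] for name in acts}
    for name, act in acts.items():
        for p in act["predecessors"]:
            successors[p].append(name)
    lf, ls = {}, {}
    for name in reversed(order):
        lf[name] = min((ls[s] for s in successors[name]), default=project_duration)
        ls[name] = lf[name] - acts[name]["duration"]

    # Float (slack) is LS - ES; critical activities have zero float.
    slack = {name: ls[name] - es[name] for name in acts}
    critical_path = [name for name in order if slack[name] == 0]
    return project_duration, slack, critical_path

duration, slack, critical_path = cpm(activities)
print(duration)       # 12: the minimum project length along A -> B -> D
print(slack)          # {'A': 0, 'B': 0, 'C': 2, 'D': 0}
print(critical_path)  # ['A', 'B', 'D']
```

For this toy network, activity C can slip by up to two time units without moving the finish date, while any delay on A, B, or D pushes the whole project back, which is exactly the distinction between float and the critical path described above.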
-
What are Some Resources for Critical Path Method Scheduling PDF Download?
-
If you want to learn more about CPM or download some useful resources for applying it to your projects, you can visit some of these websites:
-
-
-
ResearchGate: This is a platform where you can find academic papers, articles, books, and reports on various topics related to CPM. You can also connect with other researchers and experts who can answer your questions or share their insights.
-
CIMT: This is a website where you can find a chapter on critical path analysis from a discrete mathematics textbook. You can also access other chapters and exercises on related topics.
-
NASA: This is a website where you can find a presentation on CPM scheduling basics by Harry Sparrow, a project management consultant. You can also access other information and resources on NASA's projects and programs.
-
-
What are Some Examples of Critical Path Method Scheduling?
-
To illustrate how CPM works in practice, let us look at some examples of projects that use this technique for scheduling.
-
One example is the construction of a house, which involves many activities such as preparing the site, laying the foundation, building the walls, installing the roof, plumbing, electrical wiring, painting, etc. Each activity has a certain duration and depends on other activities to be completed before it can start. For instance, plumbing cannot start until the walls are built, and painting cannot start until the plumbing is done. By using CPM, the project manager can identify the critical path of activities that determines the minimum time to finish the house, and also the float of non-critical activities that can be delayed without affecting the project deadline.
-
Another example is the development of a software product, which involves many tasks such as planning, designing, coding, testing, debugging, documenting, deploying, etc. Each task has a certain effort and depends on other tasks to be completed before it can start. For example, coding cannot start until designing is done, and testing cannot start until coding is done. By using CPM, the software engineer can identify the critical path of tasks that determines the minimum time to deliver the product, and also the slack of non-critical tasks that can be postponed without affecting the product quality.
-
What are Some Challenges and Limitations of Critical Path Method Scheduling?
-
While CPM is a useful and widely used technique for project scheduling, it also has some challenges and limitations that need to be considered.
-
-
One challenge is to define and estimate the activities and their durations accurately and realistically. This may require a lot of data collection, analysis, and judgment from various sources and experts. If the estimates are too optimistic or pessimistic, they may affect the validity and reliability of the network diagram and the critical path.
-
Another challenge is to update and revise the network diagram and the critical path as the project progresses and changes occur. This may require constant monitoring and control of the project status, performance, issues, risks, and changes. If the network diagram and the critical path are not updated regularly and accurately, they may lose their relevance and usefulness for project management.
-
One limitation is that CPM assumes that the activity durations are fixed and deterministic. However, in reality, many activities may have uncertain or variable durations due to factors such as weather conditions, resource availability, human errors, etc. In such cases, CPM may not reflect the true probability distribution of the project completion time.
-
Another limitation is that CPM focuses on time as the main criterion for project scheduling. However, in reality, there may be other criteria that are equally or more important for project success, such as cost, quality, scope, risk, stakeholder satisfaction, etc. In such cases, CPM may not capture the trade-offs and balances among these criteria.
-
-
How to Download Critical Path Method Scheduling PDF?
-
If you want to download a PDF file that contains more information and examples on CPM scheduling, you can click on this link: Scheduling Network by Critical Path Method. This is a paper by Dr. Rakesh Kumar Sharma that explains how to apply CPM to various types of projects using network diagrams and calculations.
-
You can also download other PDF files that are related to CPM scheduling from these websites:
-
-
Critical Path Method (CPM): This is a paper by Abas Khan and Mohammad Sarwar Mir that provides an overview of CPM history, definition, algorithm, advantages, disadvantages, and applications.
-
Critical Path Analysis: This is a chapter from a discrete mathematics textbook by CIMT that introduces CPM concepts such as activity networks, precedence relations, earliest and latest times, float or slack, critical path identification.
-
Critical Path Method Scheduling Basics: This is a presentation by Harry Sparrow for the NASA Project Management Conference that covers CPM topics such as network diagram types and relationship types.
-
-
Conclusion
-
Critical path method scheduling is a powerful technique that can help you plan, organize, and execute your projects in the most efficient and effective way possible. It can help you identify the critical path of activities that determines the minimum time to complete your project, and also the float of non-critical activities that gives you some flexibility and buffer. It can help you optimize your resource allocation, reduce your project risk, improve your project quality, and enhance your communication and collaboration with your team members and stakeholders.
-
To use CPM for your project scheduling, you need to define your project scope and objectives, identify all the activities and their durations and dependencies, draw a network diagram that shows all the activities and their relationships, calculate the earliest and latest start and finish times for each activity, calculate the float or slack for each activity, identify the critical path of activities, and monitor and control your project schedule. You can also use various software tools or online platforms to create and edit your network diagram and perform CPM calculations.
-
If you want to learn more about CPM or download some useful resources for applying it to your projects, you can visit some of the websites that we have mentioned in this article. You can also find other websites that offer more information and examples on CPM scheduling. We hope that this article has given you a good introduction to CPM scheduling and how to use it for your project management.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/innnky/nyaru-svc2.0/attentions.py b/spaces/innnky/nyaru-svc2.0/attentions.py
deleted file mode 100644
index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000
--- a/spaces/innnky/nyaru-svc2.0/attentions.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-    # pad along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Goanimate Free Download Crack For 267 HOT!.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Goanimate Free Download Crack For 267 HOT!.md
deleted file mode 100644
index e9d481439907815dcada71191bf4858f5f1a8786..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Goanimate Free Download Crack For 267 HOT!.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-Sensual Harassment (2015) HD Porn Movie ... To play Movie Click on Play icon on Player 2-3 times until Movie Starts, During this Few Useless ... 4d29de3e1b
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/1st Studio Siberian Mouse Hd 125 Tor VERIFIED.md b/spaces/inreVtussa/clothingai/Examples/1st Studio Siberian Mouse Hd 125 Tor VERIFIED.md
deleted file mode 100644
index 3a2305054423e317b9a95af47dadfb16251029af..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/1st Studio Siberian Mouse Hd 125 Tor VERIFIED.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-New Victor Records demonstrated at all dealers on the 1st of each month. ... r Heroes, containing 12 portraits in full color and 125 in black and white, biographies, ... a throne; that is, if you use the word 'beetle' as including a field mouse or a weasel. ... May I venture to think that the increase in your Siberian forces points to a ... 1fdad05405
-
-
-
diff --git a/spaces/ivntl/MMS/uroman/lib/JSON/backportPP.pm b/spaces/ivntl/MMS/uroman/lib/JSON/backportPP.pm
deleted file mode 100644
index db4f8bbb3b741e95c5817edde612718af0f889e4..0000000000000000000000000000000000000000
--- a/spaces/ivntl/MMS/uroman/lib/JSON/backportPP.pm
+++ /dev/null
@@ -1,2806 +0,0 @@
-package # This is JSON::backportPP
- JSON::PP;
-
-# JSON-2.0
-
-use 5.005;
-use strict;
-use base qw(Exporter);
-use overload ();
-
-use Carp ();
-use B ();
-#use Devel::Peek;
-
-use vars qw($VERSION);
-$VERSION = '2.27204';
-
-@JSON::PP::EXPORT = qw(encode_json decode_json from_json to_json);
-
-# instead of hash-access, i tried index-access for speed.
-# but this method is not faster than what i expected. so it will be changed.
-
-use constant P_ASCII => 0;
-use constant P_LATIN1 => 1;
-use constant P_UTF8 => 2;
-use constant P_INDENT => 3;
-use constant P_CANONICAL => 4;
-use constant P_SPACE_BEFORE => 5;
-use constant P_SPACE_AFTER => 6;
-use constant P_ALLOW_NONREF => 7;
-use constant P_SHRINK => 8;
-use constant P_ALLOW_BLESSED => 9;
-use constant P_CONVERT_BLESSED => 10;
-use constant P_RELAXED => 11;
-
-use constant P_LOOSE => 12;
-use constant P_ALLOW_BIGNUM => 13;
-use constant P_ALLOW_BAREKEY => 14;
-use constant P_ALLOW_SINGLEQUOTE => 15;
-use constant P_ESCAPE_SLASH => 16;
-use constant P_AS_NONBLESSED => 17;
-
-use constant P_ALLOW_UNKNOWN => 18;
-
-use constant OLD_PERL => $] < 5.008 ? 1 : 0;
-
-BEGIN {
- my @xs_compati_bit_properties = qw(
- latin1 ascii utf8 indent canonical space_before space_after allow_nonref shrink
- allow_blessed convert_blessed relaxed allow_unknown
- );
- my @pp_bit_properties = qw(
- allow_singlequote allow_bignum loose
- allow_barekey escape_slash as_nonblessed
- );
-
- # Perl version check, Unicode handling is enable?
- # Helper module sets @JSON::PP::_properties.
- if ($] < 5.008 ) {
- my $helper = $] >= 5.006 ? 'JSON::backportPP::Compat5006' : 'JSON::backportPP::Compat5005';
- eval qq| require $helper |;
- if ($@) { Carp::croak $@; }
- }
-
- for my $name (@xs_compati_bit_properties, @pp_bit_properties) {
- my $flag_name = 'P_' . uc($name);
-
- eval qq/
- sub $name {
- my \$enable = defined \$_[1] ? \$_[1] : 1;
-
- if (\$enable) {
- \$_[0]->{PROPS}->[$flag_name] = 1;
- }
- else {
- \$_[0]->{PROPS}->[$flag_name] = 0;
- }
-
- \$_[0];
- }
-
- sub get_$name {
- \$_[0]->{PROPS}->[$flag_name] ? 1 : '';
- }
- /;
- }
-
-}
-
-
-
-# Functions
-
-my %encode_allow_method
- = map {($_ => 1)} qw/utf8 pretty allow_nonref latin1 self_encode escape_slash
- allow_blessed convert_blessed indent indent_length allow_bignum
- as_nonblessed
- /;
-my %decode_allow_method
- = map {($_ => 1)} qw/utf8 allow_nonref loose allow_singlequote allow_bignum
- allow_barekey max_size relaxed/;
-
-
-my $JSON; # cache
-
-sub encode_json ($) { # encode
- ($JSON ||= __PACKAGE__->new->utf8)->encode(@_);
-}
-
-
-sub decode_json { # decode
- ($JSON ||= __PACKAGE__->new->utf8)->decode(@_);
-}
-
-# Obsoleted
-
-sub to_json($) {
- Carp::croak ("JSON::PP::to_json has been renamed to encode_json.");
-}
-
-
-sub from_json($) {
- Carp::croak ("JSON::PP::from_json has been renamed to decode_json.");
-}
-
-
-# Methods
-
-sub new {
- my $class = shift;
- my $self = {
- max_depth => 512,
- max_size => 0,
- indent => 0,
- FLAGS => 0,
- fallback => sub { encode_error('Invalid value. JSON can only reference.') },
- indent_length => 3,
- };
-
- bless $self, $class;
-}
-
-
-sub encode {
- return $_[0]->PP_encode_json($_[1]);
-}
-
-
-sub decode {
- return $_[0]->PP_decode_json($_[1], 0x00000000);
-}
-
-
-sub decode_prefix {
- return $_[0]->PP_decode_json($_[1], 0x00000001);
-}
-
-
-# accessor
-
-
-# pretty printing
-
-sub pretty {
- my ($self, $v) = @_;
- my $enable = defined $v ? $v : 1;
-
- if ($enable) { # indent_length(3) for JSON::XS compatibility
- $self->indent(1)->indent_length(3)->space_before(1)->space_after(1);
- }
- else {
- $self->indent(0)->space_before(0)->space_after(0);
- }
-
- $self;
-}
-
-# etc
-
-sub max_depth {
- my $max = defined $_[1] ? $_[1] : 0x80000000;
- $_[0]->{max_depth} = $max;
- $_[0];
-}
-
-
-sub get_max_depth { $_[0]->{max_depth}; }
-
-
-sub max_size {
- my $max = defined $_[1] ? $_[1] : 0;
- $_[0]->{max_size} = $max;
- $_[0];
-}
-
-
-sub get_max_size { $_[0]->{max_size}; }
-
-
-sub filter_json_object {
- $_[0]->{cb_object} = defined $_[1] ? $_[1] : 0;
- $_[0]->{F_HOOK} = ($_[0]->{cb_object} or $_[0]->{cb_sk_object}) ? 1 : 0;
- $_[0];
-}
-
-sub filter_json_single_key_object {
- if (@_ > 1) {
- $_[0]->{cb_sk_object}->{$_[1]} = $_[2];
- }
- $_[0]->{F_HOOK} = ($_[0]->{cb_object} or $_[0]->{cb_sk_object}) ? 1 : 0;
- $_[0];
-}
-
-sub indent_length {
- if (!defined $_[1] or $_[1] > 15 or $_[1] < 0) {
- Carp::carp "The acceptable range of indent_length() is 0 to 15.";
- }
- else {
- $_[0]->{indent_length} = $_[1];
- }
- $_[0];
-}
-
-sub get_indent_length {
- $_[0]->{indent_length};
-}
-
-sub sort_by {
- $_[0]->{sort_by} = defined $_[1] ? $_[1] : 1;
- $_[0];
-}
-
-sub allow_bigint {
- Carp::carp("allow_bigint() is obsoleted. use allow_bignum() insted.");
-}
-
-###############################
-
-###
-### Perl => JSON
-###
-
-
-{ # Convert
-
- my $max_depth;
- my $indent;
- my $ascii;
- my $latin1;
- my $utf8;
- my $space_before;
- my $space_after;
- my $canonical;
- my $allow_blessed;
- my $convert_blessed;
-
- my $indent_length;
- my $escape_slash;
- my $bignum;
- my $as_nonblessed;
-
- my $depth;
- my $indent_count;
- my $keysort;
-
-
- sub PP_encode_json {
- my $self = shift;
- my $obj = shift;
-
- $indent_count = 0;
- $depth = 0;
-
- my $idx = $self->{PROPS};
-
- ($ascii, $latin1, $utf8, $indent, $canonical, $space_before, $space_after, $allow_blessed,
- $convert_blessed, $escape_slash, $bignum, $as_nonblessed)
- = @{$idx}[P_ASCII .. P_SPACE_AFTER, P_ALLOW_BLESSED, P_CONVERT_BLESSED,
- P_ESCAPE_SLASH, P_ALLOW_BIGNUM, P_AS_NONBLESSED];
-
- ($max_depth, $indent_length) = @{$self}{qw/max_depth indent_length/};
-
- $keysort = $canonical ? sub { $a cmp $b } : undef;
-
- if ($self->{sort_by}) {
- $keysort = ref($self->{sort_by}) eq 'CODE' ? $self->{sort_by}
- : $self->{sort_by} =~ /\D+/ ? $self->{sort_by}
- : sub { $a cmp $b };
- }
-
- encode_error("hash- or arrayref expected (not a simple scalar, use allow_nonref to allow this)")
- if(!ref $obj and !$idx->[ P_ALLOW_NONREF ]);
-
- my $str = $self->object_to_json($obj);
-
- $str .= "\n" if ( $indent ); # JSON::XS 2.26 compatible
-
- unless ($ascii or $latin1 or $utf8) {
- utf8::upgrade($str);
- }
-
- if ($idx->[ P_SHRINK ]) {
- utf8::downgrade($str, 1);
- }
-
- return $str;
- }
-
-
- sub object_to_json {
- my ($self, $obj) = @_;
- my $type = ref($obj);
-
- if($type eq 'HASH'){
- return $self->hash_to_json($obj);
- }
- elsif($type eq 'ARRAY'){
- return $self->array_to_json($obj);
- }
- elsif ($type) { # blessed object?
- if (blessed($obj)) {
-
- return $self->value_to_json($obj) if ( $obj->isa('JSON::PP::Boolean') );
-
- if ( $convert_blessed and $obj->can('TO_JSON') ) {
- my $result = $obj->TO_JSON();
- if ( defined $result and ref( $result ) ) {
- if ( refaddr( $obj ) eq refaddr( $result ) ) {
- encode_error( sprintf(
- "%s::TO_JSON method returned same object as was passed instead of a new one",
- ref $obj
- ) );
- }
- }
-
- return $self->object_to_json( $result );
- }
-
- return "$obj" if ( $bignum and _is_bignum($obj) );
- return $self->blessed_to_json($obj) if ($allow_blessed and $as_nonblessed); # will be removed.
-
- encode_error( sprintf("encountered object '%s', but neither allow_blessed "
- . "nor convert_blessed settings are enabled", $obj)
- ) unless ($allow_blessed);
-
- return 'null';
- }
- else {
- return $self->value_to_json($obj);
- }
- }
- else{
- return $self->value_to_json($obj);
- }
- }
-
-
- sub hash_to_json {
- my ($self, $obj) = @_;
- my @res;
-
- encode_error("json text or perl structure exceeds maximum nesting level (max_depth set too low?)")
- if (++$depth > $max_depth);
-
- my ($pre, $post) = $indent ? $self->_up_indent() : ('', '');
- my $del = ($space_before ? ' ' : '') . ':' . ($space_after ? ' ' : '');
-
- for my $k ( _sort( $obj ) ) {
- if ( OLD_PERL ) { utf8::decode($k) } # key for Perl 5.6 / be optimized
- push @res, string_to_json( $self, $k )
- . $del
- . ( $self->object_to_json( $obj->{$k} ) || $self->value_to_json( $obj->{$k} ) );
- }
-
- --$depth;
- $self->_down_indent() if ($indent);
-
- return '{' . ( @res ? $pre : '' ) . ( @res ? join( ",$pre", @res ) . $post : '' ) . '}';
- }
-
-
- sub array_to_json {
- my ($self, $obj) = @_;
- my @res;
-
- encode_error("json text or perl structure exceeds maximum nesting level (max_depth set too low?)")
- if (++$depth > $max_depth);
-
- my ($pre, $post) = $indent ? $self->_up_indent() : ('', '');
-
- for my $v (@$obj){
- push @res, $self->object_to_json($v) || $self->value_to_json($v);
- }
-
- --$depth;
- $self->_down_indent() if ($indent);
-
- return '[' . ( @res ? $pre : '' ) . ( @res ? join( ",$pre", @res ) . $post : '' ) . ']';
- }
-
-
- sub value_to_json {
- my ($self, $value) = @_;
-
- return 'null' if(!defined $value);
-
- my $b_obj = B::svref_2object(\$value); # for round trip problem
- my $flags = $b_obj->FLAGS;
-
- return $value # as is
- if $flags & ( B::SVp_IOK | B::SVp_NOK ) and !( $flags & B::SVp_POK ); # SvTYPE is IV or NV?
-
- my $type = ref($value);
-
- if(!$type){
- return string_to_json($self, $value);
- }
- elsif( blessed($value) and $value->isa('JSON::PP::Boolean') ){
- return $$value == 1 ? 'true' : 'false';
- }
- elsif ($type) {
- if ((overload::StrVal($value) =~ /=(\w+)/)[0]) {
- return $self->value_to_json("$value");
- }
-
- if ($type eq 'SCALAR' and defined $$value) {
- return $$value eq '1' ? 'true'
- : $$value eq '0' ? 'false'
- : $self->{PROPS}->[ P_ALLOW_UNKNOWN ] ? 'null'
- : encode_error("cannot encode reference to scalar");
- }
-
- if ( $self->{PROPS}->[ P_ALLOW_UNKNOWN ] ) {
- return 'null';
- }
- else {
- if ( $type eq 'SCALAR' or $type eq 'REF' ) {
- encode_error("cannot encode reference to scalar");
- }
- else {
- encode_error("encountered $value, but JSON can only represent references to arrays or hashes");
- }
- }
-
- }
- else {
- return $self->{fallback}->($value)
- if ($self->{fallback} and ref($self->{fallback}) eq 'CODE');
- return 'null';
- }
-
- }
-
-
- my %esc = (
- "\n" => '\n',
- "\r" => '\r',
- "\t" => '\t',
- "\f" => '\f',
- "\b" => '\b',
- "\"" => '\"',
- "\\" => '\\\\',
- "\'" => '\\\'',
- );
-
-
- sub string_to_json {
- my ($self, $arg) = @_;
-
- $arg =~ s/([\x22\x5c\n\r\t\f\b])/$esc{$1}/g;
- $arg =~ s/\//\\\//g if ($escape_slash);
- $arg =~ s/([\x00-\x08\x0b\x0e-\x1f])/'\\u00' . unpack('H2', $1)/eg;
-
- if ($ascii) {
- $arg = JSON_PP_encode_ascii($arg);
- }
-
- if ($latin1) {
- $arg = JSON_PP_encode_latin1($arg);
- }
-
- if ($utf8) {
- utf8::encode($arg);
- }
-
- return '"' . $arg . '"';
- }
-
-
- sub blessed_to_json {
- my $reftype = reftype($_[1]) || '';
- if ($reftype eq 'HASH') {
- return $_[0]->hash_to_json($_[1]);
- }
- elsif ($reftype eq 'ARRAY') {
- return $_[0]->array_to_json($_[1]);
- }
- else {
- return 'null';
- }
- }
-
-
- sub encode_error {
- my $error = shift;
- Carp::croak "$error";
- }
-
-
- sub _sort {
- defined $keysort ? (sort $keysort (keys %{$_[0]})) : keys %{$_[0]};
- }
-
-
- sub _up_indent {
- my $self = shift;
- my $space = ' ' x $indent_length;
-
- my ($pre,$post) = ('','');
-
- $post = "\n" . $space x $indent_count;
-
- $indent_count++;
-
- $pre = "\n" . $space x $indent_count;
-
- return ($pre,$post);
- }
-
-
- sub _down_indent { $indent_count--; }
-
-
- sub PP_encode_box {
- {
- depth => $depth,
- indent_count => $indent_count,
- };
- }
-
-} # Convert
-
-
-sub _encode_ascii {
- join('',
- map {
- $_ <= 127 ?
- chr($_) :
- $_ <= 65535 ?
- sprintf('\u%04x', $_) : sprintf('\u%x\u%x', _encode_surrogates($_));
- } unpack('U*', $_[0])
- );
-}
-
-
-sub _encode_latin1 {
- join('',
- map {
- $_ <= 255 ?
- chr($_) :
- $_ <= 65535 ?
- sprintf('\u%04x', $_) : sprintf('\u%x\u%x', _encode_surrogates($_));
- } unpack('U*', $_[0])
- );
-}
-
-
-sub _encode_surrogates { # from perlunicode
- my $uni = $_[0] - 0x10000;
- return ($uni / 0x400 + 0xD800, $uni % 0x400 + 0xDC00);
-}
-
-
-sub _is_bignum {
- $_[0]->isa('Math::BigInt') or $_[0]->isa('Math::BigFloat');
-}
-
-
-
-#
-# JSON => Perl
-#
-
-my $max_intsize;
-
-BEGIN {
- my $checkint = 1111;
- for my $d (5..64) {
- $checkint .= 1;
- my $int = eval qq| $checkint |;
- if ($int =~ /[eE]/) {
- $max_intsize = $d - 1;
- last;
- }
- }
-}
-
-{ # PARSE
-
- my %escapes = ( # by Jeremy Muhlich
- b => "\x8",
- t => "\x9",
- n => "\xA",
- f => "\xC",
- r => "\xD",
- '\\' => '\\',
- '"' => '"',
- '/' => '/',
- );
-
- my $text; # json data
- my $at; # offset
-    my $ch; # 1 character
- my $len; # text length (changed according to UTF8 or NON UTF8)
- # INTERNAL
- my $depth; # nest counter
- my $encoding; # json text encoding
- my $is_valid_utf8; # temp variable
- my $utf8_len; # utf8 byte length
- # FLAGS
- my $utf8; # must be utf8
- my $max_depth; # max nest number of objects and arrays
- my $max_size;
- my $relaxed;
- my $cb_object;
- my $cb_sk_object;
-
- my $F_HOOK;
-
- my $allow_bigint; # using Math::BigInt
- my $singlequote; # loosely quoting
- my $loose; #
- my $allow_barekey; # bareKey
-
- # $opt flag
- # 0x00000001 .... decode_prefix
- # 0x10000000 .... incr_parse
-
- sub PP_decode_json {
- my ($self, $opt); # $opt is an effective flag during this decode_json.
-
- ($self, $text, $opt) = @_;
-
- ($at, $ch, $depth) = (0, '', 0);
-
- if ( !defined $text or ref $text ) {
- decode_error("malformed JSON string, neither array, object, number, string or atom");
- }
-
- my $idx = $self->{PROPS};
-
- ($utf8, $relaxed, $loose, $allow_bigint, $allow_barekey, $singlequote)
- = @{$idx}[P_UTF8, P_RELAXED, P_LOOSE .. P_ALLOW_SINGLEQUOTE];
-
- if ( $utf8 ) {
- utf8::downgrade( $text, 1 ) or Carp::croak("Wide character in subroutine entry");
- }
- else {
- utf8::upgrade( $text );
- }
-
- $len = length $text;
-
- ($max_depth, $max_size, $cb_object, $cb_sk_object, $F_HOOK)
- = @{$self}{qw/max_depth max_size cb_object cb_sk_object F_HOOK/};
-
- if ($max_size > 1) {
- use bytes;
- my $bytes = length $text;
- decode_error(
- sprintf("attempted decode of JSON text of %s bytes size, but max_size is set to %s"
- , $bytes, $max_size), 1
- ) if ($bytes > $max_size);
- }
-
- # Currently no effect
- # should use regexp
- my @octets = unpack('C4', $text);
- $encoding = ( $octets[0] and $octets[1]) ? 'UTF-8'
- : (!$octets[0] and $octets[1]) ? 'UTF-16BE'
- : (!$octets[0] and !$octets[1]) ? 'UTF-32BE'
- : ( $octets[2] ) ? 'UTF-16LE'
- : (!$octets[2] ) ? 'UTF-32LE'
- : 'unknown';
-
- white(); # remove head white space
-
- my $valid_start = defined $ch; # Is there a first character for JSON structure?
-
- my $result = value();
-
- return undef if ( !$result && ( $opt & 0x10000000 ) ); # for incr_parse
-
- decode_error("malformed JSON string, neither array, object, number, string or atom") unless $valid_start;
-
- if ( !$idx->[ P_ALLOW_NONREF ] and !ref $result ) {
- decode_error(
- 'JSON text must be an object or array (but found number, string, true, false or null,'
- . ' use allow_nonref to allow this)', 1);
- }
-
- Carp::croak('something wrong.') if $len < $at; # we won't arrive here.
-
- my $consumed = defined $ch ? $at - 1 : $at; # consumed JSON text length
-
- white(); # remove tail white space
-
- if ( $ch ) {
- return ( $result, $consumed ) if ($opt & 0x00000001); # all right if decode_prefix
- decode_error("garbage after JSON object");
- }
-
- ( $opt & 0x00000001 ) ? ( $result, $consumed ) : $result;
- }
-
-
- sub next_chr {
- return $ch = undef if($at >= $len);
- $ch = substr($text, $at++, 1);
- }
-
-
- sub value {
- white();
- return if(!defined $ch);
- return object() if($ch eq '{');
- return array() if($ch eq '[');
- return string() if($ch eq '"' or ($singlequote and $ch eq "'"));
- return number() if($ch =~ /[0-9]/ or $ch eq '-');
- return word();
- }
-
- sub string {
- my ($i, $s, $t, $u);
- my $utf16;
- my $is_utf8;
-
- ($is_valid_utf8, $utf8_len) = ('', 0);
-
- $s = ''; # basically UTF8 flag on
-
- if($ch eq '"' or ($singlequote and $ch eq "'")){
- my $boundChar = $ch;
-
- OUTER: while( defined(next_chr()) ){
-
- if($ch eq $boundChar){
- next_chr();
-
- if ($utf16) {
- decode_error("missing low surrogate character in surrogate pair");
- }
-
- utf8::decode($s) if($is_utf8);
-
- return $s;
- }
- elsif($ch eq '\\'){
- next_chr();
- if(exists $escapes{$ch}){
- $s .= $escapes{$ch};
- }
- elsif($ch eq 'u'){ # UNICODE handling
- my $u = '';
-
- for(1..4){
- $ch = next_chr();
- last OUTER if($ch !~ /[0-9a-fA-F]/);
- $u .= $ch;
- }
-
- # U+D800 - U+DBFF
- if ($u =~ /^[dD][89abAB][0-9a-fA-F]{2}/) { # UTF-16 high surrogate?
- $utf16 = $u;
- }
- # U+DC00 - U+DFFF
- elsif ($u =~ /^[dD][c-fC-F][0-9a-fA-F]{2}/) { # UTF-16 low surrogate?
- unless (defined $utf16) {
- decode_error("missing high surrogate character in surrogate pair");
- }
- $is_utf8 = 1;
- $s .= JSON_PP_decode_surrogates($utf16, $u) || next;
- $utf16 = undef;
- }
- else {
- if (defined $utf16) {
- decode_error("surrogate pair expected");
- }
-
- if ( ( my $hex = hex( $u ) ) > 127 ) {
- $is_utf8 = 1;
- $s .= JSON_PP_decode_unicode($u) || next;
- }
- else {
- $s .= chr $hex;
- }
- }
-
- }
- else{
- unless ($loose) {
- $at -= 2;
- decode_error('illegal backslash escape sequence in string');
- }
- $s .= $ch;
- }
- }
- else{
-
- if ( ord $ch > 127 ) {
- if ( $utf8 ) {
- unless( $ch = is_valid_utf8($ch) ) {
- $at -= 1;
- decode_error("malformed UTF-8 character in JSON string");
- }
- else {
- $at += $utf8_len - 1;
- }
- }
- else {
- utf8::encode( $ch );
- }
-
- $is_utf8 = 1;
- }
-
- if (!$loose) {
- if ($ch =~ /[\x00-\x1f\x22\x5c]/) { # '/' ok
- $at--;
- decode_error('invalid character encountered while parsing JSON string');
- }
- }
-
- $s .= $ch;
- }
- }
- }
-
- decode_error("unexpected end of string while parsing JSON string");
- }
-
-
- sub white {
- while( defined $ch ){
- if($ch le ' '){
- next_chr();
- }
- elsif($ch eq '/'){
- next_chr();
- if(defined $ch and $ch eq '/'){
- 1 while(defined(next_chr()) and $ch ne "\n" and $ch ne "\r");
- }
- elsif(defined $ch and $ch eq '*'){
- next_chr();
- while(1){
- if(defined $ch){
- if($ch eq '*'){
- if(defined(next_chr()) and $ch eq '/'){
- next_chr();
- last;
- }
- }
- else{
- next_chr();
- }
- }
- else{
- decode_error("Unterminated comment");
- }
- }
- next;
- }
- else{
- $at--;
- decode_error("malformed JSON string, neither array, object, number, string or atom");
- }
- }
- else{
- if ($relaxed and $ch eq '#') { # correctly?
- pos($text) = $at;
- $text =~ /\G([^\n]*(?:\r\n|\r|\n|$))/g;
- $at = pos($text);
- next_chr;
- next;
- }
-
- last;
- }
- }
- }
-
-
- sub array {
- my $a = $_[0] || []; # you can use this code to use another array ref object.
-
- decode_error('json text or perl structure exceeds maximum nesting level (max_depth set too low?)')
- if (++$depth > $max_depth);
-
- next_chr();
- white();
-
- if(defined $ch and $ch eq ']'){
- --$depth;
- next_chr();
- return $a;
- }
- else {
- while(defined($ch)){
- push @$a, value();
-
- white();
-
- if (!defined $ch) {
- last;
- }
-
- if($ch eq ']'){
- --$depth;
- next_chr();
- return $a;
- }
-
- if($ch ne ','){
- last;
- }
-
- next_chr();
- white();
-
- if ($relaxed and $ch eq ']') {
- --$depth;
- next_chr();
- return $a;
- }
-
- }
- }
-
- decode_error(", or ] expected while parsing array");
- }
-
-
- sub object {
- my $o = $_[0] || {}; # you can use this code to use another hash ref object.
- my $k;
-
- decode_error('json text or perl structure exceeds maximum nesting level (max_depth set too low?)')
- if (++$depth > $max_depth);
- next_chr();
- white();
-
- if(defined $ch and $ch eq '}'){
- --$depth;
- next_chr();
- if ($F_HOOK) {
- return _json_object_hook($o);
- }
- return $o;
- }
- else {
- while (defined $ch) {
- $k = ($allow_barekey and $ch ne '"' and $ch ne "'") ? bareKey() : string();
- white();
-
- if(!defined $ch or $ch ne ':'){
- $at--;
- decode_error("':' expected");
- }
-
- next_chr();
- $o->{$k} = value();
- white();
-
- last if (!defined $ch);
-
- if($ch eq '}'){
- --$depth;
- next_chr();
- if ($F_HOOK) {
- return _json_object_hook($o);
- }
- return $o;
- }
-
- if($ch ne ','){
- last;
- }
-
- next_chr();
- white();
-
- if ($relaxed and $ch eq '}') {
- --$depth;
- next_chr();
- if ($F_HOOK) {
- return _json_object_hook($o);
- }
- return $o;
- }
-
- }
-
- }
-
- $at--;
- decode_error(", or } expected while parsing object/hash");
- }
-
-
- sub bareKey { # doesn't strictly follow Standard ECMA-262 3rd Edition
- my $key;
- while($ch =~ /[^\x00-\x23\x25-\x2F\x3A-\x40\x5B-\x5E\x60\x7B-\x7F]/){
- $key .= $ch;
- next_chr();
- }
- return $key;
- }
-
-
- sub word {
- my $word = substr($text,$at-1,4);
-
- if($word eq 'true'){
- $at += 3;
- next_chr;
- return $JSON::PP::true;
- }
- elsif($word eq 'null'){
- $at += 3;
- next_chr;
- return undef;
- }
- elsif($word eq 'fals'){
- $at += 3;
- if(substr($text,$at,1) eq 'e'){
- $at++;
- next_chr;
- return $JSON::PP::false;
- }
- }
-
- $at--; # for decode_error report
-
- decode_error("'null' expected") if ($word =~ /^n/);
- decode_error("'true' expected") if ($word =~ /^t/);
- decode_error("'false' expected") if ($word =~ /^f/);
- decode_error("malformed JSON string, neither array, object, number, string or atom");
- }
-
-
- sub number {
- my $n = '';
- my $v;
-
- # According to RFC4627, hex or oct digits are invalid.
- if($ch eq '0'){
- my $peek = substr($text,$at,1);
- my $hex = $peek =~ /[xX]/; # 0 or 1
-
- if($hex){
- decode_error("malformed number (leading zero must not be followed by another digit)");
- ($n) = ( substr($text, $at+1) =~ /^([0-9a-fA-F]+)/);
- }
- else{ # oct
- ($n) = ( substr($text, $at) =~ /^([0-7]+)/);
- if (defined $n and length $n > 1) {
- decode_error("malformed number (leading zero must not be followed by another digit)");
- }
- }
-
- if(defined $n and length($n)){
- if (!$hex and length($n) == 1) {
- decode_error("malformed number (leading zero must not be followed by another digit)");
- }
- $at += length($n) + $hex;
- next_chr;
- return $hex ? hex($n) : oct($n);
- }
- }
-
- if($ch eq '-'){
- $n = '-';
- next_chr;
- if (!defined $ch or $ch !~ /\d/) {
- decode_error("malformed number (no digits after initial minus)");
- }
- }
-
- while(defined $ch and $ch =~ /\d/){
- $n .= $ch;
- next_chr;
- }
-
- if(defined $ch and $ch eq '.'){
- $n .= '.';
-
- next_chr;
- if (!defined $ch or $ch !~ /\d/) {
- decode_error("malformed number (no digits after decimal point)");
- }
- else {
- $n .= $ch;
- }
-
- while(defined(next_chr) and $ch =~ /\d/){
- $n .= $ch;
- }
- }
-
- if(defined $ch and ($ch eq 'e' or $ch eq 'E')){
- $n .= $ch;
- next_chr;
-
- if(defined($ch) and ($ch eq '+' or $ch eq '-')){
- $n .= $ch;
- next_chr;
- if (!defined $ch or $ch =~ /\D/) {
- decode_error("malformed number (no digits after exp sign)");
- }
- $n .= $ch;
- }
- elsif(defined($ch) and $ch =~ /\d/){
- $n .= $ch;
- }
- else {
- decode_error("malformed number (no digits after exp sign)");
- }
-
- while(defined(next_chr) and $ch =~ /\d/){
- $n .= $ch;
- }
-
- }
-
- $v .= $n;
-
- if ($v !~ /[.eE]/ and length $v > $max_intsize) {
- if ($allow_bigint) { # from Adam Sussman
- require Math::BigInt;
- return Math::BigInt->new($v);
- }
- else {
- return "$v";
- }
- }
- elsif ($allow_bigint) {
- require Math::BigFloat;
- return Math::BigFloat->new($v);
- }
-
- return 0+$v;
- }
-
-
- sub is_valid_utf8 {
-
- $utf8_len = $_[0] =~ /[\x00-\x7F]/ ? 1
- : $_[0] =~ /[\xC2-\xDF]/ ? 2
- : $_[0] =~ /[\xE0-\xEF]/ ? 3
- : $_[0] =~ /[\xF0-\xF4]/ ? 4
- : 0
- ;
-
- return unless $utf8_len;
-
- my $is_valid_utf8 = substr($text, $at - 1, $utf8_len);
-
- return ( $is_valid_utf8 =~ /^(?:
- [\x00-\x7F]
- |[\xC2-\xDF][\x80-\xBF]
- |[\xE0][\xA0-\xBF][\x80-\xBF]
- |[\xE1-\xEC][\x80-\xBF][\x80-\xBF]
- |[\xED][\x80-\x9F][\x80-\xBF]
- |[\xEE-\xEF][\x80-\xBF][\x80-\xBF]
- |[\xF0][\x90-\xBF][\x80-\xBF][\x80-\xBF]
- |[\xF1-\xF3][\x80-\xBF][\x80-\xBF][\x80-\xBF]
- |[\xF4][\x80-\x8F][\x80-\xBF][\x80-\xBF]
- )$/x ) ? $is_valid_utf8 : '';
- }
-
-
- sub decode_error {
- my $error = shift;
- my $no_rep = shift;
- my $str = defined $text ? substr($text, $at) : '';
- my $mess = '';
- my $type = $] >= 5.008 ? 'U*'
- : $] < 5.006 ? 'C*'
- : utf8::is_utf8( $str ) ? 'U*' # 5.6
- : 'C*'
- ;
-
- for my $c ( unpack( $type, $str ) ) { # emulate pv_uni_display() ?
- $mess .= $c == 0x07 ? '\a'
- : $c == 0x09 ? '\t'
- : $c == 0x0a ? '\n'
- : $c == 0x0d ? '\r'
- : $c == 0x0c ? '\f'
- : $c < 0x20 ? sprintf('\x{%x}', $c)
- : $c == 0x5c ? '\\\\'
- : $c < 0x80 ? chr($c)
- : sprintf('\x{%x}', $c)
- ;
- if ( length $mess >= 20 ) {
- $mess .= '...';
- last;
- }
- }
-
- unless ( length $mess ) {
- $mess = '(end of string)';
- }
-
- Carp::croak (
- $no_rep ? "$error" : "$error, at character offset $at (before \"$mess\")"
- );
-
- }
-
-
- sub _json_object_hook {
- my $o = $_[0];
- my @ks = keys %{$o};
-
- if ( $cb_sk_object and @ks == 1 and exists $cb_sk_object->{ $ks[0] } and ref $cb_sk_object->{ $ks[0] } ) {
- my @val = $cb_sk_object->{ $ks[0] }->( $o->{$ks[0]} );
- if (@val == 1) {
- return $val[0];
- }
- }
-
- my @val = $cb_object->($o) if ($cb_object);
- if (@val == 0 or @val > 1) {
- return $o;
- }
- else {
- return $val[0];
- }
- }
-
-
- sub PP_decode_box {
- {
- text => $text,
- at => $at,
- ch => $ch,
- len => $len,
- depth => $depth,
- encoding => $encoding,
- is_valid_utf8 => $is_valid_utf8,
- };
- }
-
-} # PARSE
-
-
-sub _decode_surrogates { # from perlunicode
- my $uni = 0x10000 + (hex($_[0]) - 0xD800) * 0x400 + (hex($_[1]) - 0xDC00);
- my $un = pack('U*', $uni);
- utf8::encode( $un );
- return $un;
-}
-
-
-sub _decode_unicode {
- my $un = pack('U', hex shift);
- utf8::encode( $un );
- return $un;
-}
-
-#
-# Setup for various Perl versions (the code from JSON::PP58)
-#
-
-BEGIN {
-
- unless ( defined &utf8::is_utf8 ) {
- require Encode;
- *utf8::is_utf8 = *Encode::is_utf8;
- }
-
- if ( $] >= 5.008 ) {
- *JSON::PP::JSON_PP_encode_ascii = \&_encode_ascii;
- *JSON::PP::JSON_PP_encode_latin1 = \&_encode_latin1;
- *JSON::PP::JSON_PP_decode_surrogates = \&_decode_surrogates;
- *JSON::PP::JSON_PP_decode_unicode = \&_decode_unicode;
- }
-
- if ($] >= 5.008 and $] < 5.008003) { # join() in 5.8.0 - 5.8.2 is broken.
- package # hide from PAUSE
- JSON::PP;
- require subs;
- subs->import('join');
- eval q|
- sub join {
- return '' if (@_ < 2);
- my $j = shift;
- my $str = shift;
- for (@_) { $str .= $j . $_; }
- return $str;
- }
- |;
- }
-
-
- sub JSON::PP::incr_parse {
- local $Carp::CarpLevel = 1;
- ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_parse( @_ );
- }
-
-
- sub JSON::PP::incr_skip {
- ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_skip;
- }
-
-
- sub JSON::PP::incr_reset {
- ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_reset;
- }
-
- eval q{
- sub JSON::PP::incr_text : lvalue {
- $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new;
-
- if ( $_[0]->{_incr_parser}->{incr_parsing} ) {
- Carp::croak("incr_text can not be called when the incremental parser already started parsing");
- }
- $_[0]->{_incr_parser}->{incr_text};
- }
- } if ( $] >= 5.006 );
-
-} # Setup for various Perl versions (the code from JSON::PP58)
-
-
-###############################
-# Utilities
-#
-
-BEGIN {
- eval 'require Scalar::Util';
- unless($@){
- *JSON::PP::blessed = \&Scalar::Util::blessed;
- *JSON::PP::reftype = \&Scalar::Util::reftype;
- *JSON::PP::refaddr = \&Scalar::Util::refaddr;
- }
- else{ # This code is from Scalar::Util.
- # warn $@;
- eval 'sub UNIVERSAL::a_sub_not_likely_to_be_here { ref($_[0]) }';
- *JSON::PP::blessed = sub {
- local($@, $SIG{__DIE__}, $SIG{__WARN__});
- ref($_[0]) ? eval { $_[0]->a_sub_not_likely_to_be_here } : undef;
- };
- my %tmap = qw(
- B::NULL SCALAR
- B::HV HASH
- B::AV ARRAY
- B::CV CODE
- B::IO IO
- B::GV GLOB
- B::REGEXP REGEXP
- );
- *JSON::PP::reftype = sub {
- my $r = shift;
-
- return undef unless length(ref($r));
-
- my $t = ref(B::svref_2object($r));
-
- return
- exists $tmap{$t} ? $tmap{$t}
- : length(ref($$r)) ? 'REF'
- : 'SCALAR';
- };
- *JSON::PP::refaddr = sub {
- return undef unless length(ref($_[0]));
-
- my $addr;
- if(defined(my $pkg = blessed($_[0]))) {
- $addr .= bless $_[0], 'Scalar::Util::Fake';
- bless $_[0], $pkg;
- }
- else {
- $addr .= $_[0]
- }
-
- $addr =~ /0x(\w+)/;
- local $^W;
- #no warnings 'portable';
- hex($1);
- }
- }
-}
-
-
-# shamelessly copied and modified from JSON::XS code.
-
-unless ( $INC{'JSON/PP.pm'} ) {
- eval q|
- package
- JSON::PP::Boolean;
-
- use overload (
- "0+" => sub { ${$_[0]} },
- "++" => sub { $_[0] = ${$_[0]} + 1 },
- "--" => sub { $_[0] = ${$_[0]} - 1 },
- fallback => 1,
- );
- |;
-}
-
-$JSON::PP::true = do { bless \(my $dummy = 1), "JSON::PP::Boolean" };
-$JSON::PP::false = do { bless \(my $dummy = 0), "JSON::PP::Boolean" };
-
-sub is_bool { defined $_[0] and UNIVERSAL::isa($_[0], "JSON::PP::Boolean"); }
-
-sub true { $JSON::PP::true }
-sub false { $JSON::PP::false }
-sub null { undef; }
-
-###############################
-
-###############################
-
-package # hide from PAUSE
- JSON::PP::IncrParser;
-
-use strict;
-
-use constant INCR_M_WS => 0; # initial whitespace skipping
-use constant INCR_M_STR => 1; # inside string
-use constant INCR_M_BS => 2; # inside backslash
-use constant INCR_M_JSON => 3; # outside anything, count nesting
-use constant INCR_M_C0 => 4;
-use constant INCR_M_C1 => 5;
-
-use vars qw($VERSION);
-$VERSION = '1.01';
-
-my $unpack_format = $] < 5.006 ? 'C*' : 'U*';
-
-sub new {
- my ( $class ) = @_;
-
- bless {
- incr_nest => 0,
- incr_text => undef,
- incr_parsing => 0,
- incr_p => 0,
- }, $class;
-}
-
-
-sub incr_parse {
- my ( $self, $coder, $text ) = @_;
-
- $self->{incr_text} = '' unless ( defined $self->{incr_text} );
-
- if ( defined $text ) {
- if ( utf8::is_utf8( $text ) and !utf8::is_utf8( $self->{incr_text} ) ) {
- utf8::upgrade( $self->{incr_text} ) ;
- utf8::decode( $self->{incr_text} ) ;
- }
- $self->{incr_text} .= $text;
- }
-
-
- my $max_size = $coder->get_max_size;
-
- if ( defined wantarray ) {
-
- $self->{incr_mode} = INCR_M_WS unless defined $self->{incr_mode};
-
- if ( wantarray ) {
- my @ret;
-
- $self->{incr_parsing} = 1;
-
- do {
- push @ret, $self->_incr_parse( $coder, $self->{incr_text} );
-
- unless ( !$self->{incr_nest} and $self->{incr_mode} == INCR_M_JSON ) {
- $self->{incr_mode} = INCR_M_WS if $self->{incr_mode} != INCR_M_STR;
- }
-
- } until ( length $self->{incr_text} >= $self->{incr_p} );
-
- $self->{incr_parsing} = 0;
-
- return @ret;
- }
- else { # in scalar context
- $self->{incr_parsing} = 1;
- my $obj = $self->_incr_parse( $coder, $self->{incr_text} );
- $self->{incr_parsing} = 0 if defined $obj; # pointed by Martin J. Evans
- return $obj ? $obj : undef; # $obj is an empty string, parsing was completed.
- }
-
- }
-
-}
-
-
-sub _incr_parse {
- my ( $self, $coder, $text, $skip ) = @_;
- my $p = $self->{incr_p};
- my $restore = $p;
-
- my @obj;
- my $len = length $text;
-
- if ( $self->{incr_mode} == INCR_M_WS ) {
- while ( $len > $p ) {
- my $s = substr( $text, $p, 1 );
- $p++ and next if ( 0x20 >= unpack($unpack_format, $s) );
- $self->{incr_mode} = INCR_M_JSON;
- last;
- }
- }
-
- while ( $len > $p ) {
- my $s = substr( $text, $p++, 1 );
-
- if ( $s eq '"' ) {
- if (substr( $text, $p - 2, 1 ) eq '\\' ) {
- next;
- }
-
- if ( $self->{incr_mode} != INCR_M_STR ) {
- $self->{incr_mode} = INCR_M_STR;
- }
- else {
- $self->{incr_mode} = INCR_M_JSON;
- unless ( $self->{incr_nest} ) {
- last;
- }
- }
- }
-
- if ( $self->{incr_mode} == INCR_M_JSON ) {
-
- if ( $s eq '[' or $s eq '{' ) {
- if ( ++$self->{incr_nest} > $coder->get_max_depth ) {
- Carp::croak('json text or perl structure exceeds maximum nesting level (max_depth set too low?)');
- }
- }
- elsif ( $s eq ']' or $s eq '}' ) {
- last if ( --$self->{incr_nest} <= 0 );
- }
- elsif ( $s eq '#' ) {
- while ( $len > $p ) {
- last if substr( $text, $p++, 1 ) eq "\n";
- }
- }
-
- }
-
- }
-
- $self->{incr_p} = $p;
-
- return if ( $self->{incr_mode} == INCR_M_STR and not $self->{incr_nest} );
- return if ( $self->{incr_mode} == INCR_M_JSON and $self->{incr_nest} > 0 );
-
- return '' unless ( length substr( $self->{incr_text}, 0, $p ) );
-
- local $Carp::CarpLevel = 2;
-
- $self->{incr_p} = $restore;
- $self->{incr_c} = $p;
-
- my ( $obj, $tail ) = $coder->PP_decode_json( substr( $self->{incr_text}, 0, $p ), 0x10000001 );
-
- $self->{incr_text} = substr( $self->{incr_text}, $p );
- $self->{incr_p} = 0;
-
- return $obj || '';
-}
-
-
-sub incr_text {
- if ( $_[0]->{incr_parsing} ) {
- Carp::croak("incr_text can not be called when the incremental parser already started parsing");
- }
- $_[0]->{incr_text};
-}
-
-
-sub incr_skip {
- my $self = shift;
- $self->{incr_text} = substr( $self->{incr_text}, $self->{incr_c} );
- $self->{incr_p} = 0;
-}
-
-
-sub incr_reset {
- my $self = shift;
- $self->{incr_text} = undef;
- $self->{incr_p} = 0;
- $self->{incr_mode} = 0;
- $self->{incr_nest} = 0;
- $self->{incr_parsing} = 0;
-}
-
-###############################
-
-
-1;
-__END__
-=pod
-
-=head1 NAME
-
-JSON::PP - JSON::XS compatible pure-Perl module.
-
-=head1 SYNOPSIS
-
- use JSON::PP;
-
- # exported functions, they croak on error
- # and expect/generate UTF-8
-
- $utf8_encoded_json_text = encode_json $perl_hash_or_arrayref;
- $perl_hash_or_arrayref = decode_json $utf8_encoded_json_text;
-
- # OO-interface
-
- $coder = JSON::PP->new->ascii->pretty->allow_nonref;
-
- $json_text   = $coder->encode( $perl_scalar );
- $perl_scalar = $coder->decode( $json_text );
-
- $pretty_printed = $coder->pretty->encode( $perl_scalar ); # pretty-printing
-
- # Note that JSON version 2.0 and above will automatically use
- # JSON::XS or JSON::PP, so you should be able to just:
-
- use JSON;
-
-
-=head1 VERSION
-
- 2.27200
-
-L<JSON::XS> 2.27 (~2.30) compatible.
-
-=head1 DESCRIPTION
-
-This module is a L<JSON::XS> compatible pure Perl module.
-(Perl 5.8 or later is recommended)
-
-JSON::XS is the fastest and most proper JSON module on CPAN.
-It is written by Marc Lehmann in C, so must be compiled and
-installed in the used environment.
-
-JSON::PP is a pure-Perl module and has compatibility to JSON::XS.
-
-
-=head2 FEATURES
-
-=over
-
-=item * correct unicode handling
-
-This module knows how to handle Unicode (depending on Perl version).
-
-See to L and
-L.
-
-
-=item * round-trip integrity
-
-When you serialise a perl data structure using only data types
-supported by JSON and Perl, the deserialised data structure is
-identical on the Perl level. (e.g. the string "2.0" doesn't suddenly
-become "2" just because it looks like a number). There I<are> minor
-exceptions to this, read the MAPPING section below to learn about
-those.
-
-
-=item * strict checking of JSON correctness
-
-There is no guessing, no generating of illegal JSON texts by default,
-and only JSON is accepted as input by default (the latter is a
-security feature). But when some options are set, loose checking
-features are available.
-
-=back
-
-=head1 FUNCTIONAL INTERFACE
-
-Some documents are copied and modified from the L<JSON::XS> documentation.
-
-=head2 encode_json
-
- $json_text = encode_json $perl_scalar
-
-Converts the given Perl data structure to a UTF-8 encoded, binary string.
-
-This function call is functionally identical to:
-
- $json_text = JSON::PP->new->utf8->encode($perl_scalar)
-
-=head2 decode_json
-
- $perl_scalar = decode_json $json_text
-
-The opposite of C<encode_json>: expects a UTF-8 (binary) string and tries
-to parse that as a UTF-8 encoded JSON text, returning the resulting
-reference.
-
-This function call is functionally identical to:
-
- $perl_scalar = JSON::PP->new->utf8->decode($json_text)
-
-=head2 JSON::PP::is_bool
-
- $is_boolean = JSON::PP::is_bool($scalar)
-
-Returns true if the passed scalar represents either JSON::PP::true or
-JSON::PP::false, two constants that act like C<1> and C<0> respectively
-and are also used to represent JSON C<true> and C<false> in Perl strings.
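-
-For illustration, a small sketch (the sample JSON text below is made up):
-
- my $data = JSON::PP->new->decode('{"flag":true,"count":1}');
- JSON::PP::is_bool($data->{flag});   # true  (a JSON::PP::Boolean object)
- JSON::PP::is_bool($data->{count});  # false (a plain Perl number)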
-
-=head2 JSON::PP::true
-
-Returns the JSON true value, which is a blessed object.
-It C<isa> JSON::PP::Boolean object.
-
-=head2 JSON::PP::false
-
-Returns the JSON false value, which is a blessed object.
-It C<isa> JSON::PP::Boolean object.
-
-=head2 JSON::PP::null
-
-Returns C<undef>.
-
-See L<MAPPING>, below, for more information on how JSON values are mapped to
-Perl.
-
-
-=head1 HOW DO I DECODE A DATA FROM OUTER AND ENCODE TO OUTER
-
-This section supposes that your perl version is 5.8 or later.
-
-If you know a JSON text from the outer world - a network, a file content, and so on -
-is encoded in UTF-8, you should use C<decode_json> or a C<JSON> module object
-with C<utf8> enabled. The decoded result will then contain UNICODE characters.
-
- # from network
- my $json = JSON::PP->new->utf8;
- my $json_text = CGI->new->param( 'json_data' );
- my $perl_scalar = $json->decode( $json_text );
-
- # from file content
- local $/;
- open( my $fh, '<', 'json.data' );
- $json_text = <$fh>;
- $perl_scalar = decode_json( $json_text );
-
-If the outer data is not encoded in UTF-8, you should first C<decode> it.
-
- use Encode;
- local $/;
- open( my $fh, '<', 'json.data' );
- my $encoding = 'cp932';
- my $unicode_json_text = decode( $encoding, <$fh> ); # UNICODE
-
- # or you can write the below code.
- #
- # open( my $fh, "<:encoding($encoding)", 'json.data' );
- # $unicode_json_text = <$fh>;
-
-In this case, C<$unicode_json_text> is of course a UNICODE string.
-So you B<cannot> use C<decode_json> nor a C<JSON> module object with C<utf8> enabled.
-Instead, use a C<JSON> module object with C<utf8> disabled.
-
- $perl_scalar = $json->utf8(0)->decode( $unicode_json_text );
-
-Or C<encode> to UTF-8 first and then C<decode_json>:
-
- $perl_scalar = decode_json( encode( 'utf8', $unicode_json_text ) );
- # this way is not efficient.
-
-And now, you want to convert your C<$perl_scalar> into JSON data and
-send it to an outer world - a network or a file content, and so on.
-
-If your data usually contains UNICODE strings and you want the converted data to be encoded
-in UTF-8, you should use C<encode_json> or a C<JSON> module object with C<utf8> enabled.
-
- print encode_json( $perl_scalar ); # to a network? file? or display?
- # or
- print $json->utf8->encode( $perl_scalar );
-
-If C<$perl_scalar> does not contain UNICODE but C<$encoding>-encoded strings
-for some reason, then its characters are regarded as B<latin1> by perl
-(because perl is not concerned with your $encoding).
-You B<cannot> use C<encode_json> nor a C<JSON> module object with C<utf8> enabled.
-Instead, use a C<JSON> module object with C<utf8> disabled.
-Note that the resulting text is a UNICODE string and can be printed without problems.
-
- # $perl_scalar contains $encoding encoded string values
- $unicode_json_text = $json->utf8(0)->encode( $perl_scalar );
- # $unicode_json_text consists of characters less than 0x100
- print $unicode_json_text;
-
-Or C<decode> all string values first and then C<encode_json>:
-
- $perl_scalar->{ foo } = decode( $encoding, $perl_scalar->{ foo } );
- # ... do it to each string values, then encode_json
- $json_text = encode_json( $perl_scalar );
-
-This method is a proper way but probably not efficient.
-
-See also L<Encode> and L<perluniintro>.
-
-
-=head1 METHODS
-
-Basically, check L<JSON> or L<JSON::XS>.
-
-=head2 new
-
- $json = JSON::PP->new
-
-Returns a new JSON::PP object that can be used to de/encode JSON
-strings.
-
-All boolean flags described below are by default I<disabled>.
-
-The mutators for flags all return the JSON object again and thus calls can
-be chained:
-
- my $json = JSON::PP->new->utf8->space_after->encode({a => [1,2]})
- => {"a": [1, 2]}
-
-=head2 ascii
-
- $json = $json->ascii([$enable])
-
- $enabled = $json->get_ascii
-
-If $enable is true (or missing), then the encode method will not generate characters outside
-the code range 0..127. Any Unicode characters outside that range will be escaped using either
-a single \uXXXX or a double \uHHHH\uLLLL escape sequence, as per RFC4627.
-(See to L).
-
-In Perl 5.005, there is no character having high value (more than 255).
-See to L.
-
-If $enable is false, then the encode method will not escape Unicode characters unless
-required by the JSON syntax or other flags. This results in a faster and more compact format.
-
- JSON::PP->new->ascii(1)->encode([chr 0x10401])
- => ["\ud801\udc01"]
-
-=head2 latin1
-
- $json = $json->latin1([$enable])
-
- $enabled = $json->get_latin1
-
-If $enable is true (or missing), then the encode method will encode the resulting JSON
-text as latin1 (or iso-8859-1), escaping any characters outside the code range 0..255.
-
-If $enable is false, then the encode method will not escape Unicode characters
-unless required by the JSON syntax or other flags.
-
- JSON::XS->new->latin1->encode (["\x{89}\x{abc}"])
- => ["\x{89}\\u0abc"] # (perl syntax, U+abc escaped, U+89 not)
-
-See to L.
-
-=head2 utf8
-
- $json = $json->utf8([$enable])
-
- $enabled = $json->get_utf8
-
-If $enable is true (or missing), then the encode method will encode the JSON result
-into UTF-8, as required by many protocols, while the decode method expects to be handed
-a UTF-8-encoded string. Please note that UTF-8-encoded strings do not contain any
-characters outside the range 0..255, they are thus useful for bytewise/binary I/O.
-
-(In Perl 5.005, any character outside the range 0..255 does not exist.
-See to L.)
-
-In future versions, enabling this option might enable autodetection of the UTF-16 and UTF-32
-encoding families, as described in RFC4627.
-
-If $enable is false, then the encode method will return the JSON string as a (non-encoded)
-Unicode string, while decode expects thus a Unicode string. Any decoding or encoding
-(e.g. to UTF-8 or UTF-16) needs to be done yourself, e.g. using the Encode module.
-
-Example, output UTF-16BE-encoded JSON:
-
- use Encode;
- $jsontext = encode "UTF-16BE", JSON::PP->new->encode ($object);
-
-Example, decode UTF-32LE-encoded JSON:
-
- use Encode;
- $object = JSON::PP->new->decode (decode "UTF-32LE", $jsontext);
-
-
-=head2 pretty
-
- $json = $json->pretty([$enable])
-
-This enables (or disables) all of the C<indent>, C<space_before> and
-C<space_after> flags in one call to generate the most readable
-(or most compact) form possible.
-
-Equivalent to:
-
- $json->indent->space_before->space_after
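-
-A rough sketch of the effect (sample data made up; the exact whitespace may differ):
-
- print JSON::PP->new->pretty->encode({ a => [1, 2] });
- # {
- #    "a" : [
- #       1,
- #       2
- #    ]
- # }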
-
-=head2 indent
-
- $json = $json->indent([$enable])
-
- $enabled = $json->get_indent
-
-The default indent space length is three.
-You can use C<indent_length> to change the length.
-
-=head2 space_before
-
- $json = $json->space_before([$enable])
-
- $enabled = $json->get_space_before
-
-If C<$enable> is true (or missing), then the C<encode> method will add an extra
-optional space before the C<:> separating keys from values in JSON objects.
-
-If C<$enable> is false, then the C<encode> method will not add any extra
-space at those places.
-
-This setting has no effect when decoding JSON texts.
-
-Example, space_before enabled, space_after and indent disabled:
-
- {"key" :"value"}
-
-=head2 space_after
-
- $json = $json->space_after([$enable])
-
- $enabled = $json->get_space_after
-
-If C<$enable> is true (or missing), then the C<encode> method will add an extra
-optional space after the C<:> separating keys from values in JSON objects
-and extra whitespace after the C<,> separating key-value pairs and array
-members.
-
-If C<$enable> is false, then the C<encode> method will not add any extra
-space at those places.
-
-This setting has no effect when decoding JSON texts.
-
-Example, space_before and indent disabled, space_after enabled:
-
- {"key": "value"}
-
-=head2 relaxed
-
- $json = $json->relaxed([$enable])
-
- $enabled = $json->get_relaxed
-
-If C<$enable> is true (or missing), then C<decode> will accept some
-extensions to normal JSON syntax (see below). C<encode> will not be
-affected in any way. I<Be aware that this option makes you accept invalid JSON
-texts as if they were valid!> I suggest only to use this option to
-parse application-specific files written by humans (configuration files,
-resource files etc.)
-
-If C<$enable> is false (the default), then C<decode> will only accept
-valid JSON texts.
-
-Currently accepted extensions are:
-
-=over 4
-
-=item * list items can have an end-comma
-
-JSON I<separates> array elements and key-value pairs with commas. This
-can be annoying if you write JSON texts manually and want to be able to
-quickly append elements, so this extension accepts comma at the end of
-such items not just between them:
-
- [
- 1,
- 2, <- this comma not normally allowed
- ]
- {
- "k1": "v1",
- "k2": "v2", <- this comma not normally allowed
- }
-
-=item * shell-style '#'-comments
-
-Whenever JSON allows whitespace, shell-style comments are additionally
-allowed. They are terminated by the first carriage-return or line-feed
-character, after which more white-space and comments are allowed.
-
- [
- 1, # this comment not allowed in JSON
- # neither this one...
- ]
-
-=back
-
-=head2 canonical
-
- $json = $json->canonical([$enable])
-
- $enabled = $json->get_canonical
-
-If C<$enable> is true (or missing), then the C<encode> method will output JSON objects
-by sorting their keys. This adds a comparatively high overhead.
-
-If C<$enable> is false, then the C<encode> method will output key-value
-pairs in the order Perl stores them (which will likely change between runs
-of the same script).
-
-This option is useful if you want the same data structure to be encoded as
-the same JSON text (given the same overall settings). If it is disabled,
-the same hash might be encoded differently even if it contains the same data,
-as key-value pairs have no inherent ordering in Perl.
-
-This setting has no effect when decoding JSON texts.
-
-If you want your own sorting routine, you can give a code reference
-or a subroutine name to C<sort_by>. See to C<JSON::PP OWN METHODS>.
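-
-A small sketch of the effect (sample data made up):
-
- my $json = JSON::PP->new->canonical;
- print $json->encode({ b => 2, a => 1, c => 3 });
- # always {"a":1,"b":2,"c":3}, independent of Perl's internal hash order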
-
-=head2 allow_nonref
-
- $json = $json->allow_nonref([$enable])
-
- $enabled = $json->get_allow_nonref
-
-If C<$enable> is true (or missing), then the C<encode> method can convert a
-non-reference into its corresponding string, number or null JSON value,
-which is an extension to RFC4627. Likewise, C<decode> will accept those JSON
-values instead of croaking.
-
-If C<$enable> is false, then the C<encode> method will croak if it isn't
-passed an arrayref or hashref, as JSON texts must either be an object
-or array. Likewise, C<decode> will croak if given something that is not a
-JSON object or array.
-
- JSON::PP->new->allow_nonref->encode ("Hello, World!")
- => "Hello, World!"
-
-=head2 allow_unknown
-
- $json = $json->allow_unknown ([$enable])
-
- $enabled = $json->get_allow_unknown
-
-If $enable is true (or missing), then "encode" will *not* throw an
-exception when it encounters values it cannot represent in JSON (for
-example, filehandles) but instead will encode a JSON "null" value.
-Note that blessed objects are not included here and are handled
-separately by C<allow_blessed>.
-
-If $enable is false (the default), then "encode" will throw an
-exception when it encounters anything it cannot encode as JSON.
-
-This option does not affect "decode" in any way, and it is
-recommended to leave it off unless you know your communications
-partner.
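-
-A small sketch, using a filehandle as an example of an unrepresentable value:
-
- my $json = JSON::PP->new->allow_unknown;
- print $json->encode([ \*STDIN ]);   # [null]
- # without allow_unknown, the same call would throw an exception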
-
-=head2 allow_blessed
-
- $json = $json->allow_blessed([$enable])
-
- $enabled = $json->get_allow_blessed
-
-If C<$enable> is true (or missing), then the C<encode> method will not
-barf when it encounters a blessed reference. Instead, the value of the
-B<convert_blessed> option will decide whether C<null> (C<convert_blessed>
-disabled or no C<TO_JSON> method found) or a representation of the
-object (C<convert_blessed> enabled and C<TO_JSON> method found) is being
-encoded. Has no effect on C<decode>.
-
-If C<$enable> is false (the default), then C<encode> will throw an
-exception when it encounters a blessed object.
-
-=head2 convert_blessed
-
- $json = $json->convert_blessed([$enable])
-
- $enabled = $json->get_convert_blessed
-
-If C<$enable> is true (or missing), then C<encode>, upon encountering a
-blessed object, will check for the availability of the C<TO_JSON> method
-on the object's class. If found, it will be called in scalar context
-and the resulting scalar will be encoded instead of the object. If no
-C<TO_JSON> method is found, the value of C<allow_blessed> will decide what
-to do.
-
-The C<TO_JSON> method may safely call die if it wants. If C<TO_JSON>
-returns other blessed objects, those will be handled in the same
-way. C<TO_JSON> must take care of not causing an endless recursion cycle
-(== crash) in this case. The name of C<TO_JSON> was chosen because other
-methods called by the Perl core (== not by the user of the object) are
-usually in upper case letters and to avoid collisions with any C<to_json>
-function or method.
-
-This setting does not yet influence C<decode> in any way.
-
-If C<$enable> is false, then the C<allow_blessed> setting will decide what
-to do when a blessed object is found.
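-
-A brief sketch of a class providing C<TO_JSON> (the class and field names are
-made up for illustration):
-
- package My::Point;
- sub new     { my ($class, %args) = @_; bless { %args }, $class }
- sub TO_JSON { my ($self) = @_; return +{ x => $self->{x}, y => $self->{y} } }
-
- package main;
- my $json = JSON::PP->new->convert_blessed;
- print $json->encode([ My::Point->new( x => 1, y => 2 ) ]);
- # [{"x":1,"y":2}]   (key order may vary unless canonical is also set)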
-
-=head2 filter_json_object
-
- $json = $json->filter_json_object([$coderef])
-
-When C<$coderef> is specified, it will be called from C<decode> each
-time it decodes a JSON object. The only argument passed to the coderef
-is a reference to the newly-created hash. If the code reference returns
-a single scalar (which need not be a reference), this value
-(i.e. a copy of that scalar to avoid aliasing) is inserted into the
-deserialised data structure. If it returns an empty list
-(NOTE: I<not> C<undef>, which is a valid scalar), the original deserialised
-hash will be inserted. This setting can slow down decoding considerably.
-
-When C<$coderef> is omitted or undefined, any existing callback will
-be removed and C<decode> will not change the deserialised hash in any
-way.
-
-Example, convert all JSON objects into the integer 5:
-
- my $js = JSON::PP->new->filter_json_object (sub { 5 });
- # returns [5]
- $js->decode ('[{}]'); # the given subroutine takes a hash reference.
- # throw an exception because allow_nonref is not enabled
- # so a lone 5 is not allowed.
- $js->decode ('{"a":1, "b":2}');
-
-=head2 filter_json_single_key_object
-
- $json = $json->filter_json_single_key_object($key [=> $coderef])
-
-Works remotely similar to C<filter_json_object>, but is only called for
-JSON objects having a single key named C<$key>.
-
-This C<$coderef> is called before the one specified via
-C<filter_json_object>, if any. It gets passed the single value in the JSON
-object. If it returns a single value, it will be inserted into the data
-structure. If it returns nothing (not even C<undef> but the empty list),
-the callback from C<filter_json_object> will be called next, as if no
-single-key callback were specified.
-
-If C<$coderef> is omitted or undefined, the corresponding callback will be
-disabled. There can only ever be one callback for a given key.
-
-As this callback gets called less often than the C<filter_json_object>
-one, decoding speed will not usually suffer as much. Therefore, single-key
-objects make excellent targets to serialise Perl objects into, especially
-as single-key JSON objects are as close to the type-tagged value concept
-as JSON gets (it's basically an ID/VALUE tuple). Of course, JSON does not
-support this in any way, so you need to make sure your data never looks
-like a serialised Perl hash.
-
-Typical names for the single object key are C<__class_whatever__>, or
-C<$__dollars_are_rarely_used__$> or C<}ugly_brace_placement>, or even
-things like C<__class_md5sum(classname)__>, to reduce the risk of clashing
-with real hashes.
-
-Example, decode JSON objects of the form C<< { "__widget__" => <id> } >>
-into the corresponding C<< $WIDGET{<id>} >> object:
-
- # return whatever is in $WIDGET{5}:
- JSON::PP
- ->new
- ->filter_json_single_key_object (__widget__ => sub {
- $WIDGET{ $_[0] }
- })
- ->decode ('{"__widget__": 5}')
-
- # this can be used with a TO_JSON method in some "widget" class
- # for serialisation to json:
- sub WidgetBase::TO_JSON {
- my ($self) = @_;
-
- unless ($self->{id}) {
- $self->{id} = ..get..some..id..;
- $WIDGET{$self->{id}} = $self;
- }
-
- { __widget__ => $self->{id} }
- }
-
-=head2 shrink
-
- $json = $json->shrink([$enable])
-
- $enabled = $json->get_shrink
-
-In JSON::XS, this flag resizes strings generated by either
-C<encode> or C<decode> to their minimum size possible.
-It will also try to downgrade any strings to octet-form if possible.
-
-In JSON::PP, it is a noop as far as resizing strings goes, but it tries
-C<utf8::downgrade> on the string returned by C<encode>.
-See to L<utf8>.
-
-See to L
-
-=head2 max_depth
-
- $json = $json->max_depth([$maximum_nesting_depth])
-
- $max_depth = $json->get_max_depth
-
-Sets the maximum nesting level (default C<512>) accepted while encoding
-or decoding. If a higher nesting level is detected in JSON text or a Perl
-data structure, then the encoder and decoder will stop and croak at that
-point.
-
-Nesting level is defined by number of hash- or arrayrefs that the encoder
-needs to traverse to reach a given point or the number of C<{> or C<[>
-characters without their matching closing parenthesis crossed to reach a
-given character in a string.
-
-If no argument is given, the highest possible setting will be used, which
-is rarely useful.
-
-See L<JSON::XS/SECURITY CONSIDERATIONS> for more info on why this is useful.
-
-When a large value (100 or more) is set and a deeply nested object/text is de/encoded,
-perl may raise a 'Deep recursion on subroutine' warning at runtime.
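-
-A small sketch of the limit in action (the values are made up):
-
- my $json = JSON::PP->new->max_depth(3);
- $json->decode('[[[1]]]');                 # ok: three nesting levels
- eval { $json->decode('[[[[1]]]]') };      # four levels: croaks with a
- print $@;                                 # "maximum nesting level" error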
-
-=head2 max_size
-
- $json = $json->max_size([$maximum_string_size])
-
- $max_size = $json->get_max_size
-
-Set the maximum length a JSON text may have (in bytes) where decoding is
-being attempted. The default is C<0>, meaning no limit. When C<decode>
-is called on a string that is longer than this many bytes, it will not
-attempt to decode the string but throw an exception. This setting has no
-effect on C<encode> (yet).
-
-If no argument is given, the limit check will be deactivated (same as when
-C<0> is specified).
-
-See L<JSON::XS/SECURITY CONSIDERATIONS> for more info on why this is useful.
-
-=head2 encode
-
- $json_text = $json->encode($perl_scalar)
-
-Converts the given Perl data structure (a simple scalar or a reference
-to a hash or array) to its JSON representation. Simple scalars will be
-converted into JSON string or number sequences, while references to arrays
-become JSON arrays and references to hashes become JSON objects. Undefined
-Perl values (e.g. C<undef>) become JSON C<null> values.
-References to the integers C<0> and C<1> are converted into C<false> and C<true>.
-
-=head2 decode
-
- $perl_scalar = $json->decode($json_text)
-
-The opposite of C<encode>: expects a JSON text and tries to parse it,
-returning the resulting simple scalar or reference. Croaks on error.
-
-JSON numbers and strings become simple Perl scalars. JSON arrays become
-Perl arrayrefs and JSON objects become Perl hashrefs. C<true> becomes
-C<1> (C<JSON::PP::true>), C<false> becomes C<0> (C<JSON::PP::false>) and
-C<null> becomes C<undef>.
-
-=head2 decode_prefix
-
- ($perl_scalar, $characters) = $json->decode_prefix($json_text)
-
-This works like the C<decode> method, but instead of raising an exception
-when there is trailing garbage after the first JSON object, it will
-silently stop parsing there and return the number of characters consumed
-so far.
-
- JSON->new->decode_prefix ("[1] the tail")
- => ([], 3)
-
-=head1 INCREMENTAL PARSING
-
-Most of this section is copied and modified from L<JSON::XS/INCREMENTAL PARSING>.
-
-In some cases, there is the need for incremental parsing of JSON texts.
-This module does allow you to parse a JSON stream incrementally.
-It does so by accumulating text until it has a full JSON object, which
-it then can decode. This process is similar to using C<decode_prefix>
-to see if a full JSON object is available, but is much more efficient
-(and can be implemented with a minimum of method calls).
-
-This module will only attempt to parse the JSON text once it is sure it
-has enough text to get a decisive result, using a very simple but
-truly incremental parser. This means that it sometimes won't stop as
-early as the full parser, for example, it doesn't detect parenthesis
-mismatches. The only thing it guarantees is that it starts decoding as
-soon as a syntactically valid JSON text has been seen. This means you need
-to set resource limits (e.g. C<max_size>) to ensure the parser will stop
-parsing in the presence of syntax errors.
-
-The following methods implement this incremental parser.
-
-=head2 incr_parse
-
- $json->incr_parse( [$string] ) # void context
-
- $obj_or_undef = $json->incr_parse( [$string] ) # scalar context
-
- @obj_or_empty = $json->incr_parse( [$string] ) # list context
-
-This is the central parsing function. It can both append new text and
-extract objects from the stream accumulated so far (both of these
-functions are optional).
-
-If C<$string> is given, then this string is appended to the already
-existing JSON fragment stored in the C<$json> object.
-
-After that, if the function is called in void context, it will simply
-return without doing anything further. This can be used to add more text
-in as many chunks as you want.
-
-If the method is called in scalar context, then it will try to extract
-exactly I<one> JSON object. If that is successful, it will return this
-object, otherwise it will return C<undef>. If there is a parse error,
-this method will croak just as C<decode> would do (one can then use
-C<incr_skip> to skip the erroneous part). This is the most common way of
-using the method.
-
-And finally, in list context, it will try to extract as many objects
-from the stream as it can find and return them, or the empty list
-otherwise. For this to work, there must be no separators between the JSON
-objects or arrays, instead they must be concatenated back-to-back. If
-an error occurs, an exception will be raised as in the scalar context
-case. Note that in this case, any previously-parsed JSON texts will be
-lost.
-
-Example: Parse some JSON arrays/objects in a given string and return them.
-
- my @objs = JSON->new->incr_parse ("[5][7][1,2]");
-
-=head2 incr_text
-
- $lvalue_string = $json->incr_text
-
-This method returns the currently stored JSON fragment as an lvalue, that
-is, you can manipulate it. This I<only> works when a preceding call to
-C<incr_parse> in I<scalar context> successfully returned an object. Under
-all other circumstances you must not call this function (I mean it.
-although in simple tests it might actually work, it I<will> fail under
-real world conditions). As a special exception, you can also call this
-method before having parsed anything.
-
-This function is useful in two cases: a) finding the trailing text after a
-JSON object or b) parsing multiple JSON objects separated by non-JSON text
-(such as commas).
-
- $json->incr_text =~ s/\s*,\s*//;
-
-In Perl 5.005, the C<lvalue> attribute is not available.
-You must write code like the below:
-
- $string = $json->incr_text;
- $string =~ s/\s*,\s*//;
- $json->incr_text( $string );
-
-=head2 incr_skip
-
- $json->incr_skip
-
-This will reset the state of the incremental parser and will remove the
-parsed text from the input buffer. This is useful after C<incr_parse>
-died, in which case the input buffer and incremental parser state is left
-unchanged, to skip the text parsed so far and to reset the parse state.
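-
-A typical usage sketch (the input text is made up):
-
- my $json = JSON::PP->new;
- $json->incr_parse('[1,2,] ["next"]');     # first text has a trailing comma
- my $obj = eval { $json->incr_parse };     # croaks on the invalid text
- if ($@) {
-     $json->incr_skip;                     # drop it and continue with the rest
- }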
-
-=head2 incr_reset
-
- $json->incr_reset
-
-This completely resets the incremental parser, that is, after this call,
-it will be as if the parser had never parsed anything.
-
-This is useful if you want to repeatedly parse JSON objects and want to
-ignore any trailing data, which means you have to reset the parser after
-each successful decode.
-
-See to L<JSON::XS/INCREMENTAL PARSING> for examples.
-
-
-=head1 JSON::PP OWN METHODS
-
-=head2 allow_singlequote
-
- $json = $json->allow_singlequote([$enable])
-
-If C<$enable> is true (or missing), then C<decode> will accept
-JSON strings quoted by single quotations that are invalid JSON
-format.
-
- $json->allow_singlequote->decode(q|{"foo":'bar'}|);
- $json->allow_singlequote->decode(q|{'foo':"bar"}|);
- $json->allow_singlequote->decode(q|{'foo':'bar'}|);
-
-As with the C<relaxed> option, this option may be used to parse
-application-specific files written by humans.
-
-
-=head2 allow_barekey
-
- $json = $json->allow_barekey([$enable])
-
-If C<$enable> is true (or missing), then C<decode> will accept
-bare keys of JSON objects that are invalid JSON format.
-
-As with the C<relaxed> option, this option may be used to parse
-application-specific files written by humans.
-
- $json->allow_barekey->decode('{foo:"bar"}');
-
-=head2 allow_bignum
-
- $json = $json->allow_bignum([$enable])
-
-If C<$enable> is true (or missing), then C<decode> will convert
-big integers that Perl cannot handle as integers into L<Math::BigInt>
-objects and convert any floating-point numbers into L<Math::BigFloat> objects.
-
-On the contrary, C<encode> converts C<Math::BigInt> objects and C<Math::BigFloat>
-objects into JSON numbers with C<allow_blessed> enabled.
-
- $json->allow_nonref->allow_blessed->allow_bignum;
- $bigfloat = $json->decode('2.000000000000000000000000001');
- print $json->encode($bigfloat);
- # => 2.000000000000000000000000001
-
-See to L<MAPPING> about the normal conversion of JSON numbers.
-
-=head2 loose
-
- $json = $json->loose([$enable])
-
-The unescaped [\x00-\x1f\x22\x2f\x5c] strings are invalid in JSON strings
-and the module doesn't allow C<decode> to accept these (except for \x2f).
-If C<$enable> is true (or missing), then C<decode> will accept these
-unescaped strings.
-
- $json->loose->decode(qq|["abc
- def"]|);
-
-See L.
-
-=head2 escape_slash
-
- $json = $json->escape_slash([$enable])
-
-According to the JSON Grammar, I<slash> (U+002F) is escaped. But by default
-JSON::PP (the same as JSON::XS) encodes strings without escaping the slash.
-
-If C<$enable> is true (or missing), then C<encode> will escape slashes.
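-
-A one-line sketch of the difference (sample string made up):
-
- print JSON::PP->new->encode(["a/b"]);                 # ["a/b"]
- print JSON::PP->new->escape_slash->encode(["a/b"]);   # ["a\/b"]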
-
-=head2 indent_length
-
- $json = $json->indent_length($length)
-
-JSON::XS indent space length is 3 and cannot be changed.
-JSON::PP sets the indent space length with the given $length.
-The default is 3. The acceptable range is 0 to 15.
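-
-A short sketch combining C<indent> and C<indent_length> (sample data made up):
-
- my $json = JSON::PP->new->indent->indent_length(1);
- print $json->encode([1]);
- # [
- #  1
- # ]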
-
-=head2 sort_by
-
- $json = $json->sort_by($function_name)
- $json = $json->sort_by($subroutine_ref)
-
-If $function_name or $subroutine_ref is set, that sort routine is used
-when encoding JSON objects.
-
- $js = $pc->sort_by(sub { $JSON::PP::a cmp $JSON::PP::b })->encode($obj);
- # is($js, q|{"a":1,"b":2,"c":3,"d":4,"e":5,"f":6,"g":7,"h":8,"i":9}|);
-
- $js = $pc->sort_by('own_sort')->encode($obj);
- # is($js, q|{"a":1,"b":2,"c":3,"d":4,"e":5,"f":6,"g":7,"h":8,"i":9}|);
-
- sub JSON::PP::own_sort { $JSON::PP::a cmp $JSON::PP::b }
-
-As the sorting routine runs in the JSON::PP scope, the given
-subroutine name and the special variables C<$a>, C<$b> will begin
-with 'JSON::PP::'.
-
-If $integer is set, then the effect is the same as C<canonical> on.
-
-=head1 INTERNAL
-
-For developers.
-
-=over
-
-=item PP_encode_box
-
-Returns
-
- {
- depth => $depth,
- indent_count => $indent_count,
- }
-
-
-=item PP_decode_box
-
-Returns
-
- {
- text => $text,
- at => $at,
- ch => $ch,
- len => $len,
- depth => $depth,
- encoding => $encoding,
- is_valid_utf8 => $is_valid_utf8,
- };
-
-=back
-
-=head1 MAPPING
-
-This section is copied from JSON::XS and modified for C<JSON::PP>.
-JSON::XS and JSON::PP mapping mechanisms are almost equivalent.
-
-See to L<JSON::XS/MAPPING>.
-
-=head2 JSON -> PERL
-
-=over 4
-
-=item object
-
-A JSON object becomes a reference to a hash in Perl. No ordering of object
-keys is preserved (JSON does not preserve object key ordering itself).
-
-=item array
-
-A JSON array becomes a reference to an array in Perl.
-
-=item string
-
-A JSON string becomes a string scalar in Perl - Unicode codepoints in JSON
-are represented by the same codepoints in the Perl string, so no manual
-decoding is necessary.
-
-=item number
-
-A JSON number becomes either an integer, numeric (floating point) or
-string scalar in perl, depending on its range and any fractional parts. On
-the Perl level, there is no difference between those as Perl handles all
-the conversion details, but an integer may take slightly less memory and
-might represent more values exactly than floating point numbers.
-
-If the number consists of digits only, C<JSON::PP> will try to represent
-it as an integer value. If that fails, it will try to represent it as
-a numeric (floating point) value if that is possible without loss of
-precision. Otherwise it will preserve the number as a string value (in
-which case you lose roundtripping ability, as the JSON number will be
-re-encoded to a JSON string).
-
-Numbers containing a fractional or exponential part will always be
-represented as numeric (floating point) values, possibly at a loss of
-precision (in which case you might lose perfect roundtripping ability, but
-the JSON number will still be re-encoded as a JSON number).
-
-Note that precision is not accuracy - binary floating point values cannot
-represent most decimal fractions exactly, and when converting from and to
-floating point, C only guarantees precision up to but not including
-the least significant bit.
-
-When C<allow_bignum> is enabled, big integers
-and floating-point numbers can be optionally converted into L<Math::BigInt> and
-L<Math::BigFloat> objects.
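-
-A rough sketch of the three possible outcomes (the numbers are made up):
-
- my $decoded = JSON::PP->new->decode('[1, 2.5, 123456789012345678901234567890]');
- # $decoded->[0] is an integer, $decoded->[1] is a floating point value, and
- # the oversized third number is preserved as a string (or as a Math::BigInt
- # object when allow_bignum is enabled).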
-
-=item true, false
-
-These JSON atoms become C<JSON::PP::true> and C<JSON::PP::false>,
-respectively. They are overloaded to act almost exactly like the numbers
-C<1> and C<0>. You can check whether a scalar is a JSON boolean by using
-the C<JSON::PP::is_bool> function.
-
- print JSON::PP::true . "\n";
- => true
- print JSON::PP::true + 1;
- => 1
-
- ok(JSON::true eq '1');
- ok(JSON::true == 1);
-
-C<JSON::PP> will install these missing overloading features into the backend modules.
-
-
-=item null
-
-A JSON null atom becomes C<undef> in Perl.
-
-C<JSON::PP::null> returns C<undef>.
-
-=back
-
-
-=head2 PERL -> JSON
-
-The mapping from Perl to JSON is slightly more difficult, as Perl is a
-truly typeless language, so we can only guess which JSON type is meant by
-a Perl value.
-
-=over 4
-
-=item hash references
-
-Perl hash references become JSON objects. As there is no inherent ordering
-in hash keys (or JSON objects), they will usually be encoded in a
-pseudo-random order that can change between runs of the same program but
-stays generally the same within a single run of a program. C
-optionally sort the hash keys (determined by the I flag), so
-the same data structure will serialise to the same JSON text (given same
-settings and version of JSON::XS), but this incurs a runtime overhead
-and is only rarely useful, e.g. when you want to compare some JSON text
-against another for equality.
-
-
-=item array references
-
-Perl array references become JSON arrays.
-
-=item other references
-
-Other unblessed references are generally not allowed and will cause an
-exception to be thrown, except for references to the integers C<0> and
-C<1>, which get turned into C<false> and C<true> atoms in JSON. You can
-also use C<JSON::PP::false> and C<JSON::PP::true> to improve readability.
-
- to_json [\0,JSON::PP::true] # yields [false,true]
-
-=item JSON::PP::true, JSON::PP::false, JSON::PP::null
-
-These special values become JSON true and JSON false values,
-respectively. You can also use C<\1> and C<\0> directly if you want.
-
-JSON::PP::null returns C<undef>.
-
-=item blessed objects
-
-Blessed objects are not directly representable in JSON. See the
-C<allow_blessed> and C<convert_blessed> methods on various options on
-how to deal with this: basically, you can choose between throwing an
-exception, encoding the reference as if it weren't blessed, or provide
-your own serialiser method.
-
-See to L<convert_blessed>.
-
-=item simple scalars
-
-Simple Perl scalars (any scalar that is not a reference) are the most
-difficult objects to encode: JSON::XS and JSON::PP will encode undefined scalars as
-JSON C<null> values, scalars that have last been used in a string context
-before encoding as JSON strings, and anything else as number value:
-
- # dump as number
- encode_json [2] # yields [2]
- encode_json [-3.0e17] # yields [-3e+17]
- my $value = 5; encode_json [$value] # yields [5]
-
- # used as string, so dump as string
- print $value;
- encode_json [$value] # yields ["5"]
-
- # undef becomes null
- encode_json [undef] # yields [null]
-
-You can force the type to be a string by stringifying it:
-
- my $x = 3.1; # some variable containing a number
- "$x"; # stringified
- $x .= ""; # another, more awkward way to stringify
- print $x; # perl does it for you, too, quite often
-
-You can force the type to be a number by numifying it:
-
- my $x = "3"; # some variable containing a string
- $x += 0; # numify it, ensuring it will be dumped as a number
- $x *= 1; # same thing, the choice is yours.
-
-You can not currently force the type in other, less obscure, ways.
-
-Note that numerical precision has the same meaning as under Perl (so
-binary to decimal conversion follows the same rules as in Perl, which
-can differ to other languages). Also, your perl interpreter might expose
-extensions to the floating point numbers of your platform, such as
-infinities or NaN's - these cannot be represented in JSON, and it is an
-error to pass those in.
-
-=item Big Number
-
-When C