Adobe Photoshop 7.0: A Classic Photo Editing Software That Still Works
-
Adobe Photoshop 7.0 is one of the most popular and widely used photo editing programs in the world. It was released in 2002 and has been a favorite among professional and amateur photographers, graphic designers, and digital artists ever since. Adobe Photoshop 7.0 offers a range of features and tools that allow you to create, edit, enhance, and manipulate images with ease and precision. In this article, we will review the main features and benefits of Adobe Photoshop 7.0 and explain why it is still a great choice for photo editing in 2023.
-
One of the main advantages of Adobe Photoshop 7.0 is its compatibility and performance. Adobe Photoshop 7.0 runs smoothly on almost any Windows or Mac computer of its era, even one with low specifications or an older operating system. It requires little disk space or memory to install and run, unlike newer versions of Photoshop, which demand far more resources and can feel sluggish on older hardware. Adobe Photoshop 7.0 also supports a wide range of file formats, such as JPEG, PNG, GIF, TIFF, PSD, and PDF, so you can easily import and export images from different sources and devices without losing quality or data.
Another key feature of Adobe Photoshop 7.0 is its user interface and functionality. Adobe Photoshop 7.0 has a simple and intuitive interface that makes it easy to navigate and access the various tools and options. You can customize the layout and appearance of the interface according to your preferences and needs, and use keyboard shortcuts and mouse gestures to speed up your workflow. Adobe Photoshop 7.0 is also powerful and versatile, letting you perform a wide variety of tasks and effects on your images. You can crop, resize, rotate, flip, skew, distort, warp, and align images; work with layers, masks, blending, filters, and adjustments; retouch with the clone, heal, dodge, burn, and sponge tools; sharpen, blur, or smudge details; apply gradients, textures, and stylized effects; draw, paint, erase, fill, and stroke selections; and cut, copy, paste, undo, redo, save, and print your work.
-
A third benefit of Adobe Photoshop 7.0 is its creativity and innovation. Adobe Photoshop 7.0 offers a range of creative and innovative features and tools that allow you to unleash your imagination and express your vision. You can use Adobe Photoshop 7.0 to create stunning graphics and artwork for various purposes and platforms. You can design logos, banners, posters, flyers, brochures, cards, invitations, stickers, labels, t-shirts, mugs, calendars, wallpapers, icons, buttons, illustrations, comics, cartoons, animations, games, websites, apps, and more.
-
You can also use Adobe Photoshop 7.0 to enhance your photos and make them look more professional and artistic. You can improve their brightness, contrast, color, exposure, white balance, and sharpness, and perform noise reduction, red-eye removal, blemish removal, skin smoothing, teeth whitening, eye and hair color changes, face reshaping, body slimming, background replacement, object addition or removal, and more. You can also apply various effects and filters to give your photos a dramatic, romantic, vintage, retro, glamorous, grunge, pop art, watercolor, oil painting, or sketched look.
-
In conclusion, Adobe Photoshop 7.0 is a classic photo editing program that still works in 2023. It has a range of features and benefits that make it a great choice for photo editing in terms of compatibility, performance, user interface, functionality, creativity, and innovation. You can use Adobe Photoshop 7.0 to create, edit, enhance, and manipulate images with ease and precision, and to express your vision and unleash your imagination.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Compendio De Obstetricia Votta Pdf.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Compendio De Obstetricia Votta Pdf.md
deleted file mode 100644
index 649eb7d94ec0c91a9d181c8c0798b51afc00a74b..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Compendio De Obstetricia Votta Pdf.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
Compendio De Obstetricia Votta Pdf: A Comprehensive Guide for Obstetrics Students and Professionals
-
-
If you are looking for a reliable and updated source of information on obstetrics, you may want to check out the Compendio De Obstetricia Votta Pdf. This is a book written by Osvaldo H. Parada and Roberto A. Votta, two renowned obstetricians from Argentina, who have compiled their extensive knowledge and experience in this field.
-
-
The Compendio De Obstetricia Votta Pdf covers all the aspects of obstetrics, from normal pregnancy and delivery to complications and emergencies. It also includes chapters on gynecology, neonatology, genetics, ultrasound, and more. The book is organized in a clear and concise way, with tables, figures, algorithms, and clinical cases to illustrate the concepts.
The Compendio De Obstetricia Votta Pdf is a valuable resource for obstetrics students, residents, and specialists who want to update their skills and knowledge. It is also useful for other health professionals who work with pregnant women and newborns, such as nurses, midwives, pediatricians, and family doctors.
-
-
You can download the Compendio De Obstetricia Votta Pdf for free from various websites on the internet[^1^] [^2^] [^3^]. However, we recommend that you buy the original book from a reputable publisher or bookstore to support the authors and ensure the quality of the content.
-
-
The Compendio De Obstetricia Votta Pdf is a must-have for anyone who wants to learn more about obstetrics and improve their practice. It is a comprehensive guide that will help you provide the best care for your patients.
-
-
Obstetrics Trends in 2022
-
-
Obstetrics is a dynamic and evolving field that constantly adapts to new evidence, technologies, and challenges. In 2022, some of the trends that may shape obstetrics practice and research include:
-
-
-
Malaria prevention in pregnancy. Malaria is a major cause of maternal and fetal morbidity and mortality in endemic regions. A recent trial in East Africa compared different regimens of intermittent preventive treatment in pregnancy (IPTp) with sulfadoxine-pyrimethamine (SP) or dihydroartemisinin-piperaquine (DP), with or without azithromycin [ 1 ]. The results showed that DP was more effective than SP in reducing clinical malaria, but also associated with higher rates of adverse pregnancy outcomes. Further studies are needed to optimize malaria prevention strategies in areas with SP resistance.
-
Intrauterine transfusion for alpha thalassemia major. Alpha thalassemia major (ATM) is a severe form of hemolytic anemia that usually results in fetal demise unless intrauterine transfusions (IUT) are performed. A series of 19 pregnancies with prenatally diagnosed ATM showed that IUT can improve survival and neurodevelopmental outcomes, especially if initiated early [ 2 ]. IUT should be offered as a fetal therapy option for patients with ATM who wish to continue their pregnancies.
-
Timing of aspirin discontinuation in preeclampsia prophylaxis. Aspirin is widely used for preventing preeclampsia in high-risk pregnancies, but the optimal time to stop it before delivery is unclear. A randomized trial in Spain compared two strategies: stopping aspirin at 36 weeks or continuing it until delivery [ 3 ]. The trial found no significant difference between the groups in the incidence of preeclampsia or other maternal or neonatal outcomes. Thus, either approach may be reasonable depending on individual preferences and circumstances.
-
-
-
These are just some of the examples of the current trends in obstetrics that may influence clinical practice and research in 2022. Obstetricians should stay updated on the latest evidence and guidelines to provide the best care for their patients.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FULL Windows XP SP3 Angel Live V.2.0.iso The Features and Benefits of this Superb XP.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FULL Windows XP SP3 Angel Live V.2.0.iso The Features and Benefits of this Superb XP.md
deleted file mode 100644
index 6d44fd5c2a5d87848a89b1e15df7d7f4121c7b37..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FULL Windows XP SP3 Angel Live V.2.0.iso The Features and Benefits of this Superb XP.md
+++ /dev/null
@@ -1,74 +0,0 @@
-
-
What is Windows XP SP3 Angel Live V.2.0.iso?
-
Windows XP is one of the most popular and widely used operating systems in the world, even though it was released more than 20 years ago. However, Microsoft stopped supporting it in 2014, which means that it no longer receives security updates or bug fixes.
Fortunately, there are some unofficial versions of Windows XP that are still being maintained and updated by enthusiasts and developers who want to keep this operating system alive and functional.
-
One of these versions is Windows XP SP3 Angel Live V.2.0.iso, which is a modified and enhanced version of Windows XP that can run from a CD or a USB drive without installation.
-
This version of Windows XP has many features that make it faster, more stable, more secure, and more customizable than the original one.
-
In this article, we will show you what are these features, why you should choose this version of Windows XP, how to download it, how to install it, how to use it, and how to troubleshoot it.
-
Why choose Windows XP SP3 Angel Live V.2.0.iso?
-
There are many reasons why you might want to choose Windows XP SP3 Angel Live V.2.0.iso over other versions of Windows XP or other operating systems.
-
Here are some of the benefits of using this version of Windows XP:
-
-
Speed: This version of Windows XP is optimized for performance and runs faster than the original one. It has less bloatware and unnecessary services that slow down your system.
-
Stability: This version of Windows XP is more stable and reliable than the original one. It has fewer bugs and errors that cause crashes or freezes.
-
Security: This version of Windows XP is more secure than the original release. It bundles the updates and patches issued for Windows XP up to the end of its support, which fix many of the known vulnerabilities and exploits that affect the stock installation.
-
Customization: This version of Windows XP is more customizable than the original one. It has many tools and options that let you change the appearance and functionality of your system according to your preferences.
-
-
How to download Windows XP SP3 Angel Live V.2.0.iso?
-
If you want to try Windows XP SP3 Angel Live V.2.0.iso, you need to download the ISO file first.
-
An ISO file is an image file that contains all the data and files that are needed to create a bootable CD or USB drive.
-
You can download Windows XP SP3 Angel Live V.2.0.iso from various sources on the internet, but you need to be careful about where you get it from.
-
-
Some sources may provide fake or corrupted files that may harm your system or contain malware or viruses.
-
To avoid these risks, we recommend you download Windows XP SP3 Angel Live V.2.0.iso from a reliable source such as Archive.org or YouTube. These sources provide direct links to download the ISO file without any surveys or ads.
-
The size of the ISO file is about 633 MB, so make sure you have enough space on your hard drive or your USB drive before downloading it.
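If you want to double-check the free space before starting the download, a short script can do it. The following is a minimal sketch that uses only the Python standard library; the drive path and the 633 MB figure mentioned above are placeholders to adapt to your own setup.

```python
import shutil

# Drive (or folder) where you plan to save the ISO file.
# On Windows this might be "C:\\" or the letter of your USB drive.
target = "C:\\"

# Approximate size of the ISO mentioned above: 633 MB, expressed in bytes.
iso_size_bytes = 633 * 1024 * 1024

free_bytes = shutil.disk_usage(target).free
print(f"Free space on {target}: {free_bytes / (1024 ** 2):.0f} MB")

if free_bytes > iso_size_bytes:
    print("There is enough room for the download.")
else:
    print("Not enough space - free up some room first.")
```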
-
After downloading the ISO file, you need to verify its integrity by checking its checksum or hash value.
-
A checksum or hash value is a unique code that identifies a file based on its content.
-
If two files have the same checksum or hash value, it means they are identical.
-
If they have different checksums or hash values, it means they are different or corrupted.
-
You can use various tools such as MD5 & SHA Checksum Utility or HashTab to calculate and compare the checksum or hash value of your downloaded ISO file with the original one provided by the source.
-
If they match, it means your downloaded ISO file is valid and safe.
-
If they don't match, it means your downloaded ISO file is invalid or tampered with.
-
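As a concrete illustration of the comparison described above, here is a minimal Python sketch that computes the SHA-256 hash of a downloaded file so you can compare it with the value published by the source. The file name and the expected hash are placeholders; substitute the real ones for your own download.

```python
import hashlib

def sha256_of_file(path, chunk_size=1024 * 1024):
    """Compute the SHA-256 hash of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values - replace with your actual file name and the hash
# published by the source you downloaded from.
downloaded_file = "downloaded_image.iso"
expected_hash = "paste-the-published-hash-here"

actual_hash = sha256_of_file(downloaded_file)
print("Computed hash:", actual_hash)

if actual_hash.lower() == expected_hash.lower():
    print("Hashes match - the file appears to be intact.")
else:
    print("Hashes do NOT match - the file may be corrupted or tampered with.")
```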
How to install Windows XP SP3 Angel Live V.2.0.iso?
-
After downloading and verifying Windows XP SP3 Angel Live V.2.0.iso, you can install it on your system in two ways:
-
-
On a computer: You can install this version of Windows XP on a physical computer by burning the ISO file to a CD or a USB drive and booting from it.
-
On a virtual machine: You can install this version of Windows XP on a virtual machine by creating a virtual machine and mounting the ISO file as a virtual CD.
-
-
How to install Windows XP SP3 Angel Live V.2.0.iso on a computer?
-
To install Windows XP SP3 Angel Live V.2.0.iso on a computer, you need to burn the ISO file to a CD or a USB drive first.
-
You can use various tools such as ImgBurn or Rufus to burn the ISO file to a CD or a USB drive respectively.
-
You need to make sure that your CD or USB drive has enough space (at least 700 MB) and is formatted as FAT32.
-
You also need to make sure that your computer supports booting from a CD or a USB drive.
-
To do that, you need to access your computer's BIOS settings by pressing a specific key (usually F1, F2, F10, F12, ESC, DEL) during startup.
-
In your BIOS settings, you need to find the boot order option and set your CD or USB drive as the first boot device.
-
You can save your changes and exit your BIOS settings by pressing another specific key (usually F10).
-
Your computer will restart and boot from your CD or USB drive automatically.
-
How to install Windows XP SP3 Angel Live V.2.0.iso on a virtual machine?
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Grand Ages Rome Gold Edition Serial What You Need to Know Before You Buy.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Grand Ages Rome Gold Edition Serial What You Need to Know Before You Buy.md
deleted file mode 100644
index 815e24995d9fe9f780758696ebcbab29be06de36..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Grand Ages Rome Gold Edition Serial What You Need to Know Before You Buy.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
Grand Ages Rome Gold Edition Serial: How to Get It and Play the Game
- If you are a fan of strategy games set in historical periods, you might have heard of Grand Ages Rome. This is a city-building and management simulation game that lets you take control of one of the greatest civilizations in history. You can raise massive armies, embark on epic campaigns, expand your empire, and engage in grand-scale city building. You can also create magnificent cities with creativity and control like never before. But what if you want to play the enhanced version of the game, which includes the original Grand Ages Rome and its expansion pack, Reign of Augustus? This is where Grand Ages Rome Gold Edition comes in. This package offers more features, content, and gameplay options than the base game. For example, you can play as one of four new factions, access 12 new maps, build 6 new buildings, and enjoy improved graphics and performance. However, to play Grand Ages Rome Gold Edition, you need a valid serial number. This is a unique code that activates and registers your copy of the game. Without it, you won't be able to install or play the game properly. So how do you get a serial number for Grand Ages Rome Gold Edition? And how do you use it to install and play the game? In this article, we will answer these questions and more.
Why do you need a serial number for Grand Ages Rome Gold Edition?
A serial number is a sequence of letters and numbers that identifies your copy of the game. It is also known as a product key or an activation code. You need a serial number for Grand Ages Rome Gold Edition for two main reasons:

- To activate the game: This means verifying that your copy of the game is legitimate and not pirated. Activation is usually done online, by entering your serial number on a website or through a software client. Activation prevents unauthorized copying and distribution of the game.
- To register the game: This means creating an account that allows you to access online features of the game, such as multiplayer mode, leaderboards, achievements, and updates. Registration is usually done by entering your serial number and your email address on a website or through a software client.

If you don't have a valid serial number for Grand Ages Rome Gold Edition, you might encounter some problems when trying to install or play the game. For example:

- You might not be able to install the game at all, or only partially.
- You might not be able to launch or run the game properly.
- You might not be able to access online features or multiplayer mode.
- You might get error messages or warnings that your copy of the game is invalid or a duplicate.

Therefore, it is important to have a valid serial number for Grand Ages Rome Gold Edition if you want to enjoy the full experience of the game.
How to get a valid serial number for Grand Ages Rome Gold Edition?
There are two main ways to get a valid serial number for Grand Ages Rome Gold Edition: the official way and the unofficial way.

The official way is to buy the game from Steam or other authorized retailers. This is the legal and safe way to get a serial number for Grand Ages Rome Gold Edition. When you buy the game from Steam or other authorized retailers, you will receive a serial number along with your purchase confirmation. You can then use this serial number to activate and register your copy of the game.

The unofficial way is to use a crack or a keygen from online sources. This is an illegal and risky way to get a serial number for Grand Ages Rome Gold Edition. A crack is a file that modifies or bypasses the activation or registration process of the game. A keygen is a program that generates random serial numbers that might work for the game. When you download a crack or a keygen from online sources, you might be able to install and play the game without buying it. However, there are some drawbacks and dangers of using a crack or a keygen for Grand Ages Rome Gold Edition. For example:

- You might violate the terms of service or end-user license agreement of the game developer or publisher.
- You might infringe on the intellectual property rights or copyrights of the game developer or publisher.
- You might expose your computer to viruses, malware, spyware, or other harmful software that might damage your system or steal your personal information.
- You might not be able to access online features or multiplayer mode of the game.
- You might not be able to update or patch your copy of the game.
- You might not be able to get technical support or customer service from the game developer or publisher.

Therefore, it is advisable to avoid using a crack or a keygen for Grand Ages Rome Gold Edition if you want to avoid legal troubles or security risks.
How to install and play Grand Ages Rome Gold Edition with a serial number?
- Depending on whether you bought the game from Steam or other authorized retailers, or downloaded it from online sources, there are different steps for installing and playing Grand Ages Rome Gold Edition with a serial number. If you bought the game from Steam or other authorized retailers, here are the steps for installing and playing Grand Ages Rome Gold Edition with a serial number: - Download and install Steam on your computer if you don't have it already. - Launch Steam and log in with your account credentials. - Go to Library > Games > Add A Game > Activate A Product On Steam. - Enter your serial number for Grand Ages Rome Gold Edition when prompted. - Follow the instructions on screen to complete the activation process. - Once activated, you can download and install Grand Ages Rome Gold Edition from your Steam library. - Launch Grand Ages Rome Gold Edition from Steam and enjoy playing. Alternatively, if you bought a physical disc of Grand Ages Rome Gold Edition from an authorized retailer, here are the steps for installing and playing Grand Ages Rome Gold Edition with a serial number: - Insert your disc into your computer's CD/DVD drive. - Follow the instructions on screen to start the installation process. - Enter your serial number for Grand Ages Rome Gold Edition when prompted. - Follow the instructions on screen to complete the installation process. - Once installed, launch Grand Ages Rome Gold Edition from your desktop shortcut or start menu and enjoy playing. If you downloaded Grand Ages Rome Gold Edition from online sources along with a crack or a keygen file, here are the steps for installing and playing Grand Ages Rome Gold Edition with a serial number: - Extract your downloaded file using an archive program such as WinRAR or 7-Zip. - Run your keygen program and generate a random serial number for Grand Ages Rome Gold Edition. - Copy this serial number somewhere safe for later use. - Run your setup program and start installing Grand Ages Rome Gold Edition on your computer. - Enter your generated serial number when prompted during installation. - Follow any other instructions on screen to complete installation process. - Once installed, copy your crack file into your installation folder where your main executable file (Rome.exe) is located. Replace any existing files if asked. - Block your main executable file (Rome.exe) in your firewall program by creating an outbound rule that prevents it from accessing internet connection. This will prevent any online verification checks that might invalidate your copy of the game. - Launch Grand Ages Rome Gold Edition from your desktop shortcut or start menu and enjoy playing.
Conclusion: Enjoy the Grand Strategy Game Set in Ancient Rome
Grand Ages Rome Gold Edition is an amazing strategy game that lets you experience what it was like to be part of one of history's most powerful empires. You can build cities, wage wars, manage politics, and shape history as you see fit. However, to play this game, you need a valid serial number that activates and registers your copy of the game. You can get a serial number by buying the game from Steam or other authorized retailers, or by using a crack or a keygen from online sources. Each method has its own pros and cons, and you should be aware of the legal and security implications of using a crack or a keygen.

Once you have a serial number, you can install and play Grand Ages Rome Gold Edition by following the steps for your chosen method, whether you bought the game from Steam or other authorized retailers or downloaded it from online sources.

Now that you have installed Grand Ages Rome Gold Edition with a serial number, you can enjoy the grand strategy game set in ancient Rome. You can choose from five different families, each with their own traits and abilities. You can also customize your character's appearance, skills, and talents. You can explore a vast map that covers Europe, Africa, and Asia. You can build and manage cities with over 40 different buildings and 50 different units. You can engage in real-time battles with thousands of soldiers and hundreds of weapons. You can also participate in historical events and scenarios that will shape the fate of Rome.

Grand Ages Rome Gold Edition is a game that will challenge your strategic thinking and immerse you in a rich historical setting. With its stunning graphics, realistic sound effects, and captivating gameplay, Grand Ages Rome Gold Edition is a game that you will not regret playing.
FAQs
Here are some frequently asked questions about the Grand Ages Rome Gold Edition serial:

Q: Where can I buy Grand Ages Rome Gold Edition?
A: You can buy Grand Ages Rome Gold Edition from Steam or other authorized retailers such as Amazon, GOG.com, or Humble Bundle.

Q: How much does Grand Ages Rome Gold Edition cost?
A: Grand Ages Rome Gold Edition costs $14.99 on Steam, but it is often on sale for a lower price.

Q: What are the system requirements for Grand Ages Rome Gold Edition?
A: The minimum system requirements are:
- OS: Windows XP or Vista
- Processor: 2.5 GHz single-core processor
- Memory: 1 GB RAM
- Graphics: 128 MB 3D video card (GeForce 6600/Radeon 9600 or better)
- DirectX: Version 9.0c
- Storage: 4 GB available space
- Sound card: DirectX compatible

The recommended system requirements are:
- OS: Windows XP or Vista
- Processor: 2.5 GHz dual-core processor
- Memory: 2 GB RAM
- Graphics: 256 MB 3D video card (GeForce 8800/Radeon HD2900 or better)
- DirectX: Version 9.0c
- Storage: 4 GB available space
- Sound card: DirectX compatible

Q: How many players can play Grand Ages Rome Gold Edition online?
A: Grand Ages Rome Gold Edition supports up to four players in online multiplayer mode.

Q: What are the differences between Grand Ages Rome and Grand Ages Rome Gold Edition?
A: Grand Ages Rome Gold Edition includes the original Grand Ages Rome and its expansion pack, Reign of Augustus. The expansion pack adds four new factions, 12 new maps, six new buildings, improved graphics and performance, and more gameplay options.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Clash Royale for Windows 11 The Ultimate Guide to Install and Play.md b/spaces/1phancelerku/anime-remove-background/Clash Royale for Windows 11 The Ultimate Guide to Install and Play.md
deleted file mode 100644
index fdddb5dc2f75d0a1082c4b58c56dc1ded041ad12..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Clash Royale for Windows 11 The Ultimate Guide to Install and Play.md
+++ /dev/null
@@ -1,164 +0,0 @@
-
-
How to Download and Play Clash Royale on Windows 11
-
Are you a fan of strategy games that are fast-paced, fun, and competitive? Do you want to experience a new way of playing your favorite mobile game on your PC? If you answered yes to both questions, then you should definitely try out Clash Royale on Windows 11.
Clash Royale is a real-time multiplayer game developed and published by Supercell, the makers of the popular Clash of Clans. In this game, you collect and upgrade cards that feature characters, spells, and defenses from the Clash universe. You use these cards to battle other players online in a three-minute match where the goal is to destroy their towers and win trophies, crowns, and glory.
-
The game has over 90 unique cards that belong to different rarities, types, and arenas. You can create your own battle deck with up to eight cards and customize it according to your play style and strategy. You can also join or form a clan with other players to share cards, chat, and participate in clan wars for big rewards.
-
Clash Royale is constantly updated with new features, events, and challenges that keep the game fresh and exciting. You can unlock new cards, arenas, skins, emotes, magic items, and more as you progress through the game. You can also compete in global tournaments, seasonal events, special modes, and ladder matches to test your skills against the best players in the world.
-
Why play Clash Royale on Windows 11?
-
The benefits of playing on a larger screen, better graphics, and smoother controls
-
While Clash Royale is primarily designed for mobile devices, playing it on Windows 11 can offer you some advantages that can enhance your gaming experience. Here are some of them:
-
-
-
You can enjoy the game on a larger screen with higher resolution and better graphics. This can help you see the details of the cards, units, towers, and arena more clearly. It can also make the game more immersive and engaging.
-
You can use your mouse and keyboard to control the game instead of tapping on a small touchscreen. This can give you more accuracy, speed, and comfort when playing. You can also customize your key bindings and settings according to your preference.
-
You can avoid battery drain, overheating, lagging, or crashing issues that may occur on some mobile devices. Playing on Windows 11 can ensure that your PC runs smoothly and efficiently without compromising your performance or enjoyment.
-
How to Download and Install Clash Royale on Windows 11
-
The minimum system requirements for Windows 11 and Clash Royale
-
Before you can download and play Clash Royale on Windows 11, you need to make sure that your PC meets the minimum system requirements for both the operating system and the game. Here are the specifications you need to check:
-
Windows 11 (minimum):
- Processor: 1 GHz or faster with 2 or more cores on a compatible 64-bit processor or System on a Chip (SoC)
- RAM: 4 GB
- Storage: 64 GB or larger storage device
- Graphics card: Compatible with DirectX 12 or later with WDDM 2.0 driver
- Display: High definition (720p) display that is greater than 9" diagonally, 8 bits per color channel
- Internet connection: Required for updates and some features
-
Clash Royale (minimum):
- Android version: 4.1 and up
- RAM: 1 GB (recommended)
- Storage: 116 MB (additional files may be downloaded)
- Graphics: OpenGL ES 3.0 support (recommended)
- Internet connection: Required to play online
-
If your PC meets or exceeds these requirements, you can proceed to the next step. If not, you may need to upgrade your hardware or look for other alternatives.
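If you want a quick programmatic sanity check of a few of the minimums listed above (RAM, free disk space, and CPU core count), the short sketch below shows one way to do it. It assumes the third-party psutil package is installed (pip install psutil), and it uses free disk space only as a rough proxy for the 64 GB storage-device requirement; the thresholds are simply copied from the Windows 11 column above.

```python
import os
import shutil

import psutil  # third-party; install with: pip install psutil

# Thresholds taken from the Windows 11 minimums listed above.
MIN_RAM_GB = 4
MIN_CPU_CORES = 2
MIN_FREE_DISK_GB = 64  # rough proxy: the requirement is a 64 GB storage device

ram_gb = psutil.virtual_memory().total / (1024 ** 3)
drive = "C:\\" if os.name == "nt" else "/"
free_gb = shutil.disk_usage(drive).free / (1024 ** 3)
cores = os.cpu_count() or 0

print(f"RAM: {ram_gb:.1f} GB, free disk on {drive}: {free_gb:.1f} GB, CPU cores: {cores}")

checks = {
    "RAM": ram_gb >= MIN_RAM_GB,
    "CPU cores": cores >= MIN_CPU_CORES,
    "Free disk space": free_gb >= MIN_FREE_DISK_GB,
}
for name, ok in checks.items():
    print(f"{name}: {'OK' if ok else 'below the minimum'}")
```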
-
The steps to download and install an Android emulator (Bluestacks 5) on Windows 11
-
An Android emulator is a software that allows you to run Android apps and games on your PC. There are many Android emulators available online, but one of the most popular and reliable ones is Bluestacks 5. Bluestacks 5 is the latest version of the Bluestacks app player that offers improved performance, compatibility, and features for Windows 11 users.
-
To download and install Bluestacks 5 on Windows 11, follow these steps:
Go to the official Bluestacks website, click on the Download Bluestacks 5 button, and wait for the installer file to download.
-
Double-click on the installer file and follow the instructions on the screen to install Bluestacks 5 on your PC.
-
Once the installation is complete, launch Bluestacks 5 from your desktop or start menu.
-
Sign in with your Google account or create a new one if you don't have one.
-
You are now ready to use Bluestacks 5 and access the Google Play Store.
-
-
The steps to download and install Clash Royale from the Google Play Store on Bluestacks 5
-
Now that you have Bluestacks 5 installed on your PC, you can easily download and install Clash Royale from the Google Play Store. Here are the steps to do so:
-
-
On the Bluestacks home screen, click on the Google Play Store icon.
-
In the search bar, type Clash Royale and hit enter.
-
Select Clash Royale from the list of results and click on the Install button.
-
Wait for the game to download and install on your PC.
-
Once the installation is done, click on the Open button or go back to the Bluestacks home screen and click on the Clash Royale icon.
-
You can now enjoy playing Clash Royale on your PC with Bluestacks 5.
-
-
By using buildings, spells, and high HP troops to defend your towers, you can prevent your opponent from gaining an elixir or tower advantage and turn the tide of the battle in your favor. You can also save your towers from being destroyed and losing the game.
-
Use a win condition card to target enemy towers
-
A fifth way to improve your gameplay in Clash Royale is to use a win condition card to target enemy towers. A win condition card is a card that can directly or indirectly deal damage to enemy towers and help you win the game. Some examples of win condition cards are Hog Rider, Royal Giant, Graveyard, Miner, Goblin Barrel, and X-Bow. These cards have different strengths and weaknesses, but they all share the same goal: to destroy enemy towers.
-
By using a win condition card to target enemy towers, you can increase your chances of winning the game by dealing consistent and significant damage to your opponent's towers. You can also force your opponent to react and spend elixir to defend their towers, which can give you an elixir or tower advantage.
-
Conclusion
-
A summary of the main points and a call to action for the readers to try out Clash Royale on Windows 11
-
In conclusion, Clash Royale is a fun and addictive game that you can enjoy on Windows 11 with the help of an Android emulator like Bluestacks 5. By playing Clash Royale on Windows 11, you can benefit from a larger screen, better graphics, and smoother controls. You can also improve your gameplay by following some tips and tricks, such as joining a clan, attacking in pairs, counting elixir, defending your towers, and using a win condition card.
-
If you are interested in trying out Clash Royale on Windows 11, you can download and install Bluestacks 5 from their official website and then download and install Clash Royale from the Google Play Store on Bluestacks 5. You can then start playing Clash Royale on your PC and have a blast with your friends and foes.
-
What are you waiting for? Download Clash Royale on Windows 11 today and join the millions of players who are already enjoying this amazing game!
-
FAQs
-
What are the best cards in Clash Royale?
-
There is no definitive answer to this question, as different cards may suit different players, decks, strategies, and situations. However, some of the most popular and versatile cards in Clash Royale are:
-
-
Mega Knight: A legendary card that costs 7 elixir and can deal massive damage with its jump and splash attacks. It can counter swarms, tanks, buildings, and ground troops effectively.
-
Skeleton Dragons: A common card that costs 4 elixir and spawns two flying skeletons that shoot fireballs. It can deal decent damage to air and ground troops and buildings.
-
Mother Witch: A legendary card that costs 4 elixir and shoots cursed bolts that turn enemy troops into hogs when they die. It can create a swarm of hogs that can overwhelm the enemy's defense.
-
Royal Delivery: A rare card that costs 3 elixir and drops a crate that deals area damage and spawns a Royal Recruit. It can be used to surprise and counter enemy troops or buildings.
-
Goblin Cage: A rare card that costs 4 elixir and spawns a building that releases a Goblin Brawler when destroyed. It can be used to lure and distract enemy troops or deal damage to enemy towers.
-
How do I get more gems and gold in Clash Royale?
-
Gems and gold are two of the most important resources in Clash Royale, as they allow you to buy chests, cards, magic items, emotes, skins, and more. There are several ways to get more gems and gold in Clash Royale:
-
-
Complete quests, achievements, challenges, tournaments, clan wars, seasonal events, special modes, ladder matches, etc. These activities can reward you with gems, gold, chests, magic items, etc.
-
Open chests that you get from battles or quests. These chests can contain gems, gold, cards, magic items, etc.
-
Donate or request cards from your clanmates. This can earn you gold and XP for each card you donate or request.
-
Buy gems or gold from the shop with real money. This is the fastest but most expensive way to get more gems and gold in Clash Royale.
-
-
How do I join or create a clan in Clash Royale?
-
Joining or creating a clan in Clash Royale is a great way to interact with other players, share cards, chat, and participate in clan wars. To join or create a clan in Clash Royale, you need to reach at least level 1 in the game. You can then follow these steps:
-
-
Tap on the Clan tab on the main screen.
-
Tap on the Join a Clan button to browse or search for a clan that suits your preferences. You can filter the clans by name, location, trophy requirement, type, etc.
-
Tap on the Request to Join button to send a request to the clan leader or co-leader. You can also write a message to introduce yourself and explain why you want to join the clan.
-
Wait for the clan leader or co-leader to accept or reject your request. If they accept your request, you will become a member of the clan and be able to access the clan chat, shop, wars, etc.
-
If you want to create your own clan instead of joining an existing one, you can tap on the Create a Clan button instead of the Join a Clan button. You will need to spend 1000 gold to create a clan.
-
You can then choose a name, badge, location, type, trophy requirement, description, and tag for your clan. You can also invite your friends or family to join your clan or accept requests from other players who want to join your clan.
-
You will become the leader of your clan and be able to manage it as you wish. You can promote or demote members, start or cancel clan wars, edit the clan settings, etc.
-
-
How do I change my name or avatar in Clash Royale?
-
Changing your name or avatar in Clash Royale is a simple and quick process that can help you personalize your profile and express your identity. To change your name or avatar in Clash Royale, follow these steps:
-
-
Tap on your profile icon on the top left corner of the main screen.
-
Tap on the Name Change button or the Edit Avatar button depending on what you want to change.
-
If you want to change your name, you can enter a new name in the text box and tap on the Confirm button. You can only change your name once for free, so choose wisely. If you want to change your name again, you will need to spend 500 gems.
-
If you want to change your avatar, you can choose from a variety of avatars that feature different characters, animals, objects, etc. You can also unlock more avatars by completing achievements, challenges, events, etc. Tap on the avatar that you like and tap on the Select button.
-
Your name or avatar will be changed immediately and be visible to other players in the game.
-
-
How do I contact Supercell for support or feedback?
-
If you have any issues, questions, suggestions, or feedback regarding Clash Royale or any other Supercell game, you can contact Supercell for support or feedback through their official channels. Here are some ways to do so:
-
-
You can use the in-game support feature by tapping on the Settings icon on the top right corner of the main screen and then tapping on the Help and Support button. You can then browse through the frequently asked questions (FAQs) or contact Supercell directly by tapping on the Contact Us button.
-
You can visit the official website of Supercell at https://supercell.com/en/ and click on the Contact Us link at the bottom of the page. You can then fill out a form with your details and message and submit it to Supercell.
-
You can follow Supercell on their social media platforms such as Facebook, Twitter, Instagram, YouTube, Reddit, Discord, etc. You can then send them a message or comment on their posts with your feedback or inquiry.
-
Supercell is usually responsive and helpful when it comes to addressing their players' concerns and opinions. However, please be respectful and polite when contacting them and avoid spamming or abusing them.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Nguwe by Q-Mark TpZee Afriikan Papi - Amapiano Mp3 2022.md b/spaces/1phancelerku/anime-remove-background/Download Nguwe by Q-Mark TpZee Afriikan Papi - Amapiano Mp3 2022.md
deleted file mode 100644
index 00c1eeb0b0ac8803720898d9db480238eb0db8d7..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Nguwe by Q-Mark TpZee Afriikan Papi - Amapiano Mp3 2022.md
+++ /dev/null
@@ -1,83 +0,0 @@
-
-
How to Download Q Mark Nguwe Mp3
-
Q Mark Nguwe mp3 is a hit song by South African artists Q-Mark, TpZee, and Afriikan Papi. It is a love-themed track with a nostalgic eighties dance feel, a simple baseline, and smooth vocals. The song has been streamed millions of times on various platforms, such as YouTube, Spotify, Apple Music, and more. If you are a fan of this song and want to download it as an mp3 file, you might be wondering how to do it.
Downloading mp3 files has many advantages. You can listen to your favorite music offline, without using data or Wi-Fi. You can also transfer the files to different devices, such as your phone, tablet, computer, or mp3 player. You can also create playlists, edit tags, and customize your music library.
-
There are different ways to download mp3 files, depending on your device, budget, and preference. In this article, we will show you three main methods to download Q Mark Nguwe mp3: buying music on desktop with iTunes, downloading music for free from YouTube and SoundCloud, and downloading music from other websites or apps. Let's get started!
-
Method 1: Buying Music on Desktop with iTunes
-
If you have a Windows or Mac computer, you can use iTunes to buy and download Q Mark Nguwe mp3. iTunes is a software that allows you to manage your music library, sync your devices, and access the iTunes Store. Here are the steps to follow:
-
-
-
Install iTunes and sign in with Apple ID. If you are using a Mac, iTunes is already installed on your computer. If you are using Windows, you need to download and install iTunes from [17](http://www.apple.com/itunes/download). You also need to create an Apple ID account and enter payment information for it before you can buy music from iTunes.
-
Search for music and buy it with iTunes. Open iTunes and click Store at the top of the window. In the search bar, type in Q Mark Nguwe mp3 or any other song, album, or artist you want. Select the music you want to buy and click the price button next to it. Enter your Apple ID password or use Touch ID if you have a MacBook with a Touch Bar.
-
View and transfer the music files on Windows or Mac. After buying the music, it will be added to your iTunes library automatically. You can view the files by clicking Library at the top of the window. You can also transfer the files to different devices by connecting them to your computer with a USB cable or using iCloud Music Library if you have an Apple Music subscription.
-
-
Method 2: Downloading Music for Free from YouTube and SoundCloud
-
If you don't want to spend money on buying music, you can also download Q Mark Nguwe mp3 for free from YouTube or SoundCloud. These are two popular platforms that host millions of music videos and audio tracks. However, you need to use a third-party website or app to convert and download the mp3 file. Here are the steps to follow:
-
-
Find and copy the link of the music video or audio track. Go to YouTube or SoundCloud and search for Q Mark Nguwe mp3 or any other song you want. Select the video or track you want and copy the link from the address bar of your browser.
-
Use a third-party website or app to convert and download the mp3 file. There are many websites and apps that allow you to convert and download mp3 files from YouTube or SoundCloud, such as [16](https://ytmp3.cc/en13/), [15](https://www.4kdownload.com/products/product-youtubetomp3), [14](https://sclouddownloader.net/), and [13](https://soundcloudmp3.org/). Choose one that suits your needs and preferences, and paste the link you copied in the input box. Click the convert or download button and wait for the process to finish.
-
Check the quality and legality of the downloaded file. After downloading the mp3 file, you can check its quality by playing it on your device or using a tool like [12](https://spek.cc/); a small script for the same check is sketched after this list. You can also check its legality by reading the terms and conditions of the website or app you used, and the license of the original music. Some music may be protected by copyright laws, which means you cannot download or use it without permission from the owner.
-
-
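For the quality check mentioned in the last step above, you can also read the bitrate and duration of the downloaded file from a script instead of a GUI tool. This is a rough sketch that assumes the third-party mutagen package is installed (pip install mutagen); the file name is just a placeholder.

```python
from mutagen.mp3 import MP3  # third-party; install with: pip install mutagen

# Placeholder file name - replace with the mp3 file you downloaded.
audio = MP3("q-mark-nguwe.mp3")

bitrate_kbps = audio.info.bitrate // 1000
duration_min = audio.info.length / 60

print(f"Bitrate:  {bitrate_kbps} kbps")
print(f"Duration: {duration_min:.1f} minutes")

# Rule of thumb: 128 kbps is acceptable, 192 kbps is good,
# and 256-320 kbps is about the best quality mp3 offers.
if bitrate_kbps < 128:
    print("This file is fairly low quality.")
```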
Method 3: Downloading Music from Other Websites or Apps
-
If you are not satisfied with iTunes, YouTube, or SoundCloud, you can also download Q Mark Nguwe mp3 from other websites or apps that offer mp3 downloads. However, you need to be careful when choosing these sources, as some of them may be unreliable, unsafe, or illegal. Here are some tips to follow:
-
-
Search for reliable and safe websites or apps that offer mp3 downloads. You can use a search engine like Bing to find websites or apps that offer mp3 downloads. You can also use a tool like [11](https://www.virustotal.com/gui/home/url) to scan the URL of the website or app before visiting it. You can also read reviews and ratings from other users to see if they are trustworthy and secure.
-
Choose the best format and quality for your device and preference. Different websites or apps may offer different formats and qualities for mp3 downloads, such as 128 kbps, 192 kbps, or 320 kbps. The higher the bitrate, the better the sound quality, but also the larger the file size (a worked example of this trade-off follows this list). You should choose a format and quality that matches your device's storage capacity and your listening preference.
-
Avoid malware and viruses when downloading mp3 files. Some websites or apps may try to trick you into downloading unwanted software, malware, or viruses along with the mp3 files. You should avoid clicking on pop-ups, ads, or suspicious links that appear on these sources. You should also use an antivirus software like [10](https://www.microsoft.com/en-us/windows/comprehensive-security) to scan your device regularly and remove any threats.
-
-
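To put the bitrate and file-size trade-off from the tips above into numbers: an mp3's approximate size is bitrate (in kilobits per second) × duration (in seconds) ÷ 8, which gives kilobytes. Assuming a 3.5-minute (210-second) track purely for illustration, a 320 kbps file works out to about 320 × 210 ÷ 8 ≈ 8,400 KB (roughly 8.4 MB), while a 128 kbps file comes to about 128 × 210 ÷ 8 ≈ 3,360 KB (roughly 3.4 MB).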
Conclusion
-
In this article, we have shown you three main methods to download Q Mark Nguwe mp3: buying music on desktop with iTunes, downloading music for free from YouTube and SoundCloud, and downloading music from other websites or apps. Each method has its pros and cons, so you should choose the one that suits your needs and preferences best.
-
We hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!
-
Frequently Asked Questions
-
What is Q Mark Nguwe mp3?
-
Q Mark Nguwe mp3 is a hit song by South African artists Q-Mark, TpZee, and Afriikan Papi. It is a love-themed track with a nostalgic eighties dance feel, a simple baseline, and smooth vocals.
-
Why should I download mp3 files?
-
Downloading mp3 files has many advantages. You can listen to your favorite music offline, without using data or Wi-Fi. You can also transfer the files to different devices, such as your phone, tablet, computer, or mp3 player. You can also create playlists, edit tags, and customize your music library.
-
How can I buy music on desktop with iTunes?
-
You can buy music on desktop with iTunes by installing iTunes on your Windows or Mac computer, signing in with your Apple ID account, searching for music and buying it with iTunes, and viewing and transferring the music files on Windows or Mac.
-
How can I download music for free from YouTube and SoundCloud?
-
You can download music for free from YouTube and SoundCloud by finding and copying the link of the music video or audio track, using a third-party website or app to convert and download the mp3 file, and checking the quality and legality of the downloaded file.
-
How can I download music from other websites or apps?
-
You can download music from other websites or apps by searching for reliable and safe websites or apps that offer mp3 downloads, choosing the best format and quality for your device and preference, and avoiding malware and viruses when downloading mp3 files.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Traffic Racer MOD APK for iOS and Enjoy Unlimited Money.md b/spaces/1phancelerku/anime-remove-background/Download Traffic Racer MOD APK for iOS and Enjoy Unlimited Money.md
deleted file mode 100644
index 6793ac4027528b403a483f41bce92c7c5616c975..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Traffic Racer MOD APK for iOS and Enjoy Unlimited Money.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
Traffic Racer Mod APK for iOS: How to Install and Play
-
If you are looking for a fun and addictive racing game that will keep you entertained for hours, you might want to check out traffic racer mod apk. This is a modified version of the popular traffic racer game that offers unlimited money, unlocked cars, and other features that make the game more enjoyable. But what if you want to play this game on your iOS device? Is it possible to install and run traffic racer mod apk on iOS? In this article, we will answer these questions and show you how to install and play traffic racer mod apk on iOS devices. We will also tell you about the benefits and features of this game and some frequently asked questions.
-
What is Traffic Racer Mod APK?
-
Traffic Racer is a milestone in the genre of endless arcade racing. It is a game where you drive your car through highway traffic, earn cash, upgrade your car, and buy new ones. You can try to be one of the fastest drivers in the global leaderboards and enjoy the stunning 3D graphics and smooth car handling. The game has over 40 different cars, 5 detailed environments, and 5 game modes to choose from. You can also customize your car through paint and wheels and compete with other players online.
Traffic Racer Mod APK is a modified version of the original game that gives you some extra features that are not available in the official version. For example, you can get unlimited money to buy any car you want, unlock all cars and levels, remove ads, and enjoy faster gameplay. These features make the game more fun and exciting, as you can drive any car you like and race without any limitations.
-
Why Would Someone Want to Play Traffic Racer Mod APK on iOS?
-
There are many reasons why someone would want to play traffic racer mod apk on iOS devices. Some of them are:
-
-
They love racing games and want to try something new and different.
-
They want to enjoy the benefits and features of the modded version without spending any money.
-
They want to challenge themselves and compete with other players online.
-
They want to have fun and kill some time.
-
-
However, there is one problem: traffic racer mod apk is not available on the Apple App Store. This means that you cannot download and install it directly from there. So, how can you play this game on your iOS device? There are two methods that you can use:
-
How to Install Traffic Racer Mod APK on iOS Devices
-
Method 1: Jailbreak Your Device and Use Cydia
-
The first method is to jailbreak your device and use Cydia. Jailbreaking is a process that allows you to modify the file system of your device and install custom applications that are not authorized by Apple. Cydia is an app store for jailbroken devices that lets you download and install various apps, tweaks, themes, and mods.
-
To use this method, you need to follow these steps:
-
-
Jailbreak your device using a tool like Checkra1n or Unc0ver. You can find tutorials online on how to do this.
-
Open Cydia and add a source that has traffic racer mod apk. You can search online for such sources or use this one: [10](https://oceanofgamesu.com/traffic-racer-mod-apk-download).
-
Search for traffic racer mod apk in the search bar and tap on the install button.
-
Wait for the installation to finish and then launch the game from your home screen.
-
Enjoy playing traffic racer mod apk on your iOS device.
-
-
This method is easy and fast, but it has some drawbacks. First, you need to jailbreak your device, which can void your warranty and expose your device to security risks. Second, you need to find a reliable source that has traffic racer mod apk, which can be hard to do. Third, you may encounter some compatibility issues or bugs while playing the game.
-
Method 2: Find the IPA Equivalent and Use Cydia Impactor
-
The second method is to find the IPA equivalent of traffic racer mod apk and use Cydia Impactor. IPA is the file format for iOS applications that can be installed on your device using a computer. Cydia Impactor is a tool that allows you to sideload IPA files onto your device without jailbreaking it.
-
To use this method, you need to follow these steps:
-
-
-
Find the IPA equivalent of traffic racer mod apk. You can search online for such files or use this one: [9](https://iosninja.io/ipa-library/download-traffic-racer-hack-ipa-ios).
-
Download Cydia Impactor from [8](https://cydiaimpactor.com) and install it on your computer.
-
Connect your iOS device to your computer using a USB cable and launch Cydia Impactor.
-
Drag and drop the IPA file onto Cydia Impactor and enter your Apple ID and password when prompted.
-
Wait for the installation to finish and then trust the app from your device settings.
-
Launch the game from your home screen and enjoy playing traffic racer mod apk on your iOS device.
-
-
This method is safer and more reliable than the first one, but it has some limitations. First, you need a computer and a USB cable to perform it. Second, you have to enter your Apple ID and password into a third-party tool, which carries some risk, so many people use a secondary Apple ID for sideloading rather than their main account. Third, you need to trust the app in your device settings, and the certificate used to sign it can be revoked by Apple at any time.
-
Benefits and Features of Traffic Racer Game
-
Whether you use the first or the second method, you will be able to enjoy the benefits and features of traffic racer game on your iOS device. Some of them are:
-
-
Stunning 3D graphics and realistic car handling: The game has amazing 3D graphics that make you feel like you are driving in real life. The cars have realistic physics and sound effects that enhance the gameplay experience.
-
Over 40 different cars and 5 game modes to choose from: The game has a variety of cars that you can drive, from sedans and sports cars to trucks and buses. You can also choose from 5 different game modes, such as endless, two-way, time trial, police chase, and free ride.
-
Customization options and online leaderboards: The game allows you to customize your car through paint and wheels and to upgrade its speed, handling, and braking. You can also compete with other players online through the global leaderboards and see how you rank among them.
-
-
Conclusion
-
Traffic Racer Mod APK is a great racing game that you can play on your iOS device. It offers unlimited money, unlocked cars, and other features that make the game more fun and exciting. However, since it is not available on the App Store, you need to use either jailbreaking or sideloading methods to install it on your device. Both methods have their pros and cons, so you need to choose the one that suits you best. Once you install the game, you can enjoy its benefits and features and have a blast driving through highway traffic.
-
FAQs
-
What are the risks of installing traffic racer mod apk on iOS devices?
-
The risks of installing traffic racer mod apk on iOS devices depend on the method that you use. If you use jailbreaking, you may void your warranty, expose your device to security risks, or encounter compatibility issues or bugs. If you use sideloading, you may risk your Apple ID and password, or lose access to the app if Apple revokes it.
-
How can I update traffic racer mod apk on iOS devices?
-
To update traffic racer mod apk on iOS devices, you need to follow the same steps that you used to install it. You need to find the latest version of the modded file (either apk or ipa) and install it using the same tool (either Cydia or Cydia Impactor). You may need to delete the previous version of the game before installing the new one.
-
How can I get unlimited money in traffic racer mod apk?
-
To get unlimited money in traffic racer mod apk, you do not need to do anything special. The modded version of the game already gives you unlimited money to buy and upgrade any car you want. You can also earn more money by playing the game and completing missions.
-
What are some tips and tricks for playing traffic racer game?
-
Some tips and tricks for playing traffic racer game are:
-
-
Drive faster to earn more points and cash.
-
Use the nitro boost to overtake other cars and avoid collisions.
-
Drive in the opposite direction in two-way mode to earn extra points and cash.
-
Do not crash into other cars or obstacles, as this will damage your car and reduce your speed.
-
Try different cars and game modes to find the one that suits your style.
-
-
What are some alternatives to traffic racer game?
-
If you are looking for some alternatives to traffic racer game, you can try these games:
-
-
Traffic Rider: This is a similar game where you ride a motorcycle instead of a car. You can enjoy the first-person view, realistic bike sounds, and over 30 different bikes.
-
Traffic Tour: This is another racing game where you drive through traffic, perform stunts, and challenge other players online. You can also customize your car, change the camera view, and use different controls.
-
Traffic Run: This is a casual game where you tap to control your car and avoid hitting other vehicles or obstacles. You can also collect coins, unlock new cars, and explore different environments.
-
-
I hope you enjoyed this article and learned how to install and play traffic racer mod apk on iOS devices. If you have any questions or feedback, please leave a comment below. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of KOF M.U.G.E.N 2020 on Your Smartphone.md b/spaces/1phancelerku/anime-remove-background/Experience the Thrill of KOF M.U.G.E.N 2020 on Your Smartphone.md
deleted file mode 100644
index 63e4dbe50610801911e7e45657b433cbd17dc892..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Experience the Thrill of KOF M.U.G.E.N 2020 on Your Smartphone.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
KOF M.U.G.E.N 2020 Download APK: How to Play the Ultimate Fighting Game on Your Android Device
-
Do you love fighting games? Do you want to play one of the most popular and customizable fighting games on your Android device? If you answered yes to both questions, then you should definitely try KOF M.U.G.E.N 2020 APK.
KOF M.U.G.E.N 2020 APK is a fan-made game that combines characters, stages, music, and gameplay from various SNK franchises such as The King of Fighters, Fatal Fury, Art of Fighting, Samurai Shodown, Metal Slug, and more. It is based on the M.U.G.E.N engine, which allows anyone to create their own fighting games with ease.
-
In this article, we will tell you everything you need to know about KOF M.U.G.E.N 2020 APK, including what it is, how to download and install it on your Android device, how to customize and edit it according to your preferences, and why you should give it a try. We will also answer some frequently asked questions about KOF M.U.G.E.N 2020 APK at the end of this article.
-
What is KOF M.U.G.E.N 2020?
-
A Brief History of KOF M.U.G.E.N
-
KOF M.U.G.E.N is a series of fan-made games that was started in 2002 by a group of Brazilian fans who wanted to create their own version of The King of Fighters, a popular fighting game franchise by SNK. They used the M.U.G.E.N engine, a free game engine that allows anyone to create 2D fighting games with custom characters, stages, music, and gameplay.
-
Over the years, KOF M.U.G.E.N has evolved and improved, adding more characters, stages, modes, and features from various SNK games and other sources. KOF M.U.G.E.N 2020 is the latest and most advanced version of the series, featuring over 200 characters, over 100 stages, and many options and settings to customize the game to your liking.
KOF M.U.G.E.N 2020 is a 2D fighting game that follows the same basic rules and mechanics as The King of Fighters. You can choose from several modes, such as Arcade, Team Battle, Survival, Training, Watch, and more. You can also choose from different types of teams, such as Single, Simul, Turns, or Tag.
-
The gameplay of KOF M.U.G.E.N 2020 is fast-paced and fluid, with smooth animations and responsive controls. You can perform various moves and combos with your characters, such as punches, kicks, throws, special moves, super moves, and ultimate moves. You can also use different systems and mechanics, such as Power Gauge, Max Mode, Guard Cancel, Counter Attack, Roll Escape, and more.
-
KOF M.U.G.E.N 2020 also has many features that make it unique and fun to play. For example, you can adjust the difficulty level, the number of rounds, the time limit, the damage ratio, the life recovery rate, and other options. You can also enable or disable certain features, such as AI mode, debug mode, cheats mode, auto guard mode, and more. You can also change the screen resolution, the sound volume, the language, the input configuration, and other settings.
-
Characters and Stages of KOF M.U.G.E.N 2020
KOF M.U.G.E.N 2020 has a huge selection of characters that you can play as or against. There are over 200 characters from various SNK games and other sources. You can find characters from The King of Fighters series (such as Iori Yagami, Terry Bogard, Mai Shiranui, etc.), Fatal Fury series (such as Geese Howard, Andy Bogard, Kim Kaphwan, etc.), Art of Fighting series (such as Ryo Sakazaki, Robert Garcia, Yuri Sakazaki, etc.), Samurai Shodown series (such as Haohmaru, Nakoruru, Genjuro Kibagami, etc.), Metal Slug series (such as Marco Rossi, Fio Germi, Tarma Roving, etc.), and more. You can also find characters from other games and media, such as Street Fighter, Mortal Kombat, Dragon Ball, Naruto, Bleach, One Piece, Marvel, DC, and more.
-
KOF M.U.G.E.N 2020 also has a large selection of stages that you can fight on. There are over 100 stages from various SNK games and other sources. You can find stages from The King of Fighters series (such as Esaka, Korea, China, etc.), Fatal Fury series (such as South Town, Pao Pao Cafe, Geese Tower, etc.), Art of Fighting series (such as Kyokugen Dojo, L'Amor Restaurant, Glass Hill Valley, etc.), Samurai Shodown series (such as Gairyu Isle, Amakusa Castle, Shimabara Hell Gate, etc.), Metal Slug series (such as Mission 1, Mission 2, Mission 3, etc.), and more. You can also find stages from other games and media, such as Street Fighter, Mortal Kombat, Dragon Ball, Naruto, Bleach, One Piece, Marvel, DC, and more.
-
How to Download and Install KOF M.U.G.E.N 2020 APK on Your Android Device
-
If you want to play KOF M.U.G.E.N 2020 APK on your Android device, you will need to download and install it first. Here are the requirements and compatibility information that you should know before downloading and installing KOF M.U.G.E.N 2020 APK:
-
Requirements and Compatibility
-
KOF M.U.G.E.N 2020 APK is a large file that requires a lot of storage space and memory to run smoothly. You will need at least 2 GB of free storage space on your Android device to download and install KOF M.U.G.E.N 2020 APK. You will also need at least 1 GB of RAM to play KOF M.U.G.E.N 2020 APK without lag or crashes.
-
KOF M.U.G.E.N 2020 APK is compatible with most Android devices that run on Android 4.4 or higher. However, some devices may not be able to run KOF M.U.G.E.N 2020 APK properly due to hardware limitations or software issues. If you encounter any problems while playing KOF M.U.G.E.N 2020 APK on your Android device, you can try to lower the game settings or contact the developer for support.
-
Steps to Download and Install KOF M.U.G.E.N 2020 APK
-
Here are the steps that you need to follow to download and install KOF M.U.G.E.N 2020 APK on your Android device:
-
-
Go to the official website of KOF M.U.G.E.N 2020 APK [here] and click on the download button.
-
Wait for the download to finish and locate the file in your device's file manager.
-
Tap on the file and allow the installation from unknown sources if prompted.
-
Wait for the installation to complete and launch the game from your app drawer or home screen.
-
Enjoy playing KOF M.U.G.E.N 2020 APK on your Android device!
-
-
Tips and Tricks to Enjoy KOF M.U.G.E.N 2020 APK
Once you have installed the game on your Android device, there are some tips and tricks that you can use to enjoy KOF M.U.G.E.N 2020 APK even more. Here are some of them:
-
-
Use the training mode to practice your moves and combos with different characters and learn their strengths and weaknesses.
-
Use the watch mode to watch AI-controlled matches between different characters and learn from their strategies and tactics.
-
Use the cheats mode to unlock all characters and stages, change the game speed, enable infinite power, and more.
-
Use the debug mode to access hidden features and options, such as changing the character size, color, position, and more.
-
Use the AI mode to make the game play itself and enjoy watching the action.
-
-
How to Customize and Edit KOF M.U.G.E.N 2020 APK
-
KOF M.U.G.E.N 2020 APK is a highly customizable and editable game that allows you to create your own fighting game experience. You can add or remove characters and stages, change the game settings and options, and even create your own characters and stages. Here are some ways that you can customize and edit KOF M.U.G.E.N 2020 APK:
-
How to Add or Remove Characters and Stages
-
KOF M.U.G.E.N 2020 APK comes with a large roster of characters and stages, but you can always add or remove them according to your preferences. You can download additional characters and stages from various websites, such as [this] or [this], or you can delete unwanted characters and stages from your device's storage. Here are the steps that you need to follow to add or remove characters and stages:
-
-
Download the character or stage file that you want to add from a reliable source and extract it if it is compressed.
-
Copy the character or stage folder to the chars or stages folder in your device's storage where KOF M.U.G.E.N 2020 APK is installed.
-
Edit the select.def file in the data folder using a text editor app such as [this] or [this].
-
Add the name of the character or stage folder to the select.def file under the appropriate section (such as kfm, bonus, hidden, etc., depending on the screenpack). For example, if you want to add a character named Ryu, you should write Ryu/Ryu.def under the kfm section. (A short select.def sketch appears right after this list.)
-
Save the select.def file and launch KOF M.U.G.E.N 2020 APK. You should see the new character or stage in the game.
-
To remove a character or stage, simply delete its folder from the chars or stages folder and remove its name from the select.def file.
-
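For readers who have never opened select.def before, here is a minimal, hypothetical sketch of what the edited file might look like. It assumes a stock M.U.G.E.N-style layout with [Characters] and [ExtraStages] sections; the exact section names and slot order depend on the screenpack that KOF M.U.G.E.N 2020 uses, and the Ryu and mystage names are just the placeholder examples from the steps above.
```
; data/select.def (illustrative sketch only; real section names vary by screenpack)
[Characters]
kfm, stages/kfm.def      ; an entry that already exists
Ryu/Ryu.def              ; new character: folder "Ryu" copied into the chars folder
randomselect             ; optional random-select slot

[ExtraStages]
stages/mystage.def       ; new stage: file copied into the stages folder
```
Deleting one of these lines (and the matching folder or file) is the removal step described above.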
-
How to Change the Game Settings and Options
-
KOF M.U.G.E.N 2020 APK has many settings and options that you can change to customize the game to your liking. You can change things such as the screen resolution, the sound volume, the language, the input configuration, and more. Here are some ways that you can change the game settings and options:
-
-
To change the screen resolution, edit the mugen.cfg file in the data folder using a text editor app. Find the line that says "GameWidth" and "GameHeight" and change their values to your desired resolution. For example, if you want to play in 1280x720 resolution, you should write GameWidth = 1280 and GameHeight = 720. Save the mugen.cfg file and launch KOF M.U.G.E.N 2020 APK.
To change the sound volume, go to the options menu in KOF M.U.G.E.N 2020 APK and adjust the sound volume slider for the master, music, and sound effects. You can also mute or unmute the sound by pressing the M key on your keyboard.
-
To change the language, go to the options menu in KOF M.U.G.E.N 2020 APK and select the language option. You can choose from English, Spanish, Portuguese, French, and Japanese. You can also edit the system.def file in the data folder using a text editor app and change the value of the "language" parameter to your desired language code. For example, if you want to play in German, you should write language = "de". Save the system.def file and launch KOF M.U.G.E.N 2020 APK.
-
To change the input configuration, go to the options menu in KOF M.U.G.E.N 2020 APK and select the input option. You can configure the buttons for each player and each mode, such as up, down, left, right, light punch, heavy punch, light kick, heavy kick, start, and select. You can also edit the mugen.cfg file in the data folder using a text editor app and change the values of the "Joystick" and "KeyConfig" parameters to your desired input settings. For example, if you want to use the A key for light punch, you should write KeyConfig[0].Button.A = a. Save the mugen.cfg file and launch KOF M.U.G.E.N 2020 APK. A combined sketch of these file edits appears right after this list.
-
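To keep the file edits from this list in one place, here is a hypothetical combined sketch. The GameWidth, GameHeight, KeyConfig, and language lines are the exact examples quoted above; the [Config] section header and the key-config syntax are assumptions that can differ between M.U.G.E.N builds, so check the comments inside your own mugen.cfg and system.def before editing.
```
; data/mugen.cfg (sketch; values taken from the examples above)
[Config]
GameWidth  = 1280
GameHeight = 720

; input example as quoted above; the exact syntax varies between builds
KeyConfig[0].Button.A = a

; data/system.def (sketch; only if your build reads a language parameter here)
language = "de"
```
Editing these files on an Android device works the same way as editing select.def: open the file in a text editor app, change the values, save, and relaunch the game.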
-
How to Create Your Own Characters and Stages
-
KOF M.U.G.E.N 2020 APK is not only a game that you can play, but also a game that you can create. You can create your own characters and stages using the M.U.G.E.N engine and add them to KOF M.U.G.E.N 2020 APK. However, this is not an easy task and requires a lot of time, effort, and knowledge. Here are some resources that you can use to learn how to create your own characters and stages:
-
-
[This] is a tutorial that teaches you how to create your own character from scratch using Fighter Factory Studio, a tool that allows you to edit sprites, animations, sounds, and codes for your character.
-
[This] is a tutorial that teaches you how to create your own stage from scratch using Stage Tool, a tool that allows you to edit images, sounds, and codes for your stage.
-
[This] is a forum where you can find and download various resources for creating your own characters and stages, such as sprites, sounds, codes, templates, tools, tutorials, and more.
-
[This] is a website where you can find and download various characters and stages that other people have created for M.U.G.E.N games.
-
-
Conclusion
-
KOF M.U.G.E.N 2020 APK is a fan-made game that offers a unique and enjoyable fighting game experience on your Android device. It has a huge roster of characters and stages from various SNK franchises and other sources. It has a fast-paced and fluid gameplay with smooth animations and responsive controls. It has many features and options that allow you to customize the game to your liking. It also allows you to create your own characters and stages using the M.U.G.E.N engine.
-
If you are a fan of fighting games or SNK games, you should definitely try KOF M.U.G.E.N 2020 APK. It is free to download and easy to install on your Android device. It is fun to play alone or with friends. It is also a great way to express your creativity and imagination by creating your own characters and stages.
-
So what are you waiting for? Download KOF M.U.G.E.N 2020 APK now and enjoy playing the ultimate fighting game on your Android device!
-
Why You Should Try KOF M.U.G.E.N 2020 APK
-
Here are some reasons why you should try KOF M.U.G.E.N 2020 APK:
-
-
It is free to download and play.
-
It has over 200 characters and over 100 stages from various SNK franchises and other sources.
-
It has a fast-paced and fluid gameplay with smooth animations and responsive controls.
-
It has many features and options that allow you to customize the game to your liking.
-
It allows you to create your own characters and stages using the M.U.G.E.N engine.
-
It is fun to play alone or with friends.
-
FAQs
-
Here are some frequently asked questions about KOF M.U.G.E.N 2020 APK:
-
-
Is KOF M.U.G.E.N 2020 APK safe to download and install?
-
Yes, KOF M.U.G.E.N 2020 APK is safe to download and install as long as you get it from the official website or a trusted source. However, you should always scan any file that you download with an antivirus app before installing it on your device.
-
Is KOF M.U.G.E.N 2020 APK legal to play?
-
KOF M.U.G.E.N 2020 APK is a fan-made game that is not affiliated with or endorsed by SNK or any other company. It is a non-profit game that is made for entertainment purposes only. It does not intend to infringe any copyrights or trademarks of SNK or any other company. However, you should always respect the rights and wishes of the original creators and owners of the characters and stages that are used in KOF M.U.G.E.N 2020 APK.
-
How can I play KOF M.U.G.E.N 2020 APK with my friends?
-
KOF M.U.G.E.N 2020 APK supports local multiplayer mode, which means that you can play with your friends on the same device using a split-screen or a gamepad. You can also play with your friends online using a third-party app such as [this] or [this], which allows you to create a virtual network and connect your devices over the internet.
-
How can I update KOF M.U.G.E.N 2020 APK to the latest version?
-
KOF M.U.G.E.N 2020 APK is constantly updated by the developer with new characters, stages, features, and bug fixes. You can check for updates on the official website or on the developer's social media pages. You can also enable the auto-update option in the game settings, which will notify you when a new update is available and download it automatically.
-
How can I contact the developer of KOF M.U.G.E.N 2020 APK?
-
If you have any questions, suggestions, feedback, or issues regarding KOF M.U.G.E.N 2020 APK, you can contact the developer by sending an email to [this] or by leaving a comment on the developer's YouTube channel [here]. The developer is very responsive and friendly and will try to help you as soon as possible.
-
-
-
\ No newline at end of file
diff --git a/spaces/2hack2furious/anonymizer/app.py b/spaces/2hack2furious/anonymizer/app.py
deleted file mode 100644
index 567457a4584f7b639e5a6df297f0075ec5193ae4..0000000000000000000000000000000000000000
--- a/spaces/2hack2furious/anonymizer/app.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import io
-import modules
-import streamlit as st
-from streamlit_extras.let_it_rain import rain
-
-# Options
-DISCLAIMER = """
- *This app processes data using 2-anonymity, an implementation of the k-anonymity framework. While this is a great start to anonymizing your data, it is by no means perfect, and should be used with caution. For example, some sets of sensitive features which may clearly be identified by a human could be missed by our algorithm. Please keep this in mind.*
- """
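-# k parameter for the k-anonymity guarantee described in DISCLAIMER above: every record that
-# is kept should be indistinguishable from at least K-1 other records.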
-K = 2
-
-# Page Config
-st.set_page_config(layout="wide")
-
-### FILE LOADER for sidebar
-with st.sidebar:
- st.header("🕵️ 2anonymity")
- st.markdown("*Clean and anonymize data*")
- with st.container() as upload:
- file = st.file_uploader(f"Upload dataset:", type=modules.SUPPORTED_TYPES, label_visibility="collapsed")
- df, (filename, extension), result = modules.load_file(file)
-
-### MAIN
-if df is None: # Await file to be uploaded
- rain("🤠")
-else:
- ### PRE-TRANSFORM features for sidebar
- with st.sidebar:
- # Options for data loading
- with st.container() as loading_options:
- st.markdown("### Data loading options:")
- remove_duplicates = st.checkbox("Remove duplicate rows", value=True)
- drop_missing = st.checkbox("Remove rows with missing values", value=False)
-
- # Options for data optimization
- with st.container() as anonymizing_options:
- st.markdown("### Anonymizing options:")
- max_categorical_size = st.slider("Categorical Variable Threshold", min_value=2, max_value=200, value=50, step=1)
- bin_size = st.slider("Bin Size", min_value=2, max_value=200, value=20, step=1)
- redaction_selection = st.selectbox("Redaction strength", ["Low", "Medium", "High", "Extreme"])
- sensitivity_minimum = {"Low": 2, "Medium": 4, "High": 6, "Extreme": 12}[redaction_selection]
-
-
- ### DATA PREVIEW AND TRANSFORM
- # Preview data before transform
- with st.container() as before_data:
- s = df.style
- s = s.set_properties(**{'background-color': '#fce4e4'})
- st.dataframe(s)
-
- # Transform data
- df = modules.data_cleaner(df, drop_missing, remove_duplicates)
- df, unprocessed = modules.data_anonymizer(df, K, max_categorical_size, bin_size, sensitivity_minimum)
-
- # Preview data after before_data
- with st.container() as after_data:
- s = df.style
- s = s.set_properties(**{'background-color': '#e4fce4'})
- st.dataframe(s)
-
-
- ### POST-TRANSFORM features for sidebar
- with st.sidebar:
- # Options for download
- with st.container() as download_header:
- st.markdown("### Download options:")
- output_extension = st.selectbox("File type", [".csv", ".json", ".xlsx"])
- if unprocessed: st.markdown(f"Error encountered when processing columns {str(unprocessed)}")
-
- # Prepare file for download
- with st.container() as downloader:
-            if output_extension == ".csv": output_file = df.to_csv().encode("utf-8")
-            elif output_extension == ".json": output_file = df.to_json().encode("utf-8")
-            elif output_extension == ".xlsx":
-                # df.to_excel() needs a path or buffer and returns None, so write into BytesIO
-                excel_buffer = io.BytesIO()
-                df.to_excel(excel_buffer)
-                output_file = excel_buffer.getvalue()
- output_filename = f"""{filename.split(".")[:-1][0]}-clean{output_extension}"""
- st.download_button("Download", output_file, file_name=output_filename)
-
- # Add a disclaimer for data security
- with st.container() as disclaimer:
- st.markdown(
- f"""
- Disclaimer:
- {DISCLAIMER}
- """
- )
-
-# Attribution
-st.sidebar.markdown("Created by team #2hack2furious for the hackthethreat2023")
\ No newline at end of file
diff --git a/spaces/2ndelement/voicevox/test/test_full_context_label.py b/spaces/2ndelement/voicevox/test/test_full_context_label.py
deleted file mode 100644
index 7cdde34f4644ccf7b3048d707f99b0171e25114e..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/test/test_full_context_label.py
+++ /dev/null
@@ -1,404 +0,0 @@
-from copy import deepcopy
-from itertools import chain
-from unittest import TestCase
-
-from voicevox_engine.full_context_label import (
- AccentPhrase,
- BreathGroup,
- Mora,
- Phoneme,
- Utterance,
-)
-
-
-class TestBasePhonemes(TestCase):
- def setUp(self):
- super().setUp()
-        # Result of pyopenjtalk.extract_fullcontext("こんにちは、ヒホです。")
-        # The test cases are generated inline so that the tests depend on other libraries
-        # as little as possible and so that the test content stays transparent.
- self.test_case_hello_hiho = [
-            # sil (silence)
- "xx^xx-sil+k=o/A:xx+xx+xx/B:xx-xx_xx/C:xx_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:5_5%0_xx_xx/H:xx_xx/I:xx-xx"
- + "@xx+xx&xx-xx|xx+xx/J:1_5/K:2+2-9",
- # k
- "xx^sil-k+o=N/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # o
- "sil^k-o+N=n/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # N (ん)
- "k^o-N+n=i/A:-3+2+4/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # n
- "o^N-n+i=ch/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # i
- "N^n-i+ch=i/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # ch
- "n^i-ch+i=w/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # i
- "i^ch-i+w=a/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # w
- "ch^i-w+a=pau/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # a
- "i^w-a+pau=h/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
-            # pau (pause for the reading comma)
- "w^a-pau+h=i/A:xx+xx+xx/B:09-xx_xx/C:xx_xx+xx/D:09+xx_xx/E:5_5!0_xx-xx"
- + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:4_1%0_xx_xx/H:1_5/I:xx-xx"
- + "@xx+xx&xx-xx|xx+xx/J:1_4/K:2+2-9",
- # h
- "a^pau-h+i=h/A:0+1+4/B:09-xx_xx/C:09_xx+xx/D:22+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # i
- "pau^h-i+h=o/A:0+1+4/B:09-xx_xx/C:09_xx+xx/D:22+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # h
- "h^i-h+o=d/A:1+2+3/B:09-xx_xx/C:22_xx+xx/D:10+7_2/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # o
- "i^h-o+d=e/A:1+2+3/B:09-xx_xx/C:22_xx+xx/D:10+7_2/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # d
- "h^o-d+e=s/A:2+3+2/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # e
- "o^d-e+s=U/A:2+3+2/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # s
- "d^e-s+U=sil/A:3+4+1/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
-            # U (devoiced vowel)
- "e^s-U+sil=xx/A:3+4+1/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
-            # sil (silence)
- "s^U-sil+xx=xx/A:xx+xx+xx/B:10-7_2/C:xx_xx+xx/D:xx+xx_xx/E:4_1!0_xx-xx"
- + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:xx_xx%xx_xx_xx/H:1_4/I:xx-xx"
- + "@xx+xx&xx-xx|xx+xx/J:xx_xx/K:2+2-9",
- ]
- self.phonemes_hello_hiho = [
- Phoneme.from_label(label) for label in self.test_case_hello_hiho
- ]
-
-
-class TestPhoneme(TestBasePhonemes):
- def test_phoneme(self):
- self.assertEqual(
- " ".join([phoneme.phoneme for phoneme in self.phonemes_hello_hiho]),
- "sil k o N n i ch i w a pau h i h o d e s U sil",
- )
-
- def test_is_pause(self):
- self.assertEqual(
- [phoneme.is_pause() for phoneme in self.phonemes_hello_hiho],
- [
- True, # sil
- False, # k
- False, # o
- False, # N
- False, # n
- False, # i
- False, # ch
- False, # i
- False, # w
- False, # a
- True, # pau
- False, # h
- False, # i
- False, # h
- False, # o
- False, # d
- False, # e
- False, # s
- False, # u
- True, # sil
- ],
- )
-
- def test_label(self) -> None:
- self.assertEqual(
- [phoneme.label for phoneme in self.phonemes_hello_hiho],
- self.test_case_hello_hiho,
- )
-
-
-class TestMora(TestBasePhonemes):
- def setUp(self) -> None:
- super().setUp()
- # contexts["a2"] == "1" ko
- self.mora_hello_1 = Mora(
- consonant=self.phonemes_hello_hiho[1], vowel=self.phonemes_hello_hiho[2]
- )
- # contexts["a2"] == "2" N
- self.mora_hello_2 = Mora(consonant=None, vowel=self.phonemes_hello_hiho[3])
- # contexts["a2"] == "3" ni
- self.mora_hello_3 = Mora(
- consonant=self.phonemes_hello_hiho[4], vowel=self.phonemes_hello_hiho[5]
- )
- # contexts["a2"] == "4" chi
- self.mora_hello_4 = Mora(
- consonant=self.phonemes_hello_hiho[6], vowel=self.phonemes_hello_hiho[7]
- )
- # contexts["a2"] == "5" wa
- self.mora_hello_5 = Mora(
- consonant=self.phonemes_hello_hiho[8], vowel=self.phonemes_hello_hiho[9]
- )
- # contexts["a2"] == "1" hi
- self.mora_hiho_1 = Mora(
- consonant=self.phonemes_hello_hiho[11], vowel=self.phonemes_hello_hiho[12]
- )
- # contexts["a2"] == "2" ho
- self.mora_hiho_2 = Mora(
- consonant=self.phonemes_hello_hiho[13], vowel=self.phonemes_hello_hiho[14]
- )
- # contexts["a2"] == "3" de
- self.mora_hiho_3 = Mora(
- consonant=self.phonemes_hello_hiho[15], vowel=self.phonemes_hello_hiho[16]
- )
- # contexts["a2"] == "1" sU
- self.mora_hiho_4 = Mora(
- consonant=self.phonemes_hello_hiho[17], vowel=self.phonemes_hello_hiho[18]
- )
-
- def assert_phonemes(self, mora: Mora, mora_str: str) -> None:
- self.assertEqual(
- "".join([phoneme.phoneme for phoneme in mora.phonemes]), mora_str
- )
-
- def assert_labels(self, mora: Mora, label_start: int, label_end: int) -> None:
- self.assertEqual(mora.labels, self.test_case_hello_hiho[label_start:label_end])
-
- def test_phonemes(self) -> None:
- self.assert_phonemes(self.mora_hello_1, "ko")
- self.assert_phonemes(self.mora_hello_2, "N")
- self.assert_phonemes(self.mora_hello_3, "ni")
- self.assert_phonemes(self.mora_hello_4, "chi")
- self.assert_phonemes(self.mora_hello_5, "wa")
- self.assert_phonemes(self.mora_hiho_1, "hi")
- self.assert_phonemes(self.mora_hiho_2, "ho")
- self.assert_phonemes(self.mora_hiho_3, "de")
- self.assert_phonemes(self.mora_hiho_4, "sU")
-
- def test_labels(self) -> None:
- self.assert_labels(self.mora_hello_1, 1, 3)
- self.assert_labels(self.mora_hello_2, 3, 4)
- self.assert_labels(self.mora_hello_3, 4, 6)
- self.assert_labels(self.mora_hello_4, 6, 8)
- self.assert_labels(self.mora_hello_5, 8, 10)
- self.assert_labels(self.mora_hiho_1, 11, 13)
- self.assert_labels(self.mora_hiho_2, 13, 15)
- self.assert_labels(self.mora_hiho_3, 15, 17)
- self.assert_labels(self.mora_hiho_4, 17, 19)
-
- def test_set_context(self):
-        # deepcopy so that rewriting values here does not affect other tests
- mora_hello_1 = deepcopy(self.mora_hello_1)
-        # rewrite "p3", which corresponds to the phoneme
- mora_hello_1.set_context("p3", "a")
- self.assert_phonemes(mora_hello_1, "aa")
-
-
-class TestAccentPhrase(TestBasePhonemes):
- def setUp(self) -> None:
- super().setUp()
-        # TODO: look for a natural, non-contrived example that raises ValueError
-        # (if none exists, leaving this as is is fine)
- self.accent_phrase_hello = AccentPhrase.from_phonemes(
- self.phonemes_hello_hiho[1:10]
- )
- self.accent_phrase_hiho = AccentPhrase.from_phonemes(
- self.phonemes_hello_hiho[11:19]
- )
-
- def test_accent(self):
- self.assertEqual(self.accent_phrase_hello.accent, 5)
- self.assertEqual(self.accent_phrase_hiho.accent, 1)
-
- def test_set_context(self):
- accent_phrase_hello = deepcopy(self.accent_phrase_hello)
-        # rewrite "p3", which corresponds to the phoneme
- accent_phrase_hello.set_context("p3", "a")
- self.assertEqual(
- "".join([phoneme.phoneme for phoneme in accent_phrase_hello.phonemes]),
- "aaaaaaaaa",
- )
-
- def test_phonemes(self):
- self.assertEqual(
- " ".join(
- [phoneme.phoneme for phoneme in self.accent_phrase_hello.phonemes]
- ),
- "k o N n i ch i w a",
- )
- self.assertEqual(
- " ".join([phoneme.phoneme for phoneme in self.accent_phrase_hiho.phonemes]),
- "h i h o d e s U",
- )
-
- def test_labels(self):
- self.assertEqual(
- self.accent_phrase_hello.labels, self.test_case_hello_hiho[1:10]
- )
- self.assertEqual(
- self.accent_phrase_hiho.labels, self.test_case_hello_hiho[11:19]
- )
-
- def test_merge(self):
-        # 「こんにちはヒホです」
-        # (equivalent to the original text with the reading comma removed)
- merged_accent_phrase = self.accent_phrase_hello.merge(self.accent_phrase_hiho)
- self.assertEqual(merged_accent_phrase.accent, 5)
- self.assertEqual(
- " ".join([phoneme.phoneme for phoneme in merged_accent_phrase.phonemes]),
- "k o N n i ch i w a h i h o d e s U",
- )
- self.assertEqual(
- merged_accent_phrase.labels,
- self.test_case_hello_hiho[1:10] + self.test_case_hello_hiho[11:19],
- )
-
-
-class TestBreathGroup(TestBasePhonemes):
- def setUp(self) -> None:
- super().setUp()
- self.breath_group_hello = BreathGroup.from_phonemes(
- self.phonemes_hello_hiho[1:10]
- )
- self.breath_group_hiho = BreathGroup.from_phonemes(
- self.phonemes_hello_hiho[11:19]
- )
-
- def test_set_context(self):
-        # deepcopy so that rewriting values here does not affect other tests
- breath_group_hello = deepcopy(self.breath_group_hello)
- # phonemeにあたる"p3"を書き換える
-        # rewrite "p3", which corresponds to the phoneme
- self.assertEqual(
- "".join([phoneme.phoneme for phoneme in breath_group_hello.phonemes]),
- "aaaaaaaaa",
- )
-
- def test_phonemes(self):
- self.assertEqual(
- " ".join([phoneme.phoneme for phoneme in self.breath_group_hello.phonemes]),
- "k o N n i ch i w a",
- )
- self.assertEqual(
- " ".join([phoneme.phoneme for phoneme in self.breath_group_hiho.phonemes]),
- "h i h o d e s U",
- )
-
- def test_labels(self):
- self.assertEqual(
- self.breath_group_hello.labels, self.test_case_hello_hiho[1:10]
- )
- self.assertEqual(
- self.breath_group_hiho.labels, self.test_case_hello_hiho[11:19]
- )
-
-
-class TestUtterance(TestBasePhonemes):
- def setUp(self) -> None:
- super().setUp()
- self.utterance_hello_hiho = Utterance.from_phonemes(self.phonemes_hello_hiho)
-
- def test_phonemes(self):
- self.assertEqual(
- " ".join(
- [phoneme.phoneme for phoneme in self.utterance_hello_hiho.phonemes]
- ),
- "sil k o N n i ch i w a pau h i h o d e s U sil",
- )
- changed_utterance = Utterance.from_phonemes(self.utterance_hello_hiho.phonemes)
- self.assertEqual(len(changed_utterance.breath_groups), 2)
- accent_phrases = list(
- chain.from_iterable(
- breath_group.accent_phrases
- for breath_group in changed_utterance.breath_groups
- )
- )
- for prev, cent, post in zip(
- [None] + accent_phrases[:-1],
- accent_phrases,
- accent_phrases[1:] + [None],
- ):
- mora_num = len(cent.moras)
- accent = cent.accent
-
- if prev is not None:
- for phoneme in prev.phonemes:
- self.assertEqual(phoneme.contexts["g1"], str(mora_num))
- self.assertEqual(phoneme.contexts["g2"], str(accent))
-
- if post is not None:
- for phoneme in post.phonemes:
- self.assertEqual(phoneme.contexts["e1"], str(mora_num))
- self.assertEqual(phoneme.contexts["e2"], str(accent))
-
- for phoneme in cent.phonemes:
- self.assertEqual(
- phoneme.contexts["k2"],
- str(
- sum(
- [
- len(breath_group.accent_phrases)
- for breath_group in changed_utterance.breath_groups
- ]
- )
- ),
- )
-
- for prev, cent, post in zip(
- [None] + changed_utterance.breath_groups[:-1],
- changed_utterance.breath_groups,
- changed_utterance.breath_groups[1:] + [None],
- ):
- accent_phrase_num = len(cent.accent_phrases)
-
- if prev is not None:
- for phoneme in prev.phonemes:
- self.assertEqual(phoneme.contexts["j1"], str(accent_phrase_num))
-
- if post is not None:
- for phoneme in post.phonemes:
- self.assertEqual(phoneme.contexts["h1"], str(accent_phrase_num))
-
- for phoneme in cent.phonemes:
- self.assertEqual(phoneme.contexts["i1"], str(accent_phrase_num))
- self.assertEqual(
- phoneme.contexts["i5"],
- str(accent_phrases.index(cent.accent_phrases[0]) + 1),
- )
- self.assertEqual(
- phoneme.contexts["i6"],
- str(
- len(accent_phrases)
- - accent_phrases.index(cent.accent_phrases[0])
- ),
- )
-
- def test_labels(self):
- self.assertEqual(self.utterance_hello_hiho.labels, self.test_case_hello_hiho)
diff --git a/spaces/2ndelement/voicevox/voicevox_engine/setting/Setting.py b/spaces/2ndelement/voicevox/voicevox_engine/setting/Setting.py
deleted file mode 100644
index f8912c6bff9afa959f445d8aa9c89c440b36b8db..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/voicevox_engine/setting/Setting.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from enum import Enum
-from typing import Optional
-
-from pydantic import BaseModel, Field
-
-
-class CorsPolicyMode(str, Enum):
- """
-    CORS permission mode
- """
-
-    all = "all"  # allow requests from all origins
-    localapps = "localapps"  # allow requests from local applications
-
-
-class Setting(BaseModel):
- """
-    Engine settings
- """
-
- cors_policy_mode: CorsPolicyMode = Field(title="リソース共有ポリシー")
- allow_origin: Optional[str] = Field(title="許可するオリジン")
-
- class Config:
- use_enum_values = True
diff --git a/spaces/52Hz/HWMNet_lowlight_enhancement/main_test_HWMNet.py b/spaces/52Hz/HWMNet_lowlight_enhancement/main_test_HWMNet.py
deleted file mode 100644
index db31fe1321dd8cd25136e6243c801ba822be8e8a..0000000000000000000000000000000000000000
--- a/spaces/52Hz/HWMNet_lowlight_enhancement/main_test_HWMNet.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import argparse
-import cv2
-import glob
-import numpy as np
-from collections import OrderedDict
-from skimage import img_as_ubyte
-import os
-import torch
-import requests
-from PIL import Image
-import torchvision.transforms.functional as TF
-import torch.nn.functional as F
-from natsort import natsorted
-from model.HWMNet import HWMNet
-
-def main():
- parser = argparse.ArgumentParser(description='Demo Low-light Image enhancement')
- parser.add_argument('--input_dir', default='test/', type=str, help='Input images')
- parser.add_argument('--result_dir', default='result/', type=str, help='Directory for results')
- parser.add_argument('--weights',
- default='experiments/pretrained_models/LOL_enhancement_HWMNet.pth', type=str,
- help='Path to weights')
-
- args = parser.parse_args()
-
- inp_dir = args.input_dir
- out_dir = args.result_dir
-
- os.makedirs(out_dir, exist_ok=True)
-
- files = natsorted(glob.glob(os.path.join(inp_dir, '*')))
-
- if len(files) == 0:
- raise Exception(f"No files found at {inp_dir}")
-
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
- # Load corresponding models architecture and weights
- model = HWMNet(in_chn=3, wf=96, depth=4)
- model = model.to(device)
- model.eval()
- load_checkpoint(model, args.weights)
-
-
- mul = 16
- for file_ in files:
- img = Image.open(file_).convert('RGB')
- input_ = TF.to_tensor(img).unsqueeze(0).to(device)
-
-        # Pad the input if its size is not a multiple of 16
- h, w = input_.shape[2], input_.shape[3]
- H, W = ((h + mul) // mul) * mul, ((w + mul) // mul) * mul
- padh = H - h if h % mul != 0 else 0
- padw = W - w if w % mul != 0 else 0
- input_ = F.pad(input_, (0, padw, 0, padh), 'reflect')
- with torch.no_grad():
- restored = model(input_)
-
- restored = torch.clamp(restored, 0, 1)
- restored = restored[:, :, :h, :w]
- restored = restored.permute(0, 2, 3, 1).cpu().detach().numpy()
- restored = img_as_ubyte(restored[0])
-
- f = os.path.splitext(os.path.split(file_)[-1])[0]
- save_img((os.path.join(out_dir, f + '.png')), restored)
-
-
-def save_img(filepath, img):
- cv2.imwrite(filepath, cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
-
-
-def load_checkpoint(model, weights):
- checkpoint = torch.load(weights, map_location=torch.device('cpu'))
- try:
- model.load_state_dict(checkpoint["state_dict"])
- except:
- state_dict = checkpoint["state_dict"]
- new_state_dict = OrderedDict()
- for k, v in state_dict.items():
- name = k[7:] # remove `module.`
- new_state_dict[name] = v
- model.load_state_dict(new_state_dict)
-
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/52Hz/SRMNet_AWGN_denoising/README.md b/spaces/52Hz/SRMNet_AWGN_denoising/README.md
deleted file mode 100644
index 9f1da3c83055846d02c8d43340ad0317f99a3d29..0000000000000000000000000000000000000000
--- a/spaces/52Hz/SRMNet_AWGN_denoising/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: SRMNet_AWGN_denoising
-emoji: 🌪
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/rel_transformer.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/rel_transformer.py
deleted file mode 100644
index ed69e587f9813fc1214dc034f8cabf238e362b61..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/rel_transformer.py
+++ /dev/null
@@ -1,611 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-from utils.hparams import hparams
-from modules.commons.common_layers import Embedding
-from utils.tts_utils import group_hidden_by_segs, expand_word2ph
-
-import transformers
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0.,
- window_size=None, block_length=None, pre_ln=False, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.block_length = block_length
- self.pre_ln = pre_ln
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(hidden_channels, hidden_channels, n_heads, window_size=window_size,
- p_dropout=p_dropout, block_length=block_length))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
- if pre_ln:
- self.last_ln = LayerNorm(hidden_channels)
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- for i in range(self.n_layers):
- x = x * x_mask
- x_ = x
- if self.pre_ln:
- x = self.norm_layers_1[i](x)
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = x_ + y
- if not self.pre_ln:
- x = self.norm_layers_1[i](x)
-
- x_ = x
- if self.pre_ln:
- x = self.norm_layers_2[i](x)
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = x_ + y
- if not self.pre_ln:
- x = self.norm_layers_2[i](x)
- if self.pre_ln:
- x = self.last_ln(x)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, window_size=None, heads_share=True, p_dropout=0.,
- block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.p_dropout = p_dropout
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels ** -0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- if proximal_init:
- self.conv_k.weight.data.copy_(self.conv_q.weight.data)
- self.conv_k.bias.data.copy_(self.conv_q.bias.data)
- nn.init.xavier_uniform_(self.conv_v.weight)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(self.k_channels)
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query, key_relative_embeddings)
- rel_logits = self._relative_position_to_absolute_position(rel_logits)
- scores_local = rel_logits / math.sqrt(self.k_channels)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores * block_mask + -1e4 * (1 - block_mask)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:, slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[:, :, :length, length - 1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the column dimension
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]))
- x_flat = x.view([batch, heads, length ** 2 + length * (length - 1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(x * x_mask)
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- return x * x_mask
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-4):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- n_dims = len(x.shape)
- mean = torch.mean(x, 1, keepdim=True)
- variance = torch.mean((x - mean) ** 2, 1, keepdim=True)
-
- x = (x - mean) * torch.rsqrt(variance + self.eps)
-
- shape = [1, -1] + [1] * (n_dims - 2)
- x = x * self.gamma.view(*shape) + self.beta.view(*shape)
- return x
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 0."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size // 2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size // 2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class RelTransformerEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout=0.0,
- window_size=4,
- block_length=None,
- prenet=True,
- pre_ln=True,
- ):
-
- super().__init__()
-
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.block_length = block_length
- self.prenet = prenet
- if n_vocab > 0:
- self.emb = Embedding(n_vocab, hidden_channels, padding_idx=0)
-
- if prenet:
- self.pre = ConvReluNorm(hidden_channels, hidden_channels, hidden_channels,
- kernel_size=5, n_layers=3, p_dropout=0)
- self.encoder = Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- window_size=window_size,
- block_length=block_length,
- pre_ln=pre_ln,
- )
-
- def forward(self, x, x_mask=None):
- if self.n_vocab > 0:
- x_lengths = (x > 0).long().sum(-1)
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- else:
- x_lengths = (x.abs().sum(-1) > 0).long().sum(-1)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- if self.prenet:
- x = self.pre(x, x_mask)
- x = self.encoder(x, x_mask)
- return x.transpose(1, 2)
-
-
-class Pooler(nn.Module):
- """
- Parameter-free poolers to get the sentence embedding
- 'cls': [CLS] representation with BERT/RoBERTa's MLP pooler.
- 'cls_before_pooler': [CLS] representation without the original MLP pooler.
- 'avg': average of the last layer's hidden states at each token.
- 'avg_top2': average of the last two layers.
- 'avg_first_last': average of the first and the last layers.
- """
- def __init__(self, pooler_type):
- super().__init__()
- self.pooler_type = pooler_type
- assert self.pooler_type in ["cls", "cls_before_pooler", "avg", "avg_top2", "avg_first_last"], "unrecognized pooling type %s" % self.pooler_type
-
- def forward(self, attention_mask, outputs):
- last_hidden = outputs.last_hidden_state
- pooler_output = outputs.pooler_output
- hidden_states = outputs.hidden_states
-
- if self.pooler_type in ['cls_before_pooler', 'cls']:
- return last_hidden[:, 0]
- elif self.pooler_type == "avg":
- return ((last_hidden * attention_mask.unsqueeze(-1)).sum(1) / attention_mask.sum(-1).unsqueeze(-1))
- elif self.pooler_type == "avg_first_last":
- first_hidden = hidden_states[0]
- last_hidden = hidden_states[-1]
- pooled_result = ((first_hidden + last_hidden) / 2.0 * attention_mask.unsqueeze(-1)).sum(1) / attention_mask.sum(-1).unsqueeze(-1)
- return pooled_result
- elif self.pooler_type == "avg_top2":
- second_last_hidden = hidden_states[-2]
- last_hidden = hidden_states[-1]
- pooled_result = ((last_hidden + second_last_hidden) / 2.0 * attention_mask.unsqueeze(-1)).sum(1) / attention_mask.sum(-1).unsqueeze(-1)
- return pooled_result
- else:
- raise NotImplementedError
-
-
-class Similarity(nn.Module):
- """
- Dot product or cosine similarity
- """
-
- def __init__(self, temp):
- super().__init__()
- self.temp = temp
- self.cos = nn.CosineSimilarity(dim=-1)
- self.record = None
- self.pos_avg = 0.0
- self.neg_avg = 0.0
-
- def forward(self, x, y):
- sim = self.cos(x, y)
- self.record = sim.detach() # [64,64]
- min_size = min(self.record.shape[0], self.record.shape[1]) # 64
- num_item = self.record.shape[0] * self.record.shape[1] # 4096
- self.pos_avg = self.record.diag().sum() / min_size
- if num_item - min_size == 0:
- self.neg_avg = (self.record.sum() - self.record.diag().sum()) / 1
- return sim / self.temp
- if torch.isnan(self.record).any():
- print("NaN detected in self.record while computing neg_avg")
- if torch.isnan(self.record.diag()).any():
- print("NaN detected in self.record.diag() while computing neg_avg")
- self.neg_avg = (self.record.sum() - self.record.diag().sum()) / (num_item - min_size)
-
- return sim / self.temp
-
-
-class BertPredictionHeadTransform(nn.Module):
- def __init__(self, hidden_size):
- super().__init__()
- self.dense = nn.Linear(hidden_size, hidden_size)
- self.transform_act_fn = F.gelu
- self.LayerNorm = nn.LayerNorm(hidden_size, eps=1e-12)
-
- def forward(self, hidden_states):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.transform_act_fn(hidden_states)
- hidden_states = self.LayerNorm(hidden_states)
- return hidden_states
-
-
-class BertLMPredictionHead(nn.Module):
- def __init__(self, hid_dim, out_dim):
- super().__init__()
- self.transform = BertPredictionHeadTransform(hid_dim)
- self.decoder = nn.Linear(hid_dim, out_dim, bias=False)
- self.bias = nn.Parameter(torch.zeros(out_dim))
- self.decoder.bias = self.bias
-
- def forward(self, hidden_states):
- hidden_states = self.transform(hidden_states)
- hidden_states = self.decoder(hidden_states)
- return hidden_states
-
-
-# V2_2
-# changed add to concat.
-# now supports finetuning BERT
-# grad_bert=0.1 & trainable_block_idx=0
-class BERTRelTransformerEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout=0.0,
- window_size=4,
- block_length=None,
- prenet=True,
- pre_ln=True,
- ):
-
- super().__init__()
-
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.block_length = block_length
- self.prenet = prenet
- if n_vocab > 0:
- self.emb = Embedding(n_vocab, hidden_channels, padding_idx=0)
-
- if prenet:
- self.pre = ConvReluNorm(hidden_channels, hidden_channels, hidden_channels,
- kernel_size=5, n_layers=3, p_dropout=0)
- self.encoder1 = Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers//2,
- kernel_size,
- p_dropout,
- window_size=window_size,
- block_length=block_length,
- pre_ln=pre_ln,
- )
-
- self.encoder2 = Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers - n_layers//2,
- kernel_size,
- p_dropout,
- window_size=window_size,
- block_length=block_length,
- pre_ln=pre_ln,
- )
-
- if hparams['ds_name'] in ['ljspeech', 'libritts', 'librispeech']:
- model_name = 'bert-base-uncased'
- elif hparams['ds_name'] in ['biaobei', 'wenetspeech']:
- model_name = 'bert-base-chinese'
- else:
- raise NotImplementedError()
-
- self.tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
- config = transformers.AutoConfig.from_pretrained(model_name)
- if hparams.get("load_bert_from_pretrained", True):
- print("Load BERT from pretrained model ...")
- self.bert = transformers.AutoModel.from_pretrained(model_name,config=config)
- trainable_start_block = hparams.get("bert_trainable_start_block", 0)
- else:
- print("Initialize BERT from scratch!")
- self.bert = transformers.BertModel(config=config)
- trainable_start_block = 0
-
- for k, v in self.bert.named_parameters():
- if 'embeddings' in k:
- v.requires_grad = False
- elif 'encoder.layer' in k:
- block_idx = int(k.split(".")[2])
- if block_idx < trainable_start_block:
- v.requires_grad = False
- else:
- v.requires_grad = True
- elif 'cls' in k:
- v.requires_grad = True
- else:
- print("Unhandled key: {}, defaulting to requires_grad=True".format(k))
- v.requires_grad = True
-
- self.bert_combine = nn.Sequential(*[
- nn.Conv1d(768 + hidden_channels, hidden_channels, 3, 1, 1),
- nn.ReLU(),
- ])
- self.pooler = Pooler("avg")
- self.sim = Similarity(temp=0.05)
-
- def forward(self, x, x_mask=None, bert_feats=None, ph2word=None, **kwargs):
- if self.n_vocab > 0:
- x_lengths = (x > 0).long().sum(-1)
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- else:
- x_lengths = (x.abs().sum(-1) > 0).long().sum(-1)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- if self.prenet:
- x = self.pre(x, x_mask)
- x = self.encoder1(x, x_mask)
- bert_outputs = self.bert(bert_feats['bert_input_ids'],
- attention_mask=bert_feats['bert_attention_mask'],
- token_type_ids=bert_feats['bert_token_type_ids'],
- output_hidden_states=True)
- bert_num_blocks = hparams.get("bert_num_blocks", 12) # hidden_states holds 1 embedding output + 12 block outputs in bert-base
- bert_embedding = bert_outputs['hidden_states'][bert_num_blocks]
- # bert_embedding = bert_outputs['last_hidden_state']
- grad_bert = hparams.get("grad_bert", 0.1)
- bert_embedding = bert_embedding.detach() * (1-grad_bert) + bert_embedding * grad_bert
- bert_word_embedding, _ = group_hidden_by_segs(bert_embedding, bert_feats['bert_token2word'], bert_feats['bert_token2word'].max().item())
- bert_ph_embedding = expand_word2ph(bert_word_embedding, ph2word)
- bert_ph_embedding = bert_ph_embedding.transpose(1,2)
- x = torch.cat([x, bert_ph_embedding], dim=1)
- x = self.bert_combine(x)
- x = self.encoder2(x, x_mask)
- return x.transpose(1, 2)
-
-
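The `grad_bert` line near the end of `BERTRelTransformerEncoder.forward` above keeps the BERT embedding's forward value unchanged while letting only a fraction of the gradient flow back into BERT, which is what the `grad_bert=0.1` note on the class refers to. A minimal sketch of that gradient-scaling trick in isolation; the helper name and the toy tensors are illustrative, not part of the original code:

```python
import torch


def scale_gradient(x: torch.Tensor, grad_scale: float) -> torch.Tensor:
    """Keep the forward value of x but scale its gradient by grad_scale.

    Same pattern as `emb.detach() * (1 - grad_bert) + emb * grad_bert` above:
    the two terms sum to x in the forward pass, while only the second term
    carries gradient, attenuated by grad_scale.
    """
    return x.detach() * (1.0 - grad_scale) + x * grad_scale


x = torch.ones(3, requires_grad=True)
scale_gradient(x, grad_scale=0.1).sum().backward()
print(x.grad)  # tensor([0.1000, 0.1000, 0.1000])
```

Because the forward value is untouched, inference behaves exactly as if BERT were fully trainable; only the backward signal into BERT is attenuated.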
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/egs/datasets/audio/lj/preprocess.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/egs/datasets/audio/lj/preprocess.py
deleted file mode 100644
index a3aa6b5a91fbfde53af0d2d43748d439399ca307..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/egs/datasets/audio/lj/preprocess.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from text_to_speech.data_gen.tts.base_preprocess import BasePreprocessor
-
-
-class LJPreprocess(BasePreprocessor):
- def meta_data(self):
- for l in open(f'{self.raw_data_dir}/metadata.csv').readlines():
- item_name, _, txt = l.strip().split("|")
- wav_fn = f"{self.raw_data_dir}/wavs/{item_name}.wav"
- yield {'item_name': item_name, 'wav_fn': wav_fn, 'txt': txt}
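The `meta_data` generator above assumes the stock LJSpeech `metadata.csv` layout: one utterance per line with three pipe-separated fields (utterance id, raw transcript, normalized transcript), of which only the id and the third field are kept. A hypothetical line and the record it would yield, with an illustrative `raw_data_dir`:

```python
line = "LJ001-0001|Printing, in the only sense|printing, in the only sense"
item_name, _, txt = line.strip().split("|")

record = {
    "item_name": item_name,  # "LJ001-0001"
    "wav_fn": f"data/raw/LJSpeech-1.1/wavs/{item_name}.wav",  # path prefix is illustrative
    "txt": txt,  # the normalized transcript (third field)
}
print(record["wav_fn"])  # data/raw/LJSpeech-1.1/wavs/LJ001-0001.wav
```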
diff --git a/spaces/AIMLApps/Botrite_wip/README.md b/spaces/AIMLApps/Botrite_wip/README.md
deleted file mode 100644
index 01dc43bf6644b5bd147e82955ba431a8fd234906..0000000000000000000000000000000000000000
--- a/spaces/AIMLApps/Botrite_wip/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Botrite Wip
-emoji: 📈
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.37.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AP123/IllusionDiffusion/user_history.py b/spaces/AP123/IllusionDiffusion/user_history.py
deleted file mode 100644
index c0cfdb3b2c02c353dc36116a8e86d77aabe4f75f..0000000000000000000000000000000000000000
--- a/spaces/AP123/IllusionDiffusion/user_history.py
+++ /dev/null
@@ -1,423 +0,0 @@
-"""
-User History is a plugin that you can add to your Spaces to cache generated images for your users.
-
-Key features:
-- 🤗 Sign in with Hugging Face
-- Save generated images with their metadata: prompts, timestamp, hyper-parameters, etc.
-- Export your history as zip.
-- Delete your history to respect privacy.
-- Compatible with Persistent Storage for long-term storage.
-- Admin panel to check configuration and disk usage.
-
-Useful links:
-- Demo: https://huggingface.co/spaces/Wauplin/gradio-user-history
-- README: https://huggingface.co/spaces/Wauplin/gradio-user-history/blob/main/README.md
-- Source file: https://huggingface.co/spaces/Wauplin/gradio-user-history/blob/main/user_history.py
-- Discussions: https://huggingface.co/spaces/Wauplin/gradio-user-history/discussions
-"""
-import json
-import os
-import shutil
-import warnings
-from datetime import datetime
-from functools import cache
-from pathlib import Path
-from typing import Callable, Dict, List, Tuple
-from uuid import uuid4
-
-import gradio as gr
-import numpy as np
-import requests
-from filelock import FileLock
-from PIL.Image import Image, fromarray
-
-
-def setup(folder_path: str | Path | None = None) -> None:
- user_history = _UserHistory()
- user_history.folder_path = _resolve_folder_path(folder_path)
- user_history.initialized = True
-
-
-def render() -> None:
- user_history = _UserHistory()
-
- # initialize with default config
- if not user_history.initialized:
- print("Initializing user history with default config. Use `user_history.setup(...)` to customize folder_path.")
- setup()
-
- # Render user history tab
- gr.Markdown(
- "## Your past generations\n\nLog in to keep a gallery of your previous generations. Your history will be saved"
- " and available on your next visit. Make sure to export your images from time to time as this gallery may be"
- " deleted in the future."
- )
-
- if os.getenv("SYSTEM") == "spaces" and not os.path.exists("/data"):
- gr.Markdown(
- "**⚠️ Persistent storage is disabled, meaning your history will be lost if the Space gets restarted."
- " Only the Space owner can setup a Persistent Storage. If you are not the Space owner, consider"
- " duplicating this Space to set your own storage.⚠️**"
- )
-
- with gr.Row():
- gr.LoginButton(min_width=250)
- gr.LogoutButton(min_width=250)
- refresh_button = gr.Button(
- "Refresh",
- icon="https://huggingface.co/spaces/Wauplin/gradio-user-history/resolve/main/assets/icon_refresh.png",
- )
- export_button = gr.Button(
- "Export",
- icon="https://huggingface.co/spaces/Wauplin/gradio-user-history/resolve/main/assets/icon_download.png",
- )
- delete_button = gr.Button(
- "Delete history",
- icon="https://huggingface.co/spaces/Wauplin/gradio-user-history/resolve/main/assets/icon_delete.png",
- )
-
- # "Export zip" row (hidden by default)
- with gr.Row():
- export_file = gr.File(file_count="single", file_types=[".zip"], label="Exported history", visible=False)
-
- # "Config deletion" row (hidden by default)
- with gr.Row():
- confirm_button = gr.Button("Confirm delete all history", variant="stop", visible=False)
- cancel_button = gr.Button("Cancel", visible=False)
-
- # Gallery
- gallery = gr.Gallery(
- label="Past images",
- show_label=True,
- elem_id="gallery",
- object_fit="contain",
- columns=5,
- height=600,
- preview=False,
- show_share_button=False,
- show_download_button=False,
- )
- gr.Markdown(
- "User history is powered by"
- " [Wauplin/gradio-user-history](https://huggingface.co/spaces/Wauplin/gradio-user-history). Integrate it to"
- " your own Space in just a few lines of code!"
- )
- gallery.attach_load_event(_fetch_user_history, every=None)
-
- # Interactions
- refresh_button.click(fn=_fetch_user_history, inputs=[], outputs=[gallery], queue=False)
- export_button.click(fn=_export_user_history, inputs=[], outputs=[export_file], queue=False)
-
- # Taken from https://github.com/gradio-app/gradio/issues/3324#issuecomment-1446382045
- delete_button.click(
- lambda: [gr.update(visible=True), gr.update(visible=True)],
- outputs=[confirm_button, cancel_button],
- queue=False,
- )
- cancel_button.click(
- lambda: [gr.update(visible=False), gr.update(visible=False)],
- outputs=[confirm_button, cancel_button],
- queue=False,
- )
- confirm_button.click(_delete_user_history).then(
- lambda: [gr.update(visible=False), gr.update(visible=False)],
- outputs=[confirm_button, cancel_button],
- queue=False,
- )
-
- # Admin section (only shown locally or when logged in as Space owner)
- _admin_section()
-
-
-def save_image(
- profile: gr.OAuthProfile | None,
- image: Image | np.ndarray | str | Path,
- label: str | None = None,
- metadata: Dict | None = None,
-):
- # Ignore images from logged out users
- if profile is None:
- return
- username = profile["preferred_username"]
-
- # Ignore images if user history not used
- user_history = _UserHistory()
- if not user_history.initialized:
- warnings.warn(
- "User history is not set in Gradio demo. Saving image is ignored. You must use `user_history.render(...)`"
- " first."
- )
- return
-
- # Copy image to storage
- image_path = _copy_image(image, dst_folder=user_history._user_images_path(username))
-
- # Save new image + metadata
- if metadata is None:
- metadata = {}
- if "datetime" not in metadata:
- metadata["datetime"] = str(datetime.now())
- data = {"path": str(image_path), "label": label, "metadata": metadata}
- with user_history._user_lock(username):
- with user_history._user_jsonl_path(username).open("a") as f:
- f.write(json.dumps(data) + "\n")
-
-
-#############
-# Internals #
-#############
-
-
-class _UserHistory(object):
- _instance = None
- initialized: bool = False
- folder_path: Path
-
- def __new__(cls):
- # Using singleton pattern => we don't want to expose an object (more complex to use) but still want to keep
- # state between `render` and `save_image` calls.
- if cls._instance is None:
- cls._instance = super(_UserHistory, cls).__new__(cls)
- return cls._instance
-
- def _user_path(self, username: str) -> Path:
- path = self.folder_path / username
- path.mkdir(parents=True, exist_ok=True)
- return path
-
- def _user_lock(self, username: str) -> FileLock:
- """Ensure history is not corrupted if concurrent calls."""
- return FileLock(self.folder_path / f"{username}.lock") # lock outside of folder => better when exporting ZIP
-
- def _user_jsonl_path(self, username: str) -> Path:
- return self._user_path(username) / "history.jsonl"
-
- def _user_images_path(self, username: str) -> Path:
- path = self._user_path(username) / "images"
- path.mkdir(parents=True, exist_ok=True)
- return path
-
-
-def _fetch_user_history(profile: gr.OAuthProfile | None) -> List[Tuple[str, str]]:
- """Return saved history for that user, if it exists."""
- # Cannot load history for logged out users
- if profile is None:
- return []
- username = profile["preferred_username"]
-
- user_history = _UserHistory()
- if not user_history.initialized:
- warnings.warn("User history is not set in Gradio demo. You must use `user_history.render(...)` first.")
- return []
-
- with user_history._user_lock(username):
- # No file => no history saved yet
- jsonl_path = user_history._user_jsonl_path(username)
- if not jsonl_path.is_file():
- return []
-
- # Read history
- images = []
- for line in jsonl_path.read_text().splitlines():
- data = json.loads(line)
- images.append((data["path"], data["label"] or ""))
- return list(reversed(images))
-
-
-def _export_user_history(profile: gr.OAuthProfile | None) -> Dict | None:
- """Zip all history for that user, if it exists and return it as a downloadable file."""
- # Cannot load history for logged out users
- if profile is None:
- return None
- username = profile["preferred_username"]
-
- user_history = _UserHistory()
- if not user_history.initialized:
- warnings.warn("User history is not set in Gradio demo. You must use `user_history.render(...)` first.")
- return None
-
- # Zip history
- with user_history._user_lock(username):
- path = shutil.make_archive(
- str(_archives_path() / f"history_{username}"), "zip", user_history._user_path(username)
- )
-
- return gr.update(visible=True, value=path)
-
-
-def _delete_user_history(profile: gr.OAuthProfile | None) -> None:
- """Delete all history for that user."""
- # Cannot load history for logged out users
- if profile is None:
- return
- username = profile["preferred_username"]
-
- user_history = _UserHistory()
- if not user_history.initialized:
- warnings.warn("User history is not set in Gradio demo. You must use `user_history.render(...)` first.")
- return
-
- with user_history._user_lock(username):
- shutil.rmtree(user_history._user_path(username))
-
-
-####################
-# Internal helpers #
-####################
-
-
-def _copy_image(image: Image | np.ndarray | str | Path, dst_folder: Path) -> Path:
- """Copy image to the images folder."""
- # Already a path => copy it
- if isinstance(image, str):
- image = Path(image)
- if isinstance(image, Path):
- dst = dst_folder / f"{uuid4().hex}_{Path(image).name}" # keep file ext
- shutil.copyfile(image, dst)
- return dst
-
- # Still a Python object => serialize it
- if isinstance(image, np.ndarray):
- image = fromarray(image) # module-level converter; the Image class has no fromarray method
- if isinstance(image, Image):
- dst = dst_folder / f"{uuid4().hex}.png"
- image.save(dst)
- return dst
-
- raise ValueError(f"Unsupported image type: {type(image)}")
-
-
-def _resolve_folder_path(folder_path: str | Path | None) -> Path:
- if folder_path is not None:
- return Path(folder_path).expanduser().resolve()
-
- if os.getenv("SYSTEM") == "spaces" and os.path.exists("/data"): # Persistent storage is enabled!
- return Path("/data") / "_user_history"
-
- # Not in a Space or Persistent storage not enabled => local folder
- return Path(__file__).parent / "_user_history"
-
-
-def _archives_path() -> Path:
- # Doesn't have to be on persistent storage as it's only used for download
- path = Path(__file__).parent / "_user_history_exports"
- path.mkdir(parents=True, exist_ok=True)
- return path
-
-
-#################
-# Admin section #
-#################
-
-
-def _admin_section() -> None:
- title = gr.Markdown()
- title.attach_load_event(_display_if_admin(), every=None)
-
-
-def _display_if_admin() -> Callable:
- def _inner(profile: gr.OAuthProfile | None) -> str:
- if profile is None:
- return ""
- if profile["preferred_username"] in _fetch_admins():
- return _admin_content()
- return ""
-
- return _inner
-
-
-def _admin_content() -> str:
- return f"""
-## Admin section
-
-Running on **{os.getenv("SYSTEM", "local")}** (id: {os.getenv("SPACE_ID")}). {_get_msg_is_persistent_storage_enabled()}
-
-Admins: {', '.join(_fetch_admins())}
-
-{_get_nb_users()} user(s), {_get_nb_images()} image(s)
-
-### Configuration
-
-History folder: *{_UserHistory().folder_path}*
-
-Exports folder: *{_archives_path()}*
-
-### Disk usage
-
-{_disk_space_warning_message()}
-"""
-
-
-def _get_nb_users() -> int:
- user_history = _UserHistory()
- if not user_history.initialized:
- return 0
- if user_history.folder_path is not None and user_history.folder_path.exists():
- return len([path for path in user_history.folder_path.iterdir() if path.is_dir()])
- return 0
-
-
-def _get_nb_images() -> int:
- user_history = _UserHistory()
- if not user_history.initialized:
- return 0
- if user_history.folder_path is not None and user_history.folder_path.exists():
- return len([path for path in user_history.folder_path.glob("*/images/*")])
- return 0
-
-
-def _get_msg_is_persistent_storage_enabled() -> str:
- if os.getenv("SYSTEM") == "spaces":
- if os.path.exists("/data"):
- return "Persistent storage is enabled."
- else:
- return (
- "Persistent storage is not enabled. This means that user histories will be deleted when the Space is"
- " restarted. Consider adding a Persistent Storage in your Space settings."
- )
- return ""
-
-
-def _disk_space_warning_message() -> str:
- user_history = _UserHistory()
- if not user_history.initialized:
- return ""
-
- message = ""
- if user_history.folder_path is not None:
- total, used, _ = _get_disk_usage(user_history.folder_path)
- message += f"History folder: **{used / 1e9 :.0f}/{total / 1e9 :.0f}GB** used ({100*used/total :.0f}%)."
-
- total, used, _ = _get_disk_usage(_archives_path())
- message += f"\n\nExports folder: **{used / 1e9 :.0f}/{total / 1e9 :.0f}GB** used ({100*used/total :.0f}%)."
-
- return f"{message.strip()}"
-
-
-def _get_disk_usage(path: Path) -> Tuple[int, int, int]:
- for path in [path] + list(path.parents): # first check target_dir, then each parents one by one
- try:
- return shutil.disk_usage(path)
- except OSError: # if doesn't exist or can't read => fail silently and try parent one
- pass
- return 0, 0, 0
-
-
-@cache
-def _fetch_admins() -> List[str]:
- # Running locally => fake user is admin
- if os.getenv("SYSTEM") != "spaces":
- return ["FakeGradioUser"]
-
- # Running in a Space but SPACE_ID is unset => cannot determine the owner
- space_id = os.getenv("SPACE_ID")
- if space_id is None:
- return ["Unknown"]
-
- # Running in Space => try to fetch organization members
- # Otherwise, it's not an organization => namespace is the user
- namespace = space_id.split("/")[0]
- response = requests.get(f"https://huggingface.co/api/organizations/{namespace}/members")
- if response.status_code == 200:
- return sorted((member["user"] for member in response.json()), key=lambda x: x.lower())
- return [namespace]
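For context on the file removed above: its public surface is just `setup`, `render`, and `save_image`. A minimal integration sketch, assuming Gradio injects the OAuth profile from the `gr.OAuthProfile` type hint (as in the linked demo Space); `run_pipeline` is a placeholder standing in for the Space's real generator:

```python
import gradio as gr
from PIL import Image

import user_history  # the module shown above


def run_pipeline(prompt: str) -> Image.Image:
    # Placeholder generator: a blank image instead of a real diffusion model.
    return Image.new("RGB", (256, 256), color="gray")


def generate(prompt: str, profile: gr.OAuthProfile | None):
    image = run_pipeline(prompt)
    # No-op for logged-out users; otherwise appends to the user's history.jsonl.
    user_history.save_image(profile, image, label=prompt, metadata={"prompt": prompt})
    return image


with gr.Blocks() as demo:
    user_history.setup()  # optionally pass folder_path=... to choose the storage location
    with gr.Tab("Generate"):
        prompt = gr.Textbox(label="Prompt")
        result = gr.Image()
        gr.Button("Run").click(generate, inputs=[prompt], outputs=[result])
    with gr.Tab("Past generations"):
        user_history.render()  # gallery plus refresh / export / delete controls

demo.launch()
```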
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/tcrp-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/tcrp-plugin.js
deleted file mode 100644
index 62f4e67ac914c89fbb672b33b95fb4db4af644a2..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/tcrp-plugin.js
+++ /dev/null
@@ -1,34 +0,0 @@
-import TCRP from './tcrp.js';
-
-const Recorder = TCRP.Recorder;
-const Player = TCRP.Player;
-
-class TCRPPlugin extends Phaser.Plugins.BasePlugin {
- constructor(pluginManager) {
- super(pluginManager);
- }
-
- start() {
- var eventEmitter = this.game.events;
- eventEmitter.on('destroy', this.destroy, this);
- }
-
- addRecorder(parent, config) {
- return new Recorder(parent, config);
- }
-
- addPlayer(parent, config) {
- return new Player(parent, config);
- }
-}
-
-var methods = {
- runCommands: TCRP.RunCommands
-}
-
-Object.assign(
- TCRPPlugin.prototype,
- methods
-);
-
-export default TCRPPlugin;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/clickoutside/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/clickoutside/Factory.d.ts
deleted file mode 100644
index 744850f15da0f31086ed59345b7c2abb5a91cea6..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/clickoutside/Factory.d.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-// import * as Phaser from 'phaser';
-import ClickOutside from "./ClickOutside";
-
-export default function (
- gameObject: Phaser.GameObjects.GameObject,
- config?: ClickOutside.IConfig
-): ClickOutside;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/RunChildrenWrap.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/RunChildrenWrap.js
deleted file mode 100644
index 48eab787719ede6cb9e8145b0a44c31ba85f7260..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/RunChildrenWrap.js
+++ /dev/null
@@ -1,93 +0,0 @@
-import { GetDisplayWidth, GetDisplayHeight } from '../../../plugins/utils/size/GetDisplaySize.js';
-
-var RunChildrenWrap = function (lineWidth, out) {
- if (out === undefined) {
- out = {
- lines: [],
- width: 0,
- height: 0
- }
- } else {
- out.lines.length = 0;
- out.width = 0;
- out.height = 0;
- }
-
- var children = this.sizerChildren;
- var itemSpace = this.space.item,
- lineSpace = this.space.line,
- indentLeftOdd = this.space.indentLeftOdd,
- indentLeftEven = this.space.indentLeftEven,
- indentTopOdd = this.space.indentTopOdd,
- indentTopEven = this.space.indentTopEven;
- var child, childWidth, childHeight, remainder = 0, indentLeft;
- var lines = out.lines,
- lastLine = undefined,
- newLine;
- for (var i = 0, cnt = children.length; i < cnt; i++) {
- child = children[i];
- if (child === '\n') {
- child = undefined;
- childWidth = 0;
- newLine = true;
- } else {
- if (child.rexSizer.hidden) {
- continue;
- }
-
- if (child.isRexSizer) {
- child.layout(); // Use original size
- }
-
- childWidth = GetChildWidth(child);
- newLine = (remainder < childWidth) || (lastLine === undefined);
- }
- // New line
- if (newLine) {
- if (lastLine) {
- lastLine.width = lineWidth - (remainder + itemSpace);
- out.width = Math.max(out.width, lastLine.width);
- out.height += lastLine.height + lineSpace;
- }
-
- lastLine = {
- children: [],
- // width: 0,
- height: 0
- };
- lines.push(lastLine);
-
- indentLeft = (lines.length % 2) ? indentLeftOdd : indentLeftEven;
- remainder = lineWidth - indentLeft;
- }
-
- remainder -= (childWidth + itemSpace);
- if (child) {
- lastLine.children.push(child);
- childHeight = GetChildHeight(child);
- lastLine.height = Math.max(lastLine.height, childHeight);
- }
- }
-
- if (lastLine) {
- lastLine.width = lineWidth - (remainder + itemSpace);
- out.width = Math.max(out.width, lastLine.width);
- out.height += lastLine.height;
- }
-
- out.height += Math.max(indentTopOdd, indentTopEven);
-
- return out;
-}
-
-var GetChildWidth = function (child) {
- var padding = child.rexSizer.padding;
- return GetDisplayWidth(child) + padding.left + padding.right;
-}
-
-var GetChildHeight = function (child) {
- var padding = child.rexSizer.padding;
- return GetDisplayHeight(child) + padding.top + padding.bottom;
-}
-
-export default RunChildrenWrap;
\ No newline at end of file
diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/transforms.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
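The two spline functions above are normally driven through `piecewise_rational_quadratic_transform`, with the unnormalized widths, heights, and derivatives predicted by a small network. A self-contained sketch with random tensors; the bin count, shapes, and `tail_bound` are arbitrary choices, and the import assumes the module above is available as `transforms`:

```python
import torch

from transforms import piecewise_rational_quadratic_transform

num_bins = 10
x = torch.randn(2, 5) * 3  # values outside the interval pass through the linear tails

# Unnormalized per-bin parameters, e.g. the output of a conv/linear head.
widths = torch.randn(2, 5, num_bins)
heights = torch.randn(2, 5, num_bins)
derivs = torch.randn(2, 5, num_bins - 1)  # padded to num_bins + 1 inside 'linear' tails

y, logabsdet = piecewise_rational_quadratic_transform(
    x, widths, heights, derivs, inverse=False, tails="linear", tail_bound=5.0)
x_rec, inv_logabsdet = piecewise_rational_quadratic_transform(
    y, widths, heights, derivs, inverse=True, tails="linear", tail_bound=5.0)

# The transform is invertible up to float32 round-off.
print(torch.allclose(x, x_rec, atol=1e-4))
print(torch.allclose(logabsdet, -inv_logabsdet, atol=1e-4))
```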
diff --git a/spaces/AlekseyKorshuk/model-evaluation/tabs/arena_side_by_side.py b/spaces/AlekseyKorshuk/model-evaluation/tabs/arena_side_by_side.py
deleted file mode 100644
index 84494aa1a8c2ad7f4e7dcde54a7c88e62077ab25..0000000000000000000000000000000000000000
--- a/spaces/AlekseyKorshuk/model-evaluation/tabs/arena_side_by_side.py
+++ /dev/null
@@ -1,240 +0,0 @@
-import time
-
-import gradio as gr
-import random
-from conversation import Conversation
-from utils import get_matchmaking
-
-
-def get_tab_arena_side_by_side(download_bot_config, get_bot_profile, model_mapping, client):
- gr.Markdown("""
- # ⚔️ Chatbot Arena (side-by-side) ⚔️
- ## Rules
- * Chat with two models side-by-side and vote for which one is better!
- * You pick the models you want to chat with.
- * You can continue chatting and voting or click “Clear” to start a new round.
- """)
- default_bot_id = "_bot_e21de304-6151-4a04-b025-4c553ae8cbca"
- bot_config = download_bot_config(default_bot_id)
- user_state = gr.State(
- bot_config
- )
- with gr.Row():
- bot_id = gr.Textbox(label="Chai bot ID", value=default_bot_id, interactive=True)
- reload_bot_button = gr.Button("Reload bot")
- bot_profile = gr.HTML(get_bot_profile(bot_config))
- with gr.Accordion("Bot config:", open=False):
- bot_config_text = gr.Markdown(f"# Memory\n{bot_config['memory']}\n# Prompt\n{bot_config['prompt']}\n")
-
- with gr.Row():
- values = list(model_mapping.keys())
- first_message = (None, bot_config["firstMessage"])
- height = 450
- model_a_value, model_b_value = get_matchmaking(client, values, is_anonymous=False)
- with gr.Column():
- model_a = gr.Dropdown(values, value=model_a_value, label="Model A")
- chatbot_a = gr.Chatbot([first_message])
- chatbot_a.style(height=height)
- with gr.Column():
- model_b = gr.Dropdown(values, value=model_b_value, label="Model B")
- chatbot_b = gr.Chatbot([first_message])
- chatbot_b.style(height=height)
-
- with gr.Row():
- with gr.Column(scale=3):
- msg = gr.Textbox(show_label=False, value="Hi there!", interactive=True)
- with gr.Column(scale=3):
- send = gr.Button("Send")
- with gr.Row():
- vote_a = gr.Button("👈 A is better", interactive=False)
- vote_b = gr.Button("👉 B is better", interactive=False)
- vote_tie = gr.Button("🤝 Tie", interactive=False)
- vote_bad = gr.Button("💩 Both are bad", interactive=False)
- with gr.Row():
- regenerate = gr.Button("Regenerate", interactive=False)
- clear = gr.Button("Clear")
-
- with gr.Accordion("Generation parameters for model A", open=False):
- model = model_mapping[model_a.value]
- temperature_model_a = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["temperature"],
- interactive=True, label="Temperature")
- repetition_penalty_model_a = gr.Slider(minimum=0.0, maximum=2.0,
- value=model.generation_params["repetition_penalty"],
- interactive=True, label="Repetition penalty")
- max_new_tokens_model_a = gr.Slider(minimum=1, maximum=512, value=model.generation_params["max_new_tokens"],
- interactive=True, label="Max new tokens")
- top_k_model_a = gr.Slider(minimum=1, maximum=100, value=model.generation_params["top_k"],
- interactive=True, label="Top-K")
- top_p_model_a = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["top_p"],
- interactive=True, label="Top-P")
-
- with gr.Accordion("Generation parameters for model B", open=False):
- model = model_mapping[model_b.value]
- temperature_model_b = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["temperature"],
- interactive=True, label="Temperature")
- repetition_penalty_model_b = gr.Slider(minimum=0.0, maximum=2.0,
- value=model.generation_params["repetition_penalty"],
- interactive=True, label="Repetition penalty")
- max_new_tokens_model_b = gr.Slider(minimum=1, maximum=512, value=model.generation_params["max_new_tokens"],
- interactive=True, label="Max new tokens")
- top_k_model_b = gr.Slider(minimum=1, maximum=100, value=model.generation_params["top_k"],
- interactive=True, label="Top-K")
- top_p_model_b = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["top_p"],
- interactive=True, label="Top-P")
-
- def clear_chat(user_state):
- return "", [(None, user_state["firstMessage"])], [(None, user_state["firstMessage"])]
-
- def reload_bot(bot_id):
- bot_config = download_bot_config(bot_id)
- bot_profile = get_bot_profile(bot_config)
- return bot_profile, [(None, bot_config["firstMessage"])], [(None, bot_config[
- "firstMessage"])], bot_config, f"# Memory\n{bot_config['memory']}\n# Prompt\n{bot_config['prompt']}"
-
- def get_generation_args(model_tag):
- model = model_mapping[model_tag]
- return (
- model.generation_params["temperature"],
- model.generation_params["repetition_penalty"],
- model.generation_params["max_new_tokens"],
- model.generation_params["top_k"],
- model.generation_params["top_p"],
- )
-
- def respond(message, chat_history, user_state, model_tag,
- temperature, repetition_penalty, max_new_tokens, top_k, top_p):
- custom_generation_params = {
- 'temperature': temperature,
- 'repetition_penalty': repetition_penalty,
- 'max_new_tokens': max_new_tokens,
- 'top_k': top_k,
- 'top_p': top_p,
- }
- conv = Conversation(user_state)
- conv.set_chat_history(chat_history)
- conv.add_user_message(message)
- model = model_mapping[model_tag]
- bot_message = model.generate_response(conv, custom_generation_params)
- chat_history.append(
- (message, bot_message)
- )
- return "", chat_history
-
- def record_vote(user_state, vote,
- chat_history_a, model_tag_a,
- chat_history_b, model_tag_b):
- if len(chat_history_a) < 2:
- return
- conv_a = Conversation(user_state)
- conv_a.set_chat_history(chat_history_a)
- conv_b = Conversation(user_state)
- conv_b.set_chat_history(chat_history_b)
- if "A is better" in vote:
- vote_str = "model_a"
- elif "B is better" in vote:
- vote_str = "model_b"
- elif "Tie" in vote:
- vote_str = "tie"
- else:
- vote_str = "tie (bothbad)"
- row = {
- "timestamp": time.time(),
- "bot_id": user_state["bot_id"],
- "vote": vote_str,
- "model_a": model_tag_a,
- "model_b": model_tag_b,
- "is_anonymous": int(False)
- }
- sheet = client.open("Chat Arena").sheet1
- num_rows = len(sheet.get_all_records())
- sheet.insert_row(list(row.values()), index=num_rows + 2)
- return
-
- def regenerate_response(chat_history, user_state, model_tag,
- temperature, repetition_penalty, max_new_tokens, top_k, top_p):
- custom_generation_params = {
- 'temperature': temperature,
- 'repetition_penalty': repetition_penalty,
- 'max_new_tokens': max_new_tokens,
- 'top_k': top_k,
- 'top_p': top_p,
- }
- last_row = chat_history.pop(-1)
- chat_history.append((last_row[0], None))
- model = model_mapping[model_tag]
- conv = Conversation(user_state)
- conv.set_chat_history(chat_history)
- bot_message = model.generate_response(conv, custom_generation_params)
- chat_history[-1] = (last_row[0], bot_message)
- return "", chat_history
-
- def disable_voting():
- return [gr.Button.update(interactive=False)] * 4
-
- def enable_voting():
- return [gr.Button.update(interactive=True)] * 4
-
- def enable_send():
- return [gr.Button.update(interactive=True), gr.Button.update(interactive=False)]
-
- def enable_regenerate():
- return gr.Button.update(interactive=True)
-
- for vote in [vote_a, vote_b, vote_tie, vote_bad]:
- vote.click(record_vote,
- [user_state, vote, chatbot_a, model_a, chatbot_b, model_b],
- None,
- queue=False)
- vote.click(disable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
-
- model_a.change(get_generation_args, [model_a],
- [temperature_model_a, repetition_penalty_model_a, max_new_tokens_model_a, top_k_model_a,
- top_p_model_a], queue=False)
- model_b.change(get_generation_args, [model_b],
- [temperature_model_b, repetition_penalty_model_b, max_new_tokens_model_b, top_k_model_b,
- top_p_model_b], queue=False)
- reload_bot_button.click(reload_bot, [bot_id], [bot_profile, chatbot_a, chatbot_b, user_state, bot_config_text],
- queue=False)
- clear.click(clear_chat, [user_state], [msg, chatbot_a, chatbot_b], queue=False)
- model_a.change(clear_chat, [user_state], [msg, chatbot_a, chatbot_b], queue=False)
- model_b.change(clear_chat, [user_state], [msg, chatbot_a, chatbot_b], queue=False)
- clear.click(enable_send, None, [send, regenerate], queue=False)
- reload_bot_button.click(enable_send, None, [send, regenerate], queue=False)
-
- model_a.change(enable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
- model_b.change(enable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
- reload_bot_button.click(disable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
- send.click(enable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
- clear.click(disable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
- regenerate.click(enable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
- msg.submit(enable_voting, None, [vote_a, vote_b, vote_tie, vote_bad], queue=False)
-
- send.click(respond,
- [msg, chatbot_a, user_state, model_a, temperature_model_a, repetition_penalty_model_a,
- max_new_tokens_model_a, top_k_model_a, top_p_model_a], [msg, chatbot_a],
- queue=False)
- msg.submit(respond,
- [msg, chatbot_a, user_state, model_a, temperature_model_a, repetition_penalty_model_a,
- max_new_tokens_model_a, top_k_model_a, top_p_model_a], [msg, chatbot_a],
- queue=False)
-
- send.click(respond,
- [msg, chatbot_b, user_state, model_b, temperature_model_b, repetition_penalty_model_b,
- max_new_tokens_model_b, top_k_model_b, top_p_model_b], [msg, chatbot_b],
- queue=False)
- msg.submit(respond,
- [msg, chatbot_b, user_state, model_b, temperature_model_b, repetition_penalty_model_b,
- max_new_tokens_model_b, top_k_model_b, top_p_model_b], [msg, chatbot_b],
- queue=False)
-
- send.click(enable_regenerate, None, [regenerate], queue=False)
- msg.submit(enable_regenerate, None, [regenerate], queue=False)
-
- regenerate.click(regenerate_response,
- [chatbot_a, user_state, model_a, temperature_model_a, repetition_penalty_model_a,
- max_new_tokens_model_a, top_k_model_a,
- top_p_model_a], [msg, chatbot_a], queue=False)
- regenerate.click(regenerate_response,
- [chatbot_b, user_state, model_b, temperature_model_b, repetition_penalty_model_b,
- max_new_tokens_model_b, top_k_model_b,
- top_p_model_b], [msg, chatbot_b], queue=False)
diff --git a/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/train.py b/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/train.py
deleted file mode 100644
index 06ce3be20bc4fcbc5395c596b042c1bf2bdad8b8..0000000000000000000000000000000000000000
--- a/spaces/AlekseyKorshuk/thin-plate-spline-motion-model/train.py
+++ /dev/null
@@ -1,94 +0,0 @@
-from tqdm import trange
-import torch
-from torch.utils.data import DataLoader
-from logger import Logger
-from modules.model import GeneratorFullModel
-from torch.optim.lr_scheduler import MultiStepLR
-from torch.nn.utils import clip_grad_norm_
-from frames_dataset import DatasetRepeater
-import math
-
-def train(config, inpainting_network, kp_detector, bg_predictor, dense_motion_network, checkpoint, log_dir, dataset):
- train_params = config['train_params']
- optimizer = torch.optim.Adam(
- [{'params': list(inpainting_network.parameters()) +
- list(dense_motion_network.parameters()) +
- list(kp_detector.parameters()), 'initial_lr': train_params['lr_generator']}],lr=train_params['lr_generator'], betas=(0.5, 0.999), weight_decay = 1e-4)
-
- optimizer_bg_predictor = None
- if bg_predictor:
- optimizer_bg_predictor = torch.optim.Adam(
- [{'params':bg_predictor.parameters(),'initial_lr': train_params['lr_generator']}],
- lr=train_params['lr_generator'], betas=(0.5, 0.999), weight_decay = 1e-4)
-
- if checkpoint is not None:
- start_epoch = Logger.load_cpk(
- checkpoint, inpainting_network = inpainting_network, dense_motion_network = dense_motion_network,
- kp_detector = kp_detector, bg_predictor = bg_predictor,
- optimizer = optimizer, optimizer_bg_predictor = optimizer_bg_predictor)
- print('load success:', start_epoch)
- start_epoch += 1
- else:
- start_epoch = 0
-
- scheduler_optimizer = MultiStepLR(optimizer, train_params['epoch_milestones'], gamma=0.1,
- last_epoch=start_epoch - 1)
- if bg_predictor:
- scheduler_bg_predictor = MultiStepLR(optimizer_bg_predictor, train_params['epoch_milestones'],
- gamma=0.1, last_epoch=start_epoch - 1)
-
- if 'num_repeats' in train_params and train_params['num_repeats'] != 1:
- dataset = DatasetRepeater(dataset, train_params['num_repeats'])
- dataloader = DataLoader(dataset, batch_size=train_params['batch_size'], shuffle=True,
- num_workers=train_params['dataloader_workers'], drop_last=True)
-
- generator_full = GeneratorFullModel(kp_detector, bg_predictor, dense_motion_network, inpainting_network, train_params)
-
- if torch.cuda.is_available():
- generator_full = torch.nn.DataParallel(generator_full).cuda()
-
- bg_start = train_params['bg_start']
-
- with Logger(log_dir=log_dir, visualizer_params=config['visualizer_params'],
- checkpoint_freq=train_params['checkpoint_freq']) as logger:
- for epoch in trange(start_epoch, train_params['num_epochs']):
- for x in dataloader:
- if(torch.cuda.is_available()):
- x['driving'] = x['driving'].cuda()
- x['source'] = x['source'].cuda()
-
- losses_generator, generated = generator_full(x, epoch)
- loss_values = [val.mean() for val in losses_generator.values()]
- loss = sum(loss_values)
- loss.backward()
-
- clip_grad_norm_(kp_detector.parameters(), max_norm=10, norm_type = math.inf)
- clip_grad_norm_(dense_motion_network.parameters(), max_norm=10, norm_type = math.inf)
- if bg_predictor and epoch>=bg_start:
- clip_grad_norm_(bg_predictor.parameters(), max_norm=10, norm_type = math.inf)
-
- optimizer.step()
- optimizer.zero_grad()
- if bg_predictor and epoch>=bg_start:
- optimizer_bg_predictor.step()
- optimizer_bg_predictor.zero_grad()
-
- losses = {key: value.mean().detach().data.cpu().numpy() for key, value in losses_generator.items()}
- logger.log_iter(losses=losses)
-
- scheduler_optimizer.step()
- if bg_predictor:
- scheduler_bg_predictor.step()
-
- model_save = {
- 'inpainting_network': inpainting_network,
- 'dense_motion_network': dense_motion_network,
- 'kp_detector': kp_detector,
- 'optimizer': optimizer,
- }
- if bg_predictor and epoch>=bg_start:
- model_save['bg_predictor'] = bg_predictor
- model_save['optimizer_bg_predictor'] = optimizer_bg_predictor
-
- logger.log_epoch(epoch, model_save, inp=x, out=generated)
-
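The `train` function above pulls all of its hyper-parameters from `config['train_params']`. For orientation, a sketch of the keys it reads, with purely illustrative values; the real values live in the repo's YAML configs, which also carry the loss weights consumed by `GeneratorFullModel`:

```python
train_params = {
    "lr_generator": 2e-4,          # shared LR for the main and bg_predictor optimizers
    "batch_size": 28,
    "dataloader_workers": 6,
    "num_repeats": 75,             # wraps the dataset in DatasetRepeater when != 1
    "num_epochs": 100,
    "epoch_milestones": [70, 90],  # MultiStepLR decay points (gamma=0.1)
    "checkpoint_freq": 50,         # passed to Logger
    "bg_start": 0,                 # epoch at which bg_predictor starts being optimized
}
```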
diff --git a/spaces/AlexWang/lama/bin/gen_outpainting_dataset.py b/spaces/AlexWang/lama/bin/gen_outpainting_dataset.py
deleted file mode 100644
index 72f6fc16c372fbc0aec9643c7be1c44ce5efeba4..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/bin/gen_outpainting_dataset.py
+++ /dev/null
@@ -1,88 +0,0 @@
-#!/usr/bin/env python3
-import glob
-import logging
-import os
-import shutil
-import sys
-import traceback
-
-from saicinpainting.evaluation.data import load_image
-from saicinpainting.evaluation.utils import move_to_device
-
-os.environ['OMP_NUM_THREADS'] = '1'
-os.environ['OPENBLAS_NUM_THREADS'] = '1'
-os.environ['MKL_NUM_THREADS'] = '1'
-os.environ['VECLIB_MAXIMUM_THREADS'] = '1'
-os.environ['NUMEXPR_NUM_THREADS'] = '1'
-
-import cv2
-import hydra
-import numpy as np
-import torch
-import tqdm
-import yaml
-from omegaconf import OmegaConf
-from torch.utils.data._utils.collate import default_collate
-
-from saicinpainting.training.data.datasets import make_default_val_dataset
-from saicinpainting.training.trainers import load_checkpoint
-from saicinpainting.utils import register_debug_signal_handlers
-
-LOGGER = logging.getLogger(__name__)
-
-
-def main(args):
- try:
- if not args.indir.endswith('/'):
- args.indir += '/'
-
- for in_img in glob.glob(os.path.join(args.indir, '**', '*' + args.img_suffix), recursive=True):
- if 'mask' in os.path.basename(in_img):
- continue
-
- out_img_path = os.path.join(args.outdir, os.path.splitext(in_img[len(args.indir):])[0] + '.png')
- out_mask_path = f'{os.path.splitext(out_img_path)[0]}_mask.png'
-
- os.makedirs(os.path.dirname(out_img_path), exist_ok=True)
-
- img = load_image(in_img)
- height, width = img.shape[1:]
- pad_h, pad_w = int(height * args.coef / 2), int(width * args.coef / 2)
-
- mask = np.zeros((height, width), dtype='uint8')
-
- if args.expand:
- img = np.pad(img, ((0, 0), (pad_h, pad_h), (pad_w, pad_w)))
- mask = np.pad(mask, ((pad_h, pad_h), (pad_w, pad_w)), mode='constant', constant_values=255)
- else:
- mask[:pad_h] = 255
- mask[-pad_h:] = 255
- mask[:, :pad_w] = 255
- mask[:, -pad_w:] = 255
-
- # img = np.pad(img, ((0, 0), (pad_h * 2, pad_h * 2), (pad_w * 2, pad_w * 2)), mode='symmetric')
- # mask = np.pad(mask, ((pad_h * 2, pad_h * 2), (pad_w * 2, pad_w * 2)), mode = 'symmetric')
-
- img = np.clip(np.transpose(img, (1, 2, 0)) * 255, 0, 255).astype('uint8')
- img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
- cv2.imwrite(out_img_path, img)
-
- cv2.imwrite(out_mask_path, mask)
- except KeyboardInterrupt:
- LOGGER.warning('Interrupted by user')
- except Exception as ex:
- LOGGER.critical(f'Prediction failed due to {ex}:\n{traceback.format_exc()}')
- sys.exit(1)
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('indir', type=str, help='Root directory with images')
- aparser.add_argument('outdir', type=str, help='Where to store results')
- aparser.add_argument('--img-suffix', type=str, default='.png', help='Input image extension')
- aparser.add_argument('--expand', action='store_true', help='Generate mask by padding (true) or by cropping (false)')
- aparser.add_argument('--coef', type=float, default=0.2, help='How much to crop/expand in order to get masks')
-
- main(aparser.parse_args())
diff --git a/spaces/Ame42/UBTH/app.py b/spaces/Ame42/UBTH/app.py
deleted file mode 100644
index 4a0a4448e65f74409dbb4d1bcccfd590b757a8ff..0000000000000000000000000000000000000000
--- a/spaces/Ame42/UBTH/app.py
+++ /dev/null
@@ -1,225 +0,0 @@
-# This is a sample Python script.
-
-# Press Shift+F10 to execute it or replace it with your code.
-import gradio as gr
-from utils import *
-from datetime import datetime
-
-doc_type = ipp
-prev_sht = None
-curr_sht = None
-
-
-def ui_builder():
- with gr.Blocks() as demo:
- err_view = gr.Textbox(label="Error found", visible=False)
-
- with gr.Tab("Multiple files"):
-
- def generate_all(d):
- try:
- d = [retrieve(dt) for dt in d if retrieve(dt) is not None]
-
- out = "All months.csv"
-
- merge_all(d).to_csv(out)
-
- return {
- err_view: gr.update(visible=False),
- out_file: gr.update(value=out, visible=True, label="Merged file")
- }
- except TypeError:
- return {
- err_view: gr.update(
- value="Please select a folder containing all the files you want to filter",
- visible=True
- ),
- out_file: gr.update(visible=False)
- }
-
- # input ui
- gr.Markdown('### See data that shows up in every month file in the chosen folder')
- all_data = gr.File(label="Add a folder with all months", file_count="directory")
-
- # output ui
- output = gr.Markdown("## *Download your file", visible=False)
- out_file = gr.File(value="Tutorial Guide.pdf", label="Learn to use this app", visible=True)
- run = gr.Button("Generate file")
-
- run.click(fn=generate_all, inputs=all_data, outputs=[err_view, out_file])
- with gr.Tab("Compare two"):
-
- def err_str(err):
- return f"""\
-[Faulty file]
- Check ••••• {
- os.path.split(
- os.path.splitext(
- err.get_file()
- )[0]
- )[1][:-8]
- }
-
- {err.get_message()}\
-"""
-
- def raise_error(msg: str) -> dict:
- return {
- err_view: gr.update(
- value=msg,
- visible=True
- ),
- b: gr.update(visible=False),
- f: gr.update(visible=False),
- s: gr.update(visible=False),
- prev_dis: gr.update(value=None),
- curr_dis: gr.update(value=None),
- files: gr.update(visible=False)
- }
-
- def choose_type(event: gr.SelectData):
- global doc_type
- doc_type = event.value
- return {
- uploads: gr.update(visible=True)
- }
-
- def check_prev(pr):
- try:
- shts = pd.ExcelFile(pr.name).sheet_names
-
- return {
- prev_sheet: gr.update(choices=shts),
- sheets: gr.update(visible=True)
- }
- except UnusualFileError as err:
- return raise_error(err_str(err))
-
- def check_curr(cr):
- try:
- shts = pd.ExcelFile(cr.name).sheet_names
-
- return {
- curr_sheet: gr.update(choices=shts),
- sheets: gr.update(visible=True)
- }
- except UnusualFileError as err:
- return raise_error(err_str(err))
-
- def sheet_prev(event: gr.SelectData, file):
- global prev_sht
- prev_sht = event.value
- name, ext = os.path.splitext(file.name)
- pr = get_raw(file.name, prev_sht, ext)
- return {
- data: gr.update(visible=True),
- outputs: gr.update(visible=True),
- prev_dis: gr.update(value=pr)
- }
-
- def sheet_curr(event: gr.SelectData, file):
- global curr_sht
- curr_sht = event.value
- name, ext = os.path.splitext(file.name)
- cr = get_raw(file.name, curr_sht, ext)
- return {
- data: gr.update(visible=True),
- outputs: gr.update(visible=True),
- curr_dis: gr.update(value=cr)
- }
-
- def generate(p, c, b_i, f_i, s_i):
- current_time = datetime.now()
- formatted_time = current_time.strftime('• %d-%m-%Y • %H.%M.%S')
- b_file, f_file, s_file = f"Present in both {formatted_time}.csv", f"Exits {formatted_time}.csv", \
- f"Entries {formatted_time}.csv"
- # extract info from UI results
- try:
- p_name, p_ext = os.path.splitext(p.name)
- c_name, c_ext = os.path.splitext(c.name)
- p = get_data(p.name, prev_sht, doc_type, p_ext)
- c = get_data(c.name, curr_sht, doc_type, c_ext)
-
- # process the data
- if p is None or c is None:
- return raise_error(f"Incompatible column names in either or both files. Make sure they "
- f"conform to the standard.\n\nIPPIS: {ipp_col}\nGIFMIS: {gif_col}")
- elif p.columns[0] != c.columns[0]:
- return raise_error(f"You seem to be mixing {ipp} and {gif} files. This is not allowed")
- else:
- both_, p_merged, c_merged = merge_two(p, c, doc_type)
-
- clear_csv_trash()
-
- # save only the files the user requested
- if b_i:
- both_.to_csv(b_file, index=False)
-
- if f_i:
- p_merged.to_csv(f_file, index=False)
-
- if s_i:
- c_merged.to_csv(s_file, index=False)
-
- return {
- err_view: gr.update(visible=False),
- b: gr.update(value=b_file, visible=True) if b_i else gr.update(visible=False),
- f: gr.update(value=f_file, visible=True) if f_i else gr.update(visible=False),
- s: gr.update(value=s_file, visible=True) if s_i else gr.update(visible=False),
- prev_dis: gr.update(value=p),
- curr_dis: gr.update(value=c),
- files: gr.update(visible=True) if b_i or f_i or s_i else gr.update(visible=False)
- }
- except AttributeError:
- return raise_error("Please select both files below before generating files")
- except UnusualFileError as err:
- return raise_error(err_str(err))
-
- # input ui
- with gr.Blocks():
- ########################################################################################################
- type = gr.Radio([ipp, gif], label="Type", info="Choose a file type")
- ########################################################################################################
- with gr.Row(visible=False) as uploads:
- prev = gr.File(label="Previous month", file_types=['.csv', '.xls', '.xlsx'])
- curr = gr.File(label="Current month", file_types=['.csv', '.xls', '.xlsx'])
- ########################################################################################################
- with gr.Row(visible=False) as sheets:
- prev_sheet = gr.Radio(["N/A"], label="Sheets", info="Which sheet do you want to use?",
- interactive=True)
- curr_sheet = gr.Radio(["N/A"], label="Sheets", info="Which sheet do you want to use?",
- interactive=True)
- ########################################################################################################
- with gr.Row(visible=False) as data:
- prev_dis = gr.Dataframe(row_count=(5, "fixed"), col_count=(5, "fixed"), interactive=False)
- curr_dis = gr.Dataframe(row_count=(5, "fixed"), col_count=(5, "fixed"), interactive=False)
- ########################################################################################################
- with gr.Column(visible=False) as outputs:
- both = gr.Checkbox(label="See data that shows up in both months")
- first = gr.Checkbox(label="See data that's in the previous month but not in the current")
- second = gr.Checkbox(True, label="See data that's in the current month but not in the previous")
- ########################################################################################################
- # output ui
- with gr.Blocks():
- output = gr.Markdown("## Download your files", visible=False)
- with gr.Row(visible=False) as files:
- b = gr.File(label="Both months", visible=False)
- f = gr.File(label="Previous month", visible=False)
- s = gr.File(label="Current month", visible=False)
- run = gr.Button("Generate files")
-
- type.select(fn=choose_type, inputs=None, outputs=[uploads])
- prev.upload(fn=check_prev, inputs=[prev], outputs=[prev_sheet, sheets])
- curr.upload(fn=check_curr, inputs=[curr], outputs=[curr_sheet, sheets])
- prev_sheet.select(fn=sheet_prev, inputs=[prev], outputs=[data, outputs, prev_dis])
- curr_sheet.select(fn=sheet_curr, inputs=[curr], outputs=[data, outputs, curr_dis])
- run.click(fn=generate, inputs=[prev, curr, both, first, second], outputs=[err_view, b, f, s, prev_dis,
- curr_dis, files])
- demo.launch()
-
-
-# Press the green button in the gutter to run the script.
-if __name__ == '__main__':
- ui_builder()
-
-# See PyCharm help at https://www.jetbrains.com/help/pycharm/
diff --git a/spaces/Amrrs/hubble-jwst-compare/app.py b/spaces/Amrrs/hubble-jwst-compare/app.py
deleted file mode 100644
index 6b39bff534b67fc3e744dab7809a7bd295d3296e..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/hubble-jwst-compare/app.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import streamlit as st
-from streamlit_image_comparison import image_comparison
-
-# set page config
-st.set_page_config(page_title="James Webb Space Telescope vs Hubble Telescope Images", layout="centered")
-
-st.title("James Webb vs Hubble Telescope Pictures")
-
-st.markdown("# Southern Nebula")
-
-# render image-comparison
-image_comparison(
- img1="https://www.webbcompare.com/img/hubble/southern_nebula_700.jpg",
- img2="https://www.webbcompare.com/img/webb/southern_nebula_700.jpg",
- label1="Hubble",
- label2="Webb"
-)
-
-
-st.markdown("# Galaxy Cluster SMACS 0723")
-
-# render image-comparison
-image_comparison(
- img1="https://www.webbcompare.com/img/hubble/deep_field_700.jpg",
- img2="https://www.webbcompare.com/img/webb/deep_field_700.jpg",
- label1="Hubble",
- label2="Webb"
-)
-
-
-st.markdown("# Carina Nebula")
-
-# render image-comparison
-image_comparison(
- img1="https://www.webbcompare.com/img/hubble/carina_700.png",
- img2="https://www.webbcompare.com/img/webb/carina_700.jpg",
- label1="Hubble",
- label2="Webb"
-)
-
-st.markdown("# Stephan's Quintet")
-
-# render image-comparison
-image_comparison(
- img1="https://www.webbcompare.com/img/hubble/stephans_quintet_700.jpg",
- img2="https://www.webbcompare.com/img/webb/stephans_quintet_700.jpg",
- label1="Hubble",
- label2="Webb"
-)
-
-
-
-st.caption("Inspiration Credit - https://www.webbcompare.com/")
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_scipy_objects.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_scipy_objects.py
deleted file mode 100644
index a1ff25863822b04971d2c6dfdc17f5b28774cf05..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_scipy_objects.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# This file is autogenerated by the command `make fix-copies`, do not edit.
-from ..utils import DummyObject, requires_backends
-
-
-class LMSDiscreteScheduler(metaclass=DummyObject):
- _backends = ["torch", "scipy"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch", "scipy"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "scipy"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch", "scipy"])
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dcn/faster_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/dcn/faster_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py
deleted file mode 100644
index 8357766f50ff638f13ca56bd79d1b1c64e96f3dd..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/dcn/faster_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py
+++ /dev/null
@@ -1,15 +0,0 @@
-_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch',
- dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
- stage_with_dcn=(False, True, True, True)))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/empirical_attention/faster_rcnn_r50_fpn_attention_1111_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/empirical_attention/faster_rcnn_r50_fpn_attention_1111_1x_coco.py
deleted file mode 100644
index 13a4645bfdb50d5a2f04cee49ecc5f7647d10acf..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/empirical_attention/faster_rcnn_r50_fpn_attention_1111_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- backbone=dict(plugins=[
- dict(
- cfg=dict(
- type='GeneralizedAttention',
- spatial_range=-1,
- num_heads=8,
- attention_type='1111',
- kv_stride=2),
- stages=(False, False, True, True),
- position='after_conv2')
- ]))
diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/clip/__init__.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/clip/__init__.py
deleted file mode 100644
index dcc5619538c0f7c782508bdbd9587259d805e0d9..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/clip/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .clip import *
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/__init__.py
deleted file mode 100644
index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr
-from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light
diff --git a/spaces/AnticPan/Clothes2Human/app.py b/spaces/AnticPan/Clothes2Human/app.py
deleted file mode 100644
index bc6a9a587a819091e242d693107ce66931d92bdd..0000000000000000000000000000000000000000
--- a/spaces/AnticPan/Clothes2Human/app.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import os
-import json
-import requests
-import gradio as gr
-from util import base64_to_img, img_to_base64, resize_image
-
-url = os.getenv("REQUEST_URL")
-headers = {'Content-Type': 'application/json',
- 'Validation-Key': os.getenv("VALIDATION_KEY")}
-names = ["input_image", "prompt", "neg_prompt", "maxlen", "step", "cfg", "seed", "up", "down", "left", "right"]
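-# Pair the positional Gradio inputs with their parameter names, post the encoded
-# image to the remote inpainting endpoint, and decode the returned image.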
-def run(*params):
- params = {k:v for k, v in zip(names, params)}
- image = params.pop("input_image")
- image = resize_image(image)
- params["image_base64"] = img_to_base64(image)
- try:
- response = requests.post(url, headers=headers, data=json.dumps(params), timeout=30)
- if response.status_code != 200:
- raise ValueError()
- data = response.json()
- except Exception: # network failure, non-200 status, or invalid JSON
- raise gr.Error("Fail to generate")
- if data["code"] != 0:
- raise gr.Error(data["message"])
- result = base64_to_img(data["content"])
- return result
-
-with gr.Blocks() as demo:
- gr.Markdown("# SDXL inpainting for Clothes2Human")
- with gr.Row().style(equal_height=True):
- with gr.Column():
- input_image = gr.Image(type="pil", height=300)
- with gr.Column():
- output_image = gr.Image(type="pil", height=300)
-
- with gr.Row():
- with gr.Column():
- prompt = gr.Textbox(label="Prompt")
- neg_prompt = gr.Textbox(label="Negative Prompt")
-
- maxlen = gr.Slider(label="Max Edge Length", step=32, minimum=768, maximum=1536, value=1024)
- step = gr.Slider(label="Step", minimum=20, maximum=70, value=50, step=1)
-
- with gr.Column():
- up = gr.Slider(label="Scale Up Image", minimum=-0.3, maximum=0.5, value=0, step=0.1)
- down = gr.Slider(label="Scale Down Image", minimum=-0.3, maximum=0.5, value=0, step=0.1)
- left = gr.Slider(label="Scale Left Image", minimum=-0.3, maximum=0.5, value=0, step=0.1)
- right = gr.Slider(label="Scale Right Image", minimum=-0.3, maximum=0.5, value=0, step=0.1)
- with gr.Column():
- cfg = gr.Slider(label="CFG Scale", minimum=1.0, maximum=9.0, value=5.0, step=0.5)
- seed = gr.Slider(label="Seed", minimum=-1, maximum=1000000, value=-1, step=1)
- inpaint_button = gr.Button()
-
- run_in = [input_image, prompt, neg_prompt, maxlen, step, cfg, seed, up, down, left, right]
- inpaint_button.click(run, inputs=run_in, outputs=[output_image])
-
- gr.Examples([["imgs/1.jpg","A man wearing a white T-shirt stands on the beach","", 1024, 50, 5.0, 333866, 0.3, 0.3, 0.1, 0.1],
- ["imgs/2.jpg"," woman wearing a blue dress stands in a park, asian race","", 1280, 50, 5.0, 443652, 0.3, 0.3, 0.2, 0.2],
- ["imgs/3.jpg","A woman wearing a white dress stands","", 1280, 50, 5.0, 306728, -0.1, -0.2, 0, 0]],
- inputs=run_in, outputs=[output_image], fn=run, cache_examples=True)
-
-demo.queue(concurrency_count=2).launch()
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/padding.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/padding.py
deleted file mode 100644
index 1b2204f59f2ce4d9c8f2cca85326e4d81f8805bb..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/padding.py
+++ /dev/null
@@ -1,141 +0,0 @@
-from typing import cast, List, Optional, Tuple, TYPE_CHECKING, Union
-
-if TYPE_CHECKING:
- from .console import (
- Console,
- ConsoleOptions,
- RenderableType,
- RenderResult,
- )
-from .jupyter import JupyterMixin
-from .measure import Measurement
-from .style import Style
-from .segment import Segment
-
-
-PaddingDimensions = Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int, int]]
-
-
-class Padding(JupyterMixin):
- """Draw space around content.
-
- Example:
- >>> print(Padding("Hello", (2, 4), style="on blue"))
-
- Args:
- renderable (RenderableType): String or other renderable.
- pad (Union[int, Tuple[int]]): Padding for top, right, bottom, and left borders.
- May be specified with 1, 2, or 4 integers (CSS style).
- style (Union[str, Style], optional): Style for padding characters. Defaults to "none".
- expand (bool, optional): Expand padding to fit available width. Defaults to True.
- """
-
- def __init__(
- self,
- renderable: "RenderableType",
- pad: "PaddingDimensions" = (0, 0, 0, 0),
- *,
- style: Union[str, Style] = "none",
- expand: bool = True,
- ):
- self.renderable = renderable
- self.top, self.right, self.bottom, self.left = self.unpack(pad)
- self.style = style
- self.expand = expand
-
- @classmethod
- def indent(cls, renderable: "RenderableType", level: int) -> "Padding":
- """Make padding instance to render an indent.
-
- Args:
- renderable (RenderableType): String or other renderable.
- level (int): Number of characters to indent.
-
- Returns:
- Padding: A Padding instance.
- """
-
- return Padding(renderable, pad=(0, 0, 0, level), expand=False)
-
- @staticmethod
- def unpack(pad: "PaddingDimensions") -> Tuple[int, int, int, int]:
- """Unpack padding specified in CSS style."""
- if isinstance(pad, int):
- return (pad, pad, pad, pad)
- if len(pad) == 1:
- _pad = pad[0]
- return (_pad, _pad, _pad, _pad)
- if len(pad) == 2:
- pad_top, pad_right = cast(Tuple[int, int], pad)
- return (pad_top, pad_right, pad_top, pad_right)
- if len(pad) == 4:
- top, right, bottom, left = cast(Tuple[int, int, int, int], pad)
- return (top, right, bottom, left)
- raise ValueError(f"1, 2 or 4 integers required for padding; {len(pad)} given")
-
- def __repr__(self) -> str:
- return f"Padding({self.renderable!r}, ({self.top},{self.right},{self.bottom},{self.left}))"
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult":
- style = console.get_style(self.style)
- if self.expand:
- width = options.max_width
- else:
- width = min(
- Measurement.get(console, options, self.renderable).maximum
- + self.left
- + self.right,
- options.max_width,
- )
- render_options = options.update_width(width - self.left - self.right)
- if render_options.height is not None:
- render_options = render_options.update_height(
- height=render_options.height - self.top - self.bottom
- )
- lines = console.render_lines(
- self.renderable, render_options, style=style, pad=True
- )
- _Segment = Segment
-
- left = _Segment(" " * self.left, style) if self.left else None
- right = (
- [_Segment(f'{" " * self.right}', style), _Segment.line()]
- if self.right
- else [_Segment.line()]
- )
- blank_line: Optional[List[Segment]] = None
- if self.top:
- blank_line = [_Segment(f'{" " * width}\n', style)]
- yield from blank_line * self.top
- if left:
- for line in lines:
- yield left
- yield from line
- yield from right
- else:
- for line in lines:
- yield from line
- yield from right
- if self.bottom:
- blank_line = blank_line or [_Segment(f'{" " * width}\n', style)]
- yield from blank_line * self.bottom
-
- def __rich_measure__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "Measurement":
- max_width = options.max_width
- extra_width = self.left + self.right
- if max_width - extra_width < 1:
- return Measurement(max_width, max_width)
- measure_min, measure_max = Measurement.get(console, options, self.renderable)
- measurement = Measurement(measure_min + extra_width, measure_max + extra_width)
- measurement = measurement.with_maximum(max_width)
- return measurement
-
-
-if __name__ == "__main__": # pragma: no cover
- from pip._vendor.rich import print
-
- print(Padding("Hello, World", (2, 4), style="on blue"))
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install_lib.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install_lib.py
deleted file mode 100644
index ad3089c8b144f292e9560c8cefcbab4012d09a45..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install_lib.py
+++ /dev/null
@@ -1,238 +0,0 @@
-"""distutils.command.install_lib
-
-Implements the Distutils 'install_lib' command
-(install all Python modules)."""
-
-import os
-import importlib.util
-import sys
-
-from distutils.core import Command
-from distutils.errors import DistutilsOptionError
-
-
-# Extension for Python source files.
-PYTHON_SOURCE_EXTENSION = ".py"
-
-
-class install_lib(Command):
-
- description = "install all Python modules (extensions and pure Python)"
-
- # The byte-compilation options are a tad confusing. Here are the
- # possible scenarios:
- # 1) no compilation at all (--no-compile --no-optimize)
- # 2) compile .pyc only (--compile --no-optimize; default)
- # 3) compile .pyc and "opt-1" .pyc (--compile --optimize)
- # 4) compile "opt-1" .pyc only (--no-compile --optimize)
- # 5) compile .pyc and "opt-2" .pyc (--compile --optimize-more)
- # 6) compile "opt-2" .pyc only (--no-compile --optimize-more)
- #
- # The UI for this is two options, 'compile' and 'optimize'.
- # 'compile' is strictly boolean, and only decides whether to
- # generate .pyc files. 'optimize' is three-way (0, 1, or 2), and
- # decides both whether to generate .pyc files and what level of
- # optimization to use.
-
- user_options = [
- ('install-dir=', 'd', "directory to install to"),
- ('build-dir=', 'b', "build directory (where to install from)"),
- ('force', 'f', "force installation (overwrite existing files)"),
- ('compile', 'c', "compile .py to .pyc [default]"),
- ('no-compile', None, "don't compile .py files"),
- (
- 'optimize=',
- 'O',
- "also compile with optimization: -O1 for \"python -O\", "
- "-O2 for \"python -OO\", and -O0 to disable [default: -O0]",
- ),
- ('skip-build', None, "skip the build steps"),
- ]
-
- boolean_options = ['force', 'compile', 'skip-build']
- negative_opt = {'no-compile': 'compile'}
-
- def initialize_options(self):
- # let the 'install' command dictate our installation directory
- self.install_dir = None
- self.build_dir = None
- self.force = 0
- self.compile = None
- self.optimize = None
- self.skip_build = None
-
- def finalize_options(self):
- # Get all the information we need to install pure Python modules
- # from the umbrella 'install' command -- build (source) directory,
- # install (target) directory, and whether to compile .py files.
- self.set_undefined_options(
- 'install',
- ('build_lib', 'build_dir'),
- ('install_lib', 'install_dir'),
- ('force', 'force'),
- ('compile', 'compile'),
- ('optimize', 'optimize'),
- ('skip_build', 'skip_build'),
- )
-
- if self.compile is None:
- self.compile = True
- if self.optimize is None:
- self.optimize = False
-
- if not isinstance(self.optimize, int):
- try:
- self.optimize = int(self.optimize)
- if self.optimize not in (0, 1, 2):
- raise AssertionError
- except (ValueError, AssertionError):
- raise DistutilsOptionError("optimize must be 0, 1, or 2")
-
- def run(self):
- # Make sure we have built everything we need first
- self.build()
-
- # Install everything: simply dump the entire contents of the build
- # directory to the installation directory (that's the beauty of
- # having a build directory!)
- outfiles = self.install()
-
- # (Optionally) compile .py to .pyc
- if outfiles is not None and self.distribution.has_pure_modules():
- self.byte_compile(outfiles)
-
- # -- Top-level worker functions ------------------------------------
- # (called from 'run()')
-
- def build(self):
- if not self.skip_build:
- if self.distribution.has_pure_modules():
- self.run_command('build_py')
- if self.distribution.has_ext_modules():
- self.run_command('build_ext')
-
- def install(self):
- if os.path.isdir(self.build_dir):
- outfiles = self.copy_tree(self.build_dir, self.install_dir)
- else:
- self.warn(
- "'%s' does not exist -- no Python modules to install" % self.build_dir
- )
- return
- return outfiles
-
- def byte_compile(self, files):
- if sys.dont_write_bytecode:
- self.warn('byte-compiling is disabled, skipping.')
- return
-
- from distutils.util import byte_compile
-
- # Get the "--root" directory supplied to the "install" command,
- # and use it as a prefix to strip off the purported filename
- # encoded in bytecode files. This is far from complete, but it
- # should at least generate usable bytecode in RPM distributions.
- install_root = self.get_finalized_command('install').root
-
- if self.compile:
- byte_compile(
- files,
- optimize=0,
- force=self.force,
- prefix=install_root,
- dry_run=self.dry_run,
- )
- if self.optimize > 0:
- byte_compile(
- files,
- optimize=self.optimize,
- force=self.force,
- prefix=install_root,
- verbose=self.verbose,
- dry_run=self.dry_run,
- )
-
- # -- Utility methods -----------------------------------------------
-
- def _mutate_outputs(self, has_any, build_cmd, cmd_option, output_dir):
- if not has_any:
- return []
-
- build_cmd = self.get_finalized_command(build_cmd)
- build_files = build_cmd.get_outputs()
- build_dir = getattr(build_cmd, cmd_option)
-
- prefix_len = len(build_dir) + len(os.sep)
- outputs = []
- for file in build_files:
- outputs.append(os.path.join(output_dir, file[prefix_len:]))
-
- return outputs
-
- def _bytecode_filenames(self, py_filenames):
- bytecode_files = []
- for py_file in py_filenames:
- # Since build_py handles package data installation, the
- # list of outputs can contain more than just .py files.
- # Make sure we only report bytecode for the .py files.
- ext = os.path.splitext(os.path.normcase(py_file))[1]
- if ext != PYTHON_SOURCE_EXTENSION:
- continue
- if self.compile:
- bytecode_files.append(
- importlib.util.cache_from_source(py_file, optimization='')
- )
- if self.optimize > 0:
- bytecode_files.append(
- importlib.util.cache_from_source(
- py_file, optimization=self.optimize
- )
- )
-
- return bytecode_files
-
- # -- External interface --------------------------------------------
- # (called by outsiders)
-
- def get_outputs(self):
- """Return the list of files that would be installed if this command
- were actually run. Not affected by the "dry-run" flag or whether
- modules have actually been built yet.
- """
- pure_outputs = self._mutate_outputs(
- self.distribution.has_pure_modules(),
- 'build_py',
- 'build_lib',
- self.install_dir,
- )
- if self.compile:
- bytecode_outputs = self._bytecode_filenames(pure_outputs)
- else:
- bytecode_outputs = []
-
- ext_outputs = self._mutate_outputs(
- self.distribution.has_ext_modules(),
- 'build_ext',
- 'build_lib',
- self.install_dir,
- )
-
- return pure_outputs + bytecode_outputs + ext_outputs
-
- def get_inputs(self):
- """Get the list of files that are input to this command, ie. the
- files that get installed as they are named in the build tree.
- The files in this list correspond one-to-one to the output
- filenames returned by 'get_outputs()'.
- """
- inputs = []
-
- if self.distribution.has_pure_modules():
- build_py = self.get_finalized_command('build_py')
- inputs.extend(build_py.get_outputs())
-
- if self.distribution.has_ext_modules():
- build_ext = self.get_finalized_command('build_ext')
- inputs.extend(build_ext.get_outputs())
-
- return inputs
diff --git a/spaces/Atualli/yoloxTeste/checkYolox.sh b/spaces/Atualli/yoloxTeste/checkYolox.sh
deleted file mode 100644
index 4850bf3db22fb0b0fa33107557a1a2462eaaa7b0..0000000000000000000000000000000000000000
--- a/spaces/Atualli/yoloxTeste/checkYolox.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/sh
-export PATH=/home/atualli/.local/lib/python3.8/site-packages:$PATH
-cd ~/Projetos/huggingface/yoloxTeste
-SERVER=192.168.0.153
-PORT=8080
-
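-# If nothing is listening on the port, send a Telegram alert and restart the app.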
-if lsof -Pi :$PORT -sTCP:LISTEN -t >/dev/null ; then
- echo "running"
-else
- ./telegramCrise.sh "reiniciando_yolox_linux_192.168.0.153:8080"
- pkill -f app.py
- #rm -r /tmp/tmp1*.png
- python app.py &
- echo "not running"
-fi
-
-
diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/__init__.py b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/__init__.py
deleted file mode 100644
index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000
--- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/__init__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from text.symbols import *
-
-
-_symbol_to_id = {s: i for i, s in enumerate(symbols)}
-
-def cleaned_text_to_sequence(cleaned_text, tones, language):
- '''Converts cleaned phoneme text into ID sequences for symbols, tones and language.
- Args:
- cleaned_text: string of phoneme symbols to convert
- tones: per-phoneme tone values
- language: language code used to offset tones and look up the language ID
- Returns:
- Lists of phone IDs, shifted tone IDs and language IDs
- '''
- phones = [_symbol_to_id[symbol] for symbol in cleaned_text]
- tone_start = language_tone_start_map[language]
- tones = [i + tone_start for i in tones]
- lang_id = language_id_map[language]
- lang_ids = [lang_id for i in phones]
- return phones, tones, lang_ids
-
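-# Pick the language-specific BERT feature extractor (Chinese or English) for the text.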
-def get_bert(norm_text, word2ph, language):
- from .chinese_bert import get_bert_feature as zh_bert
- from .english_bert_mock import get_bert_feature as en_bert
- lang_bert_func_map = {
- 'ZH': zh_bert,
- 'EN': en_bert
- }
- bert = lang_bert_func_map[language](norm_text, word2ph)
- return bert
diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar El Tiempo De Juego Del Proyecto En Steam.md b/spaces/Benson/text-generation/Examples/Cmo Descargar El Tiempo De Juego Del Proyecto En Steam.md
deleted file mode 100644
index 15d52492874c6eea9407c61057de6e913b8dd049..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cmo Descargar El Tiempo De Juego Del Proyecto En Steam.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
Cómo descargar el tiempo de reproducción del proyecto en Steam
-
¿Te gustan los juegos de terror? ¿Te gusta jugar con tus amigos o extraños en línea? ¿Quieres experimentar un juego emocionante y aterrador que te mantendrá al borde de tu asiento? Si respondiste afirmativamente a cualquiera de estas preguntas, deberías probar Project Playtime, un juego multijugador gratuito de terror que está disponible en Steam. En este artículo, te mostraremos cómo descargar y jugar a Project Playtime en Steam, además de darte algunos consejos y trucos para sobrevivir como sobreviviente o monstruo.
-
¿Qué es Project Playtime?
-
Project Playtime es un juego de terror multijugador donde seis jugadores intentan crear un juguete gigante mientras sobreviven a un monstruo aterrador que deambula por la fábrica de juguetes. Un séptimo jugador controla al monstruo y solo tiene un objetivo: Encontrar y matar a todos. El juego fue lanzado el 12 de diciembre de 2022 por Mob Entertainment, un estudio de juegos indie con sede en Texas. El juego ha recibido críticas muy positivas de jugadores y críticos por igual, alabando su jugabilidad, gráficos, diseño de sonido y atmósfera.
-
cómo descargar el tiempo de juego del proyecto en Steam
Hay muchas razones por las que deberías jugar a Project Playtime si eres un fan de los juegos de terror. Estas son algunas de ellas:
-
-
El juego es gratuito. No necesitas pagar nada para descargar y jugar el juego. También puedes ganar boletos jugando partidos, completando logros y abriendo cajas de juguetes. Puedes usar estos boletos para comprar cosméticos, beneficios, sabotajes y otros artículos en la tienda.
-
El juego es multijugador. Puede jugar con sus amigos o unirse a grupos de presión al azar en línea. También puedes chatear con otros jugadores usando chat de voz o de texto. Puedes elegir jugar como un sobreviviente o un monstruo, cada uno con sus propios roles, habilidades y estrategias.
-
-
El juego es divertido. El juego tiene mucho valor de repetición porque cada partido es diferente y desafiante. El juego tiene una gran variedad y opciones de personalización. Puedes elegir entre diferentes monstruos, sobrevivientes, beneficios, sabotajes, mapas, modos y configuraciones. También puedes desbloquear nuevos objetos y logros mientras juegas.
-
-
Así que, si estás buscando un juego de terror que sea gratuito, multijugador y divertido, definitivamente deberías probar Project Playtime.
-
Cómo obtener una cuenta de Steam e instalar Steam
-
Antes de poder descargar y jugar Project Playtime en Steam, necesitas tener una cuenta de Steam e instalar el cliente de Steam en tu computadora. Estos son los pasos para hacerlo:
Cómo encontrar y descargar Project Playtime en Steam
-
Ahora que tienes una cuenta de Steam e has instalado el cliente de Steam, puedes encontrar y descargar Project Playtime en Steam. Estos son los pasos para hacerlo:
-
-
Abre el cliente de Steam y ve a la pestaña "Tienda".
-
-
Verás la página del juego en la tienda de Steam. Haz clic en el botón "Jugar".
-
Aparecerá una ventana emergente pidiéndole que instale Project Playtime. Haga clic en el botón "Next".
-
Seleccione la carpeta de destino donde desea instalar el juego. Haga clic en el botón "Next".
-
Se iniciará el proceso de descarga. Puede ver el progreso y la velocidad de la descarga en la pestaña "Descargas".
-
Espere a que termine la descarga. Puede tomar algún tiempo dependiendo de su conexión a Internet y espacio en disco.
-
Una vez que la descarga se haya completado, verá un mensaje que dice "Project Playtime ya está listo para jugar". Haga clic en el botón "Play".
-
Has descargado e instalado correctamente el Project Playtime en Steam. ¡Disfruta!
-
-
Cómo jugar Project Playtime en Steam
-
Ahora que has descargado e instalado Project Playtime en Steam, puedes empezar a reproducirlo. Estos son los pasos para hacerlo:
-
-
Iniciar tiempo de reproducción del proyecto desde la biblioteca de Steam o desde el acceso directo del escritorio.
-
Verás el menú principal del juego. Puedes acceder a diferentes opciones como configuración, tienda, logros, perfil, etc.
-
Para empezar a jugar, haga clic en el botón "Play". Verá dos opciones: "Quick Match" y "Custom Match".
-
Si desea unirse a un lobby aleatorio en línea, haga clic en "Quick Match". Se le emparejará con otros jugadores en función de su región y preferencias. Puedes elegir jugar como sobreviviente o monstruo, o dejar que el juego decida por ti al azar.
-
-
Una vez que estás en un lobby, puedes chatear con otros jugadores usando chat de voz o texto. También puedes cambiar la apariencia de tu personaje haciendo clic en el botón "Personalizar". Puedes equipar diferentes cosméticos, beneficios, sabotajes, etc. que hayas comprado o ganado en la tienda. También puede cambiar su rol haciendo clic en el botón "Rol". Puedes elegir jugar como sobreviviente o monstruo, o dejar que el juego decida por ti al azar.
-
Cuando todo el mundo está listo, el anfitrión puede iniciar el partido haciendo clic en el botón "Inicio". El juego cargará el mapa y el modo que se seleccionaron.
-
El partido comenzará con una corta escena que presenta la historia y el objetivo del juego. Los supervivientes aparecerán en un lugar aleatorio en la fábrica de juguetes. El monstruo aparecerá en una habitación oculta cercana.
-
El objetivo de los supervivientes es encontrar y recoger seis piezas de juguete que están dispersas por el mapa. Necesitan llevarlos a una máquina de juguete gigante y montarlos juntos. También necesitan resolver rompecabezas, evitar trampas y esconderse del monstruo. Los sobrevivientes tienen una salud limitada, resistencia y batería de linterna. Pueden usar beneficios y sabotajes para ayudarles a escapar.
-El objetivo del monstruo es encontrar y matar a todos los supervivientes antes de que completen su objetivo. El monstruo puede usar diferentes habilidades, como correr, rugir, aplastar, etc. El monstruo también puede usar beneficios y sabotajes para obstaculizar el progreso de los sobrevivientes y atraparlos.
-
El combate terminará cuando los supervivientes completen su objetivo y escapen, o el monstruo mate a todos los supervivientes. El juego mostrará los resultados del partido, como quién ganó, quién murió, quién escapó, etc. El juego también otorgará entradas a cada jugador en función de su rendimiento.
-
Puede jugar otro partido haciendo clic en el botón "Revancha", o volver al menú principal haciendo clic en el botón "Dejar".
-
-
Consejos y trucos para jugar Project Playtime
-
-
Cómo trabajar juntos como un sobreviviente
-
Como sobreviviente, necesitas cooperar con tus compañeros sobrevivientes para escapar del monstruo y completar tu objetivo. Aquí hay algunas maneras de trabajar juntos como un sobreviviente:
-
-
Comunícate con tus compañeros de equipo mediante chat de voz o de texto. Puedes compartir información, advertirse, pedir ayuda, etc.
-
Manténganse juntos tanto como sea posible. Es más probable que sobrevivan si tienen a alguien cuidando su espalda. También pueden ayudarse mutuamente con rompecabezas, trampas y sanación.
-
Sepárate cuando sea necesario. A veces necesitas cubrir más terreno o distraer al monstruo. También puede utilizar diferentes ventajas y sabotajes para coordinar sus acciones.
-
Sea consciente de su entorno. Necesita saber dónde están las piezas de juguete, rompecabezas, salidas, escondites, etc. . También debes estar atento a pistas, ruidos, sombras, etc. que indiquen dónde está el monstruo.
-
Sé inteligente y sigiloso. Debes evitar hacer ruido o dejar rastros que puedan alertar al monstruo. También necesita usar su linterna sabiamente y esconderse cuando sea necesario.
-
-
Cómo usar beneficios y sabotajes como sobreviviente
-
Como sobreviviente, puedes usar diferentes beneficios y sabotajes para obtener una ventaja sobre el monstruo. Aquí hay algunos ejemplos de ventajas y sabotajes que puedes usar como sobreviviente:
-
-
Las ventajas son habilidades pasivas que le dan beneficios como el aumento de la salud, la resistencia, la velocidad, etc. Puede equipar hasta tres ventajas a la vez en el menú personalizado.
-
Los sabotajes son habilidades activas que te permiten interactuar con objetos en el mapa como puertas, interruptores, respiraderos, etc. Puedes usar sabotajes para bloquear, atrapar o distraer al monstruo. Puede equipar hasta dos sabotajes a la vez en el menú personalizado.
-
-
Algunos beneficios y sabotajes tienen tiempos de reutilización o cargos que limitan la frecuencia con la que puede usarlos. Por ejemplo, solo puedes usar el beneficio "Segunda Oportunidad" una vez por partido para revivirte después de ser derribado por el monstruo. Solo puedes usar el sabotaje "Firecracker" tres veces por partido para crear un ruido fuerte que atraiga o espante al monstruo.
-
Puedes comprar nuevos beneficios y sabotajes en la tienda con entradas que ganas jugando partidos, completando logros y abriendo cajas de juguetes.
-
-
Cómo cazar sobrevivientes como un monstruo
-
Como monstruo, necesitas usar tus sentidos, habilidades y estrategias para encontrar y matar a todos los supervivientes antes de que escapen. Aquí hay algunas maneras de cazar sobrevivientes como un monstruo:
-
-
-
Usa tu visión, oído y olor para localizar a los sobrevivientes. Puedes ver sus huellas, escuchar sus ruidos y oler su sangre. También puede ver sus rayos de linterna y siluetas.
-
Usa tus habilidades para perseguir, atacar y matar a los supervivientes. Puedes correr, rugir, aplastar y morder. También puedes usar habilidades especiales que son únicas para cada monstruo. Por ejemplo, el Payaso puede lanzar pasteles que los sobrevivientes ciegos, la Muñeca puede teletransportarse a las muñecas cercanas, y el Teddy puede transformarse en un oso gigante.
-
Usa tus beneficios y sabotajes para obstaculizar, atrapar y asustar a los sobrevivientes. Puede equipar hasta tres ventajas y dos sabotajes a la vez en el menú personalizado. Los beneficios son habilidades pasivas que te dan beneficios como mayor velocidad, daño, salud, etc. Los sabotajes son habilidades activas que te permiten interactuar con objetos en el mapa como puertas, interruptores, respiraderos, etc. Puedes usar sabotajes para bloquear, atrapar o distraer a los sobrevivientes.
-
-
Algunos beneficios y sabotajes tienen tiempos de reutilización o cargos que limitan la frecuencia con la que puede usarlos. Por ejemplo, solo puedes usar el beneficio "Rage" una vez por partido para aumentar tu daño y velocidad por un corto tiempo. Solo puedes usar el sabotaje "Blackout" tres veces por partido para apagar todas las luces del mapa durante unos segundos.
-
Puedes comprar nuevos beneficios y sabotajes en la tienda con entradas que ganas jugando partidos, completando logros y abriendo cajas de juguetes.
-
-
Cómo personalizar tu personaje y jugabilidad en Project Playtime
-
Project Playtime te permite personalizar tu personaje y jugabilidad de varias maneras. Puede acceder a la tienda, comprar cosméticos, beneficios, sabotajes y otros artículos con boletos. También puedes cambiar la configuración, como gráficos, sonido, controles, etc. Estas son algunas formas de personalizar tu personaje y el modo de juego en Project Playtime:
-
Cómo ganar entradas en Project Playtime
-
Las entradas son la moneda de Project Playtime. Puede utilizar las entradas para comprar artículos en la tienda. Aquí hay algunas maneras de ganar tickets en Project Playtime:
-
-
Juega partidos. Ganarás tickets según tu rendimiento en cada partido. La cantidad de tickets que ganes dependerá de factores como tu rol, tu puntuación, el resultado de tu equipo, etc.
-
Logros completos. Usted ganará boletos para completar varios logros en el juego. Los logros son desafíos que requieren que hagas tareas específicas o alcances ciertos hitos en el juego. Por ejemplo, puedes ganar un logro por escapar como sobreviviente 10 veces o matar a 50 sobrevivientes como monstruo.
-
Abre cajas de juguetes. Ganarás entradas para abrir cajas de juguetes que encuentres en el mapa o recibirás como recompensas. Las cajas de juguetes son contenedores que contienen artículos aleatorios como cosméticos, beneficios, sabotajes, etc. Puede abrir cajas de juguetes haciendo clic en ellas en la tienda o en su inventario.
-
-
Cómo gastar entradas en Project Playtime
-
-
-
Navegar por la tienda. Puede acceder a la tienda haciendo clic en el botón "Almacenar" en el menú principal o en el lobby. Puede navegar por diferentes categorías de artículos como cosméticos, beneficios, sabotajes, etc. Puede ver el nombre, la descripción, el precio y la vista previa de cada artículo. También puede filtrar elementos por rol, rareza, tipo, etc.
-
Comprar artículos. Puede comprar artículos haciendo clic en el botón "Comprar" junto al artículo que desee. Verá una ventana de confirmación que le pedirá que confirme su compra. También puede ver cuántas entradas tiene y cuántas entradas gastará. Haga clic en el botón "Confirmar" para completar su compra.
-
Equipar artículos. Puede equipar artículos haciendo clic en el botón "Personalizar" en el vestíbulo o en la tienda. Puedes ver la apariencia y las estadísticas de tu personaje. También puedes ver los artículos que has comprado o ganado en tu inventario. Puede equipar los elementos arrastrando y soltando a las ranuras correspondientes. Puede equipar hasta tres cosméticos, tres beneficios y dos sabotajes a la vez. También puede desigualizar elementos arrastrándolos y soltándolos en la papelera.
-
Usa elementos. Puedes usar elementos jugando con tu personaje personalizado. Puedes ver los elementos equipados en el menú del partido o en la pantalla del juego. Puede usar beneficios y sabotajes presionando los botones o teclas correspondientes.
-
-
Conclusión
-
-
Preguntas frecuentes
-
Aquí hay algunas preguntas frecuentes sobre Project Playtime:
-
-
Q: ¿Está libre el Project Playtime?
-
A: Sí, Project Playtime es gratuito. No necesitas pagar nada para descargar y jugar el juego.
-
Q: ¿Es multijugador Project Playtime?
-
A: Sí, Project Playtime es multijugador. Puedes jugar con tus amigos o unirte a grupos de presión aleatorios en línea.
-
Q: ¿Es Project Playtime horror?
-
A: Sí, Project Playtime es horror. El juego tiene una atmósfera oscura y espeluznante que te hará sentir incómodo y asustado.
-
Q: ¿Cuántos jugadores pueden jugar Project Playtime?
-
A: Project Playtime admite hasta siete jugadores por partido. Seis jugadores juegan como supervivientes y un jugador juega como un monstruo.
-
Q: ¿Cuánto tiempo es una coincidencia en Project Playtime?
-
A: Un partido en Project Playtime dura de 10 a 15 minutos, dependiendo del mapa, el modo, la dificultad y las habilidades de los jugadores.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar 3d Fondo De Pantalla En Vivo.md b/spaces/Benson/text-generation/Examples/Descargar 3d Fondo De Pantalla En Vivo.md
deleted file mode 100644
index 983b1da8ea77ec7088e8a10eea38ba3a3209c1c9..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar 3d Fondo De Pantalla En Vivo.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
Descargar 3D Live Wallpaper: Cómo hacer que su escritorio cobre vida
-
¿Quieres darle vida a tu escritorio con algunas imágenes impresionantes? ¿Quieres hacer tu computadora más personalizada e interactiva? ¿Quieres divertirte y divertirte mientras trabajas o estudias? Si respondiste sí a cualquiera de estas preguntas, entonces deberías intentar descargar fondos de escritorio en 3D.
3D live wallpaper es un tipo de papel pintado animado que utiliza gráficos tridimensionales para crear escenas realistas e inmersivas en su pantalla. A diferencia de los fondos de pantalla estáticos, los fondos de pantalla en vivo en 3D pueden moverse, cambiar y reaccionar a sus acciones. Puedes elegir entre una variedad de temas y estilos, como naturaleza, fantasía, ciencia ficción, anime, películas, juegos y más. También puede crear su propio fondo de pantalla en vivo en 3D utilizando imágenes, vídeos, sitios web o aplicaciones.
-
En este artículo, le mostraremos cómo descargar fondos de escritorio en vivo en 3D para su escritorio y cómo usarlo de manera efectiva. También compartiremos algunos de los beneficios de usar fondos de escritorio en vivo en 3D y responderemos algunas preguntas comunes al respecto. Al final de este artículo, podrás hacer que tu escritorio cobre vida con un increíble fondo de pantalla en vivo en 3D.
-
¿Qué son los fondos de pantalla en vivo en 3D?
-
Como su nombre indica, 3D live wallpaper es un tipo de fondo de pantalla que utiliza gráficos tridimensionales para crear escenas dinámicas y realistas en la pantalla. A diferencia de los fondos de pantalla normales, que son solo imágenes que permanecen inmóviles en su fondo, el fondo de pantalla en vivo en 3D puede moverse, cambiar e interactuar con su ratón o teclado. Por ejemplo, puede tener un fondo de pantalla en vivo en 3D de un bosque que cambia con las estaciones, o un fondo de pantalla en vivo en 3D de una nave espacial que vuela por el espacio.
-
-
Algunos ejemplos de temas populares de fondos de escritorio en vivo en 3D son:
-
-
-
Naturaleza: Usted puede tener un fondo de pantalla en vivo 3D de una cascada, una montaña, una playa, un bosque, o cualquier otro paisaje natural que te gusta.
-
Fantasía: Puedes tener un fondo de pantalla en vivo en 3D de un dragón, un unicornio, un hada, un castillo o cualquier otra criatura de fantasía o escenario que te guste.
-
Ciencia ficción: Puedes tener un fondo de pantalla en 3D de una nave espacial, un robot, un planeta alienígena, una ciudad futurista o cualquier otro elemento de ciencia ficción que te guste.
-
Anime: Puede tener un fondo de pantalla en 3D en vivo de su personaje o escena de anime favorito de un espectáculo de anime o película.
-
Películas: Puedes tener un fondo de pantalla en 3D de tu personaje o escena de película favorita de una película que te guste.
-
Juegos: Puedes tener un fondo de pantalla en 3D en vivo de tu personaje o escena favorita de un juego que te guste.
-
-
Estos son solo algunos de los ejemplos de temas de fondos de escritorio en vivo en 3D que puedes encontrar en línea. Hay muchas más opciones y categorías que puedes explorar y descargar.
-
¿Por qué utilizar fondos de pantalla en vivo 3D?
-
Ahora que sabes lo que son los fondos de pantalla en vivo en 3D, es posible que te estés preguntando por qué deberías usarlos en tu escritorio. Estos son algunos de los beneficios de usar fondos de pantalla en vivo en 3D:
-
-
Personalización: Puede personalizar su escritorio con fondos de pantalla en vivo en 3D que se adapten a sus preferencias, personalidad, estado de ánimo o intereses. También puede crear sus propios fondos de pantalla en 3D usando sus propias imágenes, videos, sitios web o aplicaciones.
-
Interactividad: Puedes interactuar con tus fondos de pantalla en 3D usando tu ratón o teclado. También puede ajustar la configuración y las características de sus fondos de pantalla en vivo en 3D para hacerlos más receptivos e interactivos.
-
Entretenimiento: Puede disfrutar viendo sus fondos de pantalla en vivo en 3D a medida que se mueven, cambian y reaccionan a sus acciones. También puede divertirse jugando con sus fondos de pantalla en vivo en 3D, ya que ofrecen varios efectos y animaciones.
-
-
-
¿Cómo descargar fondos de pantalla en vivo en 3D?
-
Si desea descargar fondos de pantalla en vivo en 3D para su escritorio, tiene varias fuentes y métodos para elegir. Estos son algunos de los más populares y confiables:
-
MoeWalls
-
MoeWalls es un sitio web que ofrece populares fondos de pantalla en vivo gratuitos, fondos de pantalla animados y videos para su escritorio. Puede navegar a través de varias categorías y géneros de fondos de pantalla en vivo en 3D, como anime, juegos, películas, naturaleza, fantasía, ciencia ficción y más. También puede buscar palabras clave específicas o títulos de fondos de pantalla en 3D que desea descargar.
-
Para descargar fondos de pantalla en vivo en 3D de MoeWalls, debe seguir estos pasos:
Inicie Wallpaper Engine y haga clic en la pestaña Taller.
-
Seleccione la categoría o género de fondo de pantalla en vivo en 3D que desea descargar.
-
Elija el fondo de pantalla en vivo en 3D que desee de la lista de resultados.
-
-
El fondo de pantalla en vivo 3D se descargará y se agregará a su biblioteca de Wallpaper Engine.
-
-
A continuación, puede seleccionar el fondo de pantalla en vivo 3D de su biblioteca y aplicarlo como fondo de escritorio.
-
Videos de Pexels
-
Pexels Videos es un sitio web que proporciona videos de stock gratuitos para uso personal y comercial. Usted puede encontrar varios tipos de vídeos en Pexels Videos, incluyendo fondos de escritorio videos 3D. Estos son videos que están diseñados para ser utilizados como fondos de escritorio con gráficos 3D realistas e inmersivos.
-
Para descargar videos de escritorio en 3D de Pexels Videos, debe seguir estos pasos:
Escriba "fondo de escritorio 3d" en el cuadro de búsqueda y pulse enter.
-
Elija el vídeo que desee de la lista de resultados.
-
Haga clic en el botón de descarga debajo de la vista previa del video.
-
Guarde el archivo en su computadora.
-
-
A continuación, puede utilizar el archivo como fondo de escritorio o utilizar un software como Wallpaper Engine para ejecutarlo como fondo de pantalla en vivo.
-
¿Cómo usar fondos de pantalla en vivo en 3D?
-
Una vez que haya descargado fondos de pantalla en vivo en 3D para su escritorio, es posible que desee saber cómo usarlos de manera efectiva. Aquí hay algunos consejos y trucos sobre cómo utilizar fondos de pantalla en vivo 3D en su escritorio:
-
-
Configurarlos como fondo: Puede establecer fondos de pantalla en vivo 3D como fondo de escritorio haciendo clic derecho en el archivo y seleccionando "Establecer como fondo de escritorio". Alternativamente, puede utilizar un software como Wallpaper Engine para aplicar fondos de pantalla en vivo 3D como fondo de escritorio. Wallpaper Engine también le permite personalizar la configuración y las características de sus fondos de pantalla en vivo en 3D, tales como resolución, velocidad de fotogramas, sonido, rendimiento y más.
-
-
Pausándolos cuando sea necesario: Puede pausar sus fondos de pantalla en vivo en 3D cuando necesite centrarse en otras tareas o ahorrar batería. Puede hacer esto haciendo clic derecho en el escritorio y seleccionando "Pausa" o "Detener" en el menú. Alternativamente, puede usar un software como Wallpaper Engine para pausar sus fondos de pantalla en vivo 3D automáticamente cuando está usando una aplicación de pantalla completa o cuando su computadora está inactiva.
-
-
El uso de fondos de pantalla en vivo en 3D puede mejorar su experiencia de escritorio y hacerla más agradable. Sin embargo, también debe tener en cuenta los posibles inconvenientes de usar fondos de pantalla en vivo en 3D, como el aumento del uso de CPU y GPU, el consumo de batería y la distracción. También debe asegurarse de que su computadora cumple con los requisitos mínimos para ejecutar fondos de pantalla en vivo 3D sin problemas y sin retrasos.
-
Conclusión
-
En conclusión, 3D live wallpaper es un tipo de papel pintado animado que utiliza gráficos tridimensionales para crear escenas realistas e inmersivas en la pantalla. Puede descargar fondos de escritorio en vivo en 3D de varias fuentes en línea, como MoeWalls, Wallpaper Engine y Pexels Videos. También puede usar fondos de escritorio en vivo en 3D de manera efectiva ajustando los ajustes y pausándolos cuando sea necesario.
-
Si quieres hacer que tu escritorio cobre vida con un increíble fondo de pantalla en vivo en 3D, deberías intentar descargar algunos de ellos hoy. Usted se sorprenderá por lo mucho que pueden transformar su escritorio en un entorno impresionante e interactivo. También tendrás diversión y entretenimiento mientras trabajas o estudias con tu fondo de pantalla en 3D.
-
Entonces, ¿qué estás esperando? Descargar 3D fondo de pantalla en vivo ahora y disfrutar!
-
Preguntas frecuentes
-
Aquí están algunas de las preguntas y respuestas más frecuentes sobre el fondo de pantalla en vivo 3D:
-
-
¿Cuál es la diferencia entre el papel pintado en vivo en 3D y el papel pintado normal?
-
-
¿Cuánto cuesta descargar fondos de escritorio en 3D?
-Depende de la fuente y el tipo de fondo de pantalla en vivo 3D que desea descargar. Algunos de ellos son gratuitos, mientras que algunos de ellos requieren una cuota o una suscripción. Por ejemplo, MoeWalls y Pexels Videos ofrecen fondos de escritorio en vivo en 3D gratis, mientras que Wallpaper Engine cuesta $4.99.
-
¿El uso de fondos de escritorio en vivo en 3D afecta el rendimiento de mi computadora?
-Depende de la calidad y la complejidad del fondo de pantalla en vivo 3D que está utilizando. Algunos de ellos pueden consumir más recursos de CPU y GPU que otros, lo que puede afectar el rendimiento de su computadora. Puede reducir este impacto reduciendo la resolución o la velocidad de fotogramas de su fondo de pantalla 3D en vivo o deteniéndolo cuando no esté en uso.
-
¿Puedo crear mi propio fondo de pantalla en vivo en 3D?
-Sí, puede crear su propio fondo de pantalla en vivo en 3D utilizando imágenes, videos, sitios web o aplicaciones. Puede utilizar un software como Wallpaper Engine para crear y editar su propio fondo de pantalla en vivo 3D utilizando su editor incorporado.
-
¿Puedo usar fondos de escritorio en vivo en 3D en otros dispositivos además de mi escritorio?
-Sí, puede usar fondos de escritorio en vivo en 3D en otros dispositivos, como computadoras portátiles, tabletas, teléfonos inteligentes o televisores inteligentes. Sin embargo, es posible que necesite utilizar diferentes fuentes o métodos para descargar y aplicar fondos de escritorio en vivo 3D en diferentes dispositivos. Por ejemplo, puedes usar aplicaciones como 3D Wallpaper Parallax o Live Wallpapers 3D/4K para tu smartphone o tablet.
-
-
Espero que este artículo haya respondido a sus preguntas y le haya ayudado a aprender más sobre el fondo de pantalla en vivo en 3D. Si tiene alguna otra pregunta o comentario, no dude en dejarlos a continuación. ¡Gracias por leer y tener un gran día!
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/index.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/index.py
deleted file mode 100644
index 7267effed2413ba315d0a1af8490ec677c227662..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/index.py
+++ /dev/null
@@ -1,139 +0,0 @@
-import logging
-from optparse import Values
-from typing import Any, Iterable, List, Optional, Union
-
-from pip._vendor.packaging.version import LegacyVersion, Version
-
-from pip._internal.cli import cmdoptions
-from pip._internal.cli.req_command import IndexGroupCommand
-from pip._internal.cli.status_codes import ERROR, SUCCESS
-from pip._internal.commands.search import print_dist_installation_info
-from pip._internal.exceptions import CommandError, DistributionNotFound, PipError
-from pip._internal.index.collector import LinkCollector
-from pip._internal.index.package_finder import PackageFinder
-from pip._internal.models.selection_prefs import SelectionPreferences
-from pip._internal.models.target_python import TargetPython
-from pip._internal.network.session import PipSession
-from pip._internal.utils.misc import write_output
-
-logger = logging.getLogger(__name__)
-
-
-class IndexCommand(IndexGroupCommand):
- """
- Inspect information available from package indexes.
- """
-
- ignore_require_venv = True
- usage = """
- %prog versions
- """
-
- def add_options(self) -> None:
- cmdoptions.add_target_python_options(self.cmd_opts)
-
- self.cmd_opts.add_option(cmdoptions.ignore_requires_python())
- self.cmd_opts.add_option(cmdoptions.pre())
- self.cmd_opts.add_option(cmdoptions.no_binary())
- self.cmd_opts.add_option(cmdoptions.only_binary())
-
- index_opts = cmdoptions.make_option_group(
- cmdoptions.index_group,
- self.parser,
- )
-
- self.parser.insert_option_group(0, index_opts)
- self.parser.insert_option_group(0, self.cmd_opts)
-
- def run(self, options: Values, args: List[str]) -> int:
- handlers = {
- "versions": self.get_available_package_versions,
- }
-
- logger.warning(
- "pip index is currently an experimental command. "
- "It may be removed/changed in a future release "
- "without prior warning."
- )
-
- # Determine action
- if not args or args[0] not in handlers:
- logger.error(
- "Need an action (%s) to perform.",
- ", ".join(sorted(handlers)),
- )
- return ERROR
-
- action = args[0]
-
- # Error handling happens here, not in the action-handlers.
- try:
- handlers[action](options, args[1:])
- except PipError as e:
- logger.error(e.args[0])
- return ERROR
-
- return SUCCESS
-
- def _build_package_finder(
- self,
- options: Values,
- session: PipSession,
- target_python: Optional[TargetPython] = None,
- ignore_requires_python: Optional[bool] = None,
- ) -> PackageFinder:
- """
- Create a package finder appropriate to the index command.
- """
- link_collector = LinkCollector.create(session, options=options)
-
- # Pass allow_yanked=False to ignore yanked versions.
- selection_prefs = SelectionPreferences(
- allow_yanked=False,
- allow_all_prereleases=options.pre,
- ignore_requires_python=ignore_requires_python,
- )
-
- return PackageFinder.create(
- link_collector=link_collector,
- selection_prefs=selection_prefs,
- target_python=target_python,
- )
-
- def get_available_package_versions(self, options: Values, args: List[Any]) -> None:
- if len(args) != 1:
- raise CommandError("You need to specify exactly one argument")
-
- target_python = cmdoptions.make_target_python(options)
- query = args[0]
-
- with self._build_session(options) as session:
- finder = self._build_package_finder(
- options=options,
- session=session,
- target_python=target_python,
- ignore_requires_python=options.ignore_requires_python,
- )
-
- versions: Iterable[Union[LegacyVersion, Version]] = (
- candidate.version for candidate in finder.find_all_candidates(query)
- )
-
- if not options.pre:
- # Remove prereleases
- versions = (
- version for version in versions if not version.is_prerelease
- )
- versions = set(versions)
-
- if not versions:
- raise DistributionNotFound(
- "No matching distribution found for {}".format(query)
- )
-
- formatted_versions = [str(ver) for ver in sorted(versions, reverse=True)]
- latest = formatted_versions[0]
-
- write_output("{} ({})".format(query, latest))
- write_output("Available versions: {}".format(", ".join(formatted_versions)))
- print_dist_installation_info(query, latest)
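For reference, the `versions` handler above reduces to a small filter-and-sort pipeline over the candidate versions returned by the finder: drop prereleases unless `--pre` is given, deduplicate, sort descending, report the first entry as the latest. A minimal standalone sketch of that logic, assuming only the third-party `packaging` library (`summarize_versions` is an illustrative helper, not part of pip):

```python
from packaging.version import Version

def summarize_versions(candidates, include_pre=False):
    """Mimic the filtering above: drop prereleases unless asked,
    deduplicate, and return (latest, all versions newest-first)."""
    versions = {Version(v) for v in candidates}
    if not include_pre:
        versions = {v for v in versions if not v.is_prerelease}
    ordered = sorted(versions, reverse=True)
    return str(ordered[0]), [str(v) for v in ordered]

latest, available = summarize_versions(["1.0", "1.1", "1.1", "2.0rc1"])
print(latest)                 # 1.1
print(", ".join(available))   # 1.1, 1.0
```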
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/compat.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/compat.py
deleted file mode 100644
index 68267ad0e2689c6c88fd2fda3bf397f16f97cc90..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/compat.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-import errno
-import inspect
-import os
-import socket
-import sys
-
-from botocore.compat import six
-
-if sys.platform.startswith('win'):
- def rename_file(current_filename, new_filename):
- try:
- os.remove(new_filename)
- except OSError as e:
- if not e.errno == errno.ENOENT:
- # We only want to ignore trying to remove
- # a file that does not exist. If it fails
- # for any other reason we should be propagating
- # that exception.
- raise
- os.rename(current_filename, new_filename)
-else:
- rename_file = os.rename
-
-
-def accepts_kwargs(func):
- return inspect.getfullargspec(func)[2]
-
-
-# In python 3, socket.error is OSError, which is too general
-# for what we want (i.e FileNotFoundError is a subclass of OSError).
-# In python 3, all the socket related errors are in a newly created
-# ConnectionError.
-SOCKET_ERROR = ConnectionError
-MAXINT = None
-
-
-def seekable(fileobj):
- """Backwards compat function to determine if a fileobj is seekable
-
- :param fileobj: The file-like object to determine if seekable
-
- :returns: True, if seekable. False, otherwise.
- """
- # If the fileobj has a seekable attr, try calling the seekable()
- # method on it.
- if hasattr(fileobj, 'seekable'):
- return fileobj.seekable()
- # If there is no seekable attr, check if the object can be seeked
- # or telled. If it can, try to seek to the current position.
- elif hasattr(fileobj, 'seek') and hasattr(fileobj, 'tell'):
- try:
- fileobj.seek(0, 1)
- return True
- except OSError:
- # If an io related error was thrown then it is not seekable.
- return False
- # Else, the fileobj is not seekable
- return False
-
-
-def readable(fileobj):
- """Determines whether or not a file-like object is readable.
-
- :param fileobj: The file-like object to determine if readable
-
- :returns: True, if readable. False otherwise.
- """
- if hasattr(fileobj, 'readable'):
- return fileobj.readable()
-
- return hasattr(fileobj, 'read')
-
-
-def fallocate(fileobj, size):
- if hasattr(os, 'posix_fallocate'):
- os.posix_fallocate(fileobj.fileno(), 0, size)
- else:
- fileobj.truncate(size)
-
-
-# Import at end of file to avoid circular dependencies
-from multiprocessing.managers import BaseManager # noqa: F401,E402
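The `seekable()` helper above relies purely on duck typing: prefer the object's own `seekable()` method, otherwise probe `seek()`/`tell()`, otherwise assume the stream cannot seek. A self-contained sketch of the same check (the `ForwardOnlyReader` class and `is_seekable` function are illustrative, not s3transfer API):

```python
import io

class ForwardOnlyReader:
    """Hypothetical file-like object with read() but no seek()/seekable()."""
    def __init__(self, data):
        self._buf = io.BytesIO(data)
    def read(self, n=-1):
        return self._buf.read(n)

def is_seekable(fileobj):
    # Same duck-typing order as the compat helper above.
    if hasattr(fileobj, "seekable"):
        return fileobj.seekable()
    if hasattr(fileobj, "seek") and hasattr(fileobj, "tell"):
        try:
            fileobj.seek(0, 1)  # seek 0 bytes from the current position
            return True
        except OSError:
            return False
    return False

print(is_seekable(io.BytesIO(b"x")))         # True
print(is_seekable(ForwardOnlyReader(b"x")))  # False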
diff --git a/spaces/Branon/Proxy/Dockerfile b/spaces/Branon/Proxy/Dockerfile
deleted file mode 100644
index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000
--- a/spaces/Branon/Proxy/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
- apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/demo/README.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/demo/README.md
deleted file mode 100644
index caa755f6f0f472a04a419deec4a6acfdb949023b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/demo/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-## Detectron2 Demo
-
-We provide a command line tool to run a simple demo of builtin models.
-The usage is explained in [GETTING_STARTED.md](../GETTING_STARTED.md).
-
-See our [blog post](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-)
-for a high-quality demo generated with this tool.
diff --git a/spaces/CVPR/LIVE/pybind11/include/pybind11/detail/init.h b/spaces/CVPR/LIVE/pybind11/include/pybind11/detail/init.h
deleted file mode 100644
index 3ef78c1179f5b533c3ba3f637420c8125d632a7f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/include/pybind11/detail/init.h
+++ /dev/null
@@ -1,336 +0,0 @@
-/*
- pybind11/detail/init.h: init factory function implementation and support code.
-
- Copyright (c) 2017 Jason Rhinelander
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#pragma once
-
-#include "class.h"
-
-PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
-PYBIND11_NAMESPACE_BEGIN(detail)
-
-template <>
-class type_caster<value_and_holder> {
-public:
-    bool load(handle h, bool) {
-        value = reinterpret_cast<value_and_holder *>(h.ptr());
-        return true;
-    }
-
-    template <typename> using cast_op_type = value_and_holder &;
-    operator value_and_holder &() { return *value; }
-    static constexpr auto name = _<value_and_holder>();
-
-private:
-    value_and_holder *value = nullptr;
-};
-
-PYBIND11_NAMESPACE_BEGIN(initimpl)
-
-inline void no_nullptr(void *ptr) {
- if (!ptr) throw type_error("pybind11::init(): factory function returned nullptr");
-}
-
-// Implementing functions for all forms of py::init<...> and py::init(...)
-template <typename Class> using Cpp = typename Class::type;
-template <typename Class> using Alias = typename Class::type_alias;
-template <typename Class> using Holder = typename Class::holder_type;
-
-template <typename Class> using is_alias_constructible = std::is_constructible<Alias<Class>, Cpp<Class> &&>;
-
-// Takes a Cpp pointer and returns true if it actually is a polymorphic Alias instance.
-template <typename Class, enable_if_t<Class::has_alias, int> = 0>
-bool is_alias(Cpp<Class> *ptr) {
-    return dynamic_cast<Alias<Class> *>(ptr) != nullptr;
-}
-// Failing fallback version of the above for a no-alias class (always returns false)
-template <typename /*Class*/>
-constexpr bool is_alias(void *) { return false; }
-
-// Constructs and returns a new object; if the given arguments don't map to a constructor, we fall
-// back to brace aggregate initialization so that aggregate initialization can be used with
-// py::init, e.g. `py::init<int, int>` to initialize a `struct T { int a; int b; }`. For
-// non-aggregate types, we need to use an ordinary T(...) constructor (invoking as `T{...}` usually
-// works, but will not do the expected thing when `T` has an `initializer_list<T>` constructor).
-template <typename Class, typename... Args, detail::enable_if_t<std::is_constructible<Class, Args...>::value, int> = 0>
-inline Class *construct_or_initialize(Args &&...args) { return new Class(std::forward<Args>(args)...); }
-template <typename Class, typename... Args, detail::enable_if_t<!std::is_constructible<Class, Args...>::value, int> = 0>
-inline Class *construct_or_initialize(Args &&...args) { return new Class{std::forward<Args>(args)...}; }
-
-// Attempts to construct an alias using a `Alias(Cpp &&)` constructor. This allows types with
-// an alias to provide only a single Cpp factory function as long as the Alias can be
-// constructed from an rvalue reference of the base Cpp type. This means that Alias classes
-// can, when appropriate, simply define a `Alias(Cpp &&)` constructor rather than needing to
-// inherit all the base class constructors.
-template <typename Class>
-void construct_alias_from_cpp(std::true_type /*is_alias_constructible*/,
-                              value_and_holder &v_h, Cpp<Class> &&base) {
-    v_h.value_ptr() = new Alias<Class>(std::move(base));
-}
-template <typename Class>
-[[noreturn]] void construct_alias_from_cpp(std::false_type /*!is_alias_constructible*/,
-                                           value_and_holder &, Cpp<Class> &&) {
-    throw type_error("pybind11::init(): unable to convert returned instance to required "
-                     "alias class: no `Alias<Class>(Class &&)` constructor available");
-}
-
-// Error-generating fallback for factories that don't match one of the below construction
-// mechanisms.
-template <typename Class>
-void construct(...) {
-    static_assert(!std::is_same<Class, Class>::value /* always false */,
-            "pybind11::init(): init function must return a compatible pointer, "
-            "holder, or value");
-}
-
-// Pointer return v1: the factory function returns a class pointer for a registered class.
-// If we don't need an alias (because this class doesn't have one, or because the final type is
-// inherited on the Python side) we can simply take over ownership. Otherwise we need to try to
-// construct an Alias from the returned base instance.
-template <typename Class>
-void construct(value_and_holder &v_h, Cpp<Class> *ptr, bool need_alias) {
-    no_nullptr(ptr);
-    if (Class::has_alias && need_alias && !is_alias<Class>(ptr)) {
-        // We're going to try to construct an alias by moving the cpp type. Whether or not
-        // that succeeds, we still need to destroy the original cpp pointer (either the
-        // moved away leftover, if the alias construction works, or the value itself if we
-        // throw an error), but we can't just call `delete ptr`: it might have a special
-        // deleter, or might be shared_from_this. So we construct a holder around it as if
-        // it was a normal instance, then steal the holder away into a local variable; thus
-        // the holder and destruction happens when we leave the C++ scope, and the holder
-        // class gets to handle the destruction however it likes.
-        v_h.value_ptr() = ptr;
-        v_h.set_instance_registered(true); // To prevent init_instance from registering it
-        v_h.type->init_instance(v_h.inst, nullptr); // Set up the holder
-        Holder<Class> temp_holder(std::move(v_h.holder<Holder<Class>>())); // Steal the holder
-        v_h.type->dealloc(v_h); // Destroys the moved-out holder remains, resets value ptr to null
-        v_h.set_instance_registered(false);
-
-        construct_alias_from_cpp<Class>(is_alias_constructible<Class>{}, v_h, std::move(*ptr));
-    } else {
-        // Otherwise the type isn't inherited, so we don't need an Alias
-        v_h.value_ptr() = ptr;
-    }
-}
-
-// Pointer return v2: a factory that always returns an alias instance ptr. We simply take over
-// ownership of the pointer.
-template <typename Class, enable_if_t<Class::has_alias, int> = 0>
-void construct(value_and_holder &v_h, Alias<Class> *alias_ptr, bool) {
-    no_nullptr(alias_ptr);
-    v_h.value_ptr() = static_cast<Cpp<Class> *>(alias_ptr);
-}
-
-// Holder return: copy its pointer, and move or copy the returned holder into the new instance's
-// holder. This also handles types like std::shared_ptr<T> and std::unique_ptr<T> where T is a
-// derived type (through those holder's implicit conversion from derived class holder constructors).
-template <typename Class>
-void construct(value_and_holder &v_h, Holder<Class> holder, bool need_alias) {
-    auto *ptr = holder_helper<Holder<Class>>::get(holder);
-    no_nullptr(ptr);
-    // If we need an alias, check that the held pointer is actually an alias instance
-    if (Class::has_alias && need_alias && !is_alias<Class>(ptr))
-        throw type_error("pybind11::init(): construction failed: returned holder-wrapped instance "
-                         "is not an alias instance");
-
-    v_h.value_ptr() = ptr;
-    v_h.type->init_instance(v_h.inst, &holder);
-}
-
-// return-by-value version 1: returning a cpp class by value. If the class has an alias and an
-// alias is required the alias must have an `Alias(Cpp &&)` constructor so that we can construct
-// the alias from the base when needed (i.e. because of Python-side inheritance). When we don't
-// need it, we simply move-construct the cpp value into a new instance.
-template <typename Class>
-void construct(value_and_holder &v_h, Cpp<Class> &&result, bool need_alias) {
-    static_assert(std::is_move_constructible<Cpp<Class>>::value,
-        "pybind11::init() return-by-value factory function requires a movable class");
-    if (Class::has_alias && need_alias)
-        construct_alias_from_cpp<Class>(is_alias_constructible<Class>{}, v_h, std::move(result));
-    else
-        v_h.value_ptr() = new Cpp<Class>(std::move(result));
-}
-
-// return-by-value version 2: returning a value of the alias type itself. We move-construct an
-// Alias instance (even if no Python-side inheritance is involved). This is intended for
-// cases where Alias initialization is always desired.
-template <typename Class>
-void construct(value_and_holder &v_h, Alias<Class> &&result, bool) {
-    static_assert(std::is_move_constructible<Alias<Class>>::value,
-        "pybind11::init() return-by-alias-value factory function requires a movable alias class");
-    v_h.value_ptr() = new Alias<Class>(std::move(result));
-}
-
-// Implementing class for py::init<...>()
-template <typename... Args>
-struct constructor {
-    template <typename Class, typename... Extra, enable_if_t<!Class::has_alias, int> = 0>
-    static void execute(Class &cl, const Extra&... extra) {
-        cl.def("__init__", [](value_and_holder &v_h, Args... args) {
-            v_h.value_ptr() = construct_or_initialize<Cpp<Class>>(std::forward<Args>(args)...);
-        }, is_new_style_constructor(), extra...);
-    }
-
-    template <typename Class, typename... Extra, enable_if_t<Class::has_alias && std::is_constructible<Cpp<Class>, Args...>::value, int> = 0>
-    static void execute(Class &cl, const Extra&... extra) {
-        cl.def("__init__", [](value_and_holder &v_h, Args... args) {
-            if (Py_TYPE(v_h.inst) == v_h.type->type)
-                v_h.value_ptr() = construct_or_initialize<Cpp<Class>>(std::forward<Args>(args)...);
-            else
-                v_h.value_ptr() = construct_or_initialize<Alias<Class>>(std::forward<Args>(args)...);
-        }, is_new_style_constructor(), extra...);
-    }
-
-    template <typename Class, typename... Extra, enable_if_t<Class::has_alias && !std::is_constructible<Cpp<Class>, Args...>::value, int> = 0>
-    static void execute(Class &cl, const Extra&... extra) {
-        cl.def("__init__", [](value_and_holder &v_h, Args... args) {
-            v_h.value_ptr() = construct_or_initialize<Alias<Class>>(std::forward<Args>(args)...);
-        }, is_new_style_constructor(), extra...);
-    }
-};
-
-// Implementing class for py::init_alias<...>()
-template <typename... Args> struct alias_constructor {
-    template <typename Class, typename... Extra, enable_if_t<Class::has_alias && std::is_constructible<Alias<Class>, Args...>::value, int> = 0>
-    static void execute(Class &cl, const Extra&... extra) {
-        cl.def("__init__", [](value_and_holder &v_h, Args... args) {
-            v_h.value_ptr() = construct_or_initialize<Alias<Class>>(std::forward<Args>(args)...);
-        }, is_new_style_constructor(), extra...);
-    }
-};
-
-// Implementation class for py::init(Func) and py::init(Func, AliasFunc)
-template <typename CFunc, typename AFunc = void_type (*)(), typename = function_signature_t<CFunc>, typename = function_signature_t<AFunc>>
-struct factory;
-
-// Specialization for py::init(Func)
-template <typename Func, typename Return, typename... Args>
-struct factory<Func, void_type (*)(), Return(Args...)> {
-    remove_reference_t<Func> class_factory;
-
-    factory(Func &&f) : class_factory(std::forward<Func>(f)) { }
-
-    // The given class either has no alias or has no separate alias factory;
-    // this always constructs the class itself. If the class is registered with an alias
-    // type and an alias instance is needed (i.e. because the final type is a Python class
-    // inheriting from the C++ type) the returned value needs to either already be an alias
-    // instance, or the alias needs to be constructible from a `Class &&` argument.
-    template <typename Class, typename... Extra>
-    void execute(Class &cl, const Extra &...extra) && {
-        #if defined(PYBIND11_CPP14)
-        cl.def("__init__", [func = std::move(class_factory)]
-        #else
-        auto &func = class_factory;
-        cl.def("__init__", [func]
-        #endif
-               (value_and_holder &v_h, Args... args) {
-                   construct<Class>(v_h, func(std::forward<Args>(args)...),
-                                    Py_TYPE(v_h.inst) != v_h.type->type);
-               }, is_new_style_constructor(), extra...);
-    }
-};
-
-// Specialization for py::init(Func, AliasFunc)
-template <typename CFunc, typename AFunc, typename CReturn, typename... CArgs, typename AReturn, typename... AArgs>
-struct factory<CFunc, AFunc, CReturn(CArgs...), AReturn(AArgs...)> {
-    static_assert(sizeof...(CArgs) == sizeof...(AArgs),
-                  "pybind11::init(class_factory, alias_factory): class and alias factories "
-                  "must have identical argument signatures");
-    static_assert(all_of<std::is_same<CArgs, AArgs>...>::value,
-                  "pybind11::init(class_factory, alias_factory): class and alias factories "
-                  "must have identical argument signatures");
-
-    remove_reference_t<CFunc> class_factory;
-    remove_reference_t<AFunc> alias_factory;
-
-    factory(CFunc &&c, AFunc &&a)
-        : class_factory(std::forward<CFunc>(c)), alias_factory(std::forward<AFunc>(a)) { }
-
-    // The class factory is called when the `self` type passed to `__init__` is the direct
-    // class (i.e. not inherited), the alias factory when `self` is a Python-side subtype.
-    template <typename Class, typename... Extra>
-    void execute(Class &cl, const Extra&... extra) && {
-        static_assert(Class::has_alias, "The two-argument version of `py::init()` can "
-                                        "only be used if the class has an alias");
-        #if defined(PYBIND11_CPP14)
-        cl.def("__init__", [class_func = std::move(class_factory), alias_func = std::move(alias_factory)]
-        #else
-        auto &class_func = class_factory;
-        auto &alias_func = alias_factory;
-        cl.def("__init__", [class_func, alias_func]
-        #endif
-               (value_and_holder &v_h, CArgs... args) {
-                   if (Py_TYPE(v_h.inst) == v_h.type->type)
-                       // If the instance type equals the registered type we don't have inheritance, so
-                       // don't need the alias and can construct using the class function:
-                       construct<Class>(v_h, class_func(std::forward<CArgs>(args)...), false);
-                   else
-                       construct<Class>(v_h, alias_func(std::forward<AArgs>(args)...), true);
-               }, is_new_style_constructor(), extra...);
-    }
-};
-
-/// Set just the C++ state. Same as `__init__`.
-template <typename Class, typename T>
-void setstate(value_and_holder &v_h, T &&result, bool need_alias) {
-    construct<Class>(v_h, std::forward<T>(result), need_alias);
-}
-
-/// Set both the C++ and Python states
-template <typename Class, typename T, typename O, enable_if_t<std::is_convertible<O, handle>::value, int> = 0>
-void setstate(value_and_holder &v_h, std::pair<T, O> &&result, bool need_alias) {
-    construct<Class>(v_h, std::move(result.first), need_alias);
-    setattr((PyObject *) v_h.inst, "__dict__", result.second);
-}
-
-/// Implementation for py::pickle(GetState, SetState)
-template <typename Get, typename Set, typename = function_signature_t<Get>, typename = function_signature_t<Set>>
-struct pickle_factory;
-
-template <typename Get, typename Set, typename RetState, typename Self, typename NewInstance, typename ArgState>
-struct pickle_factory<Get, Set, RetState(Self), NewInstance(ArgState)> {
-    static_assert(std::is_same<intrinsic_t<RetState>, intrinsic_t<ArgState>>::value,
-                  "The type returned by `__getstate__` must be the same "
-                  "as the argument accepted by `__setstate__`");
-
-    remove_reference_t<Get> get;
-    remove_reference_t<Set> set;
-
-    pickle_factory(Get get, Set set)
-        : get(std::forward<Get>(get)), set(std::forward<Set>(set)) { }
-
-    template <typename Class, typename... Extra>
-    void execute(Class &cl, const Extra &...extra) && {
-        cl.def("__getstate__", std::move(get));
-
-#if defined(PYBIND11_CPP14)
-        cl.def("__setstate__", [func = std::move(set)]
-#else
-        auto &func = set;
-        cl.def("__setstate__", [func]
-#endif
-                (value_and_holder &v_h, ArgState state) {
-                    setstate<Class>(v_h, func(std::forward<ArgState>(state)),
-                                    Py_TYPE(v_h.inst) != v_h.type->type);
-                }, is_new_style_constructor(), extra...);
-    }
-};
-
-PYBIND11_NAMESPACE_END(initimpl)
-PYBIND11_NAMESPACE_END(detail)
-PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE)
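The factory machinery above dispatches between a class factory and an alias (trampoline) factory depending on whether the object being constructed is exactly the registered C++ type or a Python-side subclass. Below is a purely illustrative Python analogy of that dispatch, not pybind11 API; `Base`, `BaseTrampoline`, and `dispatch_factory` are hypothetical names standing in for the registered class, its alias, and the generated `__init__`:

```python
class Base:
    def __init__(self, x):
        self.x = x

class BaseTrampoline(Base):
    """Stand-in for the C++ alias (trampoline) class."""
    pass

def make_base(x):        # analogue of class_func
    return Base(x)

def make_trampoline(x):  # analogue of alias_func
    return BaseTrampoline(x)

def dispatch_factory(requested_type, x):
    # If the instance type equals the registered type there is no
    # Python-side inheritance, so the plain class factory suffices.
    if requested_type is Base:
        return make_base(x)
    return make_trampoline(x)

class PySubclass(Base):
    pass

print(type(dispatch_factory(Base, 1)).__name__)        # Base
print(type(dispatch_factory(PySubclass, 1)).__name__)  # BaseTrampoline
```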
diff --git a/spaces/CVPR/LIVE/thrust/thrust/distance.h b/spaces/CVPR/LIVE/thrust/thrust/distance.h
deleted file mode 100644
index 6dd4800be7a8975061fb58777d603f13fb0c82b6..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/distance.h
+++ /dev/null
@@ -1,77 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file distance.h
- * \brief Computes the size of a range
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/iterator/iterator_traits.h>
-
-namespace thrust
-{
-
-
-/*! \addtogroup iterators
- * \{
- */
-
-/*! \p distance finds the distance between \p first and \p last, i.e. the
- * number of times that \p first must be incremented until it is equal to
- * \p last.
- *
- * \param first The beginning of an input range of interest.
- * \param last The end of an input range of interest.
- * \return The distance between the beginning and end of the input range.
- *
- * \tparam InputIterator is a model of Input Iterator.
- *
- * \pre If \c InputIterator meets the requirements of random access iterator, \p last shall be reachable from \p first or
- * \p first shall be reachable from \p last; otherwise, \p last shall be reachable from \p first.
- *
- * The following code snippet demonstrates how to use \p distance to compute
- * the distance to one iterator from another.
- *
- * \code
- * #include <thrust/distance.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> vec(13);
- * thrust::device_vector<int>::iterator iter1 = vec.begin();
- * thrust::device_vector<int>::iterator iter2 = iter1 + 7;
- *
- * int d = thrust::distance(iter1, iter2);
- *
- * // d is 7
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/distance.html
- */
-template<typename InputIterator>
-inline __host__ __device__
- typename thrust::iterator_traits::difference_type
- distance(InputIterator first, InputIterator last);
-
-/*! \} // end iterators
- */
-
-} // end thrust
-
-#include <thrust/detail/distance.inl>
-
diff --git a/spaces/CVPR/WALT/mmdet/models/dense_heads/vfnet_head.py b/spaces/CVPR/WALT/mmdet/models/dense_heads/vfnet_head.py
deleted file mode 100644
index 7243bb62893839568ec51928d88a5ad40b02a66c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/dense_heads/vfnet_head.py
+++ /dev/null
@@ -1,794 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-from mmcv.cnn import ConvModule, Scale, bias_init_with_prob, normal_init
-from mmcv.ops import DeformConv2d
-from mmcv.runner import force_fp32
-
-from mmdet.core import (bbox2distance, bbox_overlaps, build_anchor_generator,
- build_assigner, build_sampler, distance2bbox,
- multi_apply, multiclass_nms, reduce_mean)
-from ..builder import HEADS, build_loss
-from .atss_head import ATSSHead
-from .fcos_head import FCOSHead
-
-INF = 1e8
-
-
-@HEADS.register_module()
-class VFNetHead(ATSSHead, FCOSHead):
- """Head of `VarifocalNet (VFNet): An IoU-aware Dense Object
- Detector.<https://arxiv.org/abs/2008.13367>`_.
-
- The VFNet predicts IoU-aware classification scores which mix the
- object presence confidence and object localization accuracy as the
- detection score. It is built on the FCOS architecture and uses ATSS
- for defining positive/negative training examples. The VFNet is trained
- with Varifocal Loss and employs star-shaped deformable convolution to
- extract features for a bbox.
-
- Args:
- num_classes (int): Number of categories excluding the background
- category.
- in_channels (int): Number of channels in the input feature map.
- regress_ranges (tuple[tuple[int, int]]): Regress range of multiple
- level points.
- center_sampling (bool): If true, use center sampling. Default: False.
- center_sample_radius (float): Radius of center sampling. Default: 1.5.
- sync_num_pos (bool): If true, synchronize the number of positive
- examples across GPUs. Default: True
- gradient_mul (float): The multiplier to gradients from bbox refinement
- and recognition. Default: 0.1.
- bbox_norm_type (str): The bbox normalization type, 'reg_denom' or
- 'stride'. Default: reg_denom
- loss_cls_fl (dict): Config of focal loss.
- use_vfl (bool): If true, use varifocal loss for training.
- Default: True.
- loss_cls (dict): Config of varifocal loss.
- loss_bbox (dict): Config of localization loss, GIoU Loss.
- loss_bbox_refine (dict): Config of localization refinement loss, GIoU Loss.
- norm_cfg (dict): dictionary to construct and config norm layer.
- Default: norm_cfg=dict(type='GN', num_groups=32,
- requires_grad=True).
- use_atss (bool): If true, use ATSS to define positive/negative
- examples. Default: True.
- anchor_generator (dict): Config of anchor generator for ATSS.
-
- Example:
- >>> self = VFNetHead(11, 7)
- >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]]
- >>> cls_score, bbox_pred, bbox_pred_refine= self.forward(feats)
- >>> assert len(cls_score) == len(self.scales)
- """ # noqa: E501
-
- def __init__(self,
- num_classes,
- in_channels,
- regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512),
- (512, INF)),
- center_sampling=False,
- center_sample_radius=1.5,
- sync_num_pos=True,
- gradient_mul=0.1,
- bbox_norm_type='reg_denom',
- loss_cls_fl=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- use_vfl=True,
- loss_cls=dict(
- type='VarifocalLoss',
- use_sigmoid=True,
- alpha=0.75,
- gamma=2.0,
- iou_weighted=True,
- loss_weight=1.0),
- loss_bbox=dict(type='GIoULoss', loss_weight=1.5),
- loss_bbox_refine=dict(type='GIoULoss', loss_weight=2.0),
- norm_cfg=dict(type='GN', num_groups=32, requires_grad=True),
- use_atss=True,
- anchor_generator=dict(
- type='AnchorGenerator',
- ratios=[1.0],
- octave_base_scale=8,
- scales_per_octave=1,
- center_offset=0.0,
- strides=[8, 16, 32, 64, 128]),
- **kwargs):
- # dcn base offsets, adapted from reppoints_head.py
- self.num_dconv_points = 9
- self.dcn_kernel = int(np.sqrt(self.num_dconv_points))
- self.dcn_pad = int((self.dcn_kernel - 1) / 2)
- dcn_base = np.arange(-self.dcn_pad,
- self.dcn_pad + 1).astype(np.float64)
- dcn_base_y = np.repeat(dcn_base, self.dcn_kernel)
- dcn_base_x = np.tile(dcn_base, self.dcn_kernel)
- dcn_base_offset = np.stack([dcn_base_y, dcn_base_x], axis=1).reshape(
- (-1))
- self.dcn_base_offset = torch.tensor(dcn_base_offset).view(1, -1, 1, 1)
-
- super(FCOSHead, self).__init__(
- num_classes, in_channels, norm_cfg=norm_cfg, **kwargs)
- self.regress_ranges = regress_ranges
- self.reg_denoms = [
- regress_range[-1] for regress_range in regress_ranges
- ]
- self.reg_denoms[-1] = self.reg_denoms[-2] * 2
- self.center_sampling = center_sampling
- self.center_sample_radius = center_sample_radius
- self.sync_num_pos = sync_num_pos
- self.bbox_norm_type = bbox_norm_type
- self.gradient_mul = gradient_mul
- self.use_vfl = use_vfl
- if self.use_vfl:
- self.loss_cls = build_loss(loss_cls)
- else:
- self.loss_cls = build_loss(loss_cls_fl)
- self.loss_bbox = build_loss(loss_bbox)
- self.loss_bbox_refine = build_loss(loss_bbox_refine)
-
- # for getting ATSS targets
- self.use_atss = use_atss
- self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False)
- self.anchor_generator = build_anchor_generator(anchor_generator)
- self.anchor_center_offset = anchor_generator['center_offset']
- self.num_anchors = self.anchor_generator.num_base_anchors[0]
- self.sampling = False
- if self.train_cfg:
- self.assigner = build_assigner(self.train_cfg.assigner)
- sampler_cfg = dict(type='PseudoSampler')
- self.sampler = build_sampler(sampler_cfg, context=self)
-
- def _init_layers(self):
- """Initialize layers of the head."""
- super(FCOSHead, self)._init_cls_convs()
- super(FCOSHead, self)._init_reg_convs()
- self.relu = nn.ReLU(inplace=True)
- self.vfnet_reg_conv = ConvModule(
- self.feat_channels,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- bias=self.conv_bias)
- self.vfnet_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1)
- self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides])
-
- self.vfnet_reg_refine_dconv = DeformConv2d(
- self.feat_channels,
- self.feat_channels,
- self.dcn_kernel,
- 1,
- padding=self.dcn_pad)
- self.vfnet_reg_refine = nn.Conv2d(self.feat_channels, 4, 3, padding=1)
- self.scales_refine = nn.ModuleList([Scale(1.0) for _ in self.strides])
-
- self.vfnet_cls_dconv = DeformConv2d(
- self.feat_channels,
- self.feat_channels,
- self.dcn_kernel,
- 1,
- padding=self.dcn_pad)
- self.vfnet_cls = nn.Conv2d(
- self.feat_channels, self.cls_out_channels, 3, padding=1)
-
- def init_weights(self):
- """Initialize weights of the head."""
- for m in self.cls_convs:
- if isinstance(m.conv, nn.Conv2d):
- normal_init(m.conv, std=0.01)
- for m in self.reg_convs:
- if isinstance(m.conv, nn.Conv2d):
- normal_init(m.conv, std=0.01)
- normal_init(self.vfnet_reg_conv.conv, std=0.01)
- normal_init(self.vfnet_reg, std=0.01)
- normal_init(self.vfnet_reg_refine_dconv, std=0.01)
- normal_init(self.vfnet_reg_refine, std=0.01)
- normal_init(self.vfnet_cls_dconv, std=0.01)
- bias_cls = bias_init_with_prob(0.01)
- normal_init(self.vfnet_cls, std=0.01, bias=bias_cls)
-
- def forward(self, feats):
- """Forward features from the upstream network.
-
- Args:
- feats (tuple[Tensor]): Features from the upstream network, each is
- a 4D-tensor.
-
- Returns:
- tuple:
- cls_scores (list[Tensor]): Box iou-aware scores for each scale
- level, each is a 4D-tensor, the channel number is
- num_points * num_classes.
- bbox_preds (list[Tensor]): Box offsets for each
- scale level, each is a 4D-tensor, the channel number is
- num_points * 4.
- bbox_preds_refine (list[Tensor]): Refined Box offsets for
- each scale level, each is a 4D-tensor, the channel
- number is num_points * 4.
- """
- return multi_apply(self.forward_single, feats, self.scales,
- self.scales_refine, self.strides, self.reg_denoms)
-
- def forward_single(self, x, scale, scale_refine, stride, reg_denom):
- """Forward features of a single scale level.
-
- Args:
- x (Tensor): FPN feature maps of the specified stride.
- scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize
- the bbox prediction.
- scale_refine (:obj: `mmcv.cnn.Scale`): Learnable scale module to
- resize the refined bbox prediction.
- stride (int): The corresponding stride for feature maps,
- used to normalize the bbox prediction when
- bbox_norm_type = 'stride'.
- reg_denom (int): The corresponding regression range for feature
- maps, only used to normalize the bbox prediction when
- bbox_norm_type = 'reg_denom'.
-
- Returns:
- tuple: iou-aware cls scores for each box, bbox predictions and
- refined bbox predictions of input feature maps.
- """
- cls_feat = x
- reg_feat = x
-
- for cls_layer in self.cls_convs:
- cls_feat = cls_layer(cls_feat)
-
- for reg_layer in self.reg_convs:
- reg_feat = reg_layer(reg_feat)
-
- # predict the bbox_pred of different level
- reg_feat_init = self.vfnet_reg_conv(reg_feat)
- if self.bbox_norm_type == 'reg_denom':
- bbox_pred = scale(
- self.vfnet_reg(reg_feat_init)).float().exp() * reg_denom
- elif self.bbox_norm_type == 'stride':
- bbox_pred = scale(
- self.vfnet_reg(reg_feat_init)).float().exp() * stride
- else:
- raise NotImplementedError
-
- # compute star deformable convolution offsets
- # converting dcn_offset to reg_feat.dtype thus VFNet can be
- # trained with FP16
- dcn_offset = self.star_dcn_offset(bbox_pred, self.gradient_mul,
- stride).to(reg_feat.dtype)
-
- # refine the bbox_pred
- reg_feat = self.relu(self.vfnet_reg_refine_dconv(reg_feat, dcn_offset))
- bbox_pred_refine = scale_refine(
- self.vfnet_reg_refine(reg_feat)).float().exp()
- bbox_pred_refine = bbox_pred_refine * bbox_pred.detach()
-
- # predict the iou-aware cls score
- cls_feat = self.relu(self.vfnet_cls_dconv(cls_feat, dcn_offset))
- cls_score = self.vfnet_cls(cls_feat)
-
- return cls_score, bbox_pred, bbox_pred_refine
-
- def star_dcn_offset(self, bbox_pred, gradient_mul, stride):
- """Compute the star deformable conv offsets.
-
- Args:
- bbox_pred (Tensor): Predicted bbox distance offsets (l, r, t, b).
- gradient_mul (float): Gradient multiplier.
- stride (int): The corresponding stride for feature maps,
- used to project the bbox onto the feature map.
-
- Returns:
- dcn_offsets (Tensor): The offsets for deformable convolution.
- """
- dcn_base_offset = self.dcn_base_offset.type_as(bbox_pred)
- bbox_pred_grad_mul = (1 - gradient_mul) * bbox_pred.detach() + \
- gradient_mul * bbox_pred
- # map to the feature map scale
- bbox_pred_grad_mul = bbox_pred_grad_mul / stride
- N, C, H, W = bbox_pred.size()
-
- x1 = bbox_pred_grad_mul[:, 0, :, :]
- y1 = bbox_pred_grad_mul[:, 1, :, :]
- x2 = bbox_pred_grad_mul[:, 2, :, :]
- y2 = bbox_pred_grad_mul[:, 3, :, :]
- bbox_pred_grad_mul_offset = bbox_pred.new_zeros(
- N, 2 * self.num_dconv_points, H, W)
- bbox_pred_grad_mul_offset[:, 0, :, :] = -1.0 * y1 # -y1
- bbox_pred_grad_mul_offset[:, 1, :, :] = -1.0 * x1 # -x1
- bbox_pred_grad_mul_offset[:, 2, :, :] = -1.0 * y1 # -y1
- bbox_pred_grad_mul_offset[:, 4, :, :] = -1.0 * y1 # -y1
- bbox_pred_grad_mul_offset[:, 5, :, :] = x2 # x2
- bbox_pred_grad_mul_offset[:, 7, :, :] = -1.0 * x1 # -x1
- bbox_pred_grad_mul_offset[:, 11, :, :] = x2 # x2
- bbox_pred_grad_mul_offset[:, 12, :, :] = y2 # y2
- bbox_pred_grad_mul_offset[:, 13, :, :] = -1.0 * x1 # -x1
- bbox_pred_grad_mul_offset[:, 14, :, :] = y2 # y2
- bbox_pred_grad_mul_offset[:, 16, :, :] = y2 # y2
- bbox_pred_grad_mul_offset[:, 17, :, :] = x2 # x2
- dcn_offset = bbox_pred_grad_mul_offset - dcn_base_offset
-
- return dcn_offset
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'bbox_preds_refine'))
- def loss(self,
- cls_scores,
- bbox_preds,
- bbox_preds_refine,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute loss of the head.
-
- Args:
- cls_scores (list[Tensor]): Box iou-aware scores for each scale
- level, each is a 4D-tensor, the channel number is
- num_points * num_classes.
- bbox_preds (list[Tensor]): Box offsets for each
- scale level, each is a 4D-tensor, the channel number is
- num_points * 4.
- bbox_preds_refine (list[Tensor]): Refined Box offsets for
- each scale level, each is a 4D-tensor, the channel
- number is num_points * 4.
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): class indices corresponding to each box
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
- boxes can be ignored when computing the loss.
- Default: None.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- assert len(cls_scores) == len(bbox_preds) == len(bbox_preds_refine)
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- all_level_points = self.get_points(featmap_sizes, bbox_preds[0].dtype,
- bbox_preds[0].device)
- labels, label_weights, bbox_targets, bbox_weights = self.get_targets(
- cls_scores, all_level_points, gt_bboxes, gt_labels, img_metas,
- gt_bboxes_ignore)
-
- num_imgs = cls_scores[0].size(0)
- # flatten cls_scores, bbox_preds and bbox_preds_refine
- flatten_cls_scores = [
- cls_score.permute(0, 2, 3,
- 1).reshape(-1,
- self.cls_out_channels).contiguous()
- for cls_score in cls_scores
- ]
- flatten_bbox_preds = [
- bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4).contiguous()
- for bbox_pred in bbox_preds
- ]
- flatten_bbox_preds_refine = [
- bbox_pred_refine.permute(0, 2, 3, 1).reshape(-1, 4).contiguous()
- for bbox_pred_refine in bbox_preds_refine
- ]
- flatten_cls_scores = torch.cat(flatten_cls_scores)
- flatten_bbox_preds = torch.cat(flatten_bbox_preds)
- flatten_bbox_preds_refine = torch.cat(flatten_bbox_preds_refine)
- flatten_labels = torch.cat(labels)
- flatten_bbox_targets = torch.cat(bbox_targets)
- # repeat points to align with bbox_preds
- flatten_points = torch.cat(
- [points.repeat(num_imgs, 1) for points in all_level_points])
-
- # FG cat_id: [0, num_classes - 1], BG cat_id: num_classes
- bg_class_ind = self.num_classes
- pos_inds = torch.where(
- ((flatten_labels >= 0) & (flatten_labels < bg_class_ind)) > 0)[0]
- num_pos = len(pos_inds)
-
- pos_bbox_preds = flatten_bbox_preds[pos_inds]
- pos_bbox_preds_refine = flatten_bbox_preds_refine[pos_inds]
- pos_labels = flatten_labels[pos_inds]
-
- # sync num_pos across all gpus
- if self.sync_num_pos:
- num_pos_avg_per_gpu = reduce_mean(
- pos_inds.new_tensor(num_pos).float()).item()
- num_pos_avg_per_gpu = max(num_pos_avg_per_gpu, 1.0)
- else:
- num_pos_avg_per_gpu = num_pos
-
- if num_pos > 0:
- pos_bbox_targets = flatten_bbox_targets[pos_inds]
- pos_points = flatten_points[pos_inds]
-
- pos_decoded_bbox_preds = distance2bbox(pos_points, pos_bbox_preds)
- pos_decoded_target_preds = distance2bbox(pos_points,
- pos_bbox_targets)
- iou_targets_ini = bbox_overlaps(
- pos_decoded_bbox_preds,
- pos_decoded_target_preds.detach(),
- is_aligned=True).clamp(min=1e-6)
- bbox_weights_ini = iou_targets_ini.clone().detach()
- iou_targets_ini_avg_per_gpu = reduce_mean(
- bbox_weights_ini.sum()).item()
- bbox_avg_factor_ini = max(iou_targets_ini_avg_per_gpu, 1.0)
- loss_bbox = self.loss_bbox(
- pos_decoded_bbox_preds,
- pos_decoded_target_preds.detach(),
- weight=bbox_weights_ini,
- avg_factor=bbox_avg_factor_ini)
-
- pos_decoded_bbox_preds_refine = \
- distance2bbox(pos_points, pos_bbox_preds_refine)
- iou_targets_rf = bbox_overlaps(
- pos_decoded_bbox_preds_refine,
- pos_decoded_target_preds.detach(),
- is_aligned=True).clamp(min=1e-6)
- bbox_weights_rf = iou_targets_rf.clone().detach()
- iou_targets_rf_avg_per_gpu = reduce_mean(
- bbox_weights_rf.sum()).item()
- bbox_avg_factor_rf = max(iou_targets_rf_avg_per_gpu, 1.0)
- loss_bbox_refine = self.loss_bbox_refine(
- pos_decoded_bbox_preds_refine,
- pos_decoded_target_preds.detach(),
- weight=bbox_weights_rf,
- avg_factor=bbox_avg_factor_rf)
-
- # build IoU-aware cls_score targets
- if self.use_vfl:
- pos_ious = iou_targets_rf.clone().detach()
- cls_iou_targets = torch.zeros_like(flatten_cls_scores)
- cls_iou_targets[pos_inds, pos_labels] = pos_ious
- else:
- loss_bbox = pos_bbox_preds.sum() * 0
- loss_bbox_refine = pos_bbox_preds_refine.sum() * 0
- if self.use_vfl:
- cls_iou_targets = torch.zeros_like(flatten_cls_scores)
-
- if self.use_vfl:
- loss_cls = self.loss_cls(
- flatten_cls_scores,
- cls_iou_targets,
- avg_factor=num_pos_avg_per_gpu)
- else:
- loss_cls = self.loss_cls(
- flatten_cls_scores,
- flatten_labels,
- weight=label_weights,
- avg_factor=num_pos_avg_per_gpu)
-
- return dict(
- loss_cls=loss_cls,
- loss_bbox=loss_bbox,
- loss_bbox_rf=loss_bbox_refine)
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'bbox_preds_refine'))
- def get_bboxes(self,
- cls_scores,
- bbox_preds,
- bbox_preds_refine,
- img_metas,
- cfg=None,
- rescale=None,
- with_nms=True):
- """Transform network outputs for a batch into bbox predictions.
-
- Args:
- cls_scores (list[Tensor]): Box iou-aware scores for each scale
- level with shape (N, num_points * num_classes, H, W).
- bbox_preds (list[Tensor]): Box offsets for each scale
- level with shape (N, num_points * 4, H, W).
- bbox_preds_refine (list[Tensor]): Refined Box offsets for
- each scale level with shape (N, num_points * 4, H, W).
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- cfg (mmcv.Config): Test / postprocessing configuration,
- if None, test_cfg would be used. Default: None.
- rescale (bool): If True, return boxes in original image space.
- Default: False.
- with_nms (bool): If True, do nms before returning boxes.
- Default: True.
-
- Returns:
- list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple.
- The first item is an (n, 5) tensor, where the first 4 columns
- are bounding box positions (tl_x, tl_y, br_x, br_y) and the
- 5-th column is a score between 0 and 1. The second item is a
- (n,) tensor where each item is the predicted class label of
- the corresponding box.
- """
- assert len(cls_scores) == len(bbox_preds) == len(bbox_preds_refine)
- num_levels = len(cls_scores)
-
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- mlvl_points = self.get_points(featmap_sizes, bbox_preds[0].dtype,
- bbox_preds[0].device)
- result_list = []
- for img_id in range(len(img_metas)):
- cls_score_list = [
- cls_scores[i][img_id].detach() for i in range(num_levels)
- ]
- bbox_pred_list = [
- bbox_preds_refine[i][img_id].detach()
- for i in range(num_levels)
- ]
- img_shape = img_metas[img_id]['img_shape']
- scale_factor = img_metas[img_id]['scale_factor']
- det_bboxes = self._get_bboxes_single(cls_score_list,
- bbox_pred_list, mlvl_points,
- img_shape, scale_factor, cfg,
- rescale, with_nms)
- result_list.append(det_bboxes)
- return result_list
-
- def _get_bboxes_single(self,
- cls_scores,
- bbox_preds,
- mlvl_points,
- img_shape,
- scale_factor,
- cfg,
- rescale=False,
- with_nms=True):
- """Transform outputs for a single batch item into bbox predictions.
-
- Args:
- cls_scores (list[Tensor]): Box iou-aware scores for a single scale
- level with shape (num_points * num_classes, H, W).
- bbox_preds (list[Tensor]): Box offsets for a single scale
- level with shape (num_points * 4, H, W).
- mlvl_points (list[Tensor]): Box reference for a single scale level
- with shape (num_total_points, 4).
- img_shape (tuple[int]): Shape of the input image,
- (height, width, 3).
- scale_factor (ndarray): Scale factor of the image arrange as
- (w_scale, h_scale, w_scale, h_scale).
- cfg (mmcv.Config | None): Test / postprocessing configuration,
- if None, test_cfg would be used.
- rescale (bool): If True, return boxes in original image space.
- Default: False.
- with_nms (bool): If True, do nms before returning boxes.
- Default: True.
-
- Returns:
- tuple(Tensor):
- det_bboxes (Tensor): BBox predictions in shape (n, 5), where
- the first 4 columns are bounding box positions
- (tl_x, tl_y, br_x, br_y) and the 5-th column is a score
- between 0 and 1.
- det_labels (Tensor): A (n,) tensor where each item is the
- predicted class label of the corresponding box.
- """
- cfg = self.test_cfg if cfg is None else cfg
- assert len(cls_scores) == len(bbox_preds) == len(mlvl_points)
- mlvl_bboxes = []
- mlvl_scores = []
- for cls_score, bbox_pred, points in zip(cls_scores, bbox_preds,
- mlvl_points):
- assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
- scores = cls_score.permute(1, 2, 0).reshape(
- -1, self.cls_out_channels).contiguous().sigmoid()
- bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4).contiguous()
-
- nms_pre = cfg.get('nms_pre', -1)
- if 0 < nms_pre < scores.shape[0]:
- max_scores, _ = scores.max(dim=1)
- _, topk_inds = max_scores.topk(nms_pre)
- points = points[topk_inds, :]
- bbox_pred = bbox_pred[topk_inds, :]
- scores = scores[topk_inds, :]
- bboxes = distance2bbox(points, bbox_pred, max_shape=img_shape)
- mlvl_bboxes.append(bboxes)
- mlvl_scores.append(scores)
- mlvl_bboxes = torch.cat(mlvl_bboxes)
- if rescale:
- mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor)
- mlvl_scores = torch.cat(mlvl_scores)
- padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1)
- # remind that we set FG labels to [0, num_class-1] since mmdet v2.0
- # BG cat_id: num_class
- mlvl_scores = torch.cat([mlvl_scores, padding], dim=1)
- if with_nms:
- det_bboxes, det_labels = multiclass_nms(mlvl_bboxes, mlvl_scores,
- cfg.score_thr, cfg.nms,
- cfg.max_per_img)
- return det_bboxes, det_labels
- else:
- return mlvl_bboxes, mlvl_scores
-
- def _get_points_single(self,
- featmap_size,
- stride,
- dtype,
- device,
- flatten=False):
- """Get points according to feature map sizes."""
- h, w = featmap_size
- x_range = torch.arange(
- 0, w * stride, stride, dtype=dtype, device=device)
- y_range = torch.arange(
- 0, h * stride, stride, dtype=dtype, device=device)
- y, x = torch.meshgrid(y_range, x_range)
- # to be compatible with anchor points in ATSS
- if self.use_atss:
- points = torch.stack(
- (x.reshape(-1), y.reshape(-1)), dim=-1) + \
- stride * self.anchor_center_offset
- else:
- points = torch.stack(
- (x.reshape(-1), y.reshape(-1)), dim=-1) + stride // 2
- return points
-
- def get_targets(self, cls_scores, mlvl_points, gt_bboxes, gt_labels,
- img_metas, gt_bboxes_ignore):
- """A wrapper for computing ATSS and FCOS targets for points in multiple
- images.
-
- Args:
- cls_scores (list[Tensor]): Box iou-aware scores for each scale
- level with shape (N, num_points * num_classes, H, W).
- mlvl_points (list[Tensor]): Points of each fpn level, each has
- shape (num_points, 2).
- gt_bboxes (list[Tensor]): Ground truth bboxes of each image,
- each has shape (num_gt, 4).
- gt_labels (list[Tensor]): Ground truth labels of each box,
- each has shape (num_gt,).
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (None | Tensor): Ground truth bboxes to be
- ignored, shape (num_ignored_gts, 4).
-
- Returns:
- tuple:
- labels_list (list[Tensor]): Labels of each level.
- label_weights (Tensor/None): Label weights of all levels.
- bbox_targets_list (list[Tensor]): Regression targets of each
- level, (l, t, r, b).
- bbox_weights (Tensor/None): Bbox weights of all levels.
- """
- if self.use_atss:
- return self.get_atss_targets(cls_scores, mlvl_points, gt_bboxes,
- gt_labels, img_metas,
- gt_bboxes_ignore)
- else:
- self.norm_on_bbox = False
- return self.get_fcos_targets(mlvl_points, gt_bboxes, gt_labels)
-
- def _get_target_single(self, *args, **kwargs):
- """Avoid ambiguity in multiple inheritance."""
- if self.use_atss:
- return ATSSHead._get_target_single(self, *args, **kwargs)
- else:
- return FCOSHead._get_target_single(self, *args, **kwargs)
-
- def get_fcos_targets(self, points, gt_bboxes_list, gt_labels_list):
- """Compute FCOS regression and classification targets for points in
- multiple images.
-
- Args:
- points (list[Tensor]): Points of each fpn level, each has shape
- (num_points, 2).
- gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image,
- each has shape (num_gt, 4).
- gt_labels_list (list[Tensor]): Ground truth labels of each box,
- each has shape (num_gt,).
-
- Returns:
- tuple:
- labels (list[Tensor]): Labels of each level.
- label_weights: None, to be compatible with ATSS targets.
- bbox_targets (list[Tensor]): BBox targets of each level.
- bbox_weights: None, to be compatible with ATSS targets.
- """
- labels, bbox_targets = FCOSHead.get_targets(self, points,
- gt_bboxes_list,
- gt_labels_list)
- label_weights = None
- bbox_weights = None
- return labels, label_weights, bbox_targets, bbox_weights
-
- def get_atss_targets(self,
- cls_scores,
- mlvl_points,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- """A wrapper for computing ATSS targets for points in multiple images.
-
- Args:
- cls_scores (list[Tensor]): Box iou-aware scores for each scale
- level with shape (N, num_points * num_classes, H, W).
- mlvl_points (list[Tensor]): Points of each fpn level, each has
- shape (num_points, 2).
- gt_bboxes (list[Tensor]): Ground truth bboxes of each image,
- each has shape (num_gt, 4).
- gt_labels (list[Tensor]): Ground truth labels of each box,
- each has shape (num_gt,).
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (None | Tensor): Ground truth bboxes to be
- ignored, shape (num_ignored_gts, 4). Default: None.
-
- Returns:
- tuple:
- labels_list (list[Tensor]): Labels of each level.
- label_weights (Tensor): Label weights of all levels.
- bbox_targets_list (list[Tensor]): Regression targets of each
- level, (l, t, r, b).
- bbox_weights (Tensor): Bbox weights of all levels.
- """
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- assert len(featmap_sizes) == self.anchor_generator.num_levels
-
- device = cls_scores[0].device
- anchor_list, valid_flag_list = self.get_anchors(
- featmap_sizes, img_metas, device=device)
- label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
-
- cls_reg_targets = ATSSHead.get_targets(
- self,
- anchor_list,
- valid_flag_list,
- gt_bboxes,
- img_metas,
- gt_bboxes_ignore_list=gt_bboxes_ignore,
- gt_labels_list=gt_labels,
- label_channels=label_channels,
- unmap_outputs=True)
- if cls_reg_targets is None:
- return None
-
- (anchor_list, labels_list, label_weights_list, bbox_targets_list,
- bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets
-
- bbox_targets_list = [
- bbox_targets.reshape(-1, 4) for bbox_targets in bbox_targets_list
- ]
-
- num_imgs = len(img_metas)
- # transform bbox_targets (x1, y1, x2, y2) into (l, t, r, b) format
- bbox_targets_list = self.transform_bbox_targets(
- bbox_targets_list, mlvl_points, num_imgs)
-
- labels_list = [labels.reshape(-1) for labels in labels_list]
- label_weights_list = [
- label_weights.reshape(-1) for label_weights in label_weights_list
- ]
- bbox_weights_list = [
- bbox_weights.reshape(-1) for bbox_weights in bbox_weights_list
- ]
- label_weights = torch.cat(label_weights_list)
- bbox_weights = torch.cat(bbox_weights_list)
- return labels_list, label_weights, bbox_targets_list, bbox_weights
-
- def transform_bbox_targets(self, decoded_bboxes, mlvl_points, num_imgs):
- """Transform bbox_targets (x1, y1, x2, y2) into (l, t, r, b) format.
-
- Args:
- decoded_bboxes (list[Tensor]): Regression targets of each level,
- in the form of (x1, y1, x2, y2).
- mlvl_points (list[Tensor]): Points of each fpn level, each has
- shape (num_points, 2).
- num_imgs (int): the number of images in a batch.
-
- Returns:
- bbox_targets (list[Tensor]): Regression targets of each level in
- the form of (l, t, r, b).
- """
- # TODO: Re-implemented in Class PointCoder
- assert len(decoded_bboxes) == len(mlvl_points)
- num_levels = len(decoded_bboxes)
- mlvl_points = [points.repeat(num_imgs, 1) for points in mlvl_points]
- bbox_targets = []
- for i in range(num_levels):
- bbox_target = bbox2distance(mlvl_points[i], decoded_bboxes[i])
- bbox_targets.append(bbox_target)
-
- return bbox_targets
-
- def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
- missing_keys, unexpected_keys, error_msgs):
- """Override the method in the parent class to avoid changing para's
- name."""
- pass
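One detail worth calling out from `star_dcn_offset` above is the `gradient_mul` trick: it mixes a detached copy of the bbox prediction with the live tensor, so the forward value is unchanged but only a fraction of the gradient flows back through the offset branch. A minimal sketch of that idea, assuming PyTorch is available (variable names here are illustrative, not mmdet API):

```python
import torch

gradient_mul = 0.1
bbox_pred = torch.ones(4, requires_grad=True)

# Forward value equals bbox_pred, but only `gradient_mul` of the gradient
# reaches bbox_pred through this expression.
mixed = (1 - gradient_mul) * bbox_pred.detach() + gradient_mul * bbox_pred
mixed.sum().backward()

print(bbox_pred.grad)  # tensor([0.1000, 0.1000, 0.1000, 0.1000])
```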
diff --git a/spaces/CVPR/lama-example/models/ade20k/segm_lib/utils/data/__init__.py b/spaces/CVPR/lama-example/models/ade20k/segm_lib/utils/data/__init__.py
deleted file mode 100644
index f3b008fb13c5e8a84b1b785056e8c4f5226dc976..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/models/ade20k/segm_lib/utils/data/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-
-from .dataset import Dataset, TensorDataset, ConcatDataset
-from .dataloader import DataLoader
diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/data/__init__.py
deleted file mode 100644
index 21c83f8cbd7a9388b452372f0444e78a54a33495..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/data/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from . import transforms # isort:skip
-
-from .build import (
- build_batch_data_loader,
- build_detection_test_loader,
- build_detection_train_loader,
- get_detection_dataset_dicts,
- load_proposals_into_dataset,
- print_instances_class_histogram,
-)
-from .catalog import DatasetCatalog, MetadataCatalog, Metadata
-from .common import DatasetFromList, MapDataset
-from .dataset_mapper import DatasetMapper
-
-# ensure the builtin datasets are registered
-from . import datasets, samplers # isort:skip
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/vocoders/__init__.py b/spaces/ChrisPreston/diff-svc_minato_aqua/modules/vocoders/__init__.py
deleted file mode 100644
index 6b4cbcab246907e9fc1b96b62c10d15f9a53a1b4..0000000000000000000000000000000000000000
--- a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/vocoders/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from modules.vocoders import nsf_hifigan
diff --git a/spaces/CofAI/chat/client/css/dropdown.css b/spaces/CofAI/chat/client/css/dropdown.css
deleted file mode 100644
index 302e911e84d171c55384732f759a79ce195abca5..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/client/css/dropdown.css
+++ /dev/null
@@ -1,10 +0,0 @@
-.dropdown {
- border: 1px solid var(--conversations);
-}
-
-@media screen and (max-width: 990px) {
- .dropdown {
- padding: 4px 8px;
- font-size: 0.75rem;
- }
-}
diff --git a/spaces/Covert1107/sd-diffusers-webui/modules/safe.py b/spaces/Covert1107/sd-diffusers-webui/modules/safe.py
deleted file mode 100644
index 532c7dab3f60f5a68b068299d2adc0b776a423f9..0000000000000000000000000000000000000000
--- a/spaces/Covert1107/sd-diffusers-webui/modules/safe.py
+++ /dev/null
@@ -1,188 +0,0 @@
-# this code is adapted from the script contributed by anon from /h/
-# modified, from https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/6cff4401824299a983c8e13424018efc347b4a2b/modules/safe.py
-
-import io
-import pickle
-import collections
-import sys
-import traceback
-
-import torch
-import numpy
-import _codecs
-import zipfile
-import re
-
-
-# PyTorch 1.13 and later have _TypedStorage renamed to TypedStorage
-TypedStorage = torch.storage.TypedStorage if hasattr(torch.storage, 'TypedStorage') else torch.storage._TypedStorage
-
-
-def encode(*args):
- out = _codecs.encode(*args)
- return out
-
-
-class RestrictedUnpickler(pickle.Unpickler):
- extra_handler = None
-
- def persistent_load(self, saved_id):
- assert saved_id[0] == 'storage'
- return TypedStorage()
-
- def find_class(self, module, name):
- if self.extra_handler is not None:
- res = self.extra_handler(module, name)
- if res is not None:
- return res
-
- if module == 'collections' and name == 'OrderedDict':
- return getattr(collections, name)
- if module == 'torch._utils' and name in ['_rebuild_tensor_v2', '_rebuild_parameter', '_rebuild_device_tensor_from_numpy']:
- return getattr(torch._utils, name)
- if module == 'torch' and name in ['FloatStorage', 'HalfStorage', 'IntStorage', 'LongStorage', 'DoubleStorage', 'ByteStorage', 'float32']:
- return getattr(torch, name)
- if module == 'torch.nn.modules.container' and name in ['ParameterDict']:
- return getattr(torch.nn.modules.container, name)
- if module == 'numpy.core.multiarray' and name in ['scalar', '_reconstruct']:
- return getattr(numpy.core.multiarray, name)
- if module == 'numpy' and name in ['dtype', 'ndarray']:
- return getattr(numpy, name)
- if module == '_codecs' and name == 'encode':
- return encode
- if module == "pytorch_lightning.callbacks" and name == 'model_checkpoint':
- import pytorch_lightning.callbacks
- return pytorch_lightning.callbacks.model_checkpoint
- if module == "pytorch_lightning.callbacks.model_checkpoint" and name == 'ModelCheckpoint':
- import pytorch_lightning.callbacks.model_checkpoint
- return pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint
- if module == "__builtin__" and name == 'set':
- return set
-
- # Forbid everything else.
- raise Exception(f"global '{module}/{name}' is forbidden")
-
-
-# Regular expression that accepts 'dirname/version', 'dirname/data.pkl', and 'dirname/data/<number>'
-allowed_zip_names_re = re.compile(r"^([^/]+)/((data/\d+)|version|(data\.pkl))$")
-data_pkl_re = re.compile(r"^([^/]+)/data\.pkl$")
-
-def check_zip_filenames(filename, names):
- for name in names:
- if allowed_zip_names_re.match(name):
- continue
-
- raise Exception(f"bad file inside {filename}: {name}")
-
-
-def check_pt(filename, extra_handler):
- try:
-
- # new pytorch format is a zip file
- with zipfile.ZipFile(filename) as z:
- check_zip_filenames(filename, z.namelist())
-
- # find filename of data.pkl in zip file: '<directory name>/data.pkl'
- data_pkl_filenames = [f for f in z.namelist() if data_pkl_re.match(f)]
- if len(data_pkl_filenames) == 0:
- raise Exception(f"data.pkl not found in {filename}")
- if len(data_pkl_filenames) > 1:
- raise Exception(f"Multiple data.pkl found in {filename}")
- with z.open(data_pkl_filenames[0]) as file:
- unpickler = RestrictedUnpickler(file)
- unpickler.extra_handler = extra_handler
- unpickler.load()
-
- except zipfile.BadZipfile:
-
- # if it's not a zip file, it's an old pytorch format, with five objects written to pickle
- with open(filename, "rb") as file:
- unpickler = RestrictedUnpickler(file)
- unpickler.extra_handler = extra_handler
- for i in range(5):
- unpickler.load()
-
-
-def load(filename, *args, **kwargs):
- return load_with_extra(filename, extra_handler=global_extra_handler, *args, **kwargs)
-
-
-def load_with_extra(filename, extra_handler=None, *args, **kwargs):
- """
- this function is intended to be used by extensions that want to load models with
- some extra classes in them that the usual unpickler would find suspicious.
-
- Use the extra_handler argument to specify a function that takes module and field name as text,
- and returns that field's value:
-
- ```python
- def extra(module, name):
- if module == 'collections' and name == 'OrderedDict':
- return collections.OrderedDict
-
- return None
-
- safe.load_with_extra('model.pt', extra_handler=extra)
- ```
-
- The alternative to this is just to use safe.unsafe_torch_load('model.pt'), which as the name implies is
- definitely unsafe.
- """
-
- try:
- check_pt(filename, extra_handler)
-
- except pickle.UnpicklingError:
- print(f"Error verifying pickled file from {filename}:", file=sys.stderr)
- print(traceback.format_exc(), file=sys.stderr)
- print("The file is most likely corrupted.", file=sys.stderr)
- return None
-
- except Exception:
- print(f"Error verifying pickled file from {filename}:", file=sys.stderr)
- print(traceback.format_exc(), file=sys.stderr)
- print("\nThe file may be malicious, so the program is not going to read it.", file=sys.stderr)
- print("You can skip this check with --disable-safe-unpickle commandline argument.\n\n", file=sys.stderr)
- return None
-
- return unsafe_torch_load(filename, *args, **kwargs)
-
-
-class Extra:
- """
- A class for temporarily setting the global handler for when you can't explicitly call load_with_extra
- (because it's not your code making the torch.load call). The intended use is like this:
-
-```
-import torch
-from modules import safe
-
-def handler(module, name):
- if module == 'torch' and name in ['float64', 'float16']:
- return getattr(torch, name)
-
- return None
-
-with safe.Extra(handler):
- x = torch.load('model.pt')
-```
- """
-
- def __init__(self, handler):
- self.handler = handler
-
- def __enter__(self):
- global global_extra_handler
-
- assert global_extra_handler is None, 'already inside an Extra() block'
- global_extra_handler = self.handler
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- global global_extra_handler
-
- global_extra_handler = None
-
-
-unsafe_torch_load = torch.load
-torch.load = load
-global_extra_handler = None
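-
-# A minimal usage sketch (illustrative only; the filename is hypothetical). Importing this
-# module replaces torch.load with the restricted loader defined above, while the original
-# remains available as unsafe_torch_load:
-#
-#   import torch
-#   from modules import safe                     # patches torch.load on import
-#   sd = torch.load("model.ckpt")                # goes through RestrictedUnpickler checks
-#   raw = safe.unsafe_torch_load("model.ckpt")   # original, unrestricted torch.load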
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/backbone/pan.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/backbone/pan.py
deleted file mode 100644
index e9703e271b3987ff380e5222232592678cafef61..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/backbone/pan.py
+++ /dev/null
@@ -1,177 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class FPA(nn.Module):
- def __init__(self, channels=2048):
- """
- Feature Pyramid Attention
- :type channels: int
- """
- super(FPA, self).__init__()
- channels_mid = int(channels / 4)
-
- self.channels_cond = channels
-
- # Master branch
- self.conv_master = nn.Conv2d(self.channels_cond, channels, kernel_size=1, bias=False)
- self.bn_master = nn.BatchNorm2d(channels)
-
- # Global pooling branch
- self.conv_gpb = nn.Conv2d(self.channels_cond, channels, kernel_size=1, bias=False)
- #self.bn_gpb = nn.BatchNorm2d(channels)
-
-        # Three cascaded branches (7x7, 5x5, 3x3); the last backbone feature maps have shape (16, 16).
- self.conv7x7_1 = nn.Conv2d(self.channels_cond, channels_mid, kernel_size=(7, 7), stride=2, padding=3, bias=False)
- self.bn1_1 = nn.BatchNorm2d(channels_mid)
- self.conv5x5_1 = nn.Conv2d(channels_mid, channels_mid, kernel_size=(5, 5), stride=2, padding=2, bias=False)
- self.bn2_1 = nn.BatchNorm2d(channels_mid)
- self.conv3x3_1 = nn.Conv2d(channels_mid, channels_mid, kernel_size=(3, 3), stride=2, padding=1, bias=False)
- self.bn3_1 = nn.BatchNorm2d(channels_mid)
-
- self.conv7x7_2 = nn.Conv2d(channels_mid, channels_mid, kernel_size=(7, 7), stride=1, padding=3, bias=False)
- self.bn1_2 = nn.BatchNorm2d(channels_mid)
- self.conv5x5_2 = nn.Conv2d(channels_mid, channels_mid, kernel_size=(5, 5), stride=1, padding=2, bias=False)
- self.bn2_2 = nn.BatchNorm2d(channels_mid)
- self.conv3x3_2 = nn.Conv2d(channels_mid, channels_mid, kernel_size=(3, 3), stride=1, padding=1, bias=False)
- self.bn3_2 = nn.BatchNorm2d(channels_mid)
-
- self.bn_upsample_1 = nn.BatchNorm2d(channels)
- self.conv1x1_up1 = nn.Conv2d(channels_mid, channels, kernel_size=(1, 1), stride=1, padding=0, bias=False)
-
- self.relu = nn.ReLU(inplace=True)
-
- def forward(self, x):
- """
- :param x: Shape: [b, 2048, h, w]
- :return: out: Feature maps. Shape: [b, 2048, h, w]
- """
- # Master branch
- x_master = self.conv_master(x)
- x_master = self.bn_master(x_master)
-
- # Global pooling branch
- x_gpb = nn.AvgPool2d(x.shape[2:])(x).view(x.shape[0], self.channels_cond, 1, 1)
- x_gpb = self.conv_gpb(x_gpb)
- #x_gpb = self.bn_gpb(x_gpb)
-
- # Branch 1
- x1_1 = self.conv7x7_1(x)
- x1_1 = self.bn1_1(x1_1)
- x1_1 = self.relu(x1_1)
- x1_2 = self.conv7x7_2(x1_1)
- x1_2 = self.bn1_2(x1_2)
-
- # Branch 2
- x2_1 = self.conv5x5_1(x1_1)
- x2_1 = self.bn2_1(x2_1)
- x2_1 = self.relu(x2_1)
- x2_2 = self.conv5x5_2(x2_1)
- x2_2 = self.bn2_2(x2_2)
-
- # Branch 3
- x3_1 = self.conv3x3_1(x2_1)
- x3_1 = self.bn3_1(x3_1)
- x3_1 = self.relu(x3_1)
- x3_2 = self.conv3x3_2(x3_1)
- x3_2 = self.bn3_2(x3_2)
-
- # Merge branch 1 and 2
- x3_upsample = F.upsample(x3_2, size=x2_2.shape[-2:],
- mode='bilinear', align_corners=False)
-
- x2_merge = self.relu(x2_2 + x3_upsample)
-
- x2_upsample = F.upsample(x2_merge, size=x1_2.shape[-2:],
- mode='bilinear', align_corners=False)
- x1_merge = self.relu(x1_2 + x2_upsample)
-
- x1_merge_upsample = F.upsample(x1_merge, size=x_master.shape[-2:],
- mode='bilinear', align_corners=False)
- x1_merge_upsample_ch = self.relu(self.bn_upsample_1(self.conv1x1_up1(x1_merge_upsample)))
- x_master = x_master * x1_merge_upsample_ch
- #
- out = self.relu(x_master + x_gpb)
-
- return out
-
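-
-# A minimal FPA usage sketch (batch and spatial sizes are illustrative assumptions;
-# input and output channel counts are both `channels`, as in forward() above):
-#
-#   import torch
-#   fpa = FPA(channels=2048)
-#   y = fpa(torch.randn(2, 2048, 16, 16))   # y: [2, 2048, 16, 16]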
-
-class GAU(nn.Module):
- def __init__(self, channels_high, channels_low, upsample=True):
- super(GAU, self).__init__()
- # Global Attention Upsample
- self.upsample = upsample
- self.conv3x3 = nn.Conv2d(channels_low, channels_low, kernel_size=3, padding=1, bias=False)
- self.bn_low = nn.BatchNorm2d(channels_low)
-
- self.conv1x1 = nn.Conv2d(channels_high, channels_low, kernel_size=1, padding=0, bias=False)
- #self.bn_high = nn.BatchNorm2d(channels_low)
-
- if upsample:
- self.conv_upsample = nn.ConvTranspose2d(channels_high, channels_low, kernel_size=4, stride=2, padding=1, bias=False)
- self.bn_upsample = nn.BatchNorm2d(channels_low)
- else:
- self.conv_reduction = nn.Conv2d(channels_high, channels_low, kernel_size=1, padding=0, bias=False)
- self.bn_reduction = nn.BatchNorm2d(channels_low)
- self.relu = nn.ReLU(inplace=True)
-
- def forward(self, fms_high, fms_low, fm_mask=None):
- """
-        Use the high-level features, which carry rich category information, to weight the low-level
-        features, which carry precise pixel-localization information. Mask feature maps with
-        category-specific information can additionally be used to localize the mask position.
- :param fms_high: Features of high level. Tensor.
- :param fms_low: Features of low level. Tensor.
- :param fm_mask:
- :return: fms_att_upsample
- """
- b, c, h, w = fms_high.shape
-
- fms_high_gp = nn.AvgPool2d(fms_high.shape[2:])(fms_high).view(len(fms_high), c, 1, 1)
- fms_high_gp = self.conv1x1(fms_high_gp)
- # fms_high_gp = self.bn_high(fms_high_gp)# arlog, when the spatial size HxW = 1x1, the BN cannot be used.
- fms_high_gp = self.relu(fms_high_gp)
-
- # fms_low_mask = torch.cat([fms_low, fm_mask], dim=1)
- fms_low_mask = self.conv3x3(fms_low)
- fms_low_mask = self.bn_low(fms_low_mask)
-
- fms_att = fms_low_mask * fms_high_gp
- if self.upsample:
- out = self.relu(
- self.bn_upsample(self.conv_upsample(fms_high)) + fms_att)
- else:
- out = self.relu(
- self.bn_reduction(self.conv_reduction(fms_high)) + fms_att)
- return out
-
-
-class PAN(nn.Module):
- def __init__(self):
- """
-        Path Aggregation Network head. Channel sizes follow the backbone outputs, from deepest to shallowest.
- """
- super(PAN, self).__init__()
- channels_blocks = [2048, 1024, 512, 256]
-
- self.fpa = FPA(channels=channels_blocks[0])
-
- self.gau_block1 = GAU(channels_blocks[0], channels_blocks[1])
- self.gau_block2 = GAU(channels_blocks[1], channels_blocks[2])
- self.gau_block3 = GAU(channels_blocks[2], channels_blocks[3])
- self.gau = [self.gau_block1, self.gau_block2, self.gau_block3]
-
- def forward(self, fms):
- """
-        :param fms: Feature maps from the backbone, ordered from shallow to deep. Each has shape [b, c, h, w].
-        :return: Tuple of refined feature maps, in the same order (and with the same shapes) as fms.
- """
- feats = []
- for i, fm_low in enumerate(fms[::-1]):
- if i == 0:
- fm_high = self.fpa(fm_low)
- else:
- fm_high = self.gau[int(i-1)](fm_high, fm_low)
- feats.append(fm_high)
- feats.reverse()
- return tuple(feats)
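-
-# A minimal PAN usage sketch (spatial sizes are illustrative and assume each pyramid
-# level has twice the resolution of the next; channel sizes follow channels_blocks above):
-#
-#   import torch
-#   pan = PAN()
-#   fms = [torch.randn(1, 256, 128, 128), torch.randn(1, 512, 64, 64),
-#          torch.randn(1, 1024, 32, 32), torch.randn(1, 2048, 16, 16)]
-#   feats = pan(fms)   # tuple of refined maps, same order and shapes as fms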
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/memory.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/memory.py
deleted file mode 100644
index a6499c13ff36f74d2e217ee996825a13edd6d9fb..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/memory.py
+++ /dev/null
@@ -1,279 +0,0 @@
-from __future__ import annotations
-
-from collections import OrderedDict, deque
-from dataclasses import dataclass, field
-from types import TracebackType
-from typing import Generic, NamedTuple, TypeVar
-
-from .. import (
- BrokenResourceError,
- ClosedResourceError,
- EndOfStream,
- WouldBlock,
- get_cancelled_exc_class,
-)
-from .._core._compat import DeprecatedAwaitable
-from ..abc import Event, ObjectReceiveStream, ObjectSendStream
-from ..lowlevel import checkpoint
-
-T_Item = TypeVar("T_Item")
-T_co = TypeVar("T_co", covariant=True)
-T_contra = TypeVar("T_contra", contravariant=True)
-
-
-class MemoryObjectStreamStatistics(NamedTuple):
- current_buffer_used: int #: number of items stored in the buffer
- #: maximum number of items that can be stored on this stream (or :data:`math.inf`)
- max_buffer_size: float
- open_send_streams: int #: number of unclosed clones of the send stream
- open_receive_streams: int #: number of unclosed clones of the receive stream
- tasks_waiting_send: int #: number of tasks blocked on :meth:`MemoryObjectSendStream.send`
- #: number of tasks blocked on :meth:`MemoryObjectReceiveStream.receive`
- tasks_waiting_receive: int
-
-
-@dataclass(eq=False)
-class MemoryObjectStreamState(Generic[T_Item]):
- max_buffer_size: float = field()
- buffer: deque[T_Item] = field(init=False, default_factory=deque)
- open_send_channels: int = field(init=False, default=0)
- open_receive_channels: int = field(init=False, default=0)
- waiting_receivers: OrderedDict[Event, list[T_Item]] = field(
- init=False, default_factory=OrderedDict
- )
- waiting_senders: OrderedDict[Event, T_Item] = field(
- init=False, default_factory=OrderedDict
- )
-
- def statistics(self) -> MemoryObjectStreamStatistics:
- return MemoryObjectStreamStatistics(
- len(self.buffer),
- self.max_buffer_size,
- self.open_send_channels,
- self.open_receive_channels,
- len(self.waiting_senders),
- len(self.waiting_receivers),
- )
-
-
-@dataclass(eq=False)
-class MemoryObjectReceiveStream(Generic[T_co], ObjectReceiveStream[T_co]):
- _state: MemoryObjectStreamState[T_co]
- _closed: bool = field(init=False, default=False)
-
- def __post_init__(self) -> None:
- self._state.open_receive_channels += 1
-
- def receive_nowait(self) -> T_co:
- """
- Receive the next item if it can be done without waiting.
-
- :return: the received item
-        :raises ~anyio.ClosedResourceError: if this receive stream has been closed
- :raises ~anyio.EndOfStream: if the buffer is empty and this stream has been
- closed from the sending end
- :raises ~anyio.WouldBlock: if there are no items in the buffer and no tasks
- waiting to send
-
- """
- if self._closed:
- raise ClosedResourceError
-
- if self._state.waiting_senders:
- # Get the item from the next sender
- send_event, item = self._state.waiting_senders.popitem(last=False)
- self._state.buffer.append(item)
- send_event.set()
-
- if self._state.buffer:
- return self._state.buffer.popleft()
- elif not self._state.open_send_channels:
- raise EndOfStream
-
- raise WouldBlock
-
- async def receive(self) -> T_co:
- await checkpoint()
- try:
- return self.receive_nowait()
- except WouldBlock:
- # Add ourselves in the queue
- receive_event = Event()
- container: list[T_co] = []
- self._state.waiting_receivers[receive_event] = container
-
- try:
- await receive_event.wait()
- except get_cancelled_exc_class():
- # Ignore the immediate cancellation if we already received an item, so as not to
- # lose it
- if not container:
- raise
- finally:
- self._state.waiting_receivers.pop(receive_event, None)
-
- if container:
- return container[0]
- else:
- raise EndOfStream
-
- def clone(self) -> MemoryObjectReceiveStream[T_co]:
- """
- Create a clone of this receive stream.
-
- Each clone can be closed separately. Only when all clones have been closed will the
- receiving end of the memory stream be considered closed by the sending ends.
-
- :return: the cloned stream
-
- """
- if self._closed:
- raise ClosedResourceError
-
- return MemoryObjectReceiveStream(_state=self._state)
-
- def close(self) -> None:
- """
- Close the stream.
-
- This works the exact same way as :meth:`aclose`, but is provided as a special case for the
- benefit of synchronous callbacks.
-
- """
- if not self._closed:
- self._closed = True
- self._state.open_receive_channels -= 1
- if self._state.open_receive_channels == 0:
- send_events = list(self._state.waiting_senders.keys())
- for event in send_events:
- event.set()
-
- async def aclose(self) -> None:
- self.close()
-
- def statistics(self) -> MemoryObjectStreamStatistics:
- """
- Return statistics about the current state of this stream.
-
- .. versionadded:: 3.0
- """
- return self._state.statistics()
-
- def __enter__(self) -> MemoryObjectReceiveStream[T_co]:
- return self
-
- def __exit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> None:
- self.close()
-
-
-@dataclass(eq=False)
-class MemoryObjectSendStream(Generic[T_contra], ObjectSendStream[T_contra]):
- _state: MemoryObjectStreamState[T_contra]
- _closed: bool = field(init=False, default=False)
-
- def __post_init__(self) -> None:
- self._state.open_send_channels += 1
-
- def send_nowait(self, item: T_contra) -> DeprecatedAwaitable:
- """
- Send an item immediately if it can be done without waiting.
-
- :param item: the item to send
- :raises ~anyio.ClosedResourceError: if this send stream has been closed
- :raises ~anyio.BrokenResourceError: if the stream has been closed from the
- receiving end
- :raises ~anyio.WouldBlock: if the buffer is full and there are no tasks waiting
- to receive
-
- """
- if self._closed:
- raise ClosedResourceError
- if not self._state.open_receive_channels:
- raise BrokenResourceError
-
- if self._state.waiting_receivers:
- receive_event, container = self._state.waiting_receivers.popitem(last=False)
- container.append(item)
- receive_event.set()
- elif len(self._state.buffer) < self._state.max_buffer_size:
- self._state.buffer.append(item)
- else:
- raise WouldBlock
-
- return DeprecatedAwaitable(self.send_nowait)
-
- async def send(self, item: T_contra) -> None:
- await checkpoint()
- try:
- self.send_nowait(item)
- except WouldBlock:
- # Wait until there's someone on the receiving end
- send_event = Event()
- self._state.waiting_senders[send_event] = item
- try:
- await send_event.wait()
- except BaseException:
- self._state.waiting_senders.pop(send_event, None) # type: ignore[arg-type]
- raise
-
- if self._state.waiting_senders.pop(send_event, None): # type: ignore[arg-type]
- raise BrokenResourceError
-
- def clone(self) -> MemoryObjectSendStream[T_contra]:
- """
- Create a clone of this send stream.
-
- Each clone can be closed separately. Only when all clones have been closed will the
- sending end of the memory stream be considered closed by the receiving ends.
-
- :return: the cloned stream
-
- """
- if self._closed:
- raise ClosedResourceError
-
- return MemoryObjectSendStream(_state=self._state)
-
- def close(self) -> None:
- """
- Close the stream.
-
- This works the exact same way as :meth:`aclose`, but is provided as a special case for the
- benefit of synchronous callbacks.
-
- """
- if not self._closed:
- self._closed = True
- self._state.open_send_channels -= 1
- if self._state.open_send_channels == 0:
- receive_events = list(self._state.waiting_receivers.keys())
- self._state.waiting_receivers.clear()
- for event in receive_events:
- event.set()
-
- async def aclose(self) -> None:
- self.close()
-
- def statistics(self) -> MemoryObjectStreamStatistics:
- """
- Return statistics about the current state of this stream.
-
- .. versionadded:: 3.0
- """
- return self._state.statistics()
-
- def __enter__(self) -> MemoryObjectSendStream[T_contra]:
- return self
-
- def __exit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> None:
- self.close()
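-
-# A minimal usage sketch (assumes anyio.create_memory_object_stream as the public
-# constructor that wires these two classes to a shared MemoryObjectStreamState):
-#
-#   import anyio
-#
-#   async def main() -> None:
-#       send, receive = anyio.create_memory_object_stream(max_buffer_size=1)
-#       async with send, receive:
-#           await send.send("item")
-#           assert await receive.receive() == "item"
-#
-#   anyio.run(main)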
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/eexec.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/eexec.py
deleted file mode 100644
index cafa312cdaa4696b0624438e06418ade95438441..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/eexec.py
+++ /dev/null
@@ -1,119 +0,0 @@
-"""
-PostScript Type 1 fonts make use of two types of encryption: charstring
-encryption and ``eexec`` encryption. Charstring encryption is used for
-the charstrings themselves, while ``eexec`` is used to encrypt larger
-sections of the font program, such as the ``Private`` and ``CharStrings``
-dictionaries. Despite the different names, the algorithm is the same,
-although ``eexec`` encryption uses a fixed initial key R=55665.
-
-The algorithm uses cipher feedback, meaning that the ciphertext is used
-to modify the key. Because of this, the routines in this module return
-the new key at the end of the operation.
-
-"""
-
-from fontTools.misc.textTools import bytechr, bytesjoin, byteord
-
-
-def _decryptChar(cipher, R):
- cipher = byteord(cipher)
- plain = ((cipher ^ (R >> 8))) & 0xFF
- R = ((cipher + R) * 52845 + 22719) & 0xFFFF
- return bytechr(plain), R
-
-
-def _encryptChar(plain, R):
- plain = byteord(plain)
- cipher = ((plain ^ (R >> 8))) & 0xFF
- R = ((cipher + R) * 52845 + 22719) & 0xFFFF
- return bytechr(cipher), R
-
-
-def decrypt(cipherstring, R):
- r"""
- Decrypts a string using the Type 1 encryption algorithm.
-
- Args:
- cipherstring: String of ciphertext.
- R: Initial key.
-
- Returns:
- decryptedStr: Plaintext string.
- R: Output key for subsequent decryptions.
-
- Examples::
-
- >>> testStr = b"\0\0asdadads asds\265"
- >>> decryptedStr, R = decrypt(testStr, 12321)
- >>> decryptedStr == b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1'
- True
- >>> R == 36142
- True
- """
- plainList = []
- for cipher in cipherstring:
- plain, R = _decryptChar(cipher, R)
- plainList.append(plain)
- plainstring = bytesjoin(plainList)
- return plainstring, int(R)
-
-
-def encrypt(plainstring, R):
- r"""
- Encrypts a string using the Type 1 encryption algorithm.
-
- Note that the algorithm as described in the Type 1 specification requires the
- plaintext to be prefixed with a number of random bytes. (For ``eexec`` the
- number of random bytes is set to 4.) This routine does *not* add the random
- prefix to its input.
-
- Args:
- plainstring: String of plaintext.
- R: Initial key.
-
- Returns:
- cipherstring: Ciphertext string.
- R: Output key for subsequent encryptions.
-
- Examples::
-
- >>> testStr = b"\0\0asdadads asds\265"
- >>> decryptedStr, R = decrypt(testStr, 12321)
- >>> decryptedStr == b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1'
- True
- >>> R == 36142
- True
-
- >>> testStr = b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1'
- >>> encryptedStr, R = encrypt(testStr, 12321)
- >>> encryptedStr == b"\0\0asdadads asds\265"
- True
- >>> R == 36142
- True
- """
- cipherList = []
- for plain in plainstring:
- cipher, R = _encryptChar(plain, R)
- cipherList.append(cipher)
- cipherstring = bytesjoin(cipherList)
- return cipherstring, int(R)
-
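-
-# A minimal round-trip sketch (the plaintext is illustrative; for ``eexec`` the fixed
-# initial key is R=55665, as noted in the module docstring). encrypt() and decrypt()
-# with the same initial key are inverses of each other:
-#
-#   cipher, _ = encrypt(b"some plaintext", 55665)
-#   plain, _ = decrypt(cipher, 55665)
-#   assert plain == b"some plaintext"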
-
-def hexString(s):
- import binascii
-
- return binascii.hexlify(s)
-
-
-def deHexString(h):
- import binascii
-
- h = bytesjoin(h.split())
- return binascii.unhexlify(h)
-
-
-if __name__ == "__main__":
- import sys
- import doctest
-
- sys.exit(doctest.testmod().failed)
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-6a563d90.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-6a563d90.js
deleted file mode 100644
index 0b00c00aa785256220b3494780e55f3ed0c00524..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-6a563d90.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{T as l}from"./Textbox-1f11d244.js";import"./index-1d65707a.js";/* empty css */import"./Button-f155035a.js";import"./BlockTitle-dee077e8.js";import"./Info-7c6961ef.js";import"./Copy-9f1657c4.js";const a=["static","dynamic"],n=t=>({type:{payload:"string"},description:{payload:"text string"},example_data:t.value||"hello world"});export{l as Component,n as document,a as modes};
-//# sourceMappingURL=index-6a563d90.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-9ae8fa0e.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-9ae8fa0e.css
deleted file mode 100644
index 8d40eb2078051865fa9f54b19d9fd5837f4910d4..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-9ae8fa0e.css
+++ /dev/null
@@ -1 +0,0 @@
-input.svelte-q8uklq{position:absolute;top:var(--size-2);right:var(--size-2);bottom:var(--size-2);left:var(--size-2);flex:1 1 0%;transform:translate(-.1px);outline:none;border:none;background:transparent}span.svelte-q8uklq{flex:1 1 0%;outline:none;padding:var(--size-2)}.header.svelte-q8uklq{transform:translate(0);font:var(--weight-bold)}.edit.svelte-q8uklq{opacity:0;pointer-events:none}.button-wrap.svelte-1tclfmr:hover svg.svelte-1tclfmr.svelte-1tclfmr{color:var(--color-accent)}.button-wrap.svelte-1tclfmr svg.svelte-1tclfmr.svelte-1tclfmr{margin-right:var(--size-1);margin-left:-5px}.label.svelte-1tclfmr p.svelte-1tclfmr.svelte-1tclfmr{position:relative;z-index:var(--layer-4);margin-bottom:var(--size-2);color:var(--block-label-text-color);font-size:var(--block-label-text-size)}.table-wrap.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{position:relative;transition:.15s;border:1px solid var(--border-color-primary);border-radius:var(--table-radius);overflow-x:scroll;overflow-y:hidden}.dragging.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{border-color:var(--color-accent)}.no-wrap.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{white-space:nowrap}table.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{transition:.15s;width:var(--size-full);table-layout:auto;overflow:hidden;color:var(--body-text-color);font-size:var(--input-text-size);line-height:var(--line-md);font-family:var(--font-mono)}table.dragging.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{opacity:.4}thead.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{position:sticky;top:0;left:0;z-index:var(--layer-1);box-shadow:var(--shadow-drop)}tr.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{border-bottom:1px solid var(--border-color-primary);text-align:left}tr.svelte-1tclfmr>.svelte-1tclfmr+.svelte-1tclfmr{border-right-width:0px;border-left-width:1px;border-style:solid;border-color:var(--border-color-primary)}th.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr,td.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{--ring-color:transparent;position:relative;outline:none;box-shadow:inset 0 0 0 1px var(--ring-color);padding:0}th.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr:first-child{border-top-left-radius:var(--table-radius)}th.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr:last-child{border-top-right-radius:var(--table-radius)}th.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr:focus-within,td.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr:focus-within{--ring-color:var(--color-accent)}tr.svelte-1tclfmr:last-child td.svelte-1tclfmr.svelte-1tclfmr:first-child{border-bottom-left-radius:var(--table-radius)}tr.svelte-1tclfmr:last-child td.svelte-1tclfmr.svelte-1tclfmr:last-child{border-bottom-right-radius:var(--table-radius)}tr.svelte-1tclfmr th.svelte-1tclfmr.svelte-1tclfmr{background:var(--table-even-background-fill)}th.svelte-1tclfmr 
svg.svelte-1tclfmr.svelte-1tclfmr{fill:currentColor;font-size:10px}.sort-button.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{display:flex;flex:none;justify-content:center;align-items:center;transition:.15s;cursor:pointer;padding:var(--size-2);color:var(--body-text-color-subdued);line-height:var(--text-sm)}.sort-button.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr:hover{color:var(--body-text-color)}.des.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{transform:scaleY(-1)}.sort-button.sorted.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{color:var(--color-accent)}tbody.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{overflow-y:scroll}tbody.svelte-1tclfmr>tr.svelte-1tclfmr.svelte-1tclfmr:last-child{border:none}tbody.svelte-1tclfmr>tr.svelte-1tclfmr.svelte-1tclfmr:nth-child(even){background:var(--table-even-background-fill)}tbody.svelte-1tclfmr>tr.svelte-1tclfmr.svelte-1tclfmr:nth-child(odd){background:var(--table-odd-background-fill)}tbody.svelte-1tclfmr>tr.svelte-1tclfmr.svelte-1tclfmr:nth-child(odd):focus{background:var(--background-fill-primary)}.editing.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{background:var(--table-editing)}.cell-wrap.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{display:flex;align-items:center;outline:none;height:var(--size-full);min-height:var(--size-9)}.controls-wrap.svelte-1tclfmr.svelte-1tclfmr.svelte-1tclfmr{display:flex;justify-content:flex-end;padding-top:var(--size-2)}.controls-wrap.svelte-1tclfmr>.svelte-1tclfmr+.svelte-1tclfmr{margin-left:var(--size-1)}
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_against_stdlib_http.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_against_stdlib_http.py
deleted file mode 100644
index d2ee13149d34c9882432cdebfec87dff9814076d..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_against_stdlib_http.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import json
-import os.path
-import socket
-import socketserver
-import threading
-from contextlib import closing, contextmanager
-from http.server import SimpleHTTPRequestHandler
-from typing import Callable, Generator
-from urllib.request import urlopen
-
-import h11
-
-
-@contextmanager
-def socket_server(
- handler: Callable[..., socketserver.BaseRequestHandler]
-) -> Generator[socketserver.TCPServer, None, None]:
- httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)
- thread = threading.Thread(
- target=httpd.serve_forever, kwargs={"poll_interval": 0.01}
- )
- thread.daemon = True
- try:
- thread.start()
- yield httpd
- finally:
- httpd.shutdown()
-
-
-test_file_path = os.path.join(os.path.dirname(__file__), "data/test-file")
-with open(test_file_path, "rb") as f:
- test_file_data = f.read()
-
-
-class SingleMindedRequestHandler(SimpleHTTPRequestHandler):
- def translate_path(self, path: str) -> str:
- return test_file_path
-
-
-def test_h11_as_client() -> None:
- with socket_server(SingleMindedRequestHandler) as httpd:
- with closing(socket.create_connection(httpd.server_address)) as s:
- c = h11.Connection(h11.CLIENT)
-
- s.sendall(
- c.send( # type: ignore[arg-type]
- h11.Request(
- method="GET", target="/foo", headers=[("Host", "localhost")]
- )
- )
- )
- s.sendall(c.send(h11.EndOfMessage())) # type: ignore[arg-type]
-
- data = bytearray()
- while True:
- event = c.next_event()
- print(event)
- if event is h11.NEED_DATA:
- # Use a small read buffer to make things more challenging
- # and exercise more paths :-)
- c.receive_data(s.recv(10))
- continue
- if type(event) is h11.Response:
- assert event.status_code == 200
- if type(event) is h11.Data:
- data += event.data
- if type(event) is h11.EndOfMessage:
- break
- assert bytes(data) == test_file_data
-
-
-class H11RequestHandler(socketserver.BaseRequestHandler):
- def handle(self) -> None:
- with closing(self.request) as s:
- c = h11.Connection(h11.SERVER)
- request = None
- while True:
- event = c.next_event()
- if event is h11.NEED_DATA:
- # Use a small read buffer to make things more challenging
- # and exercise more paths :-)
- c.receive_data(s.recv(10))
- continue
- if type(event) is h11.Request:
- request = event
- if type(event) is h11.EndOfMessage:
- break
- assert request is not None
- info = json.dumps(
- {
- "method": request.method.decode("ascii"),
- "target": request.target.decode("ascii"),
- "headers": {
- name.decode("ascii"): value.decode("ascii")
- for (name, value) in request.headers
- },
- }
- )
- s.sendall(c.send(h11.Response(status_code=200, headers=[]))) # type: ignore[arg-type]
- s.sendall(c.send(h11.Data(data=info.encode("ascii"))))
- s.sendall(c.send(h11.EndOfMessage()))
-
-
-def test_h11_as_server() -> None:
- with socket_server(H11RequestHandler) as httpd:
- host, port = httpd.server_address
- url = "http://{}:{}/some-path".format(host, port)
- with closing(urlopen(url)) as f:
- assert f.getcode() == 200
- data = f.read()
- info = json.loads(data.decode("ascii"))
- print(info)
- assert info["method"] == "GET"
- assert info["target"] == "/some-path"
- assert "urllib" in info["headers"]["user-agent"]
diff --git a/spaces/Dagfinn1962/stablediffusion-models/app.py b/spaces/Dagfinn1962/stablediffusion-models/app.py
deleted file mode 100644
index 8474190cebdad3f7fd91eefe8353fff110248791..0000000000000000000000000000000000000000
--- a/spaces/Dagfinn1962/stablediffusion-models/app.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import gradio as gr
-import os
-import sys
-from pathlib import Path
-
-models = [
- {"name": "Stable Diffusion 1.4","url": "stablediffusionapi/juggernaut-xl-v5"},
-
- ]
-
-current_model = models[0]
-
-text_gen = gr.Interface.load("spaces/daspartho/prompt-extend")
-
-models2 = []
-for model in models:
- model_url = f"models/{model['url']}"
- loaded_model = gr.Interface.load(model_url, live=True, preprocess=True)
- models2.append(loaded_model)
-
-
-def text_it(inputs, text_gen=text_gen):
- return text_gen(inputs)
-
-
-def set_model(current_model_index):
- global current_model
- current_model = models[current_model_index]
- return gr.update(value=f"{current_model['name']}")
-
-
-def send_it(inputs, model_choice):
- proc = models2[model_choice]
- return proc(inputs)
-
-
-with gr.Blocks (css = 'main.css') as myface:
-
-    gr.HTML("Your Prompt Here<br>Choose model here")
- with gr.Row():
- input_text = gr.Textbox(label=" ",placeholder="1.PROMPT IDEA HERE ! ",lines=4)
- # Model selection dropdown
- model_name1 = gr.Dropdown(
- label=" ",
- choices=[m["name"] for m in models],
- type="index",
- value=current_model["name"],
- interactive=True,
-
-
- )
- with gr.Row():
-        see_prompts = gr.Button("2. GENERATE YOUR PROMPT IDEA HERE!")
-        run = gr.Button("3. GENERATE THE IMAGE HERE!", variant="primary")
-
- #
- with gr.Row():
- output1 = gr.Image(label="")
- output2 = gr.Image(label="")
- output3 = gr.Image(label="")
- with gr.Row():
- magic1 = gr.Textbox(label="Generated Prompt", lines=2)
- magic2 = gr.Textbox(label="Generated Prompt", lines=2)
- magic3 = gr.Textbox(label="Generated Prompt", lines=2)
-
- model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3,])
-
- run.click(send_it, inputs=[magic1, model_name1], outputs=[output1])
- run.click(send_it, inputs=[magic2, model_name1], outputs=[output2])
- run.click(send_it, inputs=[magic3, model_name1], outputs=[output3])
-
-
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic1])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic2])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic3])
-
-
-myface.queue(concurrency_count=200)
-myface.launch(inline=True, show_api=False, max_threads=400)
\ No newline at end of file
diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.cpp b/spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.cpp
deleted file mode 100644
index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.cpp
+++ /dev/null
@@ -1,23 +0,0 @@
-#include <torch/extension.h>
-
-
-torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel,
- int up_x, int up_y, int down_x, int down_y,
- int pad_x0, int pad_x1, int pad_y0, int pad_y1);
-
-#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
-
-torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel,
- int up_x, int up_y, int down_x, int down_y,
- int pad_x0, int pad_x1, int pad_y0, int pad_y1) {
- CHECK_CUDA(input);
- CHECK_CUDA(kernel);
-
- return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)");
-}
\ No newline at end of file
diff --git a/spaces/Datatrooper/wine/README.md b/spaces/Datatrooper/wine/README.md
deleted file mode 100644
index 0bd7db942836dbc470624b5af19b9fd82346dcf9..0000000000000000000000000000000000000000
--- a/spaces/Datatrooper/wine/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: Wine
-emoji: 🍷
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`models`: _List[string]_
-HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`datasets`: _List[string]_
-HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/__init__.py b/spaces/Dorado607/ChuanhuChatGPT/modules/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/DragGan/DragGan/stylegan_human/torch_utils/models.py b/spaces/DragGan/DragGan/stylegan_human/torch_utils/models.py
deleted file mode 100644
index 762550239ba6f1e09f4887bf1b27fd421745a589..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/torch_utils/models.py
+++ /dev/null
@@ -1,756 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# https://github.com/rosinality/stylegan2-pytorch/blob/master/model.py
-
-import math
-import random
-import functools
-import operator
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-import torch.nn.init as init
-from torch.autograd import Function
-
-from .op_edit import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
-
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
- k /= k.sum()
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer("kernel", kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer("kernel", kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer("kernel", kernel)
-
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- )
- return out
-
- def __repr__(self):
- return (
- f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},"
- f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})"
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
- else:
- self.bias = None
-
- self.activation = activation
-
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
- else:
- out = F.linear(
- input, self.weight * self.scale, bias=self.bias * self.lr_mul
- )
- return out
-
- def __repr__(self):
- return (
- f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})"
- )
-
-
-class ScaledLeakyReLU(nn.Module):
- def __init__(self, negative_slope=0.2):
- super().__init__()
- self.negative_slope = negative_slope
-
- def forward(self, input):
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
- return out * math.sqrt(2)
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = 1 / math.sqrt(fan_in)
- self.padding = kernel_size // 2
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
- self.demodulate = demodulate
-
- def __repr__(self):
- return (
- f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, "
- f"upsample={self.upsample}, downsample={self.downsample})"
- )
-
- def forward(self, input, style):
- batch, in_channel, height, width = input.shape
-
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
- weight = self.scale * self.weight * style
-
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
- out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=self.padding, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
-
-class NoiseInjection(nn.Module):
- def __init__(self):
- super().__init__()
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
- if noise is None:
- batch, _, height, width = image.shape
- noise = image.new_empty(batch, 1, height, width).normal_()
- return image + self.weight * noise
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
- self.input = nn.Parameter(torch.randn(1, channel, size, size // 2))
-
- def forward(self, input):
- batch = input.shape[0]
- out = self.input.repeat(batch, 1, 1, 1)
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- ):
- super().__init__()
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- )
- self.noise = NoiseInjection()
- self.activate = FusedLeakyReLU(out_channel)
-
- def forward(self, input, style, noise=None):
- out = self.conv(input, style)
- out = self.noise(out, noise=noise)
- out = self.activate(out)
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
- if upsample:
- self.upsample = Upsample(blur_kernel)
-
- self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, input, style, skip=None):
- out = self.conv(input, style)
- out = out + self.bias
-
- if skip is not None:
- skip = self.upsample(skip)
- out = out + skip
-
- return out
-
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=1,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- small=False,
- small_isaac=False,
- ):
- super().__init__()
-
- self.size = size
-
- if small and size > 64:
- raise ValueError("small only works for sizes <= 64")
-
- self.style_dim = style_dim
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation="fused_lrelu"
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- if small:
- self.channels = {
- 4: 64 * channel_multiplier,
- 8: 64 * channel_multiplier,
- 16: 64 * channel_multiplier,
- 32: 64 * channel_multiplier,
- 64: 64 * channel_multiplier,
- }
- elif small_isaac:
- self.channels = {4: 256, 8: 256, 16: 256, 32: 256, 64: 128, 128: 128}
- else:
- self.channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel
- )
- self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False)
-
- self.log_size = int(math.log(size, 2))
- self.num_layers = (self.log_size - 2) * 2 + 1
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
- self.noises = nn.Module()
-
- in_channel = self.channels[4]
-
- for layer_idx in range(self.num_layers):
- res = (layer_idx + 5) // 2
- shape = [1, 1, 2 ** res, 2 ** res // 2]
- self.noises.register_buffer(
- "noise_{}".format(layer_idx), torch.randn(*shape)
- )
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel, style_dim))
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2 // 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i // 2, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- def forward(
- self,
- styles,
- return_latents=False,
- return_features=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- real=False,
- ):
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, "noise_{}".format(i))
- for i in range(self.num_layers)
- ]
-
- if truncation < 1:
- # print('truncation_latent: ', truncation_latent.shape)
- if not real: #if type(styles) == list:
- style_t = []
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- ) # (-1.1162e-03-(-1.0914e-01))*0.8+(-1.0914e-01)
- styles = style_t
- else: # styles are latent (tensor: 1,18,512), for real PTI output
- truncation_latent = truncation_latent.repeat(18,1).unsqueeze(0) # (1,512) --> (1,18,512)
- styles = torch.add(truncation_latent,torch.mul(torch.sub(styles,truncation_latent),truncation))
- # print('now styles after truncation : ', styles)
- #if type(styles) == list and len(styles) < 2: # this if for input as list of [(1,512)]
- if not real:
- if len(styles) < 2:
- inject_index = self.n_latent
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- else:
- latent = styles[0]
- elif type(styles) == list:
- if inject_index is None:
- inject_index = 4
-
- latent = styles[0].unsqueeze(0)
- if latent.shape[1] == 1:
- latent = latent.repeat(1, inject_index, 1)
- else:
- latent = latent[:, :inject_index, :]
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
- latent = torch.cat([latent, latent2], 1)
- else: # input is tensor of size with torch.Size([1, 18, 512]), for real PTI output
- latent = styles
-
- # print(f'processed latent: {latent.shape}')
-
- features = {}
- out = self.input(latent)
- features["out_0"] = out
- out = self.conv1(out, latent[:, 0], noise=noise[0])
- features["conv1_0"] = out
-
- skip = self.to_rgb1(out, latent[:, 1])
- features["skip_0"] = skip
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- features["conv1_{}".format(i)] = out
- out = conv2(out, latent[:, i + 1], noise=noise2)
- features["conv2_{}".format(i)] = out
- skip = to_rgb(out, latent[:, i + 2], skip)
- features["skip_{}".format(i)] = skip
-
- i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
- elif return_features:
- return image, features
- else:
- return image, None
-
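-
-# A minimal generation sketch (size and style_dim are illustrative assumptions; note the
-# non-square ConstantInput above, which makes outputs half as wide as they are tall):
-#
-#   g = Generator(size=1024, style_dim=512, n_mlp=8)
-#   z = torch.randn(1, 512)
-#   img, _ = g([z])   # img: [1, 3, 1024, 512]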
-
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- )
- )
-
- if activate:
- if bias:
- layers.append(FusedLeakyReLU(out_channel))
- else:
- layers.append(ScaledLeakyReLU(0.2))
-
- super().__init__(*layers)
-
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
-
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
-
- return out
-
-
-class StyleDiscriminator(nn.Module):
- def __init__(
- self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], small=False
- ):
- super().__init__()
-
- if small:
- channels = {4: 64, 8: 64, 16: 64, 32: 64, 64: 64}
-
- else:
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- convs = [ConvLayer(3, channels[size], 1)]
-
- log_size = int(math.log(size, 2))
- in_channel = channels[size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
-
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
- self.final_linear = nn.Sequential(
- EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"),
- EqualLinear(channels[4], 1),
- )
-
-
- def forward(self, input):
- h = input
- h_list = []
-
- for index, blocklist in enumerate(self.convs):
- h = blocklist(h)
- h_list.append(h)
-
- out = h
- batch, channel, height, width = out.shape
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
- h_list.append(out)
-
- out = out.view(batch, -1)
- out = self.final_linear(out)
-
- return out, h_list
-
-
-class StyleEncoder(nn.Module):
- def __init__(self, size, w_dim=512):
- super().__init__()
-
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256,
- 128: 128,
- 256: 64,
- 512: 32,
- 1024: 16
- }
-
- self.w_dim = w_dim
- log_size = int(math.log(size, 2))
- convs = [ConvLayer(3, channels[size], 1)]
-
- in_channel = channels[size]
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
- convs.append(ResBlock(in_channel, out_channel))
- in_channel = out_channel
-
- convs.append(EqualConv2d(in_channel,2*self.w_dim, 4, padding=0, bias=False))
-
- self.convs = nn.Sequential(*convs)
-
- def forward(self, input):
- out = self.convs(input)
- # return out.view(len(input), self.n_latents, self.w_dim)
- reshaped = out.view(len(input), 2*self.w_dim)
- return reshaped[:,:self.w_dim], reshaped[:,self.w_dim:]
-
-def kaiming_init(m):
- if isinstance(m, (nn.Linear, nn.Conv2d)):
- init.kaiming_normal_(m.weight)
- if m.bias is not None:
- m.bias.data.fill_(0)
- elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
- m.weight.data.fill_(1)
- if m.bias is not None:
- m.bias.data.fill_(0)
-
-
-def normal_init(m):
- if isinstance(m, (nn.Linear, nn.Conv2d)):
- init.normal_(m.weight, 0, 0.02)
- if m.bias is not None:
- m.bias.data.fill_(0)
- elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
- m.weight.data.fill_(1)
- if m.bias is not None:
- m.bias.data.fill_(0)
\ No newline at end of file
diff --git a/spaces/Ekittl01/impira-layoutlm-document-qa/app.py b/spaces/Ekittl01/impira-layoutlm-document-qa/app.py
deleted file mode 100644
index c80208650f94f0a6bd291fdf0a78afaf1fcf318b..0000000000000000000000000000000000000000
--- a/spaces/Ekittl01/impira-layoutlm-document-qa/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/impira/layoutlm-document-qa").launch()
\ No newline at end of file
diff --git a/spaces/Enterprisium/Easy_GUI/config.py b/spaces/Enterprisium/Easy_GUI/config.py
deleted file mode 100644
index 5b72235b58b65ac629f49bcc4aad032b5b59d8d4..0000000000000000000000000000000000000000
--- a/spaces/Enterprisium/Easy_GUI/config.py
+++ /dev/null
@@ -1,204 +0,0 @@
-import argparse
-import sys
-import torch
-import json
-from multiprocessing import cpu_count
-
-global usefp16
-usefp16 = False
-
-
-def use_fp32_config():
- usefp16 = False
- device_capability = 0
- if torch.cuda.is_available():
- device = torch.device("cuda:0") # Assuming you have only one GPU (index 0).
- device_capability = torch.cuda.get_device_capability(device)[0]
- if device_capability >= 7:
- usefp16 = True
- for config_file in ["32k.json", "40k.json", "48k.json"]:
- with open(f"configs/{config_file}", "r") as d:
- data = json.load(d)
-
- if "train" in data and "fp16_run" in data["train"]:
- data["train"]["fp16_run"] = True
-
- with open(f"configs/{config_file}", "w") as d:
- json.dump(data, d, indent=4)
-
- print(f"Set fp16_run to true in {config_file}")
-
- with open(
- "trainset_preprocess_pipeline_print.py", "r", encoding="utf-8"
- ) as f:
- strr = f.read()
-
- strr = strr.replace("3.0", "3.7")
-
- with open(
- "trainset_preprocess_pipeline_print.py", "w", encoding="utf-8"
- ) as f:
- f.write(strr)
- else:
- for config_file in ["32k.json", "40k.json", "48k.json"]:
- with open(f"configs/{config_file}", "r") as f:
- data = json.load(f)
-
- if "train" in data and "fp16_run" in data["train"]:
- data["train"]["fp16_run"] = False
-
- with open(f"configs/{config_file}", "w") as d:
- json.dump(data, d, indent=4)
-
- print(f"Set fp16_run to false in {config_file}")
-
- with open(
- "trainset_preprocess_pipeline_print.py", "r", encoding="utf-8"
- ) as f:
- strr = f.read()
-
- strr = strr.replace("3.7", "3.0")
-
- with open(
- "trainset_preprocess_pipeline_print.py", "w", encoding="utf-8"
- ) as f:
- f.write(strr)
- else:
- print(
- "CUDA is not available. Make sure you have an NVIDIA GPU and CUDA installed."
- )
- return (usefp16, device_capability)
-
-
-class Config:
- def __init__(self):
- self.device = "cuda:0"
- self.is_half = True
- self.n_cpu = 0
- self.gpu_name = None
- self.gpu_mem = None
- (
- self.python_cmd,
- self.listen_port,
- self.iscolab,
- self.noparallel,
- self.noautoopen,
- self.paperspace,
- self.is_cli,
- ) = self.arg_parse()
-
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- @staticmethod
- def arg_parse() -> tuple:
- exe = sys.executable or "python"
- parser = argparse.ArgumentParser()
- parser.add_argument("--port", type=int, default=7865, help="Listen port")
- parser.add_argument("--pycmd", type=str, default=exe, help="Python command")
- parser.add_argument("--colab", action="store_true", help="Launch in colab")
- parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
- )
- parser.add_argument(
- "--noautoopen",
- action="store_true",
- help="Do not open in browser automatically",
- )
- parser.add_argument( # Fork Feature. Paperspace integration for web UI
- "--paperspace",
- action="store_true",
-            help="Note that this argument just shares a gradio link for the web UI, so it can also be used on non-local CLI systems.",
- )
- parser.add_argument( # Fork Feature. Embed a CLI into the infer-web.py
- "--is_cli",
- action="store_true",
- help="Use the CLI instead of setting up a gradio UI. This flag will launch an RVC text interface where you can execute functions from infer-web.py!",
- )
- cmd_opts = parser.parse_args()
-
- cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
-
- return (
- cmd_opts.pycmd,
- cmd_opts.port,
- cmd_opts.colab,
- cmd_opts.noparallel,
- cmd_opts.noautoopen,
- cmd_opts.paperspace,
- cmd_opts.is_cli,
- )
-
-    # has_mps is only available in nightly PyTorch (for now) and macOS 12.3+.
-    # check `getattr` and try it for compatibility
- @staticmethod
- def has_mps() -> bool:
- if not torch.backends.mps.is_available():
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- print("Found GPU", self.gpu_name, ", force to fp32")
- self.is_half = False
- else:
- print("Found GPU", self.gpu_name)
- use_fp32_config()
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- if self.gpu_mem <= 4:
- with open("trainset_preprocess_pipeline_print.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open("trainset_preprocess_pipeline_print.py", "w") as f:
- f.write(strr)
- elif self.has_mps():
- print("No supported Nvidia GPU found, use MPS instead")
- self.device = "mps"
- self.is_half = False
- use_fp32_config()
- else:
- print("No supported Nvidia GPU found, use CPU instead")
- self.device = "cpu"
- self.is_half = False
- use_fp32_config()
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
-            # Settings for 6 GB of VRAM
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
-            # Settings for 5 GB of VRAM
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
-        if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
diff --git a/spaces/Epitech/IA_NLP/app.py b/spaces/Epitech/IA_NLP/app.py
deleted file mode 100644
index 97555047cef6202daac2442f47d03d6cab6b853c..0000000000000000000000000000000000000000
--- a/spaces/Epitech/IA_NLP/app.py
+++ /dev/null
@@ -1,129 +0,0 @@
-from tensorflow import keras
-import streamlit as st
-import altair as alt
-import plotly.express as px
-
-import pandas as pd
-import numpy as np
-from datetime import datetime
-
-import joblib
-
-from google.cloud import storage
-from tempfile import TemporaryFile
-from csv import writer
-import os
-from dotenv import load_dotenv
-from nltk.stem import PorterStemmer
-from nltk.corpus import stopwords
-import re
-
-from tensorflow.keras.preprocessing.sequence import pad_sequences
-import nltk
-from tensorflow.keras.preprocessing.text import one_hot
-
-import pickle
-pkl_file = open('m_lb.pkl', 'rb')
-le_departure = pickle.load(pkl_file)
-pkl_file.close()
-model = keras.models.load_model('m_odel.h5')
-nltk.download('stopwords')
-stopwords = set(nltk.corpus.stopwords.words('english'))
-vocabSize = 11000
-max_len = 1160
-load_dotenv()
-
-emotions_emoji_dict = { "anger":"😠",
- "disgust":"🤮",
- "fear":"😨😱",
- "happy":"🤗",
- "joy":"😂",
- "neutral":"😐",
- "sad":"😔",
- "sadness":"😔",
- "shame":"😳",
- "surprise":"😮"
- }
-
-
-def predict_emotions(sentence):
-    sentence = sentence_cleaning(sentence)
-    prediction = model.predict(sentence)  # predict once and reuse the output
-    result = le_departure.inverse_transform(
-        np.argmax(prediction, axis=-1))[0]
-    proba = np.max(prediction)
-
-    return result, proba, get_all_result(prediction)
-
-
-def get_all_result(prediction):
-    results = {}  # probability -> emotion label (avoid shadowing the built-in dict)
-    for element in prediction:
-        for i in range(0, len(element)):
-            results[element[i]] = le_departure.inverse_transform([i])[0]
-    return results
-
-
-def sentence_cleaning(sentence):
- """Pre-processing sentence for prediction"""
- stemmer = PorterStemmer()
- corpus = []
- text = re.sub("[^a-zA-Z]", " ", sentence)
- text = text.lower()
- text = text.split()
- text = [stemmer.stem(word) for word in text if word not in stopwords]
- text = " ".join(text)
- corpus.append(text)
- one_hot_word = [one_hot(input_text=word, n=vocabSize) for word in corpus]
- pad = pad_sequences(sequences=one_hot_word, maxlen=max_len, padding='pre')
- return pad
-
-
-def main():
- st.title("🤮😨😱Emotion Classifier😂😳😮")
- menu = ["Home", "Monitor"]
- choice = st.sidebar.selectbox("Menu", menu)
- if choice == "Home":
- st.subheader("Home-Emotion In Text")
-
- with st.form(key='emotion_clf_form'):
- raw_text = st.text_area("Type Here")
- submit_text = st.form_submit_button(label='Submit')
-
- if submit_text:
- col1, col2 = st.columns(2)
-
-
- res, proba, total_result = predict_emotions(raw_text)
-
- with col1:
- st.success("Original Text")
- st.write(raw_text)
-
- st.success("Prediction")
- st.write("{}:{}".format(res, emotions_emoji_dict[res]))
- st.write("Confidence:{}".format(proba))
-
- with col2:
- source = pd.DataFrame({'Proba': list(total_result.keys()), 'Emotion': list(total_result.values())})
-
- fig = alt.Chart(source).mark_bar().encode(x='Emotion',y='Proba',color='Emotion')
- st.altair_chart(fig,use_container_width=True)
-
-
- else:
- st.subheader("About")
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Erala/QQsign/bin/unidbg-fetch-qsign.bat b/spaces/Erala/QQsign/bin/unidbg-fetch-qsign.bat
deleted file mode 100644
index 8b291e7303b0c07d14b714e5795473891363c85b..0000000000000000000000000000000000000000
--- a/spaces/Erala/QQsign/bin/unidbg-fetch-qsign.bat
+++ /dev/null
@@ -1,89 +0,0 @@
-@rem
-@rem Copyright 2015 the original author or authors.
-@rem
-@rem Licensed under the Apache License, Version 2.0 (the "License");
-@rem you may not use this file except in compliance with the License.
-@rem You may obtain a copy of the License at
-@rem
-@rem https://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-@rem
-
-@if "%DEBUG%" == "" @echo off
-@rem ##########################################################################
-@rem
-@rem unidbg-fetch-qsign startup script for Windows
-@rem
-@rem ##########################################################################
-
-@rem Set local scope for the variables with windows NT shell
-if "%OS%"=="Windows_NT" setlocal
-
-set DIRNAME=%~dp0
-if "%DIRNAME%" == "" set DIRNAME=.
-set APP_BASE_NAME=%~n0
-set APP_HOME=%DIRNAME%..
-
-@rem Resolve any "." and ".." in APP_HOME to make it shorter.
-for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
-
-@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script.
-set DEFAULT_JVM_OPTS=
-
-@rem Find java.exe
-if defined JAVA_HOME goto findJavaFromJavaHome
-
-set JAVA_EXE=java.exe
-%JAVA_EXE% -version >NUL 2>&1
-if "%ERRORLEVEL%" == "0" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:findJavaFromJavaHome
-set JAVA_HOME=%JAVA_HOME:"=%
-set JAVA_EXE=%JAVA_HOME%/bin/java.exe
-
-if exist "%JAVA_EXE%" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:execute
-@rem Setup the command line
-
-set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar
-
-
-@rem Execute unidbg-fetch-qsign
-"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %*
-
-:end
-@rem End local scope for the variables with windows NT shell
-if "%ERRORLEVEL%"=="0" goto mainEnd
-
-:fail
-rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of
-rem the _cmd.exe /c_ return code!
-if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1
-exit /b 1
-
-:mainEnd
-if "%OS%"=="Windows_NT" endlocal
-
-:omega
diff --git a/spaces/EuroPython2022/clickbaitonator/fudge/evaluate_clickbait.py b/spaces/EuroPython2022/clickbaitonator/fudge/evaluate_clickbait.py
deleted file mode 100644
index 476955aba7ea6ade2c9eaca9fcd959d92b0ae948..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/clickbaitonator/fudge/evaluate_clickbait.py
+++ /dev/null
@@ -1,200 +0,0 @@
-import os
-import random
-import time
-import pickle
-import math
-from argparse import ArgumentParser
-
-from typing import Iterable, List, Optional, Tuple
-
-from tqdm import tqdm
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from transformers import AutoTokenizer, AutoModelWithLMHead
-from torch import Tensor
-
-from fudge.data import Dataset
-from fudge.model import Model
-from fudge.util import num_params
-from fudge.constants import *
-
-
-
-tokenizer = AutoTokenizer.from_pretrained('google/pegasus-xsum')
-classifier_tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
-
-
-def main(args):
- with open(args.dataset_info, 'rb') as rf:
- dataset_info = pickle.load(rf)
-
- article_content = """Australian actor Guy Pearce will return for the iconic soap Neighbours finale on August 1 to reprise his role as Mike Young.
- Guy, 54, played the troubled Mike from 1986 to 1989, and is now set to make a comeback on the show after 33 years, Metro.co.uk reports.
- The star's character arcs explored the implications of domestic abuse, student-teacher relationships and dealing with loss of loved ones.
- Speaking to Metro.co.uk, Guy said: 'It is very exciting and surreal at the same time being back on set again, however it feels like coming home.
- 'It's where it all started for me professionally. I've been asked to come back on occasions over the years and wondered if it was the right thing
- to do, but once I knew the show was finishing, I knew I had to do it.'He added that there is 'nothing like being here all together again'
- , even though he's had a chance to catch-up with other cast members."""
-
- tokenizer.add_special_tokens({'pad_token': PAD_TOKEN})
- pad_id = tokenizer.encode(PAD_TOKEN)[0]
-
- #For loading Clickbait summarizer
- model = AutoModelWithLMHead.from_pretrained(args.model_string, return_dict=True).to(args.device)
-
- model.eval()
-
- checkpoint = torch.load(args.ckpt, map_location=args.device)
- model_args = checkpoint['args']
- conditioning_model = Model(model_args, pad_id, len(dataset_info.index2word)) # no need to get the glove embeddings when reloading since they're saved in model ckpt anyway
- conditioning_model.load_state_dict(checkpoint['state_dict'])
- conditioning_model = conditioning_model.to(args.device)
- conditioning_model.eval()
- print("=> loaded checkpoint '{}' (epoch {})"
- .format(args.ckpt, checkpoint['epoch']))
- print('num params', num_params(conditioning_model))
-
- while True:
- results = generate_clickbait(model,
- tokenizer,
- conditioning_model,
- [args.input_text],
- dataset_info,
- precondition_topk=args.precondition_topk,
- do_sample=args.do_sample,
- length_cutoff=args.length_cutoff,
- condition_lambda=args.condition_lambda,
- article_content=article_content,
- device=args.device)
- # print(results)
- import pdb; pdb.set_trace()
-
-
-def generate_clickbait(model,
-                    tokenizer,
-                    conditioning_model,
-                    input_text,
-                    dataset_info,
-                    precondition_topk,
-                    length_cutoff,
-                    do_sample=False,  # accepted from the CLI; decoding always samples from the re-ranked top-k
-                    condition_lambda=1.0,
-                    article_content=None,
-                    device='cuda'):
- with torch.no_grad():
- batch_size = len(input_text)
- # encoded_input_article = [tokenizer.encode(article_content, return_tensors='pt',add_special_tokens=False).to(device)] # batch x seq
- encoded_input_article = tokenizer(article_content, return_tensors='pt',add_special_tokens=False, max_length=512).to(device) # batch x seq
- # encoded_input_article = torch.cat(encoded_input_article, dim=0)
- # attention_mask = encoded_input_article.new_ones(encoded_input_article.shape).to(device)
-
- # CHANGE=ko
- encoded_input = tokenizer('', return_tensors='pt',add_special_tokens=False).to(device) # batch x seq
- # encoded_input = tokenizer(''+ input_text[0], return_tensors='pt',add_special_tokens=False).to(device) # batch x seq
- # encoded_input = torch.cat(encoded_input, dim=0)
- encoded_input = encoded_input['input_ids']
-
-
- lengths = torch.LongTensor([encoded_input.shape[1]]).to(device)
- # lengths = 1
-
- past = None
- use_cache = True
-
- # CHANGE
- # model_kwargs = {'encoder_outputs': model.get_encoder()(encoded_input_article, attention_mask=attention_mask)}
- # print(encoded_input_article)
- # print(encoded_input_article['input_ids'].shape, encoded_input_article['attention_mask'].shape)
- model_kwargs = {'encoder_outputs': model.get_encoder()(input_ids=encoded_input_article['input_ids'],
- attention_mask=encoded_input_article['attention_mask'],
- return_dict=True,
- output_attentions=False,
- output_hidden_states=False),
- }
-
- while lengths.max() < length_cutoff:
- model_inputs = model.prepare_inputs_for_generation(
- input_ids = encoded_input_article['input_ids'],
- decoder_input_ids=encoded_input,
- # past=past,
- attention_mask=encoded_input_article['attention_mask'],
- use_cache=use_cache,
- **model_kwargs
- )
-
- outputs = model(**model_inputs, return_dict=True)
- logits = outputs.logits[:, -1, :]
-
- if "past_key_values" in outputs:
- model_kwargs["past"] = outputs.past_key_values
-
- # logits = model(encoded_input)[0][:, -1, :] # batch x vocab
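-            # FUDGE-style re-ranking: take the generator's top-k candidate tokens, score
-            # each candidate continuation with the conditioning classifier, and add the
-            # lambda-weighted classifier log-probs to the LM logits before sampling.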
- top_logits, top_indices = logits.topk(precondition_topk, dim=1) # batch x topk
- new_input_candidates = torch.cat([encoded_input.unsqueeze(1).expand(-1, precondition_topk, -1), top_indices.unsqueeze(2)], dim=2) # batch x topk x seq+1
- expanded_lengths = (lengths + 1).unsqueeze(1).expand(batch_size, precondition_topk) # batch x topk
-
- if condition_lambda == 0:
- condition_logits = torch.zeros_like(top_logits).float()
- condition_logits = condition_logits.view(batch_size, precondition_topk, -1) # batch x topk x N
- else:
- decoded_outputs = tokenizer.batch_decode(new_input_candidates.view(-1, new_input_candidates.size(-1)), clean_up_tokenization_spaces=False)
- resulting_tokenization = classifier_tokenizer(decoded_outputs, add_special_tokens=False, padding='longest')
- encoded_with_classifier = resulting_tokenization['input_ids']
- attention_mask = torch.tensor(resulting_tokenization['attention_mask']).to(model.device)
- tplus1_candidates_classifier = torch.tensor(encoded_with_classifier).view(batch_size, precondition_topk, -1).to(model.device)
-
- condition_logits = conditioning_model(tplus1_candidates_classifier.flatten(0, 1), # batch*topk x seq+1
- expanded_lengths.flatten(0, 1), # batch*topk
- None,
- None,
- None,
- attention_mask=attention_mask
- )
- condition_logits = condition_logits.view(batch_size, precondition_topk, -1) # batch x topk x N
- condition_logits = condition_logits - torch.log(1 + torch.exp(condition_logits)) # get correct log probs
-
- condition_logits = torch.mean(condition_logits, dim=2)
- full_logits = top_logits + condition_logits * condition_lambda # batch x topk
- post_logits, post_indices = full_logits.topk(precondition_topk, dim=1)
- post_probs = F.softmax(post_logits, dim=1)
- # index_into_top_indices = post_indices[torch.arange(batch_size).to(post_indices.device), torch.multinomial(post_probs, 1).flatten()] # batch
- index_into_top_indices = post_indices[:, torch.multinomial(post_probs, 1).flatten()] # batch
-
- # next_indices = top_indices[torch.arange(batch_size).to(top_indices.device), index_into_top_indices] # batch
- next_indices = top_indices[:, index_into_top_indices] # batch
-
- # encoded_input = torch.cat([encoded_input, next_indices.unsqueeze(1)], dim=1) # batch x seq+1
- encoded_input = torch.cat([encoded_input, next_indices.squeeze(1)], dim=1)
- lengths = lengths + 1 # batch
-
-# print(tokenizer.decode(encoded_input[0], add_special_tokens=False))
- return [tokenizer.decode(s) for s in encoded_input]
-
-
-if __name__=='__main__':
- parser = ArgumentParser()
-
- # DATA
- parser.add_argument('--ckpt', type=str, required=True)
- parser.add_argument('--dataset_info', type=str, required=True, help='saved dataset info')
- parser.add_argument('--model_string', type=str, default='Helsinki-NLP/opus-mt-es-en')
-
- parser.add_argument('--in_file', type=str, default=None, required=True, help='text to run pred on')
-
- parser.add_argument('--precondition_topk', type=int, default=200, help='consider top k outputs from text generation at each step before conditioning and re-pruning')
- parser.add_argument('--do_sample', action='store_true', default=False, help='sample instead of greedy')
- parser.add_argument('--condition_lambda', type=float, default=1.0, help='lambda weight on conditioning model')
- parser.add_argument('--length_cutoff', type=int, default=512, help='max length')
-
- parser.add_argument('--seed', type=int, default=1, help='random seed')
- parser.add_argument('--device', type=str, default='cuda', choices=['cpu', 'cuda'])
- parser.add_argument('--debug', action='store_true', default=False)
-
- args = parser.parse_args()
-
- random.seed(args.seed)
- np.random.seed(args.seed)
- torch.manual_seed(args.seed)
-
- main(args)
diff --git a/spaces/EuroSciPy2022/timeseries-forecasting-with-prophet/README.md b/spaces/EuroSciPy2022/timeseries-forecasting-with-prophet/README.md
deleted file mode 100644
index 0a84cb90e36913e42ad583673150a229d6e76856..0000000000000000000000000000000000000000
--- a/spaces/EuroSciPy2022/timeseries-forecasting-with-prophet/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Timeseries Forecasting With Prophet
-emoji: 📈
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.1.7
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Fengbinbin/gpt-academic/core_functional.py b/spaces/Fengbinbin/gpt-academic/core_functional.py
deleted file mode 100644
index 536ccb609c38cbbebfda4ba17bd51a78857d711e..0000000000000000000000000000000000000000
--- a/spaces/Fengbinbin/gpt-academic/core_functional.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# The 'primary' color corresponds to primary_hue in theme.py
-# The 'secondary' color corresponds to neutral_hue in theme.py
-# The 'stop' color corresponds to color_er in theme.py
-# The default button color is secondary
-from toolbox import clear_line_break
-
-
-def get_core_functions():
- return {
- "英语学术润色": {
-            # Prompt prefix
- "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
- r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
-                  r"Furthermore, list all modifications and explain the reasons for them in a markdown table." + "\n\n",
-            # Prompt suffix
- "Suffix": r"",
- "Color": r"secondary", # 按钮颜色
- },
- "中文学术润色": {
- "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
- r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n",
- "Suffix": r"",
- },
- "查找语法错误": {
- "Prefix": r"Can you help me ensure that the grammar and the spelling is correct? " +
- r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." +
- r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " +
-                  r"put the original text in the first column, " +
- r"put the corrected text in the second column and highlight the key words you fixed.""\n"
- r"Example:""\n"
- r"Paragraph: How is you? Do you knows what is it?""\n"
- r"| Original sentence | Corrected sentence |""\n"
- r"| :--- | :--- |""\n"
- r"| How **is** you? | How **are** you? |""\n"
- r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n"
- r"Below is a paragraph from an academic paper. "
- r"You need to report all grammar and spelling mistakes as the example before."
- + "\n\n",
- "Suffix": r"",
- "PreProcess": clear_line_break, # 预处理:清除换行符
- },
- "中译英": {
- "Prefix": r"Please translate following sentence to English:" + "\n\n",
- "Suffix": r"",
- },
- "学术中英互译": {
- "Prefix": r"I want you to act as a scientific English-Chinese translator, " +
- r"I will provide you with some paragraphs in one language " +
- r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
- r"Do not repeat the original provided paragraphs after translation. " +
- r"You should use artificial intelligence tools, " +
- r"such as natural language processing, and rhetorical knowledge " +
- r"and experience about effective writing techniques to reply. " +
- r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n",
- "Suffix": "",
- "Color": "secondary",
- },
- "英译中": {
- "Prefix": r"翻译成地道的中文:" + "\n\n",
- "Suffix": r"",
- },
- "找图片": {
- "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
- r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n",
- "Suffix": r"",
- },
- "解释代码": {
- "Prefix": r"请解释以下代码:" + "\n```\n",
- "Suffix": "\n```\n",
- },
- }
diff --git a/spaces/Foremost/NER/README.md b/spaces/Foremost/NER/README.md
deleted file mode 100644
index 0e1e379329c4c5daf76ad2ea157fd00f51782ad9..0000000000000000000000000000000000000000
--- a/spaces/Foremost/NER/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: NER
-emoji: 🦀
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/server.py b/spaces/Freiburg-AI-Research/dermoscopic_image_generation/server.py
deleted file mode 100644
index 349bd116a310c8f3ae4e95471b4431c75420432e..0000000000000000000000000000000000000000
--- a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/server.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import base64
-from io import BytesIO
-from fastapi import FastAPI
-
-from PIL import Image
-import torch as th
-
-from glide_text2im.download import load_checkpoint
-from glide_text2im.model_creation import (
- create_model_and_diffusion,
- model_and_diffusion_defaults,
- model_and_diffusion_defaults_upsampler
-)
-
-print("Loading models...")
-app = FastAPI()
-
-# This notebook supports both CPU and GPU.
-# On CPU, generating one sample may take on the order of 20 minutes.
-# On a GPU, it should be under a minute.
-
-has_cuda = th.cuda.is_available()
-device = th.device('cpu' if not has_cuda else 'cuda')
-
-# Create base model.
-options = model_and_diffusion_defaults()
-options['use_fp16'] = has_cuda
-options['timestep_respacing'] = '100' # use 100 diffusion steps for fast sampling
-model, diffusion = create_model_and_diffusion(**options)
-model.eval()
-if has_cuda:
- model.convert_to_fp16()
-model.to(device)
-model.load_state_dict(load_checkpoint('base', device))
-print('total base parameters', sum(x.numel() for x in model.parameters()))
-
-# Create upsampler model.
-options_up = model_and_diffusion_defaults_upsampler()
-options_up['use_fp16'] = has_cuda
-options_up['timestep_respacing'] = 'fast27' # use 27 diffusion steps for very fast sampling
-model_up, diffusion_up = create_model_and_diffusion(**options_up)
-model_up.eval()
-if has_cuda:
- model_up.convert_to_fp16()
-model_up.to(device)
-model_up.load_state_dict(load_checkpoint('upsample', device))
-print('total upsampler parameters', sum(x.numel() for x in model_up.parameters()))
-
-
-def get_images(batch: th.Tensor):
-    """ Convert a batch of samples into a single tiled PIL image. """
-    scaled = ((batch + 1)*127.5).round().clamp(0,255).to(th.uint8).cpu()
-    reshaped = scaled.permute(2, 0, 3, 1).reshape([batch.shape[2], -1, 3])
-    return Image.fromarray(reshaped.numpy())
-
-
-# Create a classifier-free guidance sampling function
-guidance_scale = 3.0
-
-def model_fn(x_t, ts, **kwargs):
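-    # Classifier-free guidance: the batch holds conditioned and unconditioned copies of
-    # the same latents; extrapolate the noise prediction from the unconditioned estimate
-    # toward the conditioned one by guidance_scale and reuse it for both halves.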
- half = x_t[: len(x_t) // 2]
- combined = th.cat([half, half], dim=0)
- model_out = model(combined, ts, **kwargs)
- eps, rest = model_out[:, :3], model_out[:, 3:]
- cond_eps, uncond_eps = th.split(eps, len(eps) // 2, dim=0)
- half_eps = uncond_eps + guidance_scale * (cond_eps - uncond_eps)
- eps = th.cat([half_eps, half_eps], dim=0)
- return th.cat([eps, rest], dim=1)
-
-
-@app.get("/")
-def read_root():
- return {"glide!"}
-
-@app.get("/{generate}")
-def sample(prompt):
- # Sampling parameters
- batch_size = 1
-
- # Tune this parameter to control the sharpness of 256x256 images.
- # A value of 1.0 is sharper, but sometimes results in grainy artifacts.
- upsample_temp = 0.997
-
- ##############################
- # Sample from the base model #
- ##############################
-
- # Create the text tokens to feed to the model.
- tokens = model.tokenizer.encode(prompt)
- tokens, mask = model.tokenizer.padded_tokens_and_mask(
- tokens, options['text_ctx']
- )
-
- # Create the classifier-free guidance tokens (empty)
- full_batch_size = batch_size * 2
- uncond_tokens, uncond_mask = model.tokenizer.padded_tokens_and_mask(
- [], options['text_ctx']
- )
-
- # Pack the tokens together into model kwargs.
- model_kwargs = dict(
- tokens=th.tensor(
- [tokens] * batch_size + [uncond_tokens] * batch_size, device=device
- ),
- mask=th.tensor(
- [mask] * batch_size + [uncond_mask] * batch_size,
- dtype=th.bool,
- device=device,
- ),
- )
-
- # Sample from the base model.
- model.del_cache()
- samples = diffusion.p_sample_loop(
- model_fn,
- (full_batch_size, 3, options["image_size"], options["image_size"]),
- device=device,
- clip_denoised=True,
- progress=True,
- model_kwargs=model_kwargs,
- cond_fn=None,
- )[:batch_size]
- model.del_cache()
-
-
- ##############################
- # Upsample the 64x64 samples #
- ##############################
-
- tokens = model_up.tokenizer.encode(prompt)
- tokens, mask = model_up.tokenizer.padded_tokens_and_mask(
- tokens, options_up['text_ctx']
- )
-
- # Create the model conditioning dict.
- model_kwargs = dict(
- # Low-res image to upsample.
- low_res=((samples+1)*127.5).round()/127.5 - 1,
-
- # Text tokens
- tokens=th.tensor(
- [tokens] * batch_size, device=device
- ),
- mask=th.tensor(
- [mask] * batch_size,
- dtype=th.bool,
- device=device,
- ),
- )
-
- # Sample from the base model.
- model_up.del_cache()
- up_shape = (batch_size, 3, options_up["image_size"], options_up["image_size"])
- up_samples = diffusion_up.ddim_sample_loop(
- model_up,
- up_shape,
- noise=th.randn(up_shape, device=device) * upsample_temp,
- device=device,
- clip_denoised=True,
- progress=True,
- model_kwargs=model_kwargs,
- cond_fn=None,
- )[:batch_size]
- model_up.del_cache()
-
- # Show the output
- image = get_images(up_samples)
- image = to_base64(image)
- return {"image": image}
-
-
-def to_base64(pil_image):
-    buffered = BytesIO()
-    pil_image.save(buffered, format="JPEG")
-    return base64.b64encode(buffered.getvalue()).decode("utf-8")  # return a JSON-friendly str, not bytes
\ No newline at end of file
diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/synthesis_engine/core_wrapper.py b/spaces/GaenKoki/voicevox/voicevox_engine/synthesis_engine/core_wrapper.py
deleted file mode 100644
index fe8f9778707a7476f30ab5b80f1ed1e1f759b8a0..0000000000000000000000000000000000000000
--- a/spaces/GaenKoki/voicevox/voicevox_engine/synthesis_engine/core_wrapper.py
+++ /dev/null
@@ -1,538 +0,0 @@
-import os
-import platform
-from ctypes import CDLL, POINTER, c_bool, c_char_p, c_float, c_int, c_long
-from ctypes.util import find_library
-from dataclasses import dataclass
-from enum import Enum, auto
-from pathlib import Path
-from typing import List, Optional
-
-import numpy as np
-
-
-class OldCoreError(Exception):
- """古いコアが使用されている場合に発生するエラー"""
-
-
-class CoreError(Exception):
- """コア呼び出しで発生したエラー"""
-
-
-def load_runtime_lib(runtime_dirs: List[Path]):
- if platform.system() == "Windows":
-        # Load DirectML.dll explicitly: Windows may otherwise prefer its bundled copy, which is incompatible with onnxruntime
-        # Reference 1. https://github.com/microsoft/onnxruntime/issues/3360
-        # Reference 2. https://tadaoyamaoka.hatenablog.com/entry/2020/06/07/113616
- lib_file_names = [
- "torch_cpu.dll",
- "torch_cuda.dll",
- "DirectML.dll",
- "onnxruntime.dll",
- ]
- lib_names = ["torch_cpu", "torch_cuda", "onnxruntime"]
- elif platform.system() == "Linux":
- lib_file_names = ["libtorch.so", "libonnxruntime.so"]
- lib_names = ["torch", "onnxruntime"]
- elif platform.system() == "Darwin":
- lib_file_names = ["libonnxruntime.dylib"]
- lib_names = ["onnxruntime"]
- else:
- raise RuntimeError("不明なOSです")
- for lib_path in runtime_dirs:
- for file_name in lib_file_names:
- try:
- CDLL(str((lib_path / file_name).resolve(strict=True)))
- except OSError:
- pass
- for lib_name in lib_names:
- try:
- CDLL(find_library(lib_name))
- except (OSError, TypeError):
- pass
-
-
-class GPUType(Enum):
-    # NONE means that only the CPU is supported
- NONE = auto()
- CUDA = auto()
- DIRECT_ML = auto()
-
-
-@dataclass(frozen=True)
-class CoreInfo:
- name: str
- platform: str
- arch: str
- core_type: str
- gpu_type: GPUType
-
-
-# Information about cores released before version 0.12
-CORE_INFOS = [
- # Windows
- CoreInfo(
- name="core.dll",
- platform="Windows",
- arch="x64",
- core_type="libtorch",
- gpu_type=GPUType.CUDA,
- ),
- CoreInfo(
- name="core_cpu.dll",
- platform="Windows",
- arch="x64",
- core_type="libtorch",
- gpu_type=GPUType.NONE,
- ),
- CoreInfo(
- name="core_gpu_x64_nvidia.dll",
- platform="Windows",
- arch="x64",
- core_type="onnxruntime",
- gpu_type=GPUType.CUDA,
- ),
- CoreInfo(
- name="core_gpu_x64_directml.dll",
- platform="Windows",
- arch="x64",
- core_type="onnxruntime",
- gpu_type=GPUType.DIRECT_ML,
- ),
- CoreInfo(
- name="core_cpu_x64.dll",
- platform="Windows",
- arch="x64",
- core_type="onnxruntime",
- gpu_type=GPUType.NONE,
- ),
- CoreInfo(
- name="core_cpu_x86.dll",
- platform="Windows",
- arch="x86",
- core_type="onnxruntime",
- gpu_type=GPUType.NONE,
- ),
- CoreInfo(
- name="core_gpu_x86_directml.dll",
- platform="Windows",
- arch="x86",
- core_type="onnxruntime",
- gpu_type=GPUType.DIRECT_ML,
- ),
- CoreInfo(
- name="core_cpu_arm.dll",
- platform="Windows",
- arch="armv7l",
- core_type="onnxruntime",
- gpu_type=GPUType.NONE,
- ),
- CoreInfo(
- name="core_gpu_arm_directml.dll",
- platform="Windows",
- arch="armv7l",
- core_type="onnxruntime",
- gpu_type=GPUType.DIRECT_ML,
- ),
- CoreInfo(
- name="core_cpu_arm64.dll",
- platform="Windows",
- arch="aarch64",
- core_type="onnxruntime",
- gpu_type=GPUType.NONE,
- ),
- CoreInfo(
- name="core_gpu_arm64_directml.dll",
- platform="Windows",
- arch="aarch64",
- core_type="onnxruntime",
- gpu_type=GPUType.DIRECT_ML,
- ),
- # Linux
- CoreInfo(
- name="libcore.so",
- platform="Linux",
- arch="x64",
- core_type="libtorch",
- gpu_type=GPUType.CUDA,
- ),
- CoreInfo(
- name="libcore_cpu.so",
- platform="Linux",
- arch="x64",
- core_type="libtorch",
- gpu_type=GPUType.NONE,
- ),
- CoreInfo(
- name="libcore_gpu_x64_nvidia.so",
- platform="Linux",
- arch="x64",
- core_type="onnxruntime",
- gpu_type=GPUType.CUDA,
- ),
- CoreInfo(
- name="libcore_cpu_x64.so",
- platform="Linux",
- arch="x64",
- core_type="onnxruntime",
- gpu_type=GPUType.NONE,
- ),
- CoreInfo(
- name="libcore_cpu_armhf.so",
- platform="Linux",
- arch="armv7l",
- core_type="onnxruntime",
- gpu_type=GPUType.NONE,
- ),
- CoreInfo(
- name="libcore_cpu_arm64.so",
- platform="Linux",
- arch="aarch64",
- core_type="onnxruntime",
- gpu_type=GPUType.NONE,
- ),
- # macOS
- CoreInfo(
- name="libcore_cpu_universal2.dylib",
- platform="Darwin",
- arch="universal",
- core_type="onnxruntime",
- gpu_type=GPUType.NONE,
- ),
-]
-
-
-# Dictionary of core names for version 0.12 and later
-# - core name in versions 0.12 and 0.13: core
-# - core name from version 0.14 onward: voicevox_core
-CORENAME_DICT = {
- "Windows": ("voicevox_core.dll", "core.dll"),
- "Linux": ("libvoicevox_core.so", "libcore.so"),
- "Darwin": ("libvoicevox_core.dylib", "libcore.dylib"),
-}
-
-
-def find_version_0_12_core_or_later(core_dir: Path) -> Optional[str]:
- """
- core_dir で指定したディレクトリにあるコアライブラリが Version 0.12 以降である場合、
- 見つかった共有ライブラリの名前を返す。
-
- Version 0.12 以降と判定する条件は、
-
- - core_dir に metas.json が存在しない
- - コアライブラリの名前が CORENAME_DICT の定義に従っている
-
- の両方が真のときである。
- cf. https://github.com/VOICEVOX/voicevox_engine/issues/385
- """
- if (core_dir / "metas.json").exists():
- return None
-
- for core_name in CORENAME_DICT[platform.system()]:
- if (core_dir / core_name).is_file():
- return core_name
-
- return None
-
-
-def get_arch_name() -> Optional[str]:
- """
- platform.machine() が特定のアーキテクチャ上で複数パターンの文字列を返し得るので、
- 一意な文字列に変換する
- サポート外のアーキテクチャである場合、None を返す
- """
- machine = platform.machine()
- if machine == "x86_64" or machine == "x64" or machine == "AMD64":
- return "x64"
- elif machine == "i386" or machine == "x86":
- return "x86"
- elif machine == "arm64":
- return "aarch64"
- elif machine in ["armv7l", "aarch64"]:
- return machine
- else:
- return None
-
-
-def get_core_name(
- arch_name: str,
- platform_name: str,
- model_type: str,
- gpu_type: GPUType,
-) -> Optional[str]:
- if platform_name == "Darwin":
- if gpu_type == GPUType.NONE and (arch_name == "x64" or arch_name == "aarch64"):
- arch_name = "universal"
- else:
- return None
- for core_info in CORE_INFOS:
- if (
- core_info.platform == platform_name
- and core_info.arch == arch_name
- and core_info.core_type == model_type
- and core_info.gpu_type == gpu_type
- ):
- return core_info.name
- return None
-
-
-def get_suitable_core_name(
- model_type: str,
- gpu_type: GPUType,
-) -> Optional[str]:
- arch_name = get_arch_name()
- if arch_name is None:
- return None
- platform_name = platform.system()
- return get_core_name(arch_name, platform_name, model_type, gpu_type)
-
-
-def check_core_type(core_dir: Path) -> Optional[str]:
-    # The libtorch build does not support DirectML, so gpu_type=GPUType.DIRECT_ML is not included here
- libtorch_core_names = [
- get_suitable_core_name("libtorch", gpu_type=GPUType.CUDA),
- get_suitable_core_name("libtorch", gpu_type=GPUType.NONE),
- ]
- onnxruntime_core_names = [
- get_suitable_core_name("onnxruntime", gpu_type=GPUType.CUDA),
- get_suitable_core_name("onnxruntime", gpu_type=GPUType.DIRECT_ML),
- get_suitable_core_name("onnxruntime", gpu_type=GPUType.NONE),
- ]
- if any([(core_dir / name).is_file() for name in libtorch_core_names if name]):
- return "libtorch"
- elif any([(core_dir / name).is_file() for name in onnxruntime_core_names if name]):
- return "onnxruntime"
- else:
- return None
-
-
-def load_core(core_dir: Path, use_gpu: bool) -> CDLL:
- core_name = find_version_0_12_core_or_later(core_dir)
- if core_name:
- try:
-            # NOTE: The name argument of the CDLL constructor must be passed as a string.
-            # On Windows, passing a PathLike object causes initialization to fail.
- return CDLL(str((core_dir / core_name).resolve(strict=True)))
- except OSError as err:
- raise RuntimeError(f"コアの読み込みに失敗しました:{err}")
-
- model_type = check_core_type(core_dir)
- if model_type is None:
- raise RuntimeError("コアが見つかりません")
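-    # When GPU use is requested (or the core is onnxruntime), try the CUDA core first,
-    # then the DirectML core, and finally fall back to the CPU core; each candidate is
-    # opened with ctypes.CDLL and a failed load falls through to the next one.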
- if use_gpu or model_type == "onnxruntime":
- core_name = get_suitable_core_name(model_type, gpu_type=GPUType.CUDA)
- if core_name:
- try:
- return CDLL(str((core_dir / core_name).resolve(strict=True)))
- except OSError:
- pass
- core_name = get_suitable_core_name(model_type, gpu_type=GPUType.DIRECT_ML)
- if core_name:
- try:
- return CDLL(str((core_dir / core_name).resolve(strict=True)))
- except OSError:
- pass
- core_name = get_suitable_core_name(model_type, gpu_type=GPUType.NONE)
- if core_name:
- try:
- return CDLL(str((core_dir / core_name).resolve(strict=True)))
- except OSError as err:
- if model_type == "libtorch":
- core_name = get_suitable_core_name(model_type, gpu_type=GPUType.CUDA)
- if core_name:
- try:
- return CDLL(str((core_dir / core_name).resolve(strict=True)))
- except OSError as err_:
- err = err_
- raise RuntimeError(f"コアの読み込みに失敗しました:{err}")
- else:
- raise RuntimeError(f"このコンピュータのアーキテクチャ {platform.machine()} で利用可能なコアがありません")
-
-
-class CoreWrapper:
- def __init__(
- self,
- use_gpu: bool,
- core_dir: Path,
- cpu_num_threads: int = 0,
- load_all_models: bool = False,
- ) -> None:
-
- self.core = load_core(core_dir, use_gpu)
-
- self.core.initialize.restype = c_bool
- self.core.metas.restype = c_char_p
- self.core.yukarin_s_forward.restype = c_bool
- self.core.yukarin_sa_forward.restype = c_bool
- self.core.decode_forward.restype = c_bool
- self.core.last_error_message.restype = c_char_p
-
- self.exist_supported_devices = False
- self.exist_finalize = False
- exist_cpu_num_threads = False
- self.exist_load_model = False
- self.exist_is_model_loaded = False
-
- is_version_0_12_core_or_later = (
- find_version_0_12_core_or_later(core_dir) is not None
- )
- if is_version_0_12_core_or_later:
- model_type = "onnxruntime"
- self.exist_load_model = True
- self.exist_is_model_loaded = True
- self.core.load_model.argtypes = (c_long,)
- self.core.load_model.restype = c_bool
- self.core.is_model_loaded.argtypes = (c_long,)
- self.core.is_model_loaded.restype = c_bool
- else:
- model_type = check_core_type(core_dir)
- assert model_type is not None
-
- if model_type == "onnxruntime":
- self.core.supported_devices.restype = c_char_p
- self.core.finalize.restype = None
- self.exist_supported_devices = True
- self.exist_finalize = True
- exist_cpu_num_threads = True
-
- self.core.yukarin_s_forward.argtypes = (
- c_int,
- POINTER(c_long),
- POINTER(c_long),
- POINTER(c_float),
- )
- self.core.yukarin_sa_forward.argtypes = (
- c_int,
- POINTER(c_long),
- POINTER(c_long),
- POINTER(c_long),
- POINTER(c_long),
- POINTER(c_long),
- POINTER(c_long),
- POINTER(c_long),
- POINTER(c_float),
- )
- self.core.decode_forward.argtypes = (
- c_int,
- c_int,
- POINTER(c_float),
- POINTER(c_float),
- POINTER(c_long),
- POINTER(c_float),
- )
-
- cwd = os.getcwd()
- os.chdir(core_dir)
- try:
- if is_version_0_12_core_or_later:
- self.assert_core_success(
- self.core.initialize(use_gpu, cpu_num_threads, load_all_models)
- )
- elif exist_cpu_num_threads:
- self.assert_core_success(
- self.core.initialize(".", use_gpu, cpu_num_threads)
- )
- else:
- self.assert_core_success(self.core.initialize(".", use_gpu))
- finally:
- os.chdir(cwd)
-
- def metas(self) -> str:
- return self.core.metas().decode("utf-8")
-
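-    # The *_forward wrappers below hand numpy buffers to the C core through ctypes
-    # pointers; output arrays are allocated here and filled in place by the core.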
- def yukarin_s_forward(
- self,
- length: int,
- phoneme_list: np.ndarray,
- speaker_id: np.ndarray,
- ) -> np.ndarray:
- output = np.zeros((length,), dtype=np.float32)
- self.assert_core_success(
- self.core.yukarin_s_forward(
- c_int(length),
- phoneme_list.ctypes.data_as(POINTER(c_long)),
- speaker_id.ctypes.data_as(POINTER(c_long)),
- output.ctypes.data_as(POINTER(c_float)),
- )
- )
- return output
-
- def yukarin_sa_forward(
- self,
- length: int,
- vowel_phoneme_list: np.ndarray,
- consonant_phoneme_list: np.ndarray,
- start_accent_list: np.ndarray,
- end_accent_list: np.ndarray,
- start_accent_phrase_list: np.ndarray,
- end_accent_phrase_list: np.ndarray,
- speaker_id: np.ndarray,
- ) -> np.ndarray:
- output = np.empty(
- (
- len(speaker_id),
- length,
- ),
- dtype=np.float32,
- )
- self.assert_core_success(
- self.core.yukarin_sa_forward(
- c_int(length),
- vowel_phoneme_list.ctypes.data_as(POINTER(c_long)),
- consonant_phoneme_list.ctypes.data_as(POINTER(c_long)),
- start_accent_list.ctypes.data_as(POINTER(c_long)),
- end_accent_list.ctypes.data_as(POINTER(c_long)),
- start_accent_phrase_list.ctypes.data_as(POINTER(c_long)),
- end_accent_phrase_list.ctypes.data_as(POINTER(c_long)),
- speaker_id.ctypes.data_as(POINTER(c_long)),
- output.ctypes.data_as(POINTER(c_float)),
- )
- )
- return output
-
- def decode_forward(
- self,
- length: int,
- phoneme_size: int,
- f0: np.ndarray,
- phoneme: np.ndarray,
- speaker_id: np.ndarray,
- ) -> np.ndarray:
- output = np.empty((length * 256,), dtype=np.float32)
- self.assert_core_success(
- self.core.decode_forward(
- c_int(length),
- c_int(phoneme_size),
- f0.ctypes.data_as(POINTER(c_float)),
- phoneme.ctypes.data_as(POINTER(c_float)),
- speaker_id.ctypes.data_as(POINTER(c_long)),
- output.ctypes.data_as(POINTER(c_float)),
- )
- )
- return output
-
- def supported_devices(self) -> str:
- if self.exist_supported_devices:
- return self.core.supported_devices().decode("utf-8")
- raise OldCoreError
-
- def finalize(self) -> None:
- if self.exist_finalize:
- self.core.finalize()
- return
- raise OldCoreError
-
-    def load_model(self, speaker_id: int) -> None:
-        if self.exist_load_model:
-            self.assert_core_success(self.core.load_model(c_long(speaker_id)))
-            return
-        raise OldCoreError
-
- def is_model_loaded(self, speaker_id: int) -> bool:
- if self.exist_is_model_loaded:
- return self.core.is_model_loaded(c_long(speaker_id))
- raise OldCoreError
-
- def assert_core_success(self, result: bool) -> None:
- if not result:
- raise CoreError(
- self.core.last_error_message().decode("utf-8", "backslashreplace")
- )
diff --git a/spaces/Gen-Sim/Gen-Sim/misc/compute_embedding_neighbor_tasks.py b/spaces/Gen-Sim/Gen-Sim/misc/compute_embedding_neighbor_tasks.py
deleted file mode 100644
index 96546131f18e78adba96fae954fa1e4fbc8e6759..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/misc/compute_embedding_neighbor_tasks.py
+++ /dev/null
@@ -1,189 +0,0 @@
-import torch
-import torch.nn
-import torchvision.models as models
-from copy import deepcopy
-import cv2
-
-import cv2
-import numpy as np
-import sys
-import itertools
-import os
-import IPython
-import matplotlib
-matplotlib.use("Agg")
-
-import matplotlib.pyplot as plt
-import pandas as pd
-
-import openai
-from sklearn.manifold import TSNE
-from sklearn.decomposition import PCA, KernelPCA
-import seaborn as sns
-
-import time
-from matplotlib.offsetbox import OffsetImage, AnnotationBbox
-import colorsys
-from torchvision import datasets
-import argparse
-import matplotlib.patheffects as PathEffects
-from scipy.spatial import cKDTree
-
-sns.set_style("white")
-sns.set_palette("muted")
-
-font = {
- "size": 22,
-}
-
-matplotlib.rc("font", **font)
-sns.set_context("paper", font_scale=3.0)
-
-
-plt_param = {'legend.fontsize': 60,
- 'axes.labelsize': 80,
- 'axes.titlesize':80,
- 'font.size' : 80 ,
- 'xtick.labelsize':80,
- 'ytick.labelsize':80,
- 'lines.linewidth': 10,
- 'lines.color': (0,0,0)}
-
-plt.rcParams.update(plt_param)
-
-openai.api_key ="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"
-GPT_MODEL = "gpt4"
-EMBEDDING_MODEL = "text-embedding-ada-002"
-ORIGINAL_NAMES = [
- # demo conditioned
- 'align-box-corner',
- 'assembling-kits',
- 'assembling-kits-easy',
- 'block-insertion',
- 'block-insertion-easy',
- 'block-insertion-nofixture',
- 'block-insertion-sixdof',
- 'block-insertion-translation',
- 'manipulating-rope',
- 'packing-boxes',
- 'palletizing-boxes',
- 'place-red-in-green',
- 'stack-block-pyramid',
- 'sweeping-piles',
- 'towers-of-hanoi',
- 'gen-task',
- # goal conditioned
- 'align-rope',
- 'assembling-kits-seq',
- 'assembling-kits-seq-seen-colors',
- 'assembling-kits-seq-unseen-colors',
- 'assembling-kits-seq-full',
- 'packing-shapes',
- 'packing-boxes-pairs',
- 'packing-boxes-pairs-seen-colors',
- 'packing-boxes-pairs-unseen-colors',
- 'packing-boxes-pairs-full',
- 'packing-seen-google-objects-seq',
- 'packing-unseen-google-objects-seq',
- 'packing-seen-google-objects-group',
- 'packing-unseen-google-objects-group',
- 'put-block-in-bowl',
- 'put-block-in-bowl-seen-colors',
- 'put-block-in-bowl-unseen-colors',
- 'put-block-in-bowl-full',
- 'stack-block-pyramid-seq',
- 'stack-block-pyramid-seq-seen-colors',
- 'stack-block-pyramid-seq-unseen-colors',
- 'stack-block-pyramid-seq-full',
- 'separating-piles',
- 'separating-piles-seen-colors',
- 'separating-piles-unseen-colors',
- 'separating-piles-full',
- 'towers-of-hanoi-seq',
- 'towers-of-hanoi-seq-seen-colors',
- 'towers-of-hanoi-seq-unseen-colors',
- 'towers-of-hanoi-seq-full',
- ]
-
-
-def normalize_numpy_array(arr):
- return arr / (arr.max(axis=-1, keepdims=True) - arr.min(axis=-1, keepdims=True))
-
-
-def compute_embedding(response):
- for _ in range(3):
- try:
- response_embedding = openai.Embedding.create(
- model=EMBEDDING_MODEL,
- input=response,
- )
-
- response_embedding = np.array(response_embedding["data"][0]['embedding'])
- return response_embedding
- except Exception as e:
- print(e)
-
-def find_cliport_neighbor(kdtree, latents, label_sets):
- closest_embeddings, closest_idx = kdtree.query(latents, k=78)
- for i, idx in enumerate(closest_idx[0][1:]):
- s_replaced = label_sets[idx].replace("_", "-")
- if s_replaced in ORIGINAL_NAMES:
- print(label_sets[idx], i)
-
-
-def compute_neighbors(args):
- fig_name=f'output/output_embedding/{args.file}'
- # query: (response, embeddings)
- latents = []
- class_labels = []
- label_sets = []
-
- # chatgpt embedding
- total_tasks = [os.path.join("cliport/tasks", x) for x in os.listdir("cliport/tasks")] + [os.path.join("cliport/generated_tasks", x) for x in os.listdir("cliport/generated_tasks")]
- total_tasks = [t for t in total_tasks if 'pycache' not in t and 'init' not in t \
- and 'README' not in t and 'extended' not in t and 'gripper' not in t and 'primitive' not in t\
- and 'task.py' not in t and 'camera' not in t and 'seq' not in t and 'seen' not in t]
- cache_embedding_path = "output/output_embedding/task_cache_embedding.npz"
- cache_embedding = {}
-
- if os.path.exists(cache_embedding_path):
- cache_embedding = dict(np.load(cache_embedding_path))
-
- # print(total_tasks)
-
- for idx, task_name in enumerate(total_tasks):
- if task_name in cache_embedding:
- code_embedding = cache_embedding[task_name]
- else:
- code = open(task_name).read()
- code_embedding = compute_embedding(code)
-
- latents.append(code_embedding)
- label_sets.append(task_name.split("/")[-1][:-3])
- cache_embedding[task_name] = code_embedding
- class_labels.append(idx)
-
- latents = np.array(latents)
- # print("latents shape:", latents.shape)
- # np.savez(cache_embedding_path, **cache_embedding)
-
- target_task_idx = label_sets.index(args.target_task)
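-    # Build a k-d tree over the code embeddings and query the target task's k+1
-    # nearest neighbours (the closest hit is the task itself, so it is skipped).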
- kdtree = cKDTree(latents)
- closest_embeddings, closest_idx = kdtree.query(latents[[target_task_idx]], k=args.num+1)
- # print(latents.shape, args.num, target_task_idx, closest_idx,label_sets)
-
- print(f"closest tasks to {args.target_task}: {[label_sets[task] for task in closest_idx[0][1:]]}")
-
- # print(f"closest tasks in cliport original tasks: {find_cliport_neighbor(kdtree, latents[[target_task_idx]], label_sets)}")
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description="Generate chat-gpt embeddings")
- """
- load task descriptions from the tasks folder and embed
- """
- parser.add_argument("--file", type=str, default="task_embedding")
- parser.add_argument("--target_task", type=str, default="align_box_corner")
- parser.add_argument("--num", type=int, default=3)
-
- args = parser.parse_args()
- compute_neighbors(args)
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_640_800_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_640_800_coco.py
deleted file mode 100644
index f26062fda282fda420a5f48bbc12bfe4efe57c0a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_640_800_coco.py
+++ /dev/null
@@ -1,71 +0,0 @@
-_base_ = [
- '../_base_/models/retinanet_r50_fpn.py',
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py'
-]
-# model settings
-norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
-model = dict(
- pretrained='torchvision://resnet101',
- backbone=dict(depth=101),
- bbox_head=dict(
- _delete_=True,
- type='SABLRetinaHead',
- num_classes=80,
- in_channels=256,
- stacked_convs=4,
- feat_channels=256,
- approx_anchor_generator=dict(
- type='AnchorGenerator',
- octave_base_scale=4,
- scales_per_octave=3,
- ratios=[0.5, 1.0, 2.0],
- strides=[8, 16, 32, 64, 128]),
- square_anchor_generator=dict(
- type='AnchorGenerator',
- ratios=[1.0],
- scales=[4],
- strides=[8, 16, 32, 64, 128]),
- norm_cfg=norm_cfg,
- bbox_coder=dict(
- type='BucketingBBoxCoder', num_buckets=14, scale_factor=3.0),
- loss_cls=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_bbox_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.5),
- loss_bbox_reg=dict(
- type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.5)),
- # training and testing settings
- train_cfg=dict(
- assigner=dict(
- type='ApproxMaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.4,
- min_pos_iou=0.0,
- ignore_iof_thr=-1),
- allowed_border=-1,
- pos_weight=-1,
- debug=False))
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(
- type='Resize',
- img_scale=[(1333, 640), (1333, 800)],
- multiscale_mode='range',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-data = dict(train=dict(pipeline=train_pipeline))
-# optimizer
-optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 16e34356e9f8566ec73e3c25c771e281d3eeb975..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ccnet/ccnet_r50-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/ccnet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_160k_ade20k.py
deleted file mode 100644
index b4a9d4e1b9123b3c965cd430237ce9fcc7018a11..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r18-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r18-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index aff70c93e6142ddda3a874d9dfd57ec6c4cd89b3..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r18-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,11 +0,0 @@
-_base_ = './deeplabv3plus_r50-d8_512x1024_80k_cityscapes.py'
-model = dict(
- pretrained='open-mmlab://resnet18_v1c',
- backbone=dict(depth=18),
- decode_head=dict(
- c1_in_channels=64,
- c1_channels=12,
- in_channels=512,
- channels=128,
- ),
- auxiliary_head=dict(in_channels=256, channels=64))
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/common_utils/wav_utils.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/common_utils/wav_utils.py
deleted file mode 100644
index d3a563ee1749a58217ece55c9a08b8d93c0fc386..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/common_utils/wav_utils.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from pathlib import Path
-import typing as tp
-
-import torch
-import torchaudio
-
-
-def get_white_noise(chs: int = 1, num_frames: int = 1):
- wav = torch.randn(chs, num_frames)
- return wav
-
-
-def get_batch_white_noise(bs: int = 1, chs: int = 1, num_frames: int = 1):
- wav = torch.randn(bs, chs, num_frames)
- return wav
-
-
-def save_wav(path: str, wav: torch.Tensor, sample_rate: int):
- fp = Path(path)
- kwargs: tp.Dict[str, tp.Any] = {}
- if fp.suffix == '.wav':
- kwargs['encoding'] = 'PCM_S'
- kwargs['bits_per_sample'] = 16
- elif fp.suffix == '.mp3':
- kwargs['compression'] = 320
- torchaudio.save(str(fp), wav, sample_rate, **kwargs)
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/BBSNet/BBSNet_model.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/BBSNet/BBSNet_model.py
deleted file mode 100644
index 37e31b19692ee2c0855ffee83bded1632b9750ab..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/BBSNet/BBSNet_model.py
+++ /dev/null
@@ -1,419 +0,0 @@
-import torch
-import torch.nn as nn
-import torchvision.models as models
-from .ResNet import ResNet50
-
-def conv3x3(in_planes, out_planes, stride=1):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-class TransBasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, upsample=None, **kwargs):
- super(TransBasicBlock, self).__init__()
- self.conv1 = conv3x3(inplanes, inplanes)
- self.bn1 = nn.BatchNorm2d(inplanes)
- self.relu = nn.ReLU(inplace=True)
- if upsample is not None and stride != 1:
- self.conv2 = nn.ConvTranspose2d(inplanes, planes,
- kernel_size=3, stride=stride, padding=1,
- output_padding=1, bias=False)
- else:
- self.conv2 = conv3x3(inplanes, planes, stride)
- self.bn2 = nn.BatchNorm2d(planes)
- self.upsample = upsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.upsample is not None:
- residual = self.upsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-class ChannelAttention(nn.Module):
- def __init__(self, in_planes, ratio=16):
- super(ChannelAttention, self).__init__()
-
- self.max_pool = nn.AdaptiveMaxPool2d(1)
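-        # Global max pooling squeezes the spatial dims; a two-layer 1x1-conv
-        # bottleneck then yields per-channel attention weights in (0, 1).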
-
-        self.fc1 = nn.Conv2d(in_planes, in_planes // ratio, 1, bias=False)  # use the ratio argument instead of a hard-coded 16
-        self.relu1 = nn.ReLU()
-        self.fc2 = nn.Conv2d(in_planes // ratio, in_planes, 1, bias=False)
-
- self.sigmoid = nn.Sigmoid()
- def forward(self, x):
- max_out = self.fc2(self.relu1(self.fc1(self.max_pool(x))))
- out = max_out
- return self.sigmoid(out)
-
-class SpatialAttention(nn.Module):
- def __init__(self, kernel_size=7):
- super(SpatialAttention, self).__init__()
-
- assert kernel_size in (3, 7), 'kernel size must be 3 or 7'
- padding = 3 if kernel_size == 7 else 1
-
- self.conv1 = nn.Conv2d(1, 1, kernel_size, padding=padding, bias=False)
- self.sigmoid = nn.Sigmoid()
-
- def forward(self, x):
- max_out, _ = torch.max(x, dim=1, keepdim=True)
- x=max_out
- x = self.conv1(x)
- return self.sigmoid(x)
-
-class BasicConv2d(nn.Module):
- def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0, dilation=1):
- super(BasicConv2d, self).__init__()
- self.conv = nn.Conv2d(in_planes, out_planes,
- kernel_size=kernel_size, stride=stride,
- padding=padding, dilation=dilation, bias=False)
- self.bn = nn.BatchNorm2d(out_planes)
- self.relu = nn.ReLU(inplace=True)
-
- def forward(self, x):
- x = self.conv(x)
- x = self.bn(x)
- return x
-
-#Global Contextual module
-class GCM(nn.Module):
- def __init__(self, in_channel, out_channel):
- super(GCM, self).__init__()
- self.relu = nn.ReLU(True)
- self.branch0 = nn.Sequential(
- BasicConv2d(in_channel, out_channel, 1),
- )
- self.branch1 = nn.Sequential(
- BasicConv2d(in_channel, out_channel, 1),
- BasicConv2d(out_channel, out_channel, kernel_size=(1, 3), padding=(0, 1)),
- BasicConv2d(out_channel, out_channel, kernel_size=(3, 1), padding=(1, 0)),
- BasicConv2d(out_channel, out_channel, 3, padding=3, dilation=3)
- )
- self.branch2 = nn.Sequential(
- BasicConv2d(in_channel, out_channel, 1),
- BasicConv2d(out_channel, out_channel, kernel_size=(1, 5), padding=(0, 2)),
- BasicConv2d(out_channel, out_channel, kernel_size=(5, 1), padding=(2, 0)),
- BasicConv2d(out_channel, out_channel, 3, padding=5, dilation=5)
- )
- self.branch3 = nn.Sequential(
- BasicConv2d(in_channel, out_channel, 1),
- BasicConv2d(out_channel, out_channel, kernel_size=(1, 7), padding=(0, 3)),
- BasicConv2d(out_channel, out_channel, kernel_size=(7, 1), padding=(3, 0)),
- BasicConv2d(out_channel, out_channel, 3, padding=7, dilation=7)
- )
- self.conv_cat = BasicConv2d(4*out_channel, out_channel, 3, padding=1)
- self.conv_res = BasicConv2d(in_channel, out_channel, 1)
-
- def forward(self, x):
- x0 = self.branch0(x)
- x1 = self.branch1(x)
- x2 = self.branch2(x)
- x3 = self.branch3(x)
-
- x_cat = self.conv_cat(torch.cat((x0, x1, x2, x3), 1))
-
- x = self.relu(x_cat + self.conv_res(x))
- return x
-
-#aggregation of the high-level(teacher) features
-class aggregation_init(nn.Module):
-
- def __init__(self, channel):
- super(aggregation_init, self).__init__()
- self.relu = nn.ReLU(True)
-
- self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
- self.conv_upsample1 = BasicConv2d(channel, channel, 3, padding=1)
- self.conv_upsample2 = BasicConv2d(channel, channel, 3, padding=1)
- self.conv_upsample3 = BasicConv2d(channel, channel, 3, padding=1)
- self.conv_upsample4 = BasicConv2d(channel, channel, 3, padding=1)
- self.conv_upsample5 = BasicConv2d(2*channel, 2*channel, 3, padding=1)
-
- self.conv_concat2 = BasicConv2d(2*channel, 2*channel, 3, padding=1)
- self.conv_concat3 = BasicConv2d(3*channel, 3*channel, 3, padding=1)
- self.conv4 = BasicConv2d(3*channel, 3*channel, 3, padding=1)
- self.conv5 = nn.Conv2d(3*channel, 1, 1)
-
- def forward(self, x1, x2, x3):
- x1_1 = x1
- x2_1 = self.conv_upsample1(self.upsample(x1)) * x2
- x3_1 = self.conv_upsample2(self.upsample(self.upsample(x1))) \
- * self.conv_upsample3(self.upsample(x2)) * x3
-
- x2_2 = torch.cat((x2_1, self.conv_upsample4(self.upsample(x1_1))), 1)
- x2_2 = self.conv_concat2(x2_2)
-
- x3_2 = torch.cat((x3_1, self.conv_upsample5(self.upsample(x2_2))), 1)
- x3_2 = self.conv_concat3(x3_2)
-
- x = self.conv4(x3_2)
- x = self.conv5(x)
-
- return x
-
-#aggregation of the low-level(student) features
-class aggregation_final(nn.Module):
-
- def __init__(self, channel):
- super(aggregation_final, self).__init__()
- self.relu = nn.ReLU(True)
-
- self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
- self.conv_upsample1 = BasicConv2d(channel, channel, 3, padding=1)
- self.conv_upsample2 = BasicConv2d(channel, channel, 3, padding=1)
- self.conv_upsample3 = BasicConv2d(channel, channel, 3, padding=1)
- self.conv_upsample4 = BasicConv2d(channel, channel, 3, padding=1)
- self.conv_upsample5 = BasicConv2d(2*channel, 2*channel, 3, padding=1)
-
- self.conv_concat2 = BasicConv2d(2*channel, 2*channel, 3, padding=1)
- self.conv_concat3 = BasicConv2d(3*channel, 3*channel, 3, padding=1)
-
- def forward(self, x1, x2, x3):
- x1_1 = x1
- x2_1 = self.conv_upsample1(self.upsample(x1)) * x2
- x3_1 = self.conv_upsample2(self.upsample(x1)) \
- * self.conv_upsample3(x2) * x3
-
- x2_2 = torch.cat((x2_1, self.conv_upsample4(self.upsample(x1_1))), 1)
- x2_2 = self.conv_concat2(x2_2)
-
- x3_2 = torch.cat((x3_1, self.conv_upsample5(x2_2)), 1)
- x3_2 = self.conv_concat3(x3_2)
-
- return x3_2
-
-#Refinement flow
-class Refine(nn.Module):
- def __init__(self):
- super(Refine,self).__init__()
- self.upsample2 = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
-
- def forward(self, attention,x1,x2,x3):
- #Note that there is an error in the manuscript. In the paper, the refinement strategy is depicted as "f'=f*S1", it should be "f'=f+f*S1".
- x1 = x1+torch.mul(x1, self.upsample2(attention))
- x2 = x2+torch.mul(x2,self.upsample2(attention))
- x3 = x3+torch.mul(x3,attention)
-
- return x1,x2,x3
-
-#BBSNet
-class BBSNet(nn.Module):
- def __init__(self, channel=32):
- super(BBSNet, self).__init__()
-
- #Backbone model
- self.resnet = ResNet50('rgb')
- self.resnet_depth=ResNet50('rgbd')
-
- #Decoder 1
- self.rfb2_1 = GCM(512, channel)
- self.rfb3_1 = GCM(1024, channel)
- self.rfb4_1 = GCM(2048, channel)
- self.agg1 = aggregation_init(channel)
-
- #Decoder 2
- self.rfb0_2 = GCM(64, channel)
- self.rfb1_2 = GCM(256, channel)
- self.rfb5_2 = GCM(512, channel)
- self.agg2 = aggregation_final(channel)
-
- #upsample function
- self.upsample = nn.Upsample(scale_factor=8, mode='bilinear', align_corners=True)
- self.upsample4 = nn.Upsample(scale_factor=4, mode='bilinear', align_corners=True)
- self.upsample2 = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
-
- #Refinement flow
- self.HA = Refine()
-
- #Components of DEM module
- self.atten_depth_channel_0=ChannelAttention(64)
- self.atten_depth_channel_1=ChannelAttention(256)
- self.atten_depth_channel_2=ChannelAttention(512)
- self.atten_depth_channel_3_1=ChannelAttention(1024)
- self.atten_depth_channel_4_1=ChannelAttention(2048)
-
- self.atten_depth_spatial_0=SpatialAttention()
- self.atten_depth_spatial_1=SpatialAttention()
- self.atten_depth_spatial_2=SpatialAttention()
- self.atten_depth_spatial_3_1=SpatialAttention()
- self.atten_depth_spatial_4_1=SpatialAttention()
-
- #Components of PTM module
- self.inplanes = 32*2
- self.deconv1 = self._make_transpose(TransBasicBlock, 32*2, 3, stride=2)
- self.inplanes =32
- self.deconv2 = self._make_transpose(TransBasicBlock, 32, 3, stride=2)
- self.agant1 = self._make_agant_layer(32*3, 32*2)
- self.agant2 = self._make_agant_layer(32*2, 32)
- self.out0_conv = nn.Conv2d(32*3, 1, kernel_size=1, stride=1, bias=True)
- self.out1_conv = nn.Conv2d(32*2, 1, kernel_size=1, stride=1, bias=True)
- self.out2_conv = nn.Conv2d(32*1, 1, kernel_size=1, stride=1, bias=True)
-
- # if self.training:
- # self.initialize_weights()
-
- def forward(self, x, x_depth):
- x = self.resnet.conv1(x)
- x = self.resnet.bn1(x)
- x = self.resnet.relu(x)
- x = self.resnet.maxpool(x)
-
- x_depth = self.resnet_depth.conv1(x_depth)
- x_depth = self.resnet_depth.bn1(x_depth)
- x_depth = self.resnet_depth.relu(x_depth)
- x_depth = self.resnet_depth.maxpool(x_depth)
-
- #layer0 merge
- temp = x_depth.mul(self.atten_depth_channel_0(x_depth))
- temp = temp.mul(self.atten_depth_spatial_0(temp))
- x=x+temp
- #layer0 merge end
-
- x1 = self.resnet.layer1(x) # 256 x 64 x 64
- x1_depth=self.resnet_depth.layer1(x_depth)
-
- #layer1 merge
- temp = x1_depth.mul(self.atten_depth_channel_1(x1_depth))
- temp = temp.mul(self.atten_depth_spatial_1(temp))
- x1=x1+temp
- #layer1 merge end
-
- x2 = self.resnet.layer2(x1) # 512 x 32 x 32
- x2_depth=self.resnet_depth.layer2(x1_depth)
-
- #layer2 merge
- temp = x2_depth.mul(self.atten_depth_channel_2(x2_depth))
- temp = temp.mul(self.atten_depth_spatial_2(temp))
- x2=x2+temp
- #layer2 merge end
-
- x2_1 = x2
-
- x3_1 = self.resnet.layer3_1(x2_1) # 1024 x 16 x 16
- x3_1_depth=self.resnet_depth.layer3_1(x2_depth)
-
- #layer3_1 merge
- temp = x3_1_depth.mul(self.atten_depth_channel_3_1(x3_1_depth))
- temp = temp.mul(self.atten_depth_spatial_3_1(temp))
- x3_1=x3_1+temp
- #layer3_1 merge end
-
- x4_1 = self.resnet.layer4_1(x3_1) # 2048 x 8 x 8
- x4_1_depth=self.resnet_depth.layer4_1(x3_1_depth)
-
- #layer4_1 merge
- temp = x4_1_depth.mul(self.atten_depth_channel_4_1(x4_1_depth))
- temp = temp.mul(self.atten_depth_spatial_4_1(temp))
- x4_1=x4_1+temp
- #layer4_1 merge end
-
- #produce initial saliency map by decoder1
- x2_1 = self.rfb2_1(x2_1)
- x3_1 = self.rfb3_1(x3_1)
- x4_1 = self.rfb4_1(x4_1)
- attention_map = self.agg1(x4_1, x3_1, x2_1)
-
- #Refine low-layer features by initial map
- x,x1,x5 = self.HA(attention_map.sigmoid(), x,x1,x2)
-
- #produce final saliency map by decoder2
- x0_2 = self.rfb0_2(x)
- x1_2 = self.rfb1_2(x1)
- x5_2 = self.rfb5_2(x5)
- y = self.agg2(x5_2, x1_2, x0_2) #*4
-
- #PTM module
- y =self.agant1(y)
- y = self.deconv1(y)
- y = self.agant2(y)
- y = self.deconv2(y)
- y = self.out2_conv(y)
-
- return self.upsample(attention_map),y
-
- def _make_agant_layer(self, inplanes, planes):
- layers = nn.Sequential(
- nn.Conv2d(inplanes, planes, kernel_size=1,
- stride=1, padding=0, bias=False),
- nn.BatchNorm2d(planes),
- nn.ReLU(inplace=True)
- )
- return layers
-
- def _make_transpose(self, block, planes, blocks, stride=1):
- upsample = None
- if stride != 1:
- upsample = nn.Sequential(
- nn.ConvTranspose2d(self.inplanes, planes,
- kernel_size=2, stride=stride,
- padding=0, bias=False),
- nn.BatchNorm2d(planes),
- )
- elif self.inplanes != planes:
- upsample = nn.Sequential(
- nn.Conv2d(self.inplanes, planes,
- kernel_size=1, stride=stride, bias=False),
- nn.BatchNorm2d(planes),
- )
-
- layers = []
-
- for i in range(1, blocks):
- layers.append(block(self.inplanes, self.inplanes))
-
- layers.append(block(self.inplanes, planes, stride, upsample))
- self.inplanes = planes
-
- return nn.Sequential(*layers)
-
- #initialize the weights
- def initialize_weights(self):
- res50 = models.resnet50(pretrained=True)
- pretrained_dict = res50.state_dict()
- all_params = {}
- for k, v in self.resnet.state_dict().items():
- if k in pretrained_dict.keys():
- v = pretrained_dict[k]
- all_params[k] = v
- elif '_1' in k:
- name = k.split('_1')[0] + k.split('_1')[1]
- v = pretrained_dict[name]
- all_params[k] = v
- elif '_2' in k:
- name = k.split('_2')[0] + k.split('_2')[1]
- v = pretrained_dict[name]
- all_params[k] = v
- assert len(all_params.keys()) == len(self.resnet.state_dict().keys())
- self.resnet.load_state_dict(all_params)
-
- all_params = {}
- for k, v in self.resnet_depth.state_dict().items():
- if k=='conv1.weight':
- all_params[k]=torch.nn.init.normal_(v, mean=0, std=1)
- elif k in pretrained_dict.keys():
- v = pretrained_dict[k]
- all_params[k] = v
- elif '_1' in k:
- name = k.split('_1')[0] + k.split('_1')[1]
- v = pretrained_dict[name]
- all_params[k] = v
- elif '_2' in k:
- name = k.split('_2')[0] + k.split('_2')[1]
- v = pretrained_dict[name]
- all_params[k] = v
- assert len(all_params.keys()) == len(self.resnet_depth.state_dict().keys())
- self.resnet_depth.load_state_dict(all_params)
-
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/cross_validation.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/cross_validation.py
deleted file mode 100644
index 90907707a38657bee37f8df64ce6f43b4cd6e3eb..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/cross_validation.py
+++ /dev/null
@@ -1,127 +0,0 @@
-from typing import Optional
-from torch.utils.data import DataLoader
-from torch import nn, Tensor
-import torch
-from tqdm import tqdm
-import wandb
-from torch.utils.data.distributed import DistributedSampler
-import torch.distributed as dist
-
-from .criterion import DevCriterion
-from .distributed_training import get_world_size, is_master_proc
-from .utils import clean_cache
-from .dataset_fn import DevDataset, RGBDDataset
-from .configs.base_config import base_cfg
-from .logger_fn import Logger
-from .device import device, cpu_device
-
-class CrossValidation:
- def __init__(
- self, cfg: base_cfg, max_size: Optional[int] = None,
- max_track: Optional[int] = None,
- data_augmentation_version: int = 1,
- ) -> None:
- self.cfg = cfg
- self.dev_dataset = DevDataset(cfg)
- dev_sampler = DistributedSampler(
- dataset=self.dev_dataset, shuffle=False
- )
- self.dev_dataloader = DataLoader(
- self.dev_dataset, batch_size=cfg.val_batch_size,
- sampler=dev_sampler,
- num_workers=cfg.num_workers,
- pin_memory=True,
- )
- self.dev_criterion = DevCriterion()
- self.dev_num_iters = len(self.dev_dataloader)
-
- self.world_size = get_world_size()
- self.is_master_process = is_master_proc()
-
- def calculate_dev_mae(
- self, model: nn.Module, epoch: int, logger: Optional[Logger] = None
- ) -> float:
- dataloader = self.dev_dataloader
- dataset = self.dev_dataset
- num_iters = self.dev_num_iters
- return self.__calculate_mae(
- epoch, dataloader, dataset,
- num_iters, model, 'dev', logger
- )
-
- @torch.no_grad()
- def __calculate_mae(
- self, epoch: int, dataloader: DataLoader,
- dataset: RGBDDataset,
- num_iters: int, model: nn.Module, log_attr: str,
- logger: Optional[Logger] = None
- ) -> float:
- '''Given that the model is already loaded on the GPU.
- Note that the model will be in evaluation mode after running this function.
- '''
- model.eval()
-
- total_mae: float = 0.0
- if logger is not None and self.is_master_process:
- logger.info(f'Cross-validation [{log_attr}] ...')
- for i_batch, (gpu_images, gpu_depths, gpu_gts, indices) in tqdm(
- enumerate(dataloader, start=1), total=num_iters,
- disable=not self.is_master_process,
- ):
- gpu_images: Tensor = gpu_images.cuda()
- gpu_depths: Tensor = gpu_depths.cuda()
- gpu_gts: Tensor = gpu_gts.cuda()
-
- with torch.cuda.amp.autocast(enabled=self.cfg.is_fp16):
- gpu_out: Tensor = model(gpu_images, gpu_depths)
- mae: Tensor = self.dev_criterion(
- gpu_out['semseg'].sigmoid(), gpu_gts
- )
- dist.all_reduce(mae)
-
- total_mae += mae.to(cpu_device).item() * indices.shape[0] # * self.world_size
- del gpu_images, gpu_depths, gpu_gts, indices
- clean_cache()
-
- return total_mae / len(dataset)
-
-def cross_validation_log(
- cfg: base_cfg,
- model: nn.Module,
- logger: Logger,
- cross_val: CrossValidation,
- epoch: int
-) -> None:
- clean_cache()
-
- dev_mae = cross_val.calculate_dev_mae(model, epoch, logger)
-
- if is_master_proc():
- wandb.log({
- # 'train_mae': train_mae,
- 'dev_mae': dev_mae,
- 'epoch': epoch,
- })
- logger.info(f'Epoch {epoch}: Dev MAE {dev_mae:.4f}')
- cfg.em.update(epoch, dev_mae)
-
- clean_cache()
-
-def test_cross_validation(cfg: base_cfg) -> None:
- from .rgbd_model import RGBDModel
- from .checkpoint import load_checkpoint
- from .run_type import run_type
- from .wandb_manager import wandb_login, wandb_init
- wandb_login(cfg)
- wandb_init('test_cross_validation')
-
- model = RGBDModel(cfg, run_type=run_type.rt)
- load_checkpoint(model, None, None, None, ckpt_path = cfg.ckpt_path)
-
- model.to(device)
-
- cross_val = CrossValidation(cfg, max_track=10, max_size=100)
- # cross_val.calculate_train_mae(model, 2)
- cross_val.calculate_dev_mae(model, 2)
-
- wandb.finish()
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/mixup.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/mixup.py
deleted file mode 100644
index ef3a00accd871d2e327c457fea1cd15e8d70ddf2..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/mixup.py
+++ /dev/null
@@ -1,322 +0,0 @@
-# --------------------------------------------------------
-# Based on timm and MAE-priv code bases
-# https://github.com/rwightman/pytorch-image-models/tree/master/timm
-# https://github.com/BUPT-PRIV/MAE-priv
-# --------------------------------------------------------
-
-""" Mixup and Cutmix
-
-Papers:
-mixup: Beyond Empirical Risk Minimization (https://arxiv.org/abs/1710.09412)
-
-CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features (https://arxiv.org/abs/1905.04899)
-
-Code Reference:
-CutMix: https://github.com/clovaai/CutMix-PyTorch
-
-Hacked together by / Copyright 2020 Ross Wightman
-"""
-import numpy as np
-import torch
-
-
-def one_hot(x, num_classes, on_value=1., off_value=0., device='cuda'):
- x = x.long().view(-1, 1)
- return torch.full((x.size()[0], num_classes), off_value, device=device).scatter_(1, x, on_value)
-
-
-def mixup_target(target, num_classes, lam=1., smoothing=0.0, device='cuda'):
- off_value = smoothing / num_classes
- on_value = 1. - smoothing + off_value
- y1 = one_hot(target, num_classes, on_value=on_value, off_value=off_value, device=device)
- y2 = one_hot(target.flip(0), num_classes, on_value=on_value, off_value=off_value, device=device)
- return y1 * lam + y2 * (1. - lam)
-
-
-def rand_bbox(img_shape, lam, margin=0., count=None):
- """ Standard CutMix bounding-box
- Generates a random square bbox based on lambda value. This impl includes
- support for enforcing a border margin as percent of bbox dimensions.
-
- Args:
- img_shape (tuple): Image shape as tuple
- lam (float): Cutmix lambda value
- margin (float): Percentage of bbox dimension to enforce as margin (reduce amount of box outside image)
- count (int): Number of bbox to generate
- """
- ratio = np.sqrt(1 - lam)
- img_h, img_w = img_shape[-2:]
- cut_h, cut_w = int(img_h * ratio), int(img_w * ratio)
- margin_y, margin_x = int(margin * cut_h), int(margin * cut_w)
- cy = np.random.randint(0 + margin_y, img_h - margin_y, size=count)
- cx = np.random.randint(0 + margin_x, img_w - margin_x, size=count)
- yl = np.clip(cy - cut_h // 2, 0, img_h)
- yh = np.clip(cy + cut_h // 2, 0, img_h)
- xl = np.clip(cx - cut_w // 2, 0, img_w)
- xh = np.clip(cx + cut_w // 2, 0, img_w)
- return yl, yh, xl, xh
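-
-# Illustrative numbers (hypothetical): for a 224x224 image and lam=0.75,
-# ratio = sqrt(1 - 0.75) = 0.5, so the cut box is about 112x112 before clipping:
-#   yl, yh, xl, xh = rand_bbox((3, 224, 224), lam=0.75)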
-
-
-def rand_bbox_minmax(img_shape, minmax, count=None):
- """ Min-Max CutMix bounding-box
- Inspired by Darknet cutmix impl, generates a random rectangular bbox
- based on min/max percent values applied to each dimension of the input image.
-
- Typical defaults for minmax are usually in the .2-.3 for min and .8-.9 range for max.
-
- Args:
- img_shape (tuple): Image shape as tuple
- minmax (tuple or list): Min and max bbox ratios (as percent of image size)
- count (int): Number of bbox to generate
- """
- assert len(minmax) == 2
- img_h, img_w = img_shape[-2:]
- cut_h = np.random.randint(int(img_h * minmax[0]), int(img_h * minmax[1]), size=count)
- cut_w = np.random.randint(int(img_w * minmax[0]), int(img_w * minmax[1]), size=count)
- yl = np.random.randint(0, img_h - cut_h, size=count)
- xl = np.random.randint(0, img_w - cut_w, size=count)
- yu = yl + cut_h
- xu = xl + cut_w
- return yl, yu, xl, xu
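-
-# Illustrative numbers (hypothetical): with minmax=(0.2, 0.3) on a 224x224 image,
-# each side of the cut box is sampled independently from roughly 44 to 66 pixels:
-#   yl, yu, xl, xu = rand_bbox_minmax((3, 224, 224), (0.2, 0.3))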
-
-
-def cutmix_bbox_and_lam(img_shape, lam, ratio_minmax=None, correct_lam=True, count=None):
- """ Generate bbox and apply lambda correction.
- """
- if ratio_minmax is not None:
- yl, yu, xl, xu = rand_bbox_minmax(img_shape, ratio_minmax, count=count)
- else:
- yl, yu, xl, xu = rand_bbox(img_shape, lam, count=count)
- if correct_lam or ratio_minmax is not None:
- bbox_area = (yu - yl) * (xu - xl)
- lam = 1. - bbox_area / float(img_shape[-2] * img_shape[-1])
- return (yl, yu, xl, xu), lam
-
-
-class Mixup:
- """ Mixup/Cutmix that applies different params to each element or whole batch
-
- Args:
- mixup_alpha (float): mixup alpha value, mixup is active if > 0.
- cutmix_alpha (float): cutmix alpha value, cutmix is active if > 0.
- cutmix_minmax (List[float]): cutmix min/max image ratio, cutmix is active and uses this vs alpha if not None.
- prob (float): probability of applying mixup or cutmix per batch or element
- switch_prob (float): probability of switching to cutmix instead of mixup when both are active
- mode (str): how to apply mixup/cutmix params (per 'batch', 'pair' (pair of elements), 'elem' (element))
- correct_lam (bool): apply lambda correction when cutmix bbox clipped by image borders
- label_smoothing (float): apply label smoothing to the mixed target tensor
- num_classes (int): number of classes for target
- """
-
- def __init__(self, mixup_alpha=1., cutmix_alpha=0., cutmix_minmax=None, prob=1.0, switch_prob=0.5,
- mode='batch', correct_lam=True, label_smoothing=0.1, num_classes=1000):
- self.mixup_alpha = mixup_alpha
- self.cutmix_alpha = cutmix_alpha
- self.cutmix_minmax = cutmix_minmax
- if self.cutmix_minmax is not None:
- assert len(self.cutmix_minmax) == 2
- # force cutmix alpha == 1.0 when minmax active to keep logic simple & safe
- self.cutmix_alpha = 1.0
- self.mix_prob = prob
- self.switch_prob = switch_prob
- self.label_smoothing = label_smoothing
- self.num_classes = num_classes
- self.mode = mode
- self.correct_lam = correct_lam # correct lambda based on clipped area for cutmix
- self.mixup_enabled = True # set to false to disable mixing (intended to be set by train loop)
-
- def _params_per_elem(self, batch_size):
- lam = np.ones(batch_size, dtype=np.float32)
- use_cutmix = np.zeros(batch_size, dtype=bool)
- if self.mixup_enabled:
- if self.mixup_alpha > 0. and self.cutmix_alpha > 0.:
- use_cutmix = np.random.rand(batch_size) < self.switch_prob
- lam_mix = np.where(
- use_cutmix,
- np.random.beta(self.cutmix_alpha, self.cutmix_alpha, size=batch_size),
- np.random.beta(self.mixup_alpha, self.mixup_alpha, size=batch_size))
- elif self.mixup_alpha > 0.:
- lam_mix = np.random.beta(self.mixup_alpha, self.mixup_alpha, size=batch_size)
- elif self.cutmix_alpha > 0.:
- use_cutmix = np.ones(batch_size, dtype=bool)
- lam_mix = np.random.beta(self.cutmix_alpha, self.cutmix_alpha, size=batch_size)
- else:
- assert False, "One of mixup_alpha > 0., cutmix_alpha > 0., cutmix_minmax not None should be true."
- lam = np.where(np.random.rand(batch_size) < self.mix_prob, lam_mix.astype(np.float32), lam)
- return lam, use_cutmix
-
- def _params_per_batch(self):
- lam = 1.
- use_cutmix = False
- if self.mixup_enabled and np.random.rand() < self.mix_prob:
- if self.mixup_alpha > 0. and self.cutmix_alpha > 0.:
- use_cutmix = np.random.rand() < self.switch_prob
- lam_mix = np.random.beta(self.cutmix_alpha, self.cutmix_alpha) if use_cutmix else \
- np.random.beta(self.mixup_alpha, self.mixup_alpha)
- elif self.mixup_alpha > 0.:
- lam_mix = np.random.beta(self.mixup_alpha, self.mixup_alpha)
- elif self.cutmix_alpha > 0.:
- use_cutmix = True
- lam_mix = np.random.beta(self.cutmix_alpha, self.cutmix_alpha)
- else:
- assert False, "One of mixup_alpha > 0., cutmix_alpha > 0., cutmix_minmax not None should be true."
- lam = float(lam_mix)
- return lam, use_cutmix
-
- def _mix_elem(self, x):
- batch_size = len(x)
- lam_batch, use_cutmix = self._params_per_elem(batch_size)
- x_orig = x.clone() # need to keep an unmodified original for mixing source
- for i in range(batch_size):
- j = batch_size - i - 1
- lam = lam_batch[i]
- if lam != 1.:
- if use_cutmix[i]:
- (yl, yh, xl, xh), lam = cutmix_bbox_and_lam(
- x[i].shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam)
- x[i][:, yl:yh, xl:xh] = x_orig[j][:, yl:yh, xl:xh]
- lam_batch[i] = lam
- else:
- x[i] = x[i] * lam + x_orig[j] * (1 - lam)
- return torch.tensor(lam_batch, device=x.device, dtype=x.dtype).unsqueeze(1)
-
- def _mix_pair(self, x):
- batch_size = len(x)
- lam_batch, use_cutmix = self._params_per_elem(batch_size // 2)
- x_orig = x.clone() # need to keep an unmodified original for mixing source
- for i in range(batch_size // 2):
- j = batch_size - i - 1
- lam = lam_batch[i]
- if lam != 1.:
- if use_cutmix[i]:
- (yl, yh, xl, xh), lam = cutmix_bbox_and_lam(
- x[i].shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam)
- x[i][:, yl:yh, xl:xh] = x_orig[j][:, yl:yh, xl:xh]
- x[j][:, yl:yh, xl:xh] = x_orig[i][:, yl:yh, xl:xh]
- lam_batch[i] = lam
- else:
- x[i] = x[i] * lam + x_orig[j] * (1 - lam)
- x[j] = x[j] * lam + x_orig[i] * (1 - lam)
- lam_batch = np.concatenate((lam_batch, lam_batch[::-1]))
- return torch.tensor(lam_batch, device=x.device, dtype=x.dtype).unsqueeze(1)
-
- def _mix_batch(self, x):
- lam, use_cutmix = self._params_per_batch()
- if lam == 1.:
- return 1.
- if use_cutmix:
- (yl, yh, xl, xh), lam = cutmix_bbox_and_lam(
- x.shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam)
- x[:, :, yl:yh, xl:xh] = x.flip(0)[:, :, yl:yh, xl:xh]
- else:
- x_flipped = x.flip(0).mul_(1. - lam)
- x.mul_(lam).add_(x_flipped)
- return lam
-
- def __call__(self, x, target):
- assert len(x) % 2 == 0, 'Batch size should be even when using this'
- if self.mode == 'elem':
- lam = self._mix_elem(x)
- elif self.mode == 'pair':
- lam = self._mix_pair(x)
- else:
- lam = self._mix_batch(x)
- target = mixup_target(target, self.num_classes, lam, self.label_smoothing, x.device)
- return x, target
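-
-# A minimal usage sketch (hypothetical values; a 10-class setup). The batch size must
-# be even, and the targets come back as soft label vectors:
-#   mixup_fn = Mixup(mixup_alpha=0.8, cutmix_alpha=1.0, prob=1.0, num_classes=10)
-#   images, targets = mixup_fn(images, labels)  # images: (B, C, H, W), labels: (B,)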
-
-
-class FastCollateMixup(Mixup):
- """ Fast Collate w/ Mixup/Cutmix that applies different params to each element or whole batch
-
- A Mixup impl that's performed while collating the batches.
- """
-
- def _mix_elem_collate(self, output, batch, half=False):
- batch_size = len(batch)
- num_elem = batch_size // 2 if half else batch_size
- assert len(output) == num_elem
- lam_batch, use_cutmix = self._params_per_elem(num_elem)
- for i in range(num_elem):
- j = batch_size - i - 1
- lam = lam_batch[i]
- mixed = batch[i][0]
- if lam != 1.:
- if use_cutmix[i]:
- if not half:
- mixed = mixed.copy()
- (yl, yh, xl, xh), lam = cutmix_bbox_and_lam(
- output.shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam)
- mixed[:, yl:yh, xl:xh] = batch[j][0][:, yl:yh, xl:xh]
- lam_batch[i] = lam
- else:
- mixed = mixed.astype(np.float32) * lam + batch[j][0].astype(np.float32) * (1 - lam)
- np.rint(mixed, out=mixed)
- output[i] += torch.from_numpy(mixed.astype(np.uint8))
- if half:
- lam_batch = np.concatenate((lam_batch, np.ones(num_elem)))
- return torch.tensor(lam_batch).unsqueeze(1)
-
- def _mix_pair_collate(self, output, batch):
- batch_size = len(batch)
- lam_batch, use_cutmix = self._params_per_elem(batch_size // 2)
- for i in range(batch_size // 2):
- j = batch_size - i - 1
- lam = lam_batch[i]
- mixed_i = batch[i][0]
- mixed_j = batch[j][0]
- assert 0 <= lam <= 1.0
- if lam < 1.:
- if use_cutmix[i]:
- (yl, yh, xl, xh), lam = cutmix_bbox_and_lam(
- output.shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam)
- patch_i = mixed_i[:, yl:yh, xl:xh].copy()
- mixed_i[:, yl:yh, xl:xh] = mixed_j[:, yl:yh, xl:xh]
- mixed_j[:, yl:yh, xl:xh] = patch_i
- lam_batch[i] = lam
- else:
- mixed_temp = mixed_i.astype(np.float32) * lam + mixed_j.astype(np.float32) * (1 - lam)
- mixed_j = mixed_j.astype(np.float32) * lam + mixed_i.astype(np.float32) * (1 - lam)
- mixed_i = mixed_temp
- np.rint(mixed_j, out=mixed_j)
- np.rint(mixed_i, out=mixed_i)
- output[i] += torch.from_numpy(mixed_i.astype(np.uint8))
- output[j] += torch.from_numpy(mixed_j.astype(np.uint8))
- lam_batch = np.concatenate((lam_batch, lam_batch[::-1]))
- return torch.tensor(lam_batch).unsqueeze(1)
-
- def _mix_batch_collate(self, output, batch):
- batch_size = len(batch)
- lam, use_cutmix = self._params_per_batch()
- if use_cutmix:
- (yl, yh, xl, xh), lam = cutmix_bbox_and_lam(
- output.shape, lam, ratio_minmax=self.cutmix_minmax, correct_lam=self.correct_lam)
- for i in range(batch_size):
- j = batch_size - i - 1
- mixed = batch[i][0]
- if lam != 1.:
- if use_cutmix:
- mixed = mixed.copy() # don't want to modify the original while iterating
- mixed[:, yl:yh, xl:xh] = batch[j][0][:, yl:yh, xl:xh]
- else:
- mixed = mixed.astype(np.float32) * lam + batch[j][0].astype(np.float32) * (1 - lam)
- np.rint(mixed, out=mixed)
- output[i] += torch.from_numpy(mixed.astype(np.uint8))
- return lam
-
- def __call__(self, batch, _=None):
- batch_size = len(batch)
- assert batch_size % 2 == 0, 'Batch size should be even when using this'
- half = 'half' in self.mode
- if half:
- batch_size //= 2
- output = torch.zeros((batch_size, *batch[0][0].shape), dtype=torch.uint8)
- if self.mode == 'elem' or self.mode == 'half':
- lam = self._mix_elem_collate(output, batch, half=half)
- elif self.mode == 'pair':
- lam = self._mix_pair_collate(output, batch)
- else:
- lam = self._mix_batch_collate(output, batch)
- target = torch.tensor([b[1] for b in batch], dtype=torch.int64)
- target = mixup_target(target, self.num_classes, lam, self.label_smoothing, device='cpu')
- target = target[:batch_size]
- return output, target
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/tasks/speech_recognition.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/tasks/speech_recognition.py
deleted file mode 100644
index d9f011d55ff4fdfeb4c04ca790c314d685708c3a..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/tasks/speech_recognition.py
+++ /dev/null
@@ -1,157 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import json
-import os
-import re
-import sys
-
-import torch
-from examples.speech_recognition.data import AsrDataset
-from examples.speech_recognition.data.replabels import replabel_symbol
-from fairseq.data import Dictionary
-from fairseq.tasks import LegacyFairseqTask, register_task
-
-
-def get_asr_dataset_from_json(data_json_path, tgt_dict):
- """
- Parse data json and create dataset.
- See scripts/asr_prep_json.py which pack json from raw files
-
- Json example:
- {
- "utts": {
- "4771-29403-0025": {
- "input": {
- "length_ms": 170,
- "path": "/tmp/file1.flac"
- },
- "output": {
- "text": "HELLO \n",
- "token": "HE LLO",
- "tokenid": "4815, 861"
- }
- },
- "1564-142299-0096": {
- ...
- }
- }
- """
- if not os.path.isfile(data_json_path):
- raise FileNotFoundError("Dataset not found: {}".format(data_json_path))
- with open(data_json_path, "rb") as f:
- data_samples = json.load(f)["utts"]
- assert len(data_samples) != 0
- sorted_samples = sorted(
- data_samples.items(),
- key=lambda sample: int(sample[1]["input"]["length_ms"]),
- reverse=True,
- )
- aud_paths = [s[1]["input"]["path"] for s in sorted_samples]
- ids = [s[0] for s in sorted_samples]
- speakers = []
- for s in sorted_samples:
- m = re.search("(.+?)-(.+?)-(.+?)", s[0])
- speakers.append(m.group(1) + "_" + m.group(2))
- frame_sizes = [s[1]["input"]["length_ms"] for s in sorted_samples]
- tgt = [
- [int(i) for i in s[1]["output"]["tokenid"].split(", ")]
- for s in sorted_samples
- ]
- # append eos
- tgt = [[*t, tgt_dict.eos()] for t in tgt]
- return AsrDataset(aud_paths, frame_sizes, tgt, tgt_dict, ids, speakers)
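-
-# A minimal sketch of how this helper is typically called (hypothetical paths;
-# assumes a fairseq Dictionary built from the matching dict.txt):
-#   tgt_dict = Dictionary.load("/data/lang/dict.txt")
-#   dataset = get_asr_dataset_from_json("/data/train.json", tgt_dict)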
-
-
-@register_task("speech_recognition")
-class SpeechRecognitionTask(LegacyFairseqTask):
- """
- Task for training speech recognition model.
- """
-
- @staticmethod
- def add_args(parser):
- """Add task-specific arguments to the parser."""
- parser.add_argument("data", help="path to data directory")
- parser.add_argument(
- "--silence-token", default="\u2581", help="token for silence (used by w2l)"
- )
- parser.add_argument(
- "--max-source-positions",
- default=sys.maxsize,
- type=int,
- metavar="N",
- help="max number of frames in the source sequence",
- )
- parser.add_argument(
- "--max-target-positions",
- default=1024,
- type=int,
- metavar="N",
- help="max number of tokens in the target sequence",
- )
-
- def __init__(self, args, tgt_dict):
- super().__init__(args)
- self.tgt_dict = tgt_dict
-
- @classmethod
- def setup_task(cls, args, **kwargs):
- """Setup the task (e.g., load dictionaries)."""
- dict_path = os.path.join(args.data, "dict.txt")
- if not os.path.isfile(dict_path):
- raise FileNotFoundError("Dict not found: {}".format(dict_path))
- tgt_dict = Dictionary.load(dict_path)
-
- if args.criterion == "ctc_loss":
- tgt_dict.add_symbol("<ctc_blank>")
- elif args.criterion == "asg_loss":
- for i in range(1, args.max_replabel + 1):
- tgt_dict.add_symbol(replabel_symbol(i))
-
- print("| dictionary: {} types".format(len(tgt_dict)))
- return cls(args, tgt_dict)
-
- def load_dataset(self, split, combine=False, **kwargs):
- """Load a given dataset split.
-
- Args:
- split (str): name of the split (e.g., train, valid, test)
- """
- data_json_path = os.path.join(self.args.data, "{}.json".format(split))
- self.datasets[split] = get_asr_dataset_from_json(data_json_path, self.tgt_dict)
-
- def build_generator(self, models, args, **unused):
- w2l_decoder = getattr(args, "w2l_decoder", None)
- if w2l_decoder == "viterbi":
- from examples.speech_recognition.w2l_decoder import W2lViterbiDecoder
-
- return W2lViterbiDecoder(args, self.target_dictionary)
- elif w2l_decoder == "kenlm":
- from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder
-
- return W2lKenLMDecoder(args, self.target_dictionary)
- elif w2l_decoder == "fairseqlm":
- from examples.speech_recognition.w2l_decoder import W2lFairseqLMDecoder
-
- return W2lFairseqLMDecoder(args, self.target_dictionary)
- else:
- return super().build_generator(models, args)
-
- @property
- def target_dictionary(self):
- """Return the :class:`~fairseq.data.Dictionary` for the language
- model."""
- return self.tgt_dict
-
- @property
- def source_dictionary(self):
- """Return the source :class:`~fairseq.data.Dictionary` (if applicable
- for this task)."""
- return None
-
- def max_positions(self):
- """Return the max speech and sentence length allowed by the task."""
- return (self.args.max_source_positions, self.args.max_target_positions)
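-
-# A hedged sketch of manual task setup (assumes `args` comes from fairseq's argparse,
-# with args.data containing dict.txt and {split}.json files):
-#   task = SpeechRecognitionTask.setup_task(args)
-#   task.load_dataset("train")
-#   train_set = task.datasets["train"]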
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/fp32_group_norm.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/fp32_group_norm.py
deleted file mode 100644
index d03aac022e30c8c14a600062d1d86429504ba003..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/fp32_group_norm.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-Group norm done in fp32 (for fp16 training)
-"""
-
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class Fp32GroupNorm(nn.GroupNorm):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- def forward(self, input):
- output = F.group_norm(
- input.float(),
- self.num_groups,
- self.weight.float() if self.weight is not None else None,
- self.bias.float() if self.bias is not None else None,
- self.eps,
- )
- return output.type_as(input)
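-
-# A small sketch of the intended use (assumed shapes): a drop-in replacement for
-# nn.GroupNorm when activations are fp16 but normalization should run in fp32:
-#   norm = Fp32GroupNorm(num_groups=32, num_channels=512)
-#   y = norm(x)  # x may be fp16; stats are computed in fp32, output matches x.dtype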
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/unfold.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/unfold.py
deleted file mode 100644
index 138272f1ef4f673b29e36aed4531106f7ce95968..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/unfold.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch.nn.functional as F
-
-
-def unfold1d(x, kernel_size, padding_l, pad_value=0):
- """unfold T x B x C to T x B x C x K"""
- if kernel_size > 1:
- T, B, C = x.size()
- x = F.pad(
- x, (0, 0, 0, 0, padding_l, kernel_size - 1 - padding_l), value=pad_value
- )
- x = x.as_strided((T, B, C, kernel_size), (B * C, C, 1, B * C))
- else:
- x = x.unsqueeze(3)
- return x
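-
-# Illustrative shapes (hypothetical): a length-10 sequence, batch 2, 8 channels,
-# kernel 3 with one frame of left padding yields one K-window per time step:
-#   x = torch.randn(10, 2, 8)                            # T x B x C
-#   windows = unfold1d(x, kernel_size=3, padding_l=1)    # -> (10, 2, 8, 3)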
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/fused_adam.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/fused_adam.py
deleted file mode 100644
index 7a6d1f73d53cae24ff94bb0bbc42bcc1de75548a..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/fused_adam.py
+++ /dev/null
@@ -1,384 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import types
-
-import torch
-
-
-def get_fused_adam_class():
- """
- Look for the FusedAdam optimizer from apex. We first try to load the
- "contrib" interface, which is a bit faster than the main interface,
- but is technically deprecated.
- """
- try:
- # The "deprecated" interface in recent versions of apex is a bit
- # faster than the main interface, since we don't use the apex
- # optimizer. This can be installed by passing the
- # `--deprecated_fused_adam` option when building apex.
- global fused_adam_cuda
- import importlib
-
- fused_adam_cuda = importlib.import_module("fused_adam_cuda")
- return FusedAdamV1
- except ImportError:
- try:
- # fallback to the newer interface
- from apex.optimizers import FusedAdam as _FusedAdam # noqa
- from apex.multi_tensor_apply import multi_tensor_applier
-
- if multi_tensor_applier.available:
- return FusedAdamV2
- except ImportError:
- pass
- return None
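-
-# A hedged usage sketch (assumes a standard nn.Module `model`): fall back to plain
-# torch.optim.Adam when no suitable apex build is available:
-#   fused_adam_cls = get_fused_adam_class()
-#   opt_cls = fused_adam_cls if fused_adam_cls is not None else torch.optim.Adam
-#   optimizer = opt_cls(model.parameters(), lr=1e-3)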
-
-
-class FusedAdamV1(torch.optim.Optimizer):
- """
- Implements Adam algorithm. Currently GPU-only. Requires Apex to be installed via
- ``python setup.py install --cuda_ext --cpp_ext``.
-
- It has been proposed in `Adam: A Method for Stochastic Optimization`_.
-
- Compared to the original version in Apex, the fairseq version casts grads
- and params to FP32 internally to support ``--memory-efficient-fp16``.
-
- Args:
- params (iterable): iterable of parameters to optimize or dicts defining
- parameter groups.
- lr (float, optional): learning rate. (default: 1e-3)
- betas (Tuple[float, float], optional): coefficients used for computing
- running averages of gradient and its square. (default: (0.9, 0.999))
- eps (float, optional): term added to the denominator to improve
- numerical stability. (default: 1e-8)
- weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
- amsgrad (boolean, optional): whether to use the AMSGrad variant of this
- algorithm from the paper `On the Convergence of Adam and Beyond`_
- (default: False) NOT SUPPORTED in FusedAdam!
- eps_inside_sqrt (boolean, optional): in the 'update parameters' step,
- adds eps to the bias-corrected second moment estimate before
- evaluating square root instead of adding it to the square root of
- second moment estimate as in the original paper. (default: False)
- .. _Adam: A Method for Stochastic Optimization:
- https://arxiv.org/abs/1412.6980
- .. _On the Convergence of Adam and Beyond:
- https://openreview.net/forum?id=ryQu7f-RZ
- """
-
- def __init__(
- self,
- params,
- lr=1e-3,
- bias_correction=True,
- betas=(0.9, 0.999),
- eps=1e-8,
- eps_inside_sqrt=False,
- weight_decay=0.0,
- max_grad_norm=0.0,
- amsgrad=False,
- use_fp16_stats=False,
- ):
- global fused_adam_cuda
- import importlib
-
- fused_adam_cuda = importlib.import_module("fused_adam_cuda")
-
- if amsgrad:
- raise RuntimeError("FusedAdam does not support the AMSGrad variant.")
- defaults = {
- "lr": lr,
- "bias_correction": bias_correction,
- "betas": betas,
- "eps": eps,
- "weight_decay": weight_decay,
- "max_grad_norm": max_grad_norm,
- }
- super().__init__(params, defaults)
- self.eps_mode = 0 if eps_inside_sqrt else 1
-
- self.use_fp16_stats = use_fp16_stats
- self.FLOAT16_MAX = 65504.0
-
- @property
- def supports_memory_efficient_fp16(self):
- return True
-
- @property
- def supports_flat_params(self):
- return True
-
- @property
- def supports_step_with_scale(self):
- return True
-
- def step(self, closure=None, grads=None, scale=1.0, grad_norms=None):
- """Performs a single optimization step.
- Args:
- closure (callable, optional): A closure that reevaluates the model
- and returns the loss.
- grads (list of tensors, optional): weight gradient to use for the
- optimizer update. If gradients have type torch.half, parameters
- are expected to be in type torch.float. (default: None)
- output params (list of tensors, optional): A reduced precision copy
- of the updated weights written out in addition to the regular
- updated weights. Have to be of same type as gradients. (default: None)
- scale (float, optional): factor to divide gradient tensor values
- by before applying to weights. (default: 1)
- """
- loss = None
- if closure is not None:
- loss = closure()
-
- if grads is None:
- grads_group = [None] * len(self.param_groups)
- # backward compatibility
- # assuming a list/generator of parameter means single group
- elif isinstance(grads, types.GeneratorType):
- grads_group = [grads]
- elif type(grads[0]) != list:
- grads_group = [grads]
- else:
- grads_group = grads
-
- if grad_norms is None:
- grad_norms = [None] * len(self.param_groups)
-
- for group, grads_this_group, grad_norm in zip(
- self.param_groups, grads_group, grad_norms
- ):
- if grads_this_group is None:
- grads_this_group = [None] * len(group["params"])
-
- # compute combined scale factor for this group
- combined_scale = scale
- if group.get("max_grad_norm", 0) > 0:
- # norm is in fact norm*scale
- clip = ((grad_norm / scale) + 1e-6) / group["max_grad_norm"]
- if clip > 1:
- combined_scale = clip * scale
-
- bias_correction = 1 if group.get("bias_correction", 1) else 0
-
- for p, grad in zip(group["params"], grads_this_group):
- # note: p.grad should not ever be set for correct
- # operation of mixed precision optimizer that sometimes
- # sends None gradients
- if p.grad is None and grad is None:
- continue
- if grad is None:
- grad = p.grad.data
- if grad.is_sparse:
- raise RuntimeError(
- "FusedAdam does not support sparse gradients, "
- "please consider SparseAdam instead"
- )
-
- if p.device.type == "cpu":
- p_data_fp32 = p.data.cuda(non_blocking=True).float()
- out_p = torch.tensor([], dtype = torch.float)
- else:
- p_data_fp32 = p.data.float()
- out_p = p.data
-
- state = self.state[p]
-
- # State initialization
- dtype = torch.float16 if self.use_fp16_stats else p_data_fp32.dtype
- if len(state) == 0:
- state["step"] = 0
- # Exponential moving average of gradient values
- state["exp_avg"] = torch.zeros_like(p_data_fp32, dtype=dtype)
- # Exponential moving average of squared gradient values
- state["exp_avg_sq"] = torch.zeros_like(p_data_fp32, dtype=dtype)
- if self.use_fp16_stats:
- state["exp_avg_scale"] = 1.0
- state["exp_avg_sq_scale"] = 1.0
- else:
- device = p_data_fp32.device
- state["exp_avg"] = state["exp_avg"].to(device, dtype)
- state["exp_avg_sq"] = state["exp_avg_sq"].to(device, dtype)
-
- exp_avg = state["exp_avg"]
- exp_avg_sq = state["exp_avg_sq"]
- if self.use_fp16_stats:
- assert exp_avg.dtype == torch.float16
- exp_avg = exp_avg.float() * state["exp_avg_scale"]
- exp_avg_sq = exp_avg_sq.float() * state["exp_avg_sq_scale"]
- beta1, beta2 = group["betas"]
-
- state["step"] += 1
-
- with torch.cuda.device(p_data_fp32.device):
- fused_adam_cuda.adam(
- p_data_fp32,
- out_p,
- exp_avg,
- exp_avg_sq,
- grad,
- group["lr"],
- beta1,
- beta2,
- group["eps"],
- combined_scale,
- state["step"],
- self.eps_mode,
- bias_correction,
- group["weight_decay"],
- )
-
- if p.device.type == "cpu":
- p.data.copy_(p_data_fp32, non_blocking=True)
-
- if self.use_fp16_stats:
- def inf_norm(t):
- return torch.norm(t, float("inf"))
-
- # from github.com/openai/jukebox/blob/master/jukebox/utils/fp16.py
- state["exp_avg_scale"], state["exp_avg_sq_scale"] = (
- 1e-8 + inf_norm(exp_avg) / self.FLOAT16_MAX,
- 1e-8 + inf_norm(exp_avg_sq) / self.FLOAT16_MAX,
- )
- state["exp_avg"], state["exp_avg_sq"] = (
- (exp_avg / state["exp_avg_scale"]).half(),
- (exp_avg_sq / state["exp_avg_sq_scale"]).half(),
- )
-
- return loss
-
-
-try:
- from apex.optimizers import FusedAdam
- from apex.multi_tensor_apply import multi_tensor_applier
-
- class FusedAdamV2(FusedAdam):
- """
- Compared to the original version in Apex, the fairseq version casts grads
- and params to FP32 internally to support ``--memory-efficient-fp16``.
- """
-
- def __init__(self, *args, use_fp16_stats=False, **kwargs):
- if use_fp16_stats:
- raise NotImplementedError("--fp16-adam-stats is only supported with FusedAdamV1")
- super().__init__(*args, **kwargs)
- if not hasattr(self, "multi_tensor_adam"):
- raise Exception(
- "Apex installation is outdated. Please install an updated version of apex."
- )
-
- @property
- def supports_memory_efficient_fp16(self):
- return True
-
- @property
- def supports_flat_params(self):
- return True
-
- def step(
- self,
- closure=None,
- grads=None,
- output_params=None,
- scale=None,
- grad_norms=None,
- ):
- """Performs a single optimization step."""
- loss = None
- if closure is not None:
- loss = closure()
-
- for group in self.param_groups:
- bias_correction = 1 if group["bias_correction"] else 0
- beta1, beta2 = group["betas"]
-
- # assume same step across group now to simplify things
- # per parameter step can be easily support by making it tensor, or pass list into kernel
- if "step" in group:
- group["step"] += 1
- else:
- group["step"] = 1
-
- # create lists for multi-tensor apply
- g_16, p_16, orig_p_16, m_16, v_16 = [], [], [], [], []
- g_32, p_32, m_32, v_32 = [], [], [], []
-
- for p in group["params"]:
- if p.grad is None:
- continue
- if p.grad.data.is_sparse:
- raise RuntimeError(
- "FusedAdam does not support sparse gradients, "
- "please consider SparseAdam instead"
- )
-
- state = self.state[p]
- # State initialization
- if len(state) == 0:
- # Exponential moving average of gradient values
- state["exp_avg"] = torch.zeros_like(p.data, dtype=torch.float)
- # Exponential moving average of squared gradient values
- state["exp_avg_sq"] = torch.zeros_like(
- p.data, dtype=torch.float
- )
- else:
- state["exp_avg"] = state["exp_avg"].to(
- device=p.data.device, dtype=torch.float
- )
- state["exp_avg_sq"] = state["exp_avg_sq"].to(
- device=p.data.device, dtype=torch.float
- )
-
- if p.dtype == torch.float16:
- g_16.append(p.grad.data.float())
- p_16.append(p.data.float())
- orig_p_16.append(p.data)
- m_16.append(state["exp_avg"])
- v_16.append(state["exp_avg_sq"])
- elif p.dtype == torch.float32:
- g_32.append(p.grad.data)
- p_32.append(p.data)
- m_32.append(state["exp_avg"])
- v_32.append(state["exp_avg_sq"])
- else:
- raise RuntimeError("FusedAdam only support fp16 and fp32.")
-
- with torch.cuda.device(p.device):
- if len(g_16) > 0:
- multi_tensor_applier(
- self.multi_tensor_adam,
- self._dummy_overflow_buf,
- [g_16, p_16, m_16, v_16],
- group["lr"],
- beta1,
- beta2,
- group["eps"],
- group["step"],
- self.adam_w_mode,
- bias_correction,
- group["weight_decay"],
- )
- for orig_p, p in zip(orig_p_16, p_16):
- orig_p.copy_(p.data)
- if len(g_32) > 0:
- multi_tensor_applier(
- self.multi_tensor_adam,
- self._dummy_overflow_buf,
- [g_32, p_32, m_32, v_32],
- group["lr"],
- beta1,
- beta2,
- group["eps"],
- group["step"],
- self.adam_w_mode,
- bias_correction,
- group["weight_decay"],
- )
-
- return loss
-
-
-except ImportError:
- pass
diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_resources/transliterate/README.md b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_resources/transliterate/README.md
deleted file mode 100644
index 1f55e11e80f6fc5ebbf42dade0266e3d4ee06ce4..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_resources/transliterate/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Transliteration Models for Indian languages
-These are models for transliteration involving Indian languages.
-The models are essentially Statistical Machine Translation systems trained using Moses over
-character-level parallel corpora of transliterations. Hence, you will need Moses to use these transliteration models.
-The transliteration corpus has itself been mined in an unsupervised fashion from a translation corpus.
-
-Currently we have trained transliteration models for five language pairs: bn-hi, ta-hi, te-hi, en-hi and mr-hi
-
-Support for transliteration was introduced in Moses version 2.1,
-so please ensure that you have at least version 2.1 of Moses set up.
-
-Command to run the transliteration module using Moses:
-
-$moseshome/mosesdecoder/scripts/Transliteration/post-decoding-transliteration.pl \
---moses-src-dir $moseshome/mosesdecoder --external-bin-dir $moseshome/tools \
---transliteration-model-dir {path to transliteration model folder} --oov-file {path to file containing oov words, oovs are space-separated with each line containing all oovs for the input line}\
- --input-file {input file to be transliterated} --output-file {output file location} \
- --input-extension {input language code for eg. en} --output-extension {output language code for eg. hi} --language-model {path to language model} \
- --decoder $moseshome/mosesdecoder/bin/moses
-
-A sample execution of the model will be as follows:
-
-export moseshome={path to moses installation}
-$moseshome/mosesdecoder/scripts/Transliteration/post-decoding-transliteration.pl \
---moses-src-dir $moseshome/mosesdecoder --external-bin-dir $moseshome/tools \
---transliteration-model-dir /home/ratish/project/nlp_resources/indic_nlp_resources/transliterate/en-hi \
---oov-file /home/ratish/project/translit/input.oov \
- --input-file /home/ratish/project/translit/input.en \
- --output-file /home/ratish/project/translit/output.hi \
- --input-extension en --output-extension hi --language-model /home/ratish/project/translit/lm/nc.binlm.1 \
- --decoder $moseshome/mosesdecoder/bin/moses
-
-So far, we have seen transliteration used as a post-editing step for machine translation.
-If the models are needed purely for transliteration, the input file and the OOV file are the same.
-Sample input file:
-New Delhi is capital of India
-India is worlds seventh largest nation in the World
-
-OOV file
-New Delhi is capital of India
-India is worlds seventh largest nation in the World
-
-On running the transliteration module, the output is:
-न्यू डेल्ही इस कैपिटल आफ इंडिया
-इंडिया इस वर्ल्ड सेवंथ लारगेस्ट नेशन इन थे वर्ल्ड
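-
-For a transliteration-only run (a sketch; the paths are placeholders), pass the same
-file to both flags, for example:
-
---input-file /home/ratish/project/translit/input.en --oov-file /home/ratish/project/translit/input.en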
diff --git a/spaces/Hexamind/GDOC/test_app.py b/spaces/Hexamind/GDOC/test_app.py
deleted file mode 100644
index 2b2d7ab2d078c4a767e956dd442b17ecb9ad1eae..0000000000000000000000000000000000000000
--- a/spaces/Hexamind/GDOC/test_app.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import docx
-from docx.enum.style import WD_STYLE_TYPE
-import os
-from config import config
-from typing import Dict
-import random
-import datetime
-import string
-
-from lxml import etree
-
-from src.domain.doc import Doc
-
-
-
-
-name = 'CorpTemplate.docx'
-
-template_path = config['templates_path'] + '/' + config['templates'][config['default_template_index']]
-template = Doc(template_path)
-doc_path = config['these_docs_path'] + name
-this_doc = Doc(path=doc_path)
-new_doc_path = config['new_docs_path'] + this_doc.name + '_.docx'
-new_doc = this_doc.copy(new_doc_path)
-
-
-
-
-new_styles = new_doc.styles.xstyles
-print(etree.tostring(new_styles['.Titre1'].element))
-names = new_doc.styles.names
-print(names)
-new_doc.save_as_docx()
-
-
-s = template.styles.xstyles['.BodyText']
-# new_styles.add_style(s.name, WD_STYLE_TYPE.PARAGRAPH)
-
-
-list_styles = [(s, s.name) for s in template.styles.xstyles if s.type==WD_STYLE_TYPE.LIST]
-
-
-base_styles_set = set()
-for s in new_styles:
- if s.type == 1:
- if s.base_style:
- try:
- base_styles_set.add(s.base_style.name)
- except:
- print(f"failure for {s}")
-
-
-base_styles = list(base_styles_set)
-
-
-
-
-"""
-for p in new_doc.xdoc.paragraphs:
- if p.style == new_styles['_newBody__2']:
- p.style = s.name
-
-new_styles['_newBody__2'].delete()
-new_doc.save_as_docx()
-"""
-pass
-etree.tostring(list_styles[1][0].element)
\ No newline at end of file
diff --git a/spaces/HighCWu/Style2Paints-4-Gradio/linefiller/third_party.py b/spaces/HighCWu/Style2Paints-4-Gradio/linefiller/third_party.py
deleted file mode 100644
index 456e4f35387511018ca74aacd18dd307f4bc33c7..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/Style2Paints-4-Gradio/linefiller/third_party.py
+++ /dev/null
@@ -1,396 +0,0 @@
-import cv2
-from .thinning import *
-from .trappedball_fill import *
-from skimage.measure import block_reduce
-from skimage.morphology import disk, dilation, erosion
-from numba import njit
-
-
-def np_min_pool(x):
- return block_reduce(x, (2, 2), np.min)
-
-
-def np_max_pool(x):
- return block_reduce(x, (2, 2), np.max)
-
-
-def np_max_441(x):
- return block_reduce(x, (4, 4, 1), np.max)
-
-
-def np_max_pool_221(x):
- return block_reduce(x, (2, 2, 1), np.max)
-
-
-def np_max_pool_s(x, s):
- return block_reduce(x, (s, s, 1), np.max)
-
-
-def binarize(x):
- xp = x.copy()
- xp[xp < 250] = 0
- xp[xp > 0] = 255
- return xp
-
-
-def get_initial_fillmap(boundary, merge=True):
- fillmap = build_fill_map(boundary, flood_fill_multi(boundary, merge=merge))
- return fillmap
-
-
-def up_propagate(small_fillmap, big_boundary):
- new_fillmap = cv2.resize(small_fillmap, (big_boundary.shape[1], big_boundary.shape[0]), interpolation=cv2.INTER_NEAREST)
- padded_fillmap = np.pad(new_fillmap, [[1, 1], [1, 1]], 'constant', constant_values=0)
- new_mask = np.ones_like(new_fillmap, dtype=np.uint8) * 255
- new_mask[new_fillmap > 0] = 0
- new_mask[big_boundary < 240] = 0
- fills = flood_fill_multi(new_mask, merge=True)
- max_id = np.max(new_fillmap)
- for item in fills:
- points0 = padded_fillmap[(item[0] + 1, item[1] + 0)]
- points1 = padded_fillmap[(item[0] + 1, item[1] + 2)]
- points2 = padded_fillmap[(item[0] + 0, item[1] + 1)]
- points3 = padded_fillmap[(item[0] + 2, item[1] + 1)]
-
- all_points = np.concatenate([points0, points1, points2, points3], axis=0)
- pointsets, pointcounts = np.unique(all_points[all_points > 0], return_counts=True)
-
- if len(pointsets) > 0:
- new_fillmap[item] = pointsets[np.argmax(pointcounts)]
- else:
- max_id += 1
- new_fillmap[item] = max_id
- return new_fillmap
-
-
-def laplas_fill(b_512, b_256, b_128):
- b_512 = binarize(b_512)
- b_256 = binarize(b_256)
- b_128 = binarize(b_128)
- f128 = get_initial_fillmap(b_128)
- f256 = up_propagate(f128, b_256)
- f512 = up_propagate(f256, b_512)
- fin = thinning(f512)
- return fin
-
-
-@njit
-def get_corner(x):
- corner = x.copy()
- s0 = corner.shape[0]
- s1 = corner.shape[1]
- for i0 in range(1, s0 - 1):
- for i1 in range(1, s1 - 1):
- if x[i0, i1] == 0:
- continue
- if x[i0, i1 - 1] == 0:
- if x[i0 - 1, i1 - 1] == 0:
- continue
- if x[i0 + 1, i1 - 1] == 0:
- continue
- corner[i0, i1] = 0
- continue
- if x[i0, i1 + 1] == 0:
- if x[i0 - 1, i1 + 1] == 0:
- continue
- if x[i0 + 1, i1 + 1] == 0:
- continue
- corner[i0, i1] = 0
- continue
- if x[i0 - 1, i1] == 0:
- if x[i0 - 1, i1 - 1] == 0:
- continue
- if x[i0 - 1, i1 + 1] == 0:
- continue
- corner[i0, i1] = 0
- continue
- if x[i0 + 1, i1] == 0:
- if x[i0 + 1, i1 - 1] == 0:
- continue
- if x[i0 + 1, i1 + 1] == 0:
- continue
- corner[i0, i1] = 0
- continue
- return corner
-
-
-def monogrouh(x):
- y = 255 - x
- y = dilation(y, disk(1))
- y = dilation(y, disk(1))
- y = erosion(y, disk(1))
- y = erosion(y, disk(1))
- y = 255 - y
- return y
-
-
-def corners(x):
- y = x.copy()
- y = monogrouh(y)
- y = get_corner(y)
- y = monogrouh(y)
- y = get_corner(y)
- y = monogrouh(y)
- return y
-
-
-def save_fill(name, fill):
- cv2.imwrite(name, show_fill_map(fill))
-
-
-def double_fill(b_1024, b_512, b256):
- b256 = binarize(b256)
- b_512 = binarize(b_512)
- b_1024 = binarize(b_1024)
- b_1024 = corners(b_1024)
- b_512 = np.min(np.stack([b_512, np_min_pool(b_1024)], axis=2), axis=2)
- b_512 = corners(b_512)
- b_256 = np.min(np.stack([b256, np_min_pool(b_512)], axis=2), axis=2)
- b_256 = corners(b_256)
- b_128 = np_min_pool(b_256)
- b_128 = corners(b_128)
- b_64 = np_min_pool(b_128)
- f64 = get_initial_fillmap(b_64)
- print('get_initial_fillmap(b_64)')
- f128 = up_propagate(f64, b_128)
- print('up_propagate(f64, b_128)')
- f256 = up_propagate(f128, b_256)
- print('up_propagate(f128, b_256)')
- f512 = up_propagate(f256, b_512)
- print('up_propagate(f256, b_512)')
- f1024 = up_propagate(f512, b_1024)
- print('up_propagate(f512, b_1024)')
- fin = thinning(f1024)
- print('thinning(f1024)')
-
- # cv2.imwrite('b_64.png', b_64)
- # cv2.imwrite('b_128.png', b_128)
- # cv2.imwrite('b_256.png', b_256)
- # cv2.imwrite('b_512.png', b_512)
- # cv2.imwrite('b_1024.png', b_1024)
- # save_fill('f64.png', f64)
- # save_fill('f128.png', f128)
- # save_fill('f256.png', f256)
- # save_fill('f512.png', f512)
- # save_fill('f1024.png', f1024)
- # save_fill('fin.png', fin)
-
- return find_all(fin)
-
-
-def single_fill(b_2048, path):
- b_2048 = corners(binarize(b_2048))
- f2048 = get_initial_fillmap(b_2048, merge=False)
- print(path + 'get_initial_fillmap(b_2048, merge=False)')
- fin = thinning(f2048)
- print(path + 'thinning(f2048)')
- # cv2.imwrite(path + 'b_2048.png', b_2048)
- # save_fill(path + 'f2048.png', f2048)
- # save_fill(path + 'fin.png', fin)
- return find_all(fin)
-
-
-def deatlize(x):
- x = cv2.GaussianBlur(x, (0, 0), 0.8)
- x = cv2.medianBlur(x, 3)
- return x
-
-
-def low_down(gradient_mask):
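-    # Grow the dark line regions by two dilation steps and return a float mask that is
-    # roughly 0 on and near lines and 1.0 elsewhere.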
- return 1.0 - cv2.dilate(255 - gradient_mask, np.ones((3, 3), np.uint8), iterations=2).astype(np.float32) / 255.0
-
-
-def cv2pyrDown(x):
- return cv2.pyrDown(cv2.medianBlur(cv2.medianBlur(x, 3), 3))
-
-
-def cv2pyrUp(x):
- return cv2.pyrUp(cv2.medianBlur(cv2.medianBlur(x, 3), 3))
-
-
-def re_deatlize(visulized, s1024):
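-    # Laplacian-pyramid denoise: rebuild the image from a small base, adding each level's
-    # high-frequency residual back only near the sketch lines, so flat regions are smoothed
-    # while line detail is preserved.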
-
- gradient_mask_1024 = binarize(s1024)
- gradient_mask_512 = np_min_pool(gradient_mask_1024)
- gradient_mask_256 = np_min_pool(gradient_mask_512)
- gradient_mask_128 = np_min_pool(gradient_mask_256)
- gradient_mask_64 = np_min_pool(gradient_mask_128)
-
- gradient_mask_1024 = low_down(gradient_mask_1024)
- gradient_mask_512 = low_down(gradient_mask_512)
- gradient_mask_256 = low_down(gradient_mask_256)
- gradient_mask_128 = low_down(gradient_mask_128)
- gradient_mask_64 = low_down(gradient_mask_64)
-
- sample_1024 = visulized.astype(np.float32)
- sample_512 = cv2pyrDown(sample_1024)
- sample_256 = cv2pyrDown(sample_512)
- sample_128 = cv2pyrDown(sample_256)
- sample_64 = cv2pyrDown(sample_128)
- sample_32 = cv2pyrDown(sample_64)
-
- gradient_1024 = sample_1024 - cv2pyrUp(sample_512)
- gradient_512 = sample_512 - cv2pyrUp(sample_256)
- gradient_256 = sample_256 - cv2pyrUp(sample_128)
- gradient_128 = sample_128 - cv2pyrUp(sample_64)
- gradient_64 = sample_64 - cv2pyrUp(sample_32)
-
- rec_32 = sample_32
- rec_64 = cv2pyrUp(rec_32) + gradient_64 * (1 - gradient_mask_64[:, :, None])
- rec_128 = cv2pyrUp(rec_64) + gradient_128 * (1 - gradient_mask_128[:, :, None])
- rec_256 = cv2pyrUp(rec_128) + gradient_256 * (1 - gradient_mask_256[:, :, None])
- rec_512 = cv2pyrUp(rec_256) + gradient_512 * (1 - gradient_mask_512[:, :, None])
- rec_1024 = cv2pyrUp(rec_512) + gradient_1024 * (1 - gradient_mask_1024[:, :, None])
-
- return rec_1024.clip(0, 255).astype(np.uint8)
-
-
-def tiny_deatlize(visulized, s2048):
- gradient_mask_2048 = s2048.copy()
- gradient_mask_1024 = np_min_pool(gradient_mask_2048)
- gradient_mask_512 = np_min_pool(gradient_mask_1024)
- gradient_mask_256 = np_min_pool(gradient_mask_512)
-
- gradient_mask_2048 = low_down(gradient_mask_2048)
- gradient_mask_1024 = low_down(gradient_mask_1024)
- gradient_mask_512 = low_down(gradient_mask_512)
- gradient_mask_256 = low_down(gradient_mask_256)
-
- sample_2048 = visulized.astype(np.float32)
- sample_1024 = cv2.pyrDown(sample_2048)
- sample_512 = cv2.pyrDown(sample_1024)
- sample_256 = cv2.pyrDown(sample_512)
- sample_128 = cv2.pyrDown(sample_256)
-
- gradient_2048 = sample_2048 - cv2.pyrUp(sample_1024)
- gradient_1024 = sample_1024 - cv2.pyrUp(sample_512)
- gradient_512 = sample_512 - cv2.pyrUp(sample_256)
- gradient_256 = sample_256 - cv2.pyrUp(sample_128)
-
- rec_128 = sample_128
- rec_256 = cv2.pyrUp(rec_128) + gradient_256 * (1 - gradient_mask_256[:, :, None])
- rec_512 = cv2.pyrUp(rec_256) + gradient_512 * (1 - gradient_mask_512[:, :, None])
- rec_1024 = cv2.pyrUp(rec_512) + gradient_1024 * (1 - gradient_mask_1024[:, :, None])
- rec_2048 = cv2.pyrUp(rec_1024) + gradient_2048 * (1 - gradient_mask_2048[:, :, None])
- return rec_2048.clip(0, 255).astype(np.uint8)
-
-
-def adain(x, y):
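-    # Keep the high-frequency detail of x (x minus its Gaussian blur) and place it on top
-    # of the low-frequency component of y.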
- x_high = cv2.GaussianBlur(x, (0, 0), 3.0)
- y_high = cv2.GaussianBlur(y, (0, 0), 3.0)
- return (x.astype(np.float32) - x_high.astype(np.float32) + y_high.astype(np.float32)).clip(0, 255).astype(np.uint8)
-
-
-def corrupt(x, b128):
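-    # Estimate a blurred brightness base from the downscaled sketch (masked by b128),
-    # divide it out of the sketch and renormalize, flattening large-scale shading.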
- float_sketch = x.astype(float)
-    float_base = cv2.resize(float_sketch, (b128.shape[1], b128.shape[0]), interpolation=cv2.INTER_AREA)
- alpha = b128[:, :, 0] / 255.0
- float_base = alpha * float_base + (1 - alpha) * np.mean(float_base)
- float_base = cv2.GaussianBlur(float_base, (0, 0), 8.0)
-    float_base = cv2.resize(float_base, (x.shape[1], x.shape[0]), interpolation=cv2.INTER_CUBIC)
- result = float_sketch / (float_base + 1e-10)
- result = result.clip(0, 1)
- result -= np.min(result)
- result /= np.max(result)
- return (result * 255.0).clip(0, 255).astype(np.uint8)
-
-
-def fuse_sketch(color, sketch, fills, fixer, points_arr, colors_arr):
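-    # Paint every fill region with its hint color when one of points_arr lies inside it,
-    # otherwise with the median color of that region in `color`; then run the fixer and
-    # darken the result with the sketch lines.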
- sketch = cv2.resize(sketch, (color.shape[1], color.shape[0]))
- fills = cv2.resize(fills, (color.shape[1], color.shape[0]), interpolation=cv2.INTER_NEAREST)
- fill_id = np.unique(fills.flatten())
- bg = np.zeros_like(color, dtype=np.uint8)
- checking_result = np.zeros(dtype=np.int32, shape=(np.max(fills) + 1,)) - 1
- length_points = int(len(points_arr))
- for _ in range(length_points):
- checking_result[fills[points_arr[_][0], points_arr[_][1]]] = _
- for id in fill_id:
- points = np.where(fills == id)
- if len(points[0]) > 0:
- color_id = checking_result[id]
- if color_id > -1:
- bg[points] = np.array(colors_arr[color_id])
- else:
- bg[points] = np.median(color[points], axis=0)
- fixed = adain(fixer(sketch, bg), bg)
- result = (fixed.astype(np.float32) + sketch[:, :, None].astype(np.float32) - 255.0).clip(0, 255).astype(np.uint8)
- return result, fixed, bg
-
-
-def balance_fill(color, fills, points, sizer):
- color = cv2.resize(color, (sizer.shape[1], sizer.shape[0]), interpolation=cv2.INTER_NEAREST)
- points = cv2.resize(points, (sizer.shape[1], sizer.shape[0]), interpolation=cv2.INTER_NEAREST)
- bg = np.zeros_like(color, dtype=np.uint8)
- for region in fills:
- if len(region[0]) > 0:
- region_points = points[region]
- region_points = region_points[region_points[:, 3] > 0]
- if region_points.shape[0] > 0:
- points_color, points_color_count = np.unique(region_points, return_counts=True, axis=0)
- bg[region] = points_color[np.argmax(points_color_count)][0:3]
- else:
- bg[region] = np.median(color[region], axis=0)
- return bg
-
-
-def shade_fill(color, fills, points, sizer):
- color = cv2.resize(color, (sizer.shape[1], sizer.shape[0]), interpolation=cv2.INTER_NEAREST)
- points = cv2.resize(points, (sizer.shape[1], sizer.shape[0]), interpolation=cv2.INTER_NEAREST)
- bg = np.zeros_like(color, dtype=np.uint8)
- for region in fills:
- if len(region[0]) > 0:
- region_points = points[region]
- region_points = region_points[region_points[:, 3] > 0]
- if region_points.shape[0] > 0:
- points_color, points_color_count = np.unique(region_points, return_counts=True, axis=0)
- c = points_color[np.argmax(points_color_count)][0:3]
- r = c[0]
- g = c[1]
- b = c[2]
- if r == 1 and g == 233 and b == 0:
- bg[region] = 255
- elif r == 0 and g == 233 and b == 1:
- bg[region] = 0
- else:
- bg[region] = np.median(color[region], axis=0)
- else:
- bg[region] = np.median(color[region], axis=0)
- return bg
-
-
-def get_alpha_piece(points):
- padded_points = np.pad(points, [[1, 1], [1, 1], [0, 0]], 'constant', constant_values=127)
- lines = 255 - padded_points[:, :, 3]
- lines[lines < 240] = 0
- fills = flood_fill_multi(lines, merge=True)
- result = np.zeros_like(padded_points)
- for item in fills:
- points0 = padded_points[(item[0], item[1] + 1)]
- points1 = padded_points[(item[0], item[1] - 1)]
- points2 = padded_points[(item[0] + 1, item[1])]
- points3 = padded_points[(item[0] - 1, item[1])]
- all_points = np.concatenate([points0, points1, points2, points3], axis=0)
- all_points = all_points[all_points[:, 3] > 0]
- all_points = np.unique(all_points, axis=0)
- if all_points.shape[0] == 1:
- result[item] = all_points[0]
- piece = result[1:-1, 1:-1, :]
- piece = np.maximum(piece, points)
- return piece, points
-
-
-def fin_deatlize(color, sketch):
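-    # Split the color image into a line part and a flat part using the sketch as alpha,
-    # median-filter only the flat part, then recombine.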
-
- cf = color.astype(np.float32)
- alpha = sketch.astype(np.float32)[:, :, None] / 255.0
-
- plain = cf * alpha
- lines = cf * (1 - alpha)
-
- plain = cv2.medianBlur(plain, 5)
- plain = cv2.medianBlur(plain, 3)
-
- fin = plain + lines
-
- return fin.clip(0, 255).astype(np.uint8)
-
diff --git a/spaces/HighCWu/anime-colorization-with-hint/README.md b/spaces/HighCWu/anime-colorization-with-hint/README.md
deleted file mode 100644
index cd7a1296a57e5401d485a438eff8a8e4f13da3d5..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Anime Colorization With Hint
-emoji: 🌖
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.16.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Hina4867/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/Hina4867/bingo/src/lib/hooks/use-at-bottom.tsx
deleted file mode 100644
index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000
--- a/spaces/Hina4867/bingo/src/lib/hooks/use-at-bottom.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import * as React from 'react'
-
-export function useAtBottom(offset = 0) {
- const [isAtBottom, setIsAtBottom] = React.useState(false)
-
- React.useEffect(() => {
- const handleScroll = () => {
- setIsAtBottom(
- window.innerHeight + window.scrollY >=
- document.body.offsetHeight - offset
- )
- }
-
- window.addEventListener('scroll', handleScroll, { passive: true })
- handleScroll()
-
- return () => {
- window.removeEventListener('scroll', handleScroll)
- }
- }, [offset])
-
- return isAtBottom
-}
diff --git a/spaces/Hobis/bark-voice-cloning-polish-HuBERT-quantizer/hubert/customtokenizer.py b/spaces/Hobis/bark-voice-cloning-polish-HuBERT-quantizer/hubert/customtokenizer.py
deleted file mode 100644
index d8f84d90f198ce08b2ed38be714bcde7df3c46b4..0000000000000000000000000000000000000000
--- a/spaces/Hobis/bark-voice-cloning-polish-HuBERT-quantizer/hubert/customtokenizer.py
+++ /dev/null
@@ -1,182 +0,0 @@
-import json
-import os.path
-from zipfile import ZipFile
-
-import numpy
-import torch
-from torch import nn, optim
-from torch.serialization import MAP_LOCATION
-
-
-class CustomTokenizer(nn.Module):
- def __init__(self, hidden_size=1024, input_size=768, output_size=10000, version=0):
- super(CustomTokenizer, self).__init__()
- next_size = input_size
- if version == 0:
- self.lstm = nn.LSTM(input_size, hidden_size, 2, batch_first=True)
- next_size = hidden_size
- if version == 1:
- self.lstm = nn.LSTM(input_size, hidden_size, 2, batch_first=True)
- self.intermediate = nn.Linear(hidden_size, 4096)
- next_size = 4096
-
- self.fc = nn.Linear(next_size, output_size)
- self.softmax = nn.LogSoftmax(dim=1)
- self.optimizer: optim.Optimizer = None
- self.lossfunc = nn.CrossEntropyLoss()
- self.input_size = input_size
- self.hidden_size = hidden_size
- self.output_size = output_size
- self.version = version
-
- def forward(self, x):
- x, _ = self.lstm(x)
- if self.version == 1:
- x = self.intermediate(x)
- x = self.fc(x)
- x = self.softmax(x)
- return x
-
- @torch.no_grad()
- def get_token(self, x):
- """
-        Used to get the most likely token id for each input feature vector.
- :param x: An array with shape (N, input_size) where N is a whole number greater or equal to 1, and input_size is the input size used when creating the model.
- :return: An array with shape (N,) where N is the same as N from the input. Every number in the array is a whole number in range 0...output_size - 1 where output_size is the output size used when creating the model.
- """
- return torch.argmax(self(x), dim=1)
-
- def prepare_training(self):
- self.optimizer = optim.Adam(self.parameters(), 0.001)
-
- def train_step(self, x_train, y_train, log_loss=False):
- # y_train = y_train[:-1]
- # y_train = y_train[1:]
-
- optimizer = self.optimizer
- lossfunc = self.lossfunc
- # Zero the gradients
- self.zero_grad()
-
- # Forward pass
- y_pred = self(x_train)
-
- y_train_len = len(y_train)
- y_pred_len = y_pred.shape[0]
-
- if y_train_len > y_pred_len:
- diff = y_train_len - y_pred_len
- y_train = y_train[diff:]
- elif y_train_len < y_pred_len:
- diff = y_pred_len - y_train_len
- y_pred = y_pred[:-diff, :]
-
- y_train_hot = torch.zeros(len(y_train), self.output_size)
- y_train_hot[range(len(y_train)), y_train] = 1
- y_train_hot = y_train_hot.to('cuda')
-
- # Calculate the loss
- loss = lossfunc(y_pred, y_train_hot)
-
- # Print loss
- if log_loss:
- print('Loss', loss.item())
-
- # Backward pass
- loss.backward()
-
- # Update the weights
- optimizer.step()
-
- def save(self, path):
- info_path = os.path.basename(path) + '/.info'
- torch.save(self.state_dict(), path)
- data_from_model = Data(self.input_size, self.hidden_size, self.output_size, self.version)
- with ZipFile(path, 'a') as model_zip:
- model_zip.writestr(info_path, data_from_model.save())
- model_zip.close()
-
- @staticmethod
- def load_from_checkpoint(path, map_location: MAP_LOCATION = None):
- old = True
- with ZipFile(path) as model_zip:
- filesMatch = [file for file in model_zip.namelist() if file.endswith('/.info')]
- file = filesMatch[0] if filesMatch else None
- if file:
- old = False
- data_from_model = Data.load(model_zip.read(file).decode('utf-8'))
- model_zip.close()
- if old:
- model = CustomTokenizer()
- else:
- model = CustomTokenizer(data_from_model.hidden_size, data_from_model.input_size, data_from_model.output_size, data_from_model.version)
- model.load_state_dict(torch.load(path, map_location))
- return model
-
-
-
-class Data:
- input_size: int
- hidden_size: int
- output_size: int
- version: int
-
- def __init__(self, input_size=768, hidden_size=1024, output_size=10000, version=0):
- self.input_size = input_size
- self.hidden_size = hidden_size
- self.output_size = output_size
- self.version = version
-
- @staticmethod
- def load(string):
- data = json.loads(string)
- return Data(data['input_size'], data['hidden_size'], data['output_size'], data['version'])
-
- def save(self):
- data = {
- 'input_size': self.input_size,
- 'hidden_size': self.hidden_size,
- 'output_size': self.output_size,
- 'version': self.version,
- }
- return json.dumps(data)
-
-
-def auto_train(data_path, save_path='model.pth', load_model: str | None = None, save_epochs=1):
- data_x, data_y = [], []
-
- if load_model and os.path.isfile(load_model):
- print('Loading model from', load_model)
- model_training = CustomTokenizer.load_from_checkpoint(load_model, 'cuda')
- else:
- print('Creating new model.')
-        model_training = CustomTokenizer(version=1).to('cuda')  # version=1 uses the LSTM plus an intermediate linear layer
- save_path = os.path.join(data_path, save_path)
- base_save_path = '.'.join(save_path.split('.')[:-1])
-
- sem_string = '_semantic.npy'
- feat_string = '_semantic_features.npy'
-
- ready = os.path.join(data_path, 'ready')
- for input_file in os.listdir(ready):
- full_path = os.path.join(ready, input_file)
- if input_file.endswith(sem_string):
- data_y.append(numpy.load(full_path))
- elif input_file.endswith(feat_string):
- data_x.append(numpy.load(full_path))
- model_training.prepare_training()
-
- epoch = 1
-
- while 1:
- for i in range(save_epochs):
- j = 0
- for x, y in zip(data_x, data_y):
- model_training.train_step(torch.tensor(x).to('cuda'), torch.tensor(y).to('cuda'), j % 50 == 0) # Print loss every 50 steps
- j += 1
- save_p = save_path
- save_p_2 = f'{base_save_path}_epoch_{epoch}.pth'
- model_training.save(save_p)
- model_training.save(save_p_2)
- print(f'Epoch {epoch} completed')
- epoch += 1
diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/transformer/permuter.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/transformer/permuter.py
deleted file mode 100644
index 0d43bb135adde38d94bf18a7e5edaa4523cd95cf..0000000000000000000000000000000000000000
--- a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/transformer/permuter.py
+++ /dev/null
@@ -1,248 +0,0 @@
-import torch
-import torch.nn as nn
-import numpy as np
-
-
-class AbstractPermuter(nn.Module):
- def __init__(self, *args, **kwargs):
- super().__init__()
- def forward(self, x, reverse=False):
- raise NotImplementedError
-
-
-class Identity(AbstractPermuter):
- def __init__(self):
- super().__init__()
-
- def forward(self, x, reverse=False):
- return x
-
-
-class Subsample(AbstractPermuter):
- def __init__(self, H, W):
- super().__init__()
- C = 1
- indices = np.arange(H*W).reshape(C,H,W)
- while min(H, W) > 1:
- indices = indices.reshape(C,H//2,2,W//2,2)
- indices = indices.transpose(0,2,4,1,3)
- indices = indices.reshape(C*4,H//2, W//2)
- H = H//2
- W = W//2
- C = C*4
- assert H == W == 1
- idx = torch.tensor(indices.ravel())
- self.register_buffer('forward_shuffle_idx',
- nn.Parameter(idx, requires_grad=False))
- self.register_buffer('backward_shuffle_idx',
- nn.Parameter(torch.argsort(idx), requires_grad=False))
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-def mortonify(i, j):
- """(i,j) index to linear morton code"""
- i = np.uint64(i)
- j = np.uint64(j)
-
- z = np.uint(0)
-
- for pos in range(32):
- z = (z |
- ((j & (np.uint64(1) << np.uint64(pos))) << np.uint64(pos)) |
- ((i & (np.uint64(1) << np.uint64(pos))) << np.uint64(pos+1))
- )
- return z
-
-
-class ZCurve(AbstractPermuter):
- def __init__(self, H, W):
- super().__init__()
- reverseidx = [np.int64(mortonify(i,j)) for i in range(H) for j in range(W)]
- idx = np.argsort(reverseidx)
- idx = torch.tensor(idx)
- reverseidx = torch.tensor(reverseidx)
- self.register_buffer('forward_shuffle_idx',
- idx)
- self.register_buffer('backward_shuffle_idx',
- reverseidx)
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-class SpiralOut(AbstractPermuter):
- def __init__(self, H, W):
- super().__init__()
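-        # Enumerate the grid in a spiral that starts next to the center pixel and winds outward.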
- assert H == W
- size = W
- indices = np.arange(size*size).reshape(size,size)
-
- i0 = size//2
- j0 = size//2-1
-
- i = i0
- j = j0
-
- idx = [indices[i0, j0]]
- step_mult = 0
- for c in range(1, size//2+1):
- step_mult += 1
- # steps left
- for k in range(step_mult):
- i = i - 1
- j = j
- idx.append(indices[i, j])
-
- # step down
- for k in range(step_mult):
- i = i
- j = j + 1
- idx.append(indices[i, j])
-
- step_mult += 1
- if c < size//2:
- # step right
- for k in range(step_mult):
- i = i + 1
- j = j
- idx.append(indices[i, j])
-
- # step up
- for k in range(step_mult):
- i = i
- j = j - 1
- idx.append(indices[i, j])
- else:
- # end reached
- for k in range(step_mult-1):
- i = i + 1
- idx.append(indices[i, j])
-
- assert len(idx) == size*size
- idx = torch.tensor(idx)
- self.register_buffer('forward_shuffle_idx', idx)
- self.register_buffer('backward_shuffle_idx', torch.argsort(idx))
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-class SpiralIn(AbstractPermuter):
- def __init__(self, H, W):
- super().__init__()
- assert H == W
- size = W
- indices = np.arange(size*size).reshape(size,size)
-
- i0 = size//2
- j0 = size//2-1
-
- i = i0
- j = j0
-
- idx = [indices[i0, j0]]
- step_mult = 0
- for c in range(1, size//2+1):
- step_mult += 1
- # steps left
- for k in range(step_mult):
- i = i - 1
- j = j
- idx.append(indices[i, j])
-
- # step down
- for k in range(step_mult):
- i = i
- j = j + 1
- idx.append(indices[i, j])
-
- step_mult += 1
- if c < size//2:
- # step right
- for k in range(step_mult):
- i = i + 1
- j = j
- idx.append(indices[i, j])
-
- # step up
- for k in range(step_mult):
- i = i
- j = j - 1
- idx.append(indices[i, j])
- else:
- # end reached
- for k in range(step_mult-1):
- i = i + 1
- idx.append(indices[i, j])
-
- assert len(idx) == size*size
- idx = idx[::-1]
- idx = torch.tensor(idx)
- self.register_buffer('forward_shuffle_idx', idx)
- self.register_buffer('backward_shuffle_idx', torch.argsort(idx))
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-class Random(nn.Module):
- def __init__(self, H, W):
- super().__init__()
- indices = np.random.RandomState(1).permutation(H*W)
- idx = torch.tensor(indices.ravel())
- self.register_buffer('forward_shuffle_idx', idx)
- self.register_buffer('backward_shuffle_idx', torch.argsort(idx))
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-class AlternateParsing(AbstractPermuter):
- def __init__(self, H, W):
- super().__init__()
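-        # Boustrophedon (snake) scan: row-major order with every odd row traversed right-to-left.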
- indices = np.arange(W*H).reshape(H,W)
- for i in range(1, H, 2):
- indices[i, :] = indices[i, ::-1]
- idx = indices.flatten()
- assert len(idx) == H*W
- idx = torch.tensor(idx)
- self.register_buffer('forward_shuffle_idx', idx)
- self.register_buffer('backward_shuffle_idx', torch.argsort(idx))
-
- def forward(self, x, reverse=False):
- if not reverse:
- return x[:, self.forward_shuffle_idx]
- else:
- return x[:, self.backward_shuffle_idx]
-
-
-if __name__ == "__main__":
- p0 = AlternateParsing(16, 16)
- print(p0.forward_shuffle_idx)
- print(p0.backward_shuffle_idx)
-
- x = torch.randint(0, 768, size=(11, 256))
- y = p0(x)
- xre = p0(y, reverse=True)
- assert torch.equal(x, xre)
-
- p1 = SpiralOut(2, 2)
- print(p1.forward_shuffle_idx)
- print(p1.backward_shuffle_idx)
diff --git a/spaces/Huniu/niuniu/app.py b/spaces/Huniu/niuniu/app.py
deleted file mode 100644
index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000
--- a/spaces/Huniu/niuniu/app.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from upcunet_v3 import RealWaifuUpScaler
-import gradio as gr
-import time
-import logging
-import os
-from PIL import ImageOps
-import numpy as np
-import math
-
-
-def greet(input_img, input_model_name, input_tile_mode):
- # if input_img.size[0] * input_img.size[1] > 256 * 256:
- # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1]))
- # x = int(input_img.size[0]/input_img.size[1]*y)
- # input_img = ImageOps.fit(input_img, (x, y))
- input_img = np.array(input_img)
- if input_model_name not in model_cache:
- t1 = time.time()
- upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu")
- t2 = time.time()
- logger.info(f'load model time, {t2 - t1}')
- model_cache[input_model_name] = upscaler
- else:
- upscaler = model_cache[input_model_name]
- logger.info(f'load model from cache')
-
- start = time.time()
- result = upscaler(input_img, tile_mode=input_tile_mode)
- end = time.time()
- logger.info(f'input_model_name, {input_model_name}')
- logger.info(f'input_tile_mode, {input_tile_mode}')
- logger.info(f'input shape, {input_img.shape}')
- logger.info(f'output shape, {result.shape}')
- logger.info(f'speed time, {end - start}')
- return result
-
-
-if __name__ == '__main__':
- logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s")
- logger = logging.getLogger()
-
- ModelPath = "weights_v3/"
- model_cache = {}
-
- input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='选择model')
- input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='选择tile_mode')
- input_img = gr.inputs.Image(label='image', type='pil')
-
- inputs = [input_img, input_model_name, input_tile_mode]
- outputs = "image"
- iface = gr.Interface(fn=greet,
- inputs=inputs,
- outputs=outputs,
- allow_screenshot=False,
- allow_flagging='never',
- examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]],
- article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN) '
- '感谢b站开源的项目,图片过大会导致内存不足,所有我将图片裁剪小,想体验大图片的效果请自行前往上面的链接。 '
- '修改bbb'
- 'The large image will lead to memory limit exceeded. So I crop and resize image. '
- 'If you want to experience the large image, please go to the link above.')
- iface.launch()
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/__init__.py
deleted file mode 100644
index 25408d28ec44cee56eb5fb3ab0c817dc04159e95..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/dataclass/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .configs import FairseqDataclass
-from .constants import ChoiceEnum
-
-
-__all__ = [
- "FairseqDataclass",
- "ChoiceEnum",
-]
diff --git a/spaces/IPN/DM_pb/app.py b/spaces/IPN/DM_pb/app.py
deleted file mode 100644
index 2b9191516016d4d441ed420cc73b4f698f4e3324..0000000000000000000000000000000000000000
--- a/spaces/IPN/DM_pb/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("huggingface/roberta-large-mnli").launch();
diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/data/imagenet.py b/spaces/Iceclear/StableSR/StableSR/ldm/data/imagenet.py
deleted file mode 100644
index 1c473f9c6965b22315dbb289eff8247c71bdc790..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/ldm/data/imagenet.py
+++ /dev/null
@@ -1,394 +0,0 @@
-import os, yaml, pickle, shutil, tarfile, glob
-import cv2
-import albumentations
-import PIL
-import numpy as np
-import torchvision.transforms.functional as TF
-from omegaconf import OmegaConf
-from functools import partial
-from PIL import Image
-from tqdm import tqdm
-from torch.utils.data import Dataset, Subset
-
-import taming.data.utils as tdu
-from taming.data.imagenet import str_to_indices, give_synsets_from_indices, download, retrieve
-from taming.data.imagenet import ImagePaths
-
-from ldm.modules.image_degradation import degradation_fn_bsr, degradation_fn_bsr_light
-
-
-def synset2idx(path_to_yaml="data/index_synset.yaml"):
- with open(path_to_yaml) as f:
- di2s = yaml.load(f)
- return dict((v,k) for k,v in di2s.items())
-
-
-class ImageNetBase(Dataset):
- def __init__(self, config=None):
- self.config = config or OmegaConf.create()
- if not type(self.config)==dict:
- self.config = OmegaConf.to_container(self.config)
- self.keep_orig_class_label = self.config.get("keep_orig_class_label", False)
- self.process_images = True # if False we skip loading & processing images and self.data contains filepaths
- self._prepare()
- self._prepare_synset_to_human()
- self._prepare_idx_to_synset()
- self._prepare_human_to_integer_label()
- self._load()
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, i):
- return self.data[i]
-
- def _prepare(self):
- raise NotImplementedError()
-
- def _filter_relpaths(self, relpaths):
- ignore = set([
- "n06596364_9591.JPEG",
- ])
- relpaths = [rpath for rpath in relpaths if not rpath.split("/")[-1] in ignore]
- if "sub_indices" in self.config:
- indices = str_to_indices(self.config["sub_indices"])
- synsets = give_synsets_from_indices(indices, path_to_yaml=self.idx2syn) # returns a list of strings
- self.synset2idx = synset2idx(path_to_yaml=self.idx2syn)
- files = []
- for rpath in relpaths:
- syn = rpath.split("/")[0]
- if syn in synsets:
- files.append(rpath)
- return files
- else:
- return relpaths
-
- def _prepare_synset_to_human(self):
- SIZE = 2655750
- URL = "https://heibox.uni-heidelberg.de/f/9f28e956cd304264bb82/?dl=1"
- self.human_dict = os.path.join(self.root, "synset_human.txt")
- if (not os.path.exists(self.human_dict) or
- not os.path.getsize(self.human_dict)==SIZE):
- download(URL, self.human_dict)
-
- def _prepare_idx_to_synset(self):
- URL = "https://heibox.uni-heidelberg.de/f/d835d5b6ceda4d3aa910/?dl=1"
- self.idx2syn = os.path.join(self.root, "index_synset.yaml")
- if (not os.path.exists(self.idx2syn)):
- download(URL, self.idx2syn)
-
- def _prepare_human_to_integer_label(self):
- URL = "https://heibox.uni-heidelberg.de/f/2362b797d5be43b883f6/?dl=1"
- self.human2integer = os.path.join(self.root, "imagenet1000_clsidx_to_labels.txt")
- if (not os.path.exists(self.human2integer)):
- download(URL, self.human2integer)
- with open(self.human2integer, "r") as f:
- lines = f.read().splitlines()
- assert len(lines) == 1000
- self.human2integer_dict = dict()
- for line in lines:
- value, key = line.split(":")
- self.human2integer_dict[key] = int(value)
-
- def _load(self):
- with open(self.txt_filelist, "r") as f:
- self.relpaths = f.read().splitlines()
- l1 = len(self.relpaths)
- self.relpaths = self._filter_relpaths(self.relpaths)
- print("Removed {} files from filelist during filtering.".format(l1 - len(self.relpaths)))
-
- self.synsets = [p.split("/")[0] for p in self.relpaths]
- self.abspaths = [os.path.join(self.datadir, p) for p in self.relpaths]
-
- unique_synsets = np.unique(self.synsets)
- class_dict = dict((synset, i) for i, synset in enumerate(unique_synsets))
- if not self.keep_orig_class_label:
- self.class_labels = [class_dict[s] for s in self.synsets]
- else:
- self.class_labels = [self.synset2idx[s] for s in self.synsets]
-
- with open(self.human_dict, "r") as f:
- human_dict = f.read().splitlines()
- human_dict = dict(line.split(maxsplit=1) for line in human_dict)
-
- self.human_labels = [human_dict[s] for s in self.synsets]
-
- labels = {
- "relpath": np.array(self.relpaths),
- "synsets": np.array(self.synsets),
- "class_label": np.array(self.class_labels),
- "human_label": np.array(self.human_labels),
- }
-
- if self.process_images:
- self.size = retrieve(self.config, "size", default=256)
- self.data = ImagePaths(self.abspaths,
- labels=labels,
- size=self.size,
- random_crop=self.random_crop,
- )
- else:
- self.data = self.abspaths
-
-
-class ImageNetTrain(ImageNetBase):
- NAME = "ILSVRC2012_train"
- URL = "http://www.image-net.org/challenges/LSVRC/2012/"
- AT_HASH = "a306397ccf9c2ead27155983c254227c0fd938e2"
- FILES = [
- "ILSVRC2012_img_train.tar",
- ]
- SIZES = [
- 147897477120,
- ]
-
- def __init__(self, process_images=True, data_root=None, **kwargs):
- self.process_images = process_images
- self.data_root = data_root
- super().__init__(**kwargs)
-
- def _prepare(self):
- if self.data_root:
- self.root = os.path.join(self.data_root, self.NAME)
- else:
- cachedir = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
- self.root = os.path.join(cachedir, "autoencoders/data", self.NAME)
-
- self.datadir = os.path.join(self.root, "data")
- self.txt_filelist = os.path.join(self.root, "filelist.txt")
- self.expected_length = 1281167
- self.random_crop = retrieve(self.config, "ImageNetTrain/random_crop",
- default=True)
- if not tdu.is_prepared(self.root):
- # prep
- print("Preparing dataset {} in {}".format(self.NAME, self.root))
-
- datadir = self.datadir
- if not os.path.exists(datadir):
- path = os.path.join(self.root, self.FILES[0])
- if not os.path.exists(path) or not os.path.getsize(path)==self.SIZES[0]:
- import academictorrents as at
- atpath = at.get(self.AT_HASH, datastore=self.root)
- assert atpath == path
-
- print("Extracting {} to {}".format(path, datadir))
- os.makedirs(datadir, exist_ok=True)
- with tarfile.open(path, "r:") as tar:
- tar.extractall(path=datadir)
-
- print("Extracting sub-tars.")
- subpaths = sorted(glob.glob(os.path.join(datadir, "*.tar")))
- for subpath in tqdm(subpaths):
- subdir = subpath[:-len(".tar")]
- os.makedirs(subdir, exist_ok=True)
- with tarfile.open(subpath, "r:") as tar:
- tar.extractall(path=subdir)
-
- filelist = glob.glob(os.path.join(datadir, "**", "*.JPEG"))
- filelist = [os.path.relpath(p, start=datadir) for p in filelist]
- filelist = sorted(filelist)
- filelist = "\n".join(filelist)+"\n"
- with open(self.txt_filelist, "w") as f:
- f.write(filelist)
-
- tdu.mark_prepared(self.root)
-
-
-class ImageNetValidation(ImageNetBase):
- NAME = "ILSVRC2012_validation"
- URL = "http://www.image-net.org/challenges/LSVRC/2012/"
- AT_HASH = "5d6d0df7ed81efd49ca99ea4737e0ae5e3a5f2e5"
- VS_URL = "https://heibox.uni-heidelberg.de/f/3e0f6e9c624e45f2bd73/?dl=1"
- FILES = [
- "ILSVRC2012_img_val.tar",
- "validation_synset.txt",
- ]
- SIZES = [
- 6744924160,
- 1950000,
- ]
-
- def __init__(self, process_images=True, data_root=None, **kwargs):
- self.data_root = data_root
- self.process_images = process_images
- super().__init__(**kwargs)
-
- def _prepare(self):
- if self.data_root:
- self.root = os.path.join(self.data_root, self.NAME)
- else:
- cachedir = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
- self.root = os.path.join(cachedir, "autoencoders/data", self.NAME)
- self.datadir = os.path.join(self.root, "data")
- self.txt_filelist = os.path.join(self.root, "filelist.txt")
- self.expected_length = 50000
- self.random_crop = retrieve(self.config, "ImageNetValidation/random_crop",
- default=False)
- if not tdu.is_prepared(self.root):
- # prep
- print("Preparing dataset {} in {}".format(self.NAME, self.root))
-
- datadir = self.datadir
- if not os.path.exists(datadir):
- path = os.path.join(self.root, self.FILES[0])
- if not os.path.exists(path) or not os.path.getsize(path)==self.SIZES[0]:
- import academictorrents as at
- atpath = at.get(self.AT_HASH, datastore=self.root)
- assert atpath == path
-
- print("Extracting {} to {}".format(path, datadir))
- os.makedirs(datadir, exist_ok=True)
- with tarfile.open(path, "r:") as tar:
- tar.extractall(path=datadir)
-
- vspath = os.path.join(self.root, self.FILES[1])
- if not os.path.exists(vspath) or not os.path.getsize(vspath)==self.SIZES[1]:
- download(self.VS_URL, vspath)
-
- with open(vspath, "r") as f:
- synset_dict = f.read().splitlines()
- synset_dict = dict(line.split() for line in synset_dict)
-
- print("Reorganizing into synset folders")
- synsets = np.unique(list(synset_dict.values()))
- for s in synsets:
- os.makedirs(os.path.join(datadir, s), exist_ok=True)
- for k, v in synset_dict.items():
- src = os.path.join(datadir, k)
- dst = os.path.join(datadir, v)
- shutil.move(src, dst)
-
- filelist = glob.glob(os.path.join(datadir, "**", "*.JPEG"))
- filelist = [os.path.relpath(p, start=datadir) for p in filelist]
- filelist = sorted(filelist)
- filelist = "\n".join(filelist)+"\n"
- with open(self.txt_filelist, "w") as f:
- f.write(filelist)
-
- tdu.mark_prepared(self.root)
-
-
-
-class ImageNetSR(Dataset):
- def __init__(self, size=None,
- degradation=None, downscale_f=4, min_crop_f=0.5, max_crop_f=1.,
- random_crop=True):
- """
- Imagenet Superresolution Dataloader
- Performs following ops in order:
- 1. crops a crop of size s from image either as random or center crop
- 2. resizes crop to size with cv2.area_interpolation
- 3. degrades resized crop with degradation_fn
-
- :param size: resizing to size after cropping
- :param degradation: degradation_fn, e.g. cv_bicubic or bsrgan_light
- :param downscale_f: Low Resolution Downsample factor
- :param min_crop_f: determines crop size s,
- where s = c * min_img_side_len with c sampled from interval (min_crop_f, max_crop_f)
- :param max_crop_f: ""
- :param data_root:
- :param random_crop:
- """
- self.base = self.get_base()
- assert size
- assert (size / downscale_f).is_integer()
- self.size = size
- self.LR_size = int(size / downscale_f)
- self.min_crop_f = min_crop_f
- self.max_crop_f = max_crop_f
- assert(max_crop_f <= 1.)
- self.center_crop = not random_crop
-
- self.image_rescaler = albumentations.SmallestMaxSize(max_size=size, interpolation=cv2.INTER_AREA)
-
-        self.pil_interpolation = False # gets reset later in case interp_op is from pillow
-
- if degradation == "bsrgan":
- self.degradation_process = partial(degradation_fn_bsr, sf=downscale_f)
-
- elif degradation == "bsrgan_light":
- self.degradation_process = partial(degradation_fn_bsr_light, sf=downscale_f)
-
- else:
- interpolation_fn = {
- "cv_nearest": cv2.INTER_NEAREST,
- "cv_bilinear": cv2.INTER_LINEAR,
- "cv_bicubic": cv2.INTER_CUBIC,
- "cv_area": cv2.INTER_AREA,
- "cv_lanczos": cv2.INTER_LANCZOS4,
- "pil_nearest": PIL.Image.NEAREST,
- "pil_bilinear": PIL.Image.BILINEAR,
- "pil_bicubic": PIL.Image.BICUBIC,
- "pil_box": PIL.Image.BOX,
- "pil_hamming": PIL.Image.HAMMING,
- "pil_lanczos": PIL.Image.LANCZOS,
- }[degradation]
-
- self.pil_interpolation = degradation.startswith("pil_")
-
- if self.pil_interpolation:
- self.degradation_process = partial(TF.resize, size=self.LR_size, interpolation=interpolation_fn)
-
- else:
- self.degradation_process = albumentations.SmallestMaxSize(max_size=self.LR_size,
- interpolation=interpolation_fn)
-
- def __len__(self):
- return len(self.base)
-
- def __getitem__(self, i):
- example = self.base[i]
- image = Image.open(example["file_path_"])
-
- if not image.mode == "RGB":
- image = image.convert("RGB")
-
- image = np.array(image).astype(np.uint8)
-
- min_side_len = min(image.shape[:2])
- crop_side_len = min_side_len * np.random.uniform(self.min_crop_f, self.max_crop_f, size=None)
- crop_side_len = int(crop_side_len)
-
- if self.center_crop:
- self.cropper = albumentations.CenterCrop(height=crop_side_len, width=crop_side_len)
-
- else:
- self.cropper = albumentations.RandomCrop(height=crop_side_len, width=crop_side_len)
-
- image = self.cropper(image=image)["image"]
- image = self.image_rescaler(image=image)["image"]
-
- if self.pil_interpolation:
- image_pil = PIL.Image.fromarray(image)
- LR_image = self.degradation_process(image_pil)
- LR_image = np.array(LR_image).astype(np.uint8)
-
- else:
- LR_image = self.degradation_process(image=image)["image"]
-
- example["image"] = (image/127.5 - 1.0).astype(np.float32)
- example["LR_image"] = (LR_image/127.5 - 1.0).astype(np.float32)
-
- return example
-
-
-class ImageNetSRTrain(ImageNetSR):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def get_base(self):
- with open("data/imagenet_train_hr_indices.p", "rb") as f:
- indices = pickle.load(f)
- dset = ImageNetTrain(process_images=False,)
- return Subset(dset, indices)
-
-
-class ImageNetSRValidation(ImageNetSR):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def get_base(self):
- with open("data/imagenet_val_hr_indices.p", "rb") as f:
- indices = pickle.load(f)
- dset = ImageNetValidation(process_images=False,)
- return Subset(dset, indices)
diff --git a/spaces/IvaElen/nlp_proj/lstm_preprocessing.py b/spaces/IvaElen/nlp_proj/lstm_preprocessing.py
deleted file mode 100644
index 5eee05ff35a989d9c7543e2d072402a4f85e1d7a..0000000000000000000000000000000000000000
--- a/spaces/IvaElen/nlp_proj/lstm_preprocessing.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import re
-import string
-import numpy as np
-import torch
-
-from nltk.corpus import stopwords
-stop_words = set(stopwords.words('english'))
-
-def data_preprocessing(text: str) -> str:
- """preprocessing string: lowercase, removing html-tags, punctuation and stopwords
-
- Args:
- text (str): input string for preprocessing
-
- Returns:
- str: preprocessed string
- """
-
- text = text.lower()
- text = re.sub('<.*?>', '', text) # Remove html tags
- text = re.sub(r'@\w+', " ", text) # Remove usernames
- text = re.sub(r'#\w+', " ", text) #Remove hash tags
- text = re.sub(r'\d+', " ", text) #Remove digits
- text = ''.join([c for c in text if c not in string.punctuation])# Remove punctuation
- text = [word for word in text.split() if word not in stop_words]
- text = ' '.join(text)
- return text
-
-def get_words_by_freq(sorted_words: list, n: int = 10) -> list:
- return list(filter(lambda x: x[1] > n, sorted_words))
-
-def padding(review_int: list, seq_len: int) -> np.array:
- """Make left-sided padding for input list of tokens
-
- Args:
- review_int (list): input list of tokens
-        seq_len (int): max length of sequence; if len(review_int[i]) > seq_len it will be trimmed, else it will be padded with zeros
-
- Returns:
- np.array: padded sequences
- """
- features = np.zeros((len(review_int), seq_len), dtype = int)
- for i, review in enumerate(review_int):
- if len(review) <= seq_len:
- zeros = list(np.zeros(seq_len - len(review)))
- new = zeros + review
- else:
- new = review[: seq_len]
- features[i, :] = np.array(new)
-
- return features
-
-def preprocess_single_string(
- input_string: str,
- seq_len: int,
- vocab_to_int: dict,
- ) -> torch.tensor:
- """Function for all preprocessing steps on a single string
-
- Args:
- input_string (str): input single string for preprocessing
-        seq_len (int): max length of sequence; if len(review_int[i]) > seq_len it will be trimmed, else it will be padded with zeros
- vocab_to_int (dict, optional): word corpus {'word' : int index}. Defaults to vocab_to_int.
-
- Returns:
- list: preprocessed string
- """
-
- preprocessed_string = data_preprocessing(input_string)
- result_list = []
- for word in preprocessed_string.split():
- try:
- result_list.append(vocab_to_int[word])
- except KeyError as e:
- print(f'{e}: not in dictionary!')
- result_padded = padding([result_list], seq_len)[0]
-
- return torch.tensor(result_padded)
diff --git a/spaces/JMalott/ai_architecture/dalle/utils/config.py b/spaces/JMalott/ai_architecture/dalle/utils/config.py
deleted file mode 100644
index a957c49fc683e86b04f10715285b61ba25563216..0000000000000000000000000000000000000000
--- a/spaces/JMalott/ai_architecture/dalle/utils/config.py
+++ /dev/null
@@ -1,123 +0,0 @@
-# ------------------------------------------------------------------------------------
-# minDALL-E
-# Copyright (c) 2021 Kakao Brain Corp. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------------------
-
-from typing import Optional, List
-from dataclasses import dataclass, field
-from omegaconf import OmegaConf
-
-
-@dataclass
-class DataConfig:
- dataset: Optional[str] = None
- tokenizer_type: str = 'CharBPE'
- context_length: int = 64
- image_resolution: int = 256
- transforms: str = 'dalle-vqvae'
- bpe_pdrop: Optional[float] = None
-
-
-@dataclass
-class Stage1Hparams:
- double_z: bool = False
- z_channels: int = 256
- resolution: int = 256
- in_channels: int = 3
- out_ch: int = 3
- ch: int = 128
- ch_mult: List[int] = field(default_factory=lambda: [1, 1, 2, 2, 4])
- num_res_blocks: int = 2
- attn_resolutions: List[int] = field(default_factory=lambda: [16])
- pdrop: float = 0.0
-
-
-@dataclass
-class Stage2Hparams:
- embed_dim: int = 1536
- n_layers: int = 42
- n_heads: int = 24
- n_dense_layers: int = 42
- ctx_len_img: int = 256
- ctx_len_txt: int = 64
- embd_pdrop: float = 0.0
- resid_pdrop: float = 0.0
- attn_pdrop: float = 0.0
- mlp_bias: bool = True
- attn_bias: bool = True
- gelu_use_approx: bool = False
- use_head_txt: bool = True
- n_classes: Optional[int] = None
-
-
-@dataclass
-class Stage1Config:
- type: str = 'vqgan'
- embed_dim: int = 256
- n_embed: int = 16384
- hparams: Stage1Hparams = Stage1Hparams()
-
-
-@dataclass
-class Stage2Config:
- type: str = 'transformer1d'
- vocab_size_txt: int = 16384
- vocab_size_img: int = 16384
- use_cls_cond: Optional[bool] = None
- hparams: Stage2Hparams = Stage2Hparams()
-
-
-@dataclass
-class WarmupConfig:
- epoch: int = 1
- multiplier: int = 1
- buffer_epoch: int = 0
- min_lr: float = 0.0
- mode: str = 'fix'
- peak_lr: float = 1e-4
- start_from_zero: bool = True
-
-
-@dataclass
-class OptConfig:
- opt_type: str = 'adamW'
- base_lr: float = 1e-4
- weight_decay: float = 1e-4
- betas: List[float] = field(default_factory=lambda: [0.9, 0.99])
- grad_clip_norm: float = 1.0
-
- sched_type: str = 'cosine'
- max_steps: int = 0
- min_lr: float = 0.0
-
-
-@dataclass
-class ExpConfig:
- local_batch_size: int = 4
- total_batch_size: int = 512
- valid_batch_size: int = 32
- epochs: int = 10
- save_ckpt_freq: int = 2
- test_freq: int = 1
- use_amp: bool = True
-
-
-@dataclass
-class DefaultConfig:
- dataset: DataConfig = DataConfig()
- stage1: Stage1Config = Stage1Config()
- stage2: Stage2Config = Stage2Config()
-
-
-@dataclass
-class FineTuningConfig:
- dataset: DataConfig = DataConfig()
- stage1: Stage1Config = Stage1Config()
- stage2: Stage2Config = Stage2Config()
- optimizer: OptConfig = OptConfig()
- experiment: ExpConfig = ExpConfig()
-
-
-def get_base_config(use_default=True):
- return OmegaConf.structured(DefaultConfig if use_default else FineTuningConfig)
diff --git a/spaces/JanhviSingh/mentalHealthChatbot/entrypoint.sh b/spaces/JanhviSingh/mentalHealthChatbot/entrypoint.sh
deleted file mode 100644
index b8c7c4501142186865f85e750356ecb74cf397e4..0000000000000000000000000000000000000000
--- a/spaces/JanhviSingh/mentalHealthChatbot/entrypoint.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/bash
-
-# Activate the virtual environment
-#source venv/bin/activate
-
-# Run the app
-python app.py
\ No newline at end of file
diff --git a/spaces/Jasonyoyo/CodeFormer/README.md b/spaces/Jasonyoyo/CodeFormer/README.md
deleted file mode 100644
index 6fafbe6f03ca8588a58a159d4ab39fe2256c9d88..0000000000000000000000000000000000000000
--- a/spaces/Jasonyoyo/CodeFormer/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: CodeFormer
-emoji: 🐼
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.4
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: sczhou/CodeFormer
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jimmie/identify_this_insect/app.py b/spaces/Jimmie/identify_this_insect/app.py
deleted file mode 100644
index 978e0e7543a41a6ec24c162b9a62e3042e9bd02d..0000000000000000000000000000000000000000
--- a/spaces/Jimmie/identify_this_insect/app.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# AUTOGENERATED! DO NOT EDIT! File to edit: . (unless otherwise specified).
-
-__all__ = ['repo_id', 'learn', 'classify_image', 'categories', 'title', 'description', 'article', 'image', 'label',
- 'examples', 'intf']
-
-# Cell
-import timm
-from fastai.vision.all import *
-import gradio as gr
-
-# Cell
-from huggingface_hub import from_pretrained_fastai
-
-repo_id = "Jimmie/identify-this-insect"
-
-learn = from_pretrained_fastai(repo_id)
-
-# Cell
-categories = learn.dls.vocab
-
-def classify_image(img):
- pred,idx,probs = learn.predict(img)
- return dict(zip(categories, map(float, probs)))
-
-# Cell
-
-title = "Identify This Insect"
-description = """
-
-This demo was created to distinguish between three types of insects: 'caterpillar', 'centipede', and 'millipede'.
-
-It is just a toy app created mostly because I once got a caterpillar sting and thought that the insect was a centipede and I was scared until I
-googled how different a centipede looks from a caterpillar haha! (The insect that had stung me looked more like the fourth example image below).
-
-Enjoy!
-
-
-"""
-
-article = "Check out how the model was trained: [Training Notebook](https://github.com/jimmiemunyi/deeplearning-experiments/blob/main/notebooks/Centipede_vs_Millipede_vs_Caterpillar.ipynb)."
-image = gr.inputs.Image(shape=(224,224))
-label = gr.outputs.Label()
-examples = ['caterpillar.jpg', 'centipede.jpg', 'millipede.jpg', 'caterpillar-2.jpg']
-
-intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples, title = title, description = description, article = article,
-enable_queue=True, cache_examples=False)
-intf.launch()
\ No newline at end of file
diff --git a/spaces/KVNAditya/Personal_News_Summarization_Assistant/app.py b/spaces/KVNAditya/Personal_News_Summarization_Assistant/app.py
deleted file mode 100644
index d6719fef5c7d3c8e57a6155acc8681541807c3bd..0000000000000000000000000000000000000000
--- a/spaces/KVNAditya/Personal_News_Summarization_Assistant/app.py
+++ /dev/null
@@ -1,120 +0,0 @@
-import streamlit as st
-import time
-import gnewsclient.gnewsclient as gnewsclient
-import nltk
-import tempfile
-import os
-
-from googletrans import Translator
-from gtts import gTTS
-from langchain.document_loaders import NewsURLLoader
-
-nltk.download('punkt')
-
-def func__init__gnc_lc_ts(args__mtch_btn):
- op_log = st.empty()
- op_log.text("connecting to GoogleNewsAPI ...")
- time.sleep(2)
- op_log.text("successfully connected to GoogleNewsAPI ...")
- time.sleep(2)
- op_log.text("fetching news ...")
- time.sleep(2)
- op_log.text("summarizing the news extracted from the urls ...")
- time.sleep(2)
- op_log.text("translating the summarized news results ...")
- time.sleep(2)
- op_log.text("returning the translated news results ...")
- time.sleep(2)
- op_log.empty()
- time.sleep(2)
- func__lc_ts(func__gnc(st_sb_opt_loc,st_sb_opt_tpc,st_sb_opt_nc),st_sb_opt_lang,args__mtch_btn)
-
-def func__gnc(args__opt_loc,args__opt_tpc,args__opt_nc):
- config__gnc_nc = gnewsclient.NewsClient(location=args__opt_loc,topic=args__opt_tpc,max_results=args__opt_nc)
- lst__ul__gnc_nc = [] # ul : url - links
- for itr_nc in range(args__opt_nc):
- try:
- lst__ul__gnc_nc.append(config__gnc_nc.get_news()[itr_nc]['link'])
- except:
- pass
- return lst__ul__gnc_nc
-
-def func__lc_ts(args__ul__gnc_nc,args__opt_lang,args__mtch_btn):
- config__ts_langs = {'english' : 'en','telugu' : 'te','hindi' : 'hi'}
- config__lc_nul = NewsURLLoader(args__ul__gnc_nc,nlp=True)
- if(args__mtch_btn==0):
- for itr in enumerate(config__lc_nul.load()):
- try:
- cls__gT = Translator()
- tle__lc_nul_gT,dspn__lc_nul_gT,smry__lc_nul_gT = '','',''
- str__tle_despn_smry = ''
-
- if((len(itr[1].metadata['title']) != 0)):
- tle__lc_nul = 'Title : ' + itr[1].metadata['title']
- tle__lc_nul_gT = cls__gT.translate(tle__lc_nul, dest=config__ts_langs[args__opt_lang]).text
- str__tle_despn_smry += str('.' + tle__lc_nul_gT + '.')
-
- if((len(itr[1].metadata['description']) != 0)):
- dspn__lc_nul = 'Description : ' + itr[1].metadata['description']
- dspn__lc_nul_gT = cls__gT.translate(dspn__lc_nul, dest=config__ts_langs[args__opt_lang]).text
- str__tle_despn_smry += str('.' + dspn__lc_nul_gT + '.')
-
- if((len(itr[1].metadata['summary']) != 0)):
- smry__lc_nul = 'Summary : ' + itr[1].metadata['summary']
- smry__lc_nul_gT = cls__gT.translate(smry__lc_nul, dest=config__ts_langs[args__opt_lang]).text
- str__tle_despn_smry += str('.' + smry__lc_nul_gT + '.')
-
- gTTS__str_tle_despn_smry = gTTS(str__tle_despn_smry,lang=config__ts_langs[args__opt_lang])
- tmpf__gTTS_str_tle_despn_smry = tempfile.NamedTemporaryFile(suffix='.wav',delete=False)
- gTTS__str_tle_despn_smry.save(tmpf__gTTS_str_tle_despn_smry.name)
- tmpf__gTTS_str_tle_despn_smry.close()
-
- st.markdown(f"[{tle__lc_nul_gT}]({args__ul__gnc_nc[itr[0]]})")
- st.audio(tmpf__gTTS_str_tle_despn_smry.name)
- st.write(dspn__lc_nul_gT)
- st.write(smry__lc_nul_gT)
-
- if(itr[0] < len(args__ul__gnc_nc)-1):
- st.subheader('',divider='green')
-
- except Exception as e:
- st.write(e)
-
-
- if(args__mtch_btn==1):
- for itr in config__lc_nul.load():
- try:
- st.write(itr.metadata)
- except Exception as e:
- st.write(e)
-
-
-config__gnc_nc = gnewsclient.NewsClient()
-lst_gnc_nc_locs = config__gnc_nc.locations
-lst_gnc_nc_tpcs = config__gnc_nc.topics
-lst_gnc_nc_langs = config__gnc_nc.languages
-lst_gnc_nc_langs = ['english','telugu','hindi']
-
-st.subheader('',divider='rainbow')
-st.markdown("
-
-Note that:
-1. The pre-built packages have to be used with corresponding version of CUDA and the official package of PyTorch.
- Otherwise, please build detectron2 from source.
-2. New packages are released every few months. Therefore, packages may not contain latest features in the main
- branch and may not be compatible with the main branch of a research project that uses detectron2
- (e.g. those in [projects](projects)).
-
-### Common Installation Issues
-
-Click each issue for its solutions:
-
-
-
-#### Undefined symbols that look like "TH..","at::Tensor...","torch..."
-
-
-
-This usually happens when detectron2 or torchvision is not
-compiled with the version of PyTorch you're running.
-
-If the error comes from a pre-built torchvision, uninstall torchvision and pytorch and reinstall them
-following [pytorch.org](http://pytorch.org). So the versions will match.
-
-If the error comes from a pre-built detectron2, check [release notes](https://github.com/facebookresearch/detectron2/releases),
-uninstall and reinstall the correct pre-built detectron2 that matches pytorch version.
-
-If the error comes from detectron2 or torchvision that you built manually from source,
-remove files you built (`build/`, `**/*.so`) and rebuild it so it can pick up the version of pytorch currently in your environment.
-
-If the above instructions do not resolve this problem, please provide an environment (e.g. a dockerfile) that can reproduce the issue.
-
-
-
-
-#### Missing torch dynamic libraries, OR segmentation fault immediately when using detectron2.
-
-This usually happens when detectron2 or torchvision is not
-compiled with the version of PyTorch you're running. See the previous common issue for the solution.
-
-
-
-
-#### Undefined C++ symbols (e.g. "GLIBCXX..") or C++ symbols not found.
-
-
-Usually it's because the library is compiled with a newer C++ compiler but run with an old C++ runtime.
-
-This often happens with old anaconda.
-It may help to run `conda update libgcc` to upgrade its runtime.
-
-The fundamental solution is to avoid the mismatch, either by compiling using older version of C++
-compiler, or run the code with proper C++ runtime.
-To run the code with a specific C++ runtime, you can use environment variable `LD_PRELOAD=/path/to/libstdc++.so`.
-
-
-
-
-
-"nvcc not found" or "Not compiled with GPU support" or "Detectron2 CUDA Compiler: not available".
-
-
-CUDA is not found when building detectron2.
-You should make sure
-
-```
-python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)'
-```
-
-print `(True, a directory with cuda)` at the time you build detectron2.
-
-Most models can run inference (but not training) without GPU support. To use CPUs, set `MODEL.DEVICE='cpu'` in the config.
-
-
-
-
-"invalid device function" or "no kernel image is available for execution".
-
-
-Two possibilities:
-
-* You build detectron2 with one version of CUDA but run it with a different version.
-
- To check whether it is the case,
- use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions.
- In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA"
- to contain cuda libraries of the same version.
-
- When they are inconsistent,
- you need to either install a different build of PyTorch (or build by yourself)
- to match your local CUDA installation, or install a different version of CUDA to match PyTorch.
-
-* PyTorch/torchvision/Detectron2 is not built for the correct GPU SM architecture (aka. compute capability).
-
- The architecture included by PyTorch/detectron2/torchvision is available in the "architecture flags" in
- `python -m detectron2.utils.collect_env`. It must include
- the architecture of your GPU, which can be found at [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus).
-
- If you're using pre-built PyTorch/detectron2/torchvision, they have included support for most popular GPUs already.
- If not supported, you need to build them from source.
-
- When building detectron2/torchvision from source, they detect the GPU device and build for only the device.
- This means the compiled code may not work on a different GPU device.
- To recompile them for the correct architecture, remove all installed/compiled files,
- and rebuild them with the `TORCH_CUDA_ARCH_LIST` environment variable set properly.
- For example, `export TORCH_CUDA_ARCH_LIST="6.0;7.0"` makes it compile for both P100s and V100s.
-
-
-
-
-#### Undefined CUDA symbols; Cannot open libcudart.so
-
-
-The version of NVCC you use to build detectron2 or torchvision does
-not match the version of CUDA you are running with.
-This often happens when using anaconda's CUDA runtime.
-
-Use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions.
-In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA"
-to contain cuda libraries of the same version.
-
-When they are inconsistent,
-you need to either install a different build of PyTorch (or build by yourself)
-to match your local CUDA installation, or install a different version of CUDA to match PyTorch.
-
-
-
-
-
-#### C++ compilation errors from NVCC / NVRTC, or "Unsupported gpu architecture"
-
-
-A few possibilities:
-
-1. Local CUDA/NVCC version has to match the CUDA version of your PyTorch. Both can be found in `python collect_env.py`.
- When they are inconsistent, you need to either install a different build of PyTorch (or build by yourself)
- to match your local CUDA installation, or install a different version of CUDA to match PyTorch.
-
-2. Local CUDA/NVCC version shall support the SM architecture (a.k.a. compute capability) of your GPU.
- The capability of your GPU can be found at [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus).
- The capability supported by NVCC is listed at [here](https://gist.github.com/ax3l/9489132).
- If your NVCC version is too old, this can be workaround by setting environment variable
- `TORCH_CUDA_ARCH_LIST` to a lower, supported capability.
-
-3. The combination of NVCC and GCC you use is incompatible. You need to change one of their versions.
- See [here](https://gist.github.com/ax3l/9489132) for some valid combinations.
- Notably, CUDA<=10.1.105 doesn't support GCC>7.3.
-
- The CUDA/GCC version used by PyTorch can be found by `print(torch.__config__.show())`.
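-
-For convenience, the version strings discussed above can be collected in one place. The sketch below
-only assumes that PyTorch is importable and that `nvcc` is on the PATH; it prints the values so they
-can be compared by eye.
-
-```python
-import subprocess
-
-import torch
-
-# CUDA version that PyTorch was compiled against
-print("PyTorch built with CUDA:", torch.version.cuda)
-
-# Full build configuration, including the GCC and CUDA versions used to build PyTorch
-print(torch.__config__.show())
-
-# Local NVCC version (assumes nvcc is installed and on PATH)
-print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)
-```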
-
-
-
-
-
-
-"ImportError: cannot import name '_C'".
-
-
-Please build and install detectron2 following the instructions above.
-
-Or, if you are running code from detectron2's root directory, `cd` to a different one.
-Otherwise you may not import the code that you installed.
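-
-To see which copy of detectron2 Python actually picks up, a quick check such as the following can
-help (a minimal sketch; it only assumes that detectron2 can be imported at all):
-
-```python
-import detectron2
-
-# If this prints a path inside your source checkout rather than site-packages,
-# you are importing the repository directory instead of the installed build.
-print(detectron2.__file__)
-```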
-
-
-
-
-
-### Any issue on Windows
-
-
-
-Detectron2 is continuously built on Windows with [CircleCI](https://app.circleci.com/pipelines/github/facebookresearch/detectron2?branch=main).
-However, we do not provide official support for it.
-PRs that improve code compatibility on Windows are welcome.
-
-
-
-
-### ONNX conversion segfault after some "TraceWarning"
-
-
-The ONNX package was compiled with a compiler that is too old.
-
-Please build and install ONNX from its source code using a compiler
-whose version is closer to what's used by PyTorch (available in `torch.__config__.show()`).
-
-
-
-
-
-"library not found for -lstdc++" on older version of MacOS
-
-
-See
-[this stackoverflow answer](https://stackoverflow.com/questions/56083725/macos-build-issues-lstdc-not-found-while-building-python-package).
-
-
-
-
-### Installation inside specific environments:
-
-* __Colab__: see our [Colab Tutorial](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
- which has step-by-step instructions.
-
-* __Docker__: The official [Dockerfile](docker) installs detectron2 with a few simple commands.
-
diff --git a/spaces/Tihsrah/Credit_Risk_Assessment/README.md b/spaces/Tihsrah/Credit_Risk_Assessment/README.md
deleted file mode 100644
index 92399dbc03def89cf6b71b43c73261619462d0f1..0000000000000000000000000000000000000000
--- a/spaces/Tihsrah/Credit_Risk_Assessment/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Credit Risk Assessment
-emoji: 🔥
-colorFrom: blue
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Together1415/bingo/README.md b/spaces/Together1415/bingo/README.md
deleted file mode 100644
index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000
--- a/spaces/Together1415/bingo/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: bingo
-emoji: 😊
-colorFrom: red
-colorTo: red
-sdk: docker
-license: mit
-duplicated_from: hf4all/bingo
----
-
-
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-It faithfully reproduces the main features of the New Bing web UI, works inside mainland China, is compatible with most Microsoft Bing AI features, and can be self-hosted.
-
-
-
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://github.com/weaigc/bingo/blob/main/license)
-
-For feedback and issue reports, please visit https://github.com/weaigc/bingo/issues
-
-
-
diff --git a/spaces/Walterchamy/Kiitec_virtual_assistant/README.md b/spaces/Walterchamy/Kiitec_virtual_assistant/README.md
deleted file mode 100644
index 90f1f94c801e2e1b58e6373bc7abfc7fb5542239..0000000000000000000000000000000000000000
--- a/spaces/Walterchamy/Kiitec_virtual_assistant/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Kiitec Virtual Assistant
-emoji: 💻
-colorFrom: gray
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Wauplin/gradio-user-history/app.py b/spaces/Wauplin/gradio-user-history/app.py
deleted file mode 100644
index 2cd4797777f4131c76287167914cc1e800375a99..0000000000000000000000000000000000000000
--- a/spaces/Wauplin/gradio-user-history/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-#!/usr/bin/env python
-import json
-import pathlib
-import tempfile
-from pathlib import Path
-
-import gradio as gr
-import gradio_user_history as gr_user_history
-from gradio_client import Client
-
-
-client = Client("runwayml/stable-diffusion-v1-5")
-
-
-def generate(prompt: str, profile: gr.OAuthProfile | None) -> tuple[str, list[str]]:
- out_dir = client.predict(prompt, fn_index=1)
-
- metadata = {
- "prompt": prompt,
- "negative_prompt": "",
- "guidance_scale": 0.9,
- }
- with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as metadata_file:
- json.dump(metadata, metadata_file)
-
- with (pathlib.Path(out_dir) / "captions.json").open() as f:
- paths = list(json.load(f).keys())
-
- # Saving user history
- for path in paths:
- gr_user_history.save_image(label=prompt, image=path, profile=profile, metadata=metadata)
-
- return paths # type: ignore
-
-
-with gr.Blocks(css="style.css") as demo:
- with gr.Group():
- prompt = gr.Text(show_label=False, placeholder="Prompt")
- gallery = gr.Gallery(
- show_label=False,
- columns=2,
- rows=2,
- height="600px",
- object_fit="scale-down",
- )
- prompt.submit(fn=generate, inputs=prompt, outputs=gallery)
-
-with gr.Blocks() as demo_with_history:
- with gr.Tab("README"):
- gr.Markdown(Path("README.md").read_text().split("---")[-1])
- with gr.Tab("Demo"):
- demo.render()
- with gr.Tab("Past generations"):
- gr_user_history.render()
-
-if __name__ == "__main__":
- demo_with_history.queue().launch()
diff --git a/spaces/Widium/Image-Recreation/functions/system/devices.py b/spaces/Widium/Image-Recreation/functions/system/devices.py
deleted file mode 100644
index e046ce76d48b77ad82502603c096c893775c9eef..0000000000000000000000000000000000000000
--- a/spaces/Widium/Image-Recreation/functions/system/devices.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# *************************************************************************** #
-# #
-# devices.py #
-# #
-# By: Widium #
-# Github : https://github.com/widium #
-# #
-# Created: 2023/05/05 10:57:02 by Widium #
-# Updated: 2023/05/05 10:57:02 by Widium #
-# #
-# **************************************************************************** #
-
-import os
-
-def deactivate_gpu():
- os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
-
-import tensorflow as tf
-from tensorflow.python.client import device_lib
-
-
-def get_available_devices():
- local_device_protos = device_lib.list_local_devices()
- devices = [x.name for x in local_device_protos]
- print("Available devices:", devices)
-
-# print("GPU AVAILABLE ?", tf.config.list_physical_devices('GPU'))
diff --git a/spaces/WinWut/Lofi-music-style-transfer/model.py b/spaces/WinWut/Lofi-music-style-transfer/model.py
deleted file mode 100644
index 216e46d1dd8b3d63b5458e1094de7687031863e9..0000000000000000000000000000000000000000
--- a/spaces/WinWut/Lofi-music-style-transfer/model.py
+++ /dev/null
@@ -1,657 +0,0 @@
-#Imports
-
-from __future__ import print_function, division
-import tensorflow as tf
-from glob import glob
-import scipy
-import soundfile as sf
-import matplotlib.pyplot as plt
-from IPython.display import clear_output
-from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Concatenate, Conv2D, Conv2DTranspose, GlobalAveragePooling2D, UpSampling2D, LeakyReLU, ReLU, Add, Multiply, Lambda, Dot, BatchNormalization, Activation, ZeroPadding2D, Cropping2D, Cropping1D
-from tensorflow.keras.models import Sequential, Model, load_model
-from tensorflow.keras.optimizers import Adam
-from tensorflow.keras.initializers import TruncatedNormal, he_normal
-import tensorflow.keras.backend as K
-import datetime
-import numpy as np
-import random
-import matplotlib.pyplot as plt
-import collections
-from PIL import Image
-from skimage.transform import resize
-import imageio
-import librosa
-import librosa.display
-from librosa.feature import melspectrogram
-import os
-import time
-import IPython
-
-#Hyperparameters
-
-hop=192 #hop size (window size = 6*hop)
-sr=16000 #sampling rate
-min_level_db=-100 #reference values to normalize data
-ref_level_db=20
-
-shape=24 #length of time axis of split spectrograms to feed to generator
-vec_len=128 #length of the vector generated by the siamese network
-bs = 16 #batch size
-delta = 2. #constant for siamese loss
-
-#There seems to be a problem with Tensorflow STFT, so we'll be using pytorch to handle offline mel-spectrogram generation and waveform reconstruction
-#For waveform reconstruction, a gradient-based method is used:
-
-''' Decorsière, Rémi, Peter L. Søndergaard, Ewen N. MacDonald, and Torsten Dau.
-"Inversion of auditory spectrograms, traditional spectrograms, and other envelope representations."
-IEEE/ACM Transactions on Audio, Speech, and Language Processing 23, no. 1 (2014): 46-56.'''
-
-#ORIGINAL CODE FROM https://github.com/yoyololicon/spectrogram-inversion
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from tqdm import tqdm
-from functools import partial
-import math
-import heapq
-from torchaudio.transforms import MelScale, Spectrogram
-
-
-specobj = Spectrogram(n_fft=6*hop, win_length=6*hop, hop_length=hop, pad=0, power=2, normalized=True)
-specfunc = specobj.forward
-melobj = MelScale(n_mels=hop, sample_rate=sr, f_min=0.,n_stft=577)
-melfunc = melobj.forward
-
-def melspecfunc(waveform):
- specgram = specfunc(waveform)
- mel_specgram = melfunc(specgram)
- return mel_specgram
-
-def spectral_convergence(input, target):
- return 20 * ((input - target).norm().log10() - target.norm().log10())
-
-def GRAD(spec, transform_fn, samples=None, init_x0=None, maxiter=1000, tol=1e-6, verbose=1, evaiter=10, lr=0.003):
-
- spec = torch.Tensor(spec)
- samples = (spec.shape[-1]*hop)-hop
-
- if init_x0 is None:
- init_x0 = spec.new_empty((1,samples)).normal_(std=1e-6)
- x = nn.Parameter(init_x0)
- T = spec
-
- criterion = nn.L1Loss()
- optimizer = torch.optim.Adam([x], lr=lr)
-
- bar_dict = {}
- metric_func = spectral_convergence
- bar_dict['spectral_convergence'] = 0
- metric = 'spectral_convergence'
-
- init_loss = None
- with tqdm(total=maxiter, disable=not verbose) as pbar:
- for i in range(maxiter):
- optimizer.zero_grad()
- V = transform_fn(x)
- loss = criterion(V, T)
- loss.backward()
- optimizer.step()
- lr = lr*0.9999
- for param_group in optimizer.param_groups:
- param_group['lr'] = lr
-
- if i % evaiter == evaiter - 1:
- with torch.no_grad():
- V = transform_fn(x)
- bar_dict[metric] = metric_func(V, spec).item()
- l2_loss = criterion(V, spec).item()
- pbar.set_postfix(**bar_dict, loss=l2_loss)
- pbar.update(evaiter)
-
- return x.detach().view(-1).cpu()
-
-def normalize(S):
- return np.clip((((S - min_level_db) / -min_level_db)*2.)-1., -1, 1)
-
-def denormalize(S):
- return (((np.clip(S, -1, 1)+1.)/2.) * -min_level_db) + min_level_db
-
-def prep(wv,hop=192):
- S = np.array(torch.squeeze(melspecfunc(torch.Tensor(wv).view(1,-1))).detach().cpu())
- S = librosa.power_to_db(S)-ref_level_db
- return normalize(S)
-
-def deprep(S):
- S = denormalize(S)+ref_level_db
- S = librosa.db_to_power(S)
- wv = GRAD(np.expand_dims(S,0), melspecfunc, maxiter=2000, evaiter=10, tol=1e-8)
- return np.array(np.squeeze(wv))
-
-#Helper functions
-
-#Generate spectrograms from waveform array
-def tospec(data):
- specs=np.empty(data.shape[0], dtype=object)
- for i in range(data.shape[0]):
- x = data[i]
- S=prep(x)
- S = np.array(S, dtype=np.float32)
- specs[i]=np.expand_dims(S, -1)
- print(specs.shape)
- return specs
-
-#Generate multiple spectrograms with a determined length from single wav file
-def tospeclong(path, length=4*16000):
- x, sr = librosa.load(path,sr=16000)
- x,_ = librosa.effects.trim(x)
- loudls = librosa.effects.split(x, top_db=50)
- xls = np.array([])
- for interv in loudls:
- xls = np.concatenate((xls,x[interv[0]:interv[1]]))
- x = xls
- num = x.shape[0]//length
- specs=np.empty(num, dtype=object)
- for i in range(num-1):
- a = x[i*length:(i+1)*length]
- S = prep(a)
- S = np.array(S, dtype=np.float32)
- try:
- sh = S.shape
- specs[i]=S
- except AttributeError:
- print('spectrogram failed')
- print(specs.shape)
- return specs
-
-#Waveform array from path of folder containing wav files
-def audio_array(path):
- ls = glob(f'{path}/*.wav')
- adata = []
- for i in range(len(ls)):
- try:
- x, sr = tf.audio.decode_wav(tf.io.read_file(ls[i]), 1)
- except:
- print(ls[i],"is broken")
- continue
- x = np.array(x, dtype=np.float32)
- adata.append(x)
- return np.array(adata)
-
-#Concatenate spectrograms in array along the time axis
-def testass(a):
- but=False
- con = np.array([])
- nim = a.shape[0]
- for i in range(nim):
- im = a[i]
- im = np.squeeze(im)
- if not but:
- con=im
- but=True
- else:
- con = np.concatenate((con,im), axis=1)
- return np.squeeze(con)
-
-#Split spectrograms in chunks with equal size
-def splitcut(data):
-  ls = []
-  mini = 0
-  minifinal = 10*shape #max spectrogram length
-  for i in range(data.shape[0]-1):
-    if data[i].shape[1]<=data[i+1].shape[1]:
-      mini = data[i].shape[1]
-    else:
-      mini = data[i+1].shape[1]
-    if mini>=3*shape and mini<minifinal:
-      minifinal = mini
-  for i in range(data.shape[0]):
-    x = data[i]
-    if x.shape[1]>=3*shape:
-      for n in range(x.shape[1]//minifinal):
-        ls.append(x[:,n*minifinal:n*minifinal+minifinal,:])
-      ls.append(x[:,-minifinal:,:])
-  return np.array(ls)
-
-#Adding Spectral Normalization to convolutional layers
-
-from tensorflow.python.keras.utils import conv_utils
-from tensorflow.python.ops import array_ops
-from tensorflow.python.ops import math_ops
-from tensorflow.python.ops import sparse_ops
-from tensorflow.python.ops import gen_math_ops
-from tensorflow.python.ops import standard_ops
-from tensorflow.python.eager import context
-from tensorflow.python.framework import tensor_shape
-
-def l2normalize(v, eps=1e-12):
- return v / (tf.norm(v) + eps)
-
-
-class ConvSN2D(tf.keras.layers.Conv2D):
-
- def __init__(self, filters, kernel_size, power_iterations=1, **kwargs):
- super(ConvSN2D, self).__init__(filters, kernel_size, **kwargs)
- self.power_iterations = power_iterations
-
-
- def build(self, input_shape):
- super(ConvSN2D, self).build(input_shape)
-
- if self.data_format == 'channels_first':
- channel_axis = 1
- else:
- channel_axis = -1
-
- self.u = self.add_weight(self.name + '_u',
- shape=tuple([1, self.kernel.shape.as_list()[-1]]),
- initializer=tf.initializers.RandomNormal(0, 1),
- trainable=False
- )
-
- def compute_spectral_norm(self, W, new_u, W_shape):
- for _ in range(self.power_iterations):
-
- new_v = l2normalize(tf.matmul(new_u, tf.transpose(W)))
- new_u = l2normalize(tf.matmul(new_v, W))
-
- sigma = tf.matmul(tf.matmul(new_v, W), tf.transpose(new_u))
- W_bar = W/sigma
-
- with tf.control_dependencies([self.u.assign(new_u)]):
- W_bar = tf.reshape(W_bar, W_shape)
-
- return W_bar
-
- def convolution_op(self, inputs, kernel):
- if self.padding == "causal":
- tf_padding = "VALID" # Causal padding handled in `call`.
- elif isinstance(self.padding, str):
- tf_padding = self.padding.upper()
- else:
- tf_padding = self.padding
-
- return tf.nn.convolution(
- inputs,
- kernel,
- strides=list(self.strides),
- padding=tf_padding,
- dilations=list(self.dilation_rate),
- )
- def call(self, inputs):
- W_shape = self.kernel.shape.as_list()
- W_reshaped = tf.reshape(self.kernel, (-1, W_shape[-1]))
- new_kernel = self.compute_spectral_norm(W_reshaped, self.u, W_shape)
- outputs = self.convolution_op(inputs, new_kernel)
-
- if self.use_bias:
- if self.data_format == 'channels_first':
- outputs = tf.nn.bias_add(outputs, self.bias, data_format='NCHW')
- else:
- outputs = tf.nn.bias_add(outputs, self.bias, data_format='NHWC')
- if self.activation is not None:
- return self.activation(outputs)
-
- return outputs
-
-
-class ConvSN2DTranspose(tf.keras.layers.Conv2DTranspose):
-
- def __init__(self, filters, kernel_size, power_iterations=1, **kwargs):
- super(ConvSN2DTranspose, self).__init__(filters, kernel_size, **kwargs)
- self.power_iterations = power_iterations
-
-
- def build(self, input_shape):
- super(ConvSN2DTranspose, self).build(input_shape)
-
- if self.data_format == 'channels_first':
- channel_axis = 1
- else:
- channel_axis = -1
-
- self.u = self.add_weight(self.name + '_u',
- shape=tuple([1, self.kernel.shape.as_list()[-1]]),
- initializer=tf.initializers.RandomNormal(0, 1),
- trainable=False
- )
-
- def compute_spectral_norm(self, W, new_u, W_shape):
- for _ in range(self.power_iterations):
-
- new_v = l2normalize(tf.matmul(new_u, tf.transpose(W)))
- new_u = l2normalize(tf.matmul(new_v, W))
-
- sigma = tf.matmul(tf.matmul(new_v, W), tf.transpose(new_u))
- W_bar = W/sigma
-
- with tf.control_dependencies([self.u.assign(new_u)]):
- W_bar = tf.reshape(W_bar, W_shape)
-
- return W_bar
-
- def call(self, inputs):
- W_shape = self.kernel.shape.as_list()
- W_reshaped = tf.reshape(self.kernel, (-1, W_shape[-1]))
- new_kernel = self.compute_spectral_norm(W_reshaped, self.u, W_shape)
-
- inputs_shape = array_ops.shape(inputs)
- batch_size = inputs_shape[0]
- if self.data_format == 'channels_first':
- h_axis, w_axis = 2, 3
- else:
- h_axis, w_axis = 1, 2
-
- height, width = inputs_shape[h_axis], inputs_shape[w_axis]
- kernel_h, kernel_w = self.kernel_size
- stride_h, stride_w = self.strides
-
- if self.output_padding is None:
- out_pad_h = out_pad_w = None
- else:
- out_pad_h, out_pad_w = self.output_padding
-
- out_height = conv_utils.deconv_output_length(height,
- kernel_h,
- padding=self.padding,
- output_padding=out_pad_h,
- stride=stride_h,
- dilation=self.dilation_rate[0])
- out_width = conv_utils.deconv_output_length(width,
- kernel_w,
- padding=self.padding,
- output_padding=out_pad_w,
- stride=stride_w,
- dilation=self.dilation_rate[1])
- if self.data_format == 'channels_first':
- output_shape = (batch_size, self.filters, out_height, out_width)
- else:
- output_shape = (batch_size, out_height, out_width, self.filters)
-
- output_shape_tensor = array_ops.stack(output_shape)
- outputs = K.conv2d_transpose(
- inputs,
- new_kernel,
- output_shape_tensor,
- strides=self.strides,
- padding=self.padding,
- data_format=self.data_format,
- dilation_rate=self.dilation_rate)
-
- if not context.executing_eagerly():
- out_shape = self.compute_output_shape(inputs.shape)
- outputs.set_shape(out_shape)
-
- if self.use_bias:
- outputs = tf.nn.bias_add(
- outputs,
- self.bias,
- data_format=conv_utils.convert_data_format(self.data_format, ndim=4))
-
- if self.activation is not None:
- return self.activation(outputs)
- return outputs
-
-
-class DenseSN(Dense):
-
- def build(self, input_shape):
- super(DenseSN, self).build(input_shape)
-
- self.u = self.add_weight(self.name + '_u',
- shape=tuple([1, self.kernel.shape.as_list()[-1]]),
- initializer=tf.initializers.RandomNormal(0, 1),
- trainable=False)
-
- def compute_spectral_norm(self, W, new_u, W_shape):
- new_v = l2normalize(tf.matmul(new_u, tf.transpose(W)))
- new_u = l2normalize(tf.matmul(new_v, W))
- sigma = tf.matmul(tf.matmul(new_v, W), tf.transpose(new_u))
- W_bar = W/sigma
- with tf.control_dependencies([self.u.assign(new_u)]):
- W_bar = tf.reshape(W_bar, W_shape)
- return W_bar
-
- def call(self, inputs):
- W_shape = self.kernel.shape.as_list()
- W_reshaped = tf.reshape(self.kernel, (-1, W_shape[-1]))
- new_kernel = self.compute_spectral_norm(W_reshaped, self.u, W_shape)
- rank = len(inputs.shape)
- if rank > 2:
- outputs = standard_ops.tensordot(inputs, new_kernel, [[rank - 1], [0]])
- if not context.executing_eagerly():
- shape = inputs.shape.as_list()
- output_shape = shape[:-1] + [self.units]
- outputs.set_shape(output_shape)
- else:
- inputs = math_ops.cast(inputs, self._compute_dtype)
- if K.is_sparse(inputs):
- outputs = sparse_ops.sparse_tensor_dense_matmul(inputs, new_kernel)
- else:
- outputs = gen_math_ops.mat_mul(inputs, new_kernel)
- if self.use_bias:
- outputs = tf.nn.bias_add(outputs, self.bias)
- if self.activation is not None:
- return self.activation(outputs)
- return outputs
-
-#Networks Architecture
-
-init = tf.keras.initializers.he_uniform()
-
-def conv2d(layer_input, filters, kernel_size=4, strides=2, padding='same', leaky=True, bnorm=True, sn=True):
- if leaky:
- Activ = LeakyReLU(alpha=0.2)
- else:
- Activ = ReLU()
- if sn:
- d = ConvSN2D(filters, kernel_size=kernel_size, strides=strides, padding=padding, kernel_initializer=init, use_bias=False)(layer_input)
- else:
- d = Conv2D(filters, kernel_size=kernel_size, strides=strides, padding=padding, kernel_initializer=init, use_bias=False)(layer_input)
- if bnorm:
- d = BatchNormalization()(d)
- d = Activ(d)
- return d
-
-def deconv2d(layer_input, layer_res, filters, kernel_size=4, conc=True, scalev=False, bnorm=True, up=True, padding='same', strides=2):
- if up:
- u = UpSampling2D((1,2))(layer_input)
- u = ConvSN2D(filters, kernel_size, strides=(1,1), kernel_initializer=init, use_bias=False, padding=padding)(u)
- else:
- u = ConvSN2DTranspose(filters, kernel_size, strides=strides, kernel_initializer=init, use_bias=False, padding=padding)(layer_input)
- if bnorm:
- u = BatchNormalization()(u)
- u = LeakyReLU(alpha=0.2)(u)
- if conc:
- u = Concatenate()([u,layer_res])
- return u
-
-#Extract function: splitting spectrograms
-def extract_image(im):
- im1 = Cropping2D(((0,0), (0, 2*(im.shape[2]//3))))(im)
- im2 = Cropping2D(((0,0), (im.shape[2]//3,im.shape[2]//3)))(im)
- im3 = Cropping2D(((0,0), (2*(im.shape[2]//3), 0)))(im)
- return im1,im2,im3
-
-#Assemble function: concatenating spectrograms
-def assemble_image(lsim):
- im1,im2,im3 = lsim
- imh = Concatenate(2)([im1,im2,im3])
- return imh
-
-#U-NET style architecture
-def build_generator(input_shape):
- h,w,c = input_shape
- inp = Input(shape=input_shape)
- #downscaling
- g0 = tf.keras.layers.ZeroPadding2D((0,1))(inp)
- g1 = conv2d(g0, 256, kernel_size=(h,3), strides=1, padding='valid')
- g2 = conv2d(g1, 256, kernel_size=(1,9), strides=(1,2))
- g3 = conv2d(g2, 256, kernel_size=(1,7), strides=(1,2))
- #upscaling
- g4 = deconv2d(g3,g2, 256, kernel_size=(1,7), strides=(1,2))
- g5 = deconv2d(g4,g1, 256, kernel_size=(1,9), strides=(1,2), bnorm=False)
- g6 = ConvSN2DTranspose(1, kernel_size=(h,1), strides=(1,1), kernel_initializer=init, padding='valid', activation='tanh')(g5)
- return Model(inp,g6, name='G')
-
-#Siamese Network
-def build_siamese(input_shape):
- h,w,c = input_shape
- inp = Input(shape=input_shape)
- g1 = conv2d(inp, 256, kernel_size=(h,3), strides=1, padding='valid', sn=False)
- g2 = conv2d(g1, 256, kernel_size=(1,9), strides=(1,2), sn=False)
- g3 = conv2d(g2, 256, kernel_size=(1,7), strides=(1,2), sn=False)
- g4 = Flatten()(g3)
- g5 = Dense(vec_len)(g4)
- return Model(inp, g5, name='S')
-
-#Discriminator (Critic) Network
-def build_critic(input_shape):
- h,w,c = input_shape
- inp = Input(shape=input_shape)
- g1 = conv2d(inp, 512, kernel_size=(h,3), strides=1, padding='valid', bnorm=False)
- g2 = conv2d(g1, 512, kernel_size=(1,9), strides=(1,2), bnorm=False)
- g3 = conv2d(g2, 512, kernel_size=(1,7), strides=(1,2), bnorm=False)
- g4 = Flatten()(g3)
- g4 = DenseSN(1, kernel_initializer=init)(g4)
- return Model(inp, g4, name='C')
-
-#Load past models from path to resume training or test
-save_model_path = '/content/drive/MyDrive/weights' #@param {type:"string"}
-def load(path):
- gen = build_generator((hop,shape,1))
- siam = build_siamese((hop,shape,1))
- critic = build_critic((hop,3*shape,1))
- gen.load_weights(path+'/gen.h5')
- critic.load_weights(path+'/critic.h5')
- siam.load_weights(path+'/siam.h5')
- return gen,critic,siam
-
-#Build models
-def build():
- gen = build_generator((hop,shape,1))
- siam = build_siamese((hop,shape,1))
- critic = build_critic((hop,3*shape,1)) #the discriminator accepts as input spectrograms of triple the width of those generated by the generator
- return gen,critic,siam
-
-#Show results mid-training
-def save_test_image_full(path):
- a = testgena()
- print(a.shape)
- ab = gen(a, training=False)
- ab = testass(ab)
- a = testass(a)
- abwv = deprep(ab)
- awv = deprep(a)
- sf.write(path+'/new_file.wav', abwv, sr)
- IPython.display.display(IPython.display.Audio(np.squeeze(abwv), rate=sr))
- IPython.display.display(IPython.display.Audio(np.squeeze(awv), rate=sr))
- fig, axs = plt.subplots(ncols=2)
- axs[0].imshow(np.flip(a, -2), cmap=None)
- axs[0].axis('off')
- axs[0].set_title('Source')
- axs[1].imshow(np.flip(ab, -2), cmap=None)
- axs[1].axis('off')
- axs[1].set_title('Generated')
- plt.show()
-
-#Save in training loop
-def save_end(epoch,gloss,closs,mloss,n_save=3,save_path=save_model_path): #use custom save_path (i.e. Drive '../content/drive/My Drive/')
- if epoch % n_save == 0:
- print('Saving...')
- path = f'{save_path}/MELGANVC-{str(gloss)[:9]}-{str(closs)[:9]}-{str(mloss)[:9]}'
- os.mkdir(path)
- gen.save_weights(path+'/gen.h5')
- critic.save_weights(path+'/critic.h5')
- siam.save_weights(path+'/siam.h5')
- save_test_image_full(path)
-
-#Get models and optimizers
-def get_networks(shape, load_model=False, path=None):
- if not load_model:
- gen,critic,siam = build()
- else:
- gen,critic,siam = load(path)
- print('Built networks')
-
- opt_gen = Adam(0.0001, 0.5)
- opt_disc = Adam(0.0001, 0.5)
-
- return gen,critic,siam, [opt_gen,opt_disc]
-
-#Set learning rate
-def update_lr(lr):
- opt_gen.learning_rate = lr
- opt_disc.learning_rate = lr
-
-#Build models and initialize optimizers
-load_model_path='MELGANVC-0.4886211-0.5750153-0-20230612T163214Z-001\MELGANVC-0.4886211-0.5750153-0' #@param {type:"string"}
-#If load_model=True, specify the path where the models are saved
-
-gen,critic,siam, [opt_gen,opt_disc] = get_networks(shape, load_model=True,path="MELGANVC-0.4886211-0.5750153-0")
-
-#After Training, use these functions to convert data with the generator and save the results
-
-#Assembling generated Spectrogram chunks into final Spectrogram
-def specass(a,spec):
- but=False
- con = np.array([])
- nim = a.shape[0]
- for i in range(nim-1):
- im = a[i]
- im = np.squeeze(im)
- if not but:
- con=im
- but=True
- else:
- con = np.concatenate((con,im), axis=1)
- diff = spec.shape[1]-(nim*shape)
- a = np.squeeze(a)
- con = np.concatenate((con,a[-1,:,-diff:]), axis=1)
- return np.squeeze(con)
-
-#Splitting input spectrogram into different chunks to feed to the generator
-def chopspec(spec):
- dsa=[]
- for i in range(spec.shape[1]//shape):
- im = spec[:,i*shape:i*shape+shape]
- im = np.reshape(im, (im.shape[0],im.shape[1],1))
- dsa.append(im)
- imlast = spec[:,-shape:]
- imlast = np.reshape(imlast, (imlast.shape[0],imlast.shape[1],1))
- dsa.append(imlast)
- return np.array(dsa, dtype=np.float32)
-
-#Converting from source Spectrogram to target Spectrogram
-def towave(spec, name, path='../content/', show=False):
- specarr = chopspec(spec)
- print(specarr.shape)
- a = specarr
- print('Generating...')
- ab = gen(a, training=False)
- print('Assembling and Converting...')
- a = specass(a,spec)
- ab = specass(ab,spec)
- awv = deprep(a)
- abwv = deprep(ab)
- print('Saving...')
- pathfin = f'{path}/{name}'
- try:
- os.mkdir(pathfin)
- except:
- pass
- sf.write(pathfin+'/AB.wav', abwv, sr)
- sf.write(pathfin+'/A.wav', awv, sr)
- print('Saved WAV!')
- IPython.display.display(IPython.display.Audio(np.squeeze(abwv), rate=sr))
- IPython.display.display(IPython.display.Audio(np.squeeze(awv), rate=sr))
- if show:
- fig, axs = plt.subplots(ncols=2)
- axs[0].imshow(np.flip(a, -2), cmap=None)
- axs[0].axis('off')
- axs[0].set_title('Source')
- axs[1].imshow(np.flip(ab, -2), cmap=None)
- axs[1].axis('off')
- axs[1].set_title('Generated')
- plt.show()
- return abwv
\ No newline at end of file
diff --git a/spaces/Xule/ChuanhuChatGPT/assets/custom.css b/spaces/Xule/ChuanhuChatGPT/assets/custom.css
deleted file mode 100644
index af5e9f2118b843b3bbd7627ed45e970c20b13bef..0000000000000000000000000000000000000000
--- a/spaces/Xule/ChuanhuChatGPT/assets/custom.css
+++ /dev/null
@@ -1,353 +0,0 @@
-:root {
- --chatbot-color-light: #F3F3F3;
- --chatbot-color-dark: #121111;
-}
-
-#app_title {
- font-weight: var(--prose-header-text-weight);
- font-size: var(--text-xxl);
- line-height: 1.3;
- text-align: left;
- margin-top: 6px;
- white-space: nowrap;
-}
-#description {
- text-align: center;
- margin:16px 0
-}
-
-/* Override gradio's footer info */
-/* footer {
- display: none !important;
-} */
-#footer {
- text-align: center;
-}
-#footer div {
- display: inline-block;
-}
-#footer .versions{
- font-size: 85%;
- opacity: 0.85;
-}
-
-#float_display {
- position: absolute;
- max-height: 30px;
-}
-/* user_info */
-#user_info {
- white-space: nowrap;
- position: absolute; left: 8em; top: .2em;
- z-index: var(--layer-2);
- box-shadow: var(--block-shadow);
- border: none; border-radius: var(--block-label-radius);
- background: var(--color-accent);
- padding: var(--block-label-padding);
- font-size: var(--block-label-text-size); line-height: var(--line-sm);
- width: auto; min-height: 30px!important;
- opacity: 1;
- transition: opacity 0.3s ease-in-out;
-}
-#user_info .wrap {
- opacity: 0;
-}
-#user_info p {
- color: white;
- font-weight: var(--block-label-text-weight);
-}
-#user_info.hideK {
- opacity: 0;
- transition: opacity 1s ease-in-out;
-}
-
-/* status_display */
-#status_display {
- display: flex;
- min-height: 2em;
- align-items: flex-end;
- justify-content: flex-end;
-}
-#status_display p {
- font-size: .85em;
- font-family: monospace;
- color: var(--body-text-color-subdued);
-}
-
-#status_display {
- transition: all 0.6s;
-}
-#chuanhu_chatbot {
- transition: height 0.3s ease;
-}
-
-/* usage_display */
-.insert_block {
- position: relative;
- margin: 0;
- padding: .5em 1em;
- box-shadow: var(--block-shadow);
- border-width: var(--block-border-width);
- border-color: var(--block-border-color);
- border-radius: var(--block-radius);
- background: var(--block-background-fill);
- width: 100%;
- line-height: var(--line-sm);
- min-height: 2em;
-}
-#usage_display p, #usage_display span {
- margin: 0;
- font-size: .85em;
- color: var(--body-text-color-subdued);
-}
-.progress-bar {
- background-color: var(--input-background-fill);;
- margin: 0 1em;
- height: 20px;
- border-radius: 10px;
- overflow: hidden;
-}
-.progress {
- background-color: var(--block-title-background-fill);
- height: 100%;
- border-radius: 10px;
- text-align: right;
- transition: width 0.5s ease-in-out;
-}
-.progress-text {
- /* color: white; */
- color: var(--color-accent) !important;
- font-size: 1em !important;
- font-weight: bold;
- padding-right: 10px;
- line-height: 20px;
-}
-
-.apSwitch {
- top: 2px;
- display: inline-block;
- height: 24px;
- position: relative;
- width: 48px;
- border-radius: 12px;
-}
-.apSwitch input {
- display: none !important;
-}
-.apSlider {
- background-color: var(--block-label-background-fill);
- bottom: 0;
- cursor: pointer;
- left: 0;
- position: absolute;
- right: 0;
- top: 0;
- transition: .4s;
- font-size: 18px;
- border-radius: 12px;
-}
-.apSlider::before {
- bottom: -1.5px;
- left: 1px;
- position: absolute;
- transition: .4s;
- content: "🌞";
-}
-input:checked + .apSlider {
- background-color: var(--block-label-background-fill);
-}
-input:checked + .apSlider::before {
- transform: translateX(23px);
- content:"🌚";
-}
-
-#submit_btn, #cancel_btn {
- height: 42px !important;
-}
-#submit_btn::before {
- content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-#cancel_btn::before {
- content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-/* list */
-ol:not(.options), ul:not(.options) {
- padding-inline-start: 2em !important;
-}
-
-/* Light theme (default) */
-#chuanhu_chatbot {
- background-color: var(--chatbot-color-light) !important;
- color: #000000 !important;
-}
-[data-testid = "bot"] {
- background-color: #FFFFFF !important;
-}
-[data-testid = "user"] {
- background-color: #95EC69 !important;
-}
-/* Dark theme */
-.dark #chuanhu_chatbot {
- background-color: var(--chatbot-color-dark) !important;
- color: #FFFFFF !important;
-}
-.dark [data-testid = "bot"] {
- background-color: #2C2C2C !important;
-}
-.dark [data-testid = "user"] {
- background-color: #26B561 !important;
-}
-
-/* Devices with screen width >= 500px */
-/* update on 2023.4.8: fine-grained height adjustments are now handled in JavaScript */
-@media screen and (min-width: 500px) {
- #chuanhu_chatbot {
- height: calc(100vh - 200px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
-}
-/* Devices with screen width < 500px */
-@media screen and (max-width: 499px) {
- #chuanhu_chatbot {
- height: calc(100vh - 140px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
- [data-testid = "bot"] {
- max-width: 98% !important;
- }
- #app_title h1{
- letter-spacing: -1px; font-size: 22px;
- }
-}
-/* Chat bubbles */
-[class *= "message"] {
- border-radius: var(--radius-xl) !important;
- border: none;
- padding: var(--spacing-xl) !important;
- font-size: var(--text-md) !important;
- line-height: var(--line-md) !important;
- min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
- min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
- max-width: 85%;
- border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
- max-width: 85%;
- width: auto !important;
- border-bottom-right-radius: 0 !important;
-}
-/* Tables */
-table {
- margin: 1em 0;
- border-collapse: collapse;
- empty-cells: show;
-}
-td,th {
- border: 1.2px solid var(--border-color-primary) !important;
- padding: 0.2em;
-}
-thead {
- background-color: rgba(175,184,193,0.2);
-}
-thead th {
- padding: .5em .2em;
-}
-/* Inline code */
-code {
- display: inline;
- white-space: break-spaces;
- border-radius: 6px;
- margin: 0 2px 0 2px;
- padding: .2em .4em .1em .4em;
- background-color: rgba(175,184,193,0.2);
-}
-/* Code blocks */
-pre code {
- display: block;
- overflow: auto;
- white-space: pre;
- background-color: hsla(0, 0%, 0%, 80%)!important;
- border-radius: 10px;
- padding: 1.4em 1.2em 0em 1.4em;
- margin: 1.2em 2em 1.2em 0.5em;
- color: #FFF;
- box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2);
-}
-/* Code highlighting styles */
-.highlight .hll { background-color: #49483e }
-.highlight .c { color: #75715e } /* Comment */
-.highlight .err { color: #960050; background-color: #1e0010 } /* Error */
-.highlight .k { color: #66d9ef } /* Keyword */
-.highlight .l { color: #ae81ff } /* Literal */
-.highlight .n { color: #f8f8f2 } /* Name */
-.highlight .o { color: #f92672 } /* Operator */
-.highlight .p { color: #f8f8f2 } /* Punctuation */
-.highlight .ch { color: #75715e } /* Comment.Hashbang */
-.highlight .cm { color: #75715e } /* Comment.Multiline */
-.highlight .cp { color: #75715e } /* Comment.Preproc */
-.highlight .cpf { color: #75715e } /* Comment.PreprocFile */
-.highlight .c1 { color: #75715e } /* Comment.Single */
-.highlight .cs { color: #75715e } /* Comment.Special */
-.highlight .gd { color: #f92672 } /* Generic.Deleted */
-.highlight .ge { font-style: italic } /* Generic.Emph */
-.highlight .gi { color: #a6e22e } /* Generic.Inserted */
-.highlight .gs { font-weight: bold } /* Generic.Strong */
-.highlight .gu { color: #75715e } /* Generic.Subheading */
-.highlight .kc { color: #66d9ef } /* Keyword.Constant */
-.highlight .kd { color: #66d9ef } /* Keyword.Declaration */
-.highlight .kn { color: #f92672 } /* Keyword.Namespace */
-.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */
-.highlight .kr { color: #66d9ef } /* Keyword.Reserved */
-.highlight .kt { color: #66d9ef } /* Keyword.Type */
-.highlight .ld { color: #e6db74 } /* Literal.Date */
-.highlight .m { color: #ae81ff } /* Literal.Number */
-.highlight .s { color: #e6db74 } /* Literal.String */
-.highlight .na { color: #a6e22e } /* Name.Attribute */
-.highlight .nb { color: #f8f8f2 } /* Name.Builtin */
-.highlight .nc { color: #a6e22e } /* Name.Class */
-.highlight .no { color: #66d9ef } /* Name.Constant */
-.highlight .nd { color: #a6e22e } /* Name.Decorator */
-.highlight .ni { color: #f8f8f2 } /* Name.Entity */
-.highlight .ne { color: #a6e22e } /* Name.Exception */
-.highlight .nf { color: #a6e22e } /* Name.Function */
-.highlight .nl { color: #f8f8f2 } /* Name.Label */
-.highlight .nn { color: #f8f8f2 } /* Name.Namespace */
-.highlight .nx { color: #a6e22e } /* Name.Other */
-.highlight .py { color: #f8f8f2 } /* Name.Property */
-.highlight .nt { color: #f92672 } /* Name.Tag */
-.highlight .nv { color: #f8f8f2 } /* Name.Variable */
-.highlight .ow { color: #f92672 } /* Operator.Word */
-.highlight .w { color: #f8f8f2 } /* Text.Whitespace */
-.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */
-.highlight .mf { color: #ae81ff } /* Literal.Number.Float */
-.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */
-.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */
-.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */
-.highlight .sa { color: #e6db74 } /* Literal.String.Affix */
-.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */
-.highlight .sc { color: #e6db74 } /* Literal.String.Char */
-.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */
-.highlight .sd { color: #e6db74 } /* Literal.String.Doc */
-.highlight .s2 { color: #e6db74 } /* Literal.String.Double */
-.highlight .se { color: #ae81ff } /* Literal.String.Escape */
-.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */
-.highlight .si { color: #e6db74 } /* Literal.String.Interpol */
-.highlight .sx { color: #e6db74 } /* Literal.String.Other */
-.highlight .sr { color: #e6db74 } /* Literal.String.Regex */
-.highlight .s1 { color: #e6db74 } /* Literal.String.Single */
-.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */
-.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */
-.highlight .fm { color: #a6e22e } /* Name.Function.Magic */
-.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */
-.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */
-.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */
-.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */
-.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */
diff --git a/spaces/Yntec/photoMovieX/style.css b/spaces/Yntec/photoMovieX/style.css
deleted file mode 100644
index 142c4b92e938cc8cd33cde5ab580b5fd6a2aac78..0000000000000000000000000000000000000000
--- a/spaces/Yntec/photoMovieX/style.css
+++ /dev/null
@@ -1,97 +0,0 @@
-#col-container {color: white;
- max-width: 1200px;
- margin-left: auto;
- margin-right: auto;
-}
-a {
- color: inherit;
- text-decoration: underline;
-}
-.gradio-container {
- color: #ffaa66;
- background-color: #005566;
- font-family: 'IBM Plex Sans', sans-serif;
-}
-.gr-button {
- color: #ffffff !important;
- text-shadow: 1px 1px 0 rgba(0, 0, 0, 1) !important;
- background-image: linear-gradient(#76635a, #d2a489) !important;
- border-radius: 24px !important;
- border: solid 1px !important;
- border-top-color: #ffc99f !important;
- border-right-color: #000000 !important;
- border-bottom-color: #000000 !important;
- border-left-color: #ffc99f !important;
- padding: 6px 30px;
-}
-input[type='range'] {
- accent-color: #9d66e5;
-}
-.dark input[type='range'] {
- accent-color: #dfdfdf;
-}
-.container {
- color: #ffaa66;
- max-width: 1200px;
- margin: auto;
- padding-top: 1.5rem;
-}
-#gallery {
- color: #ffaa66;
- min-height: 22rem;
- margin-bottom: 15px;
- margin-left: auto;
- margin-right: auto;
- border-bottom-right-radius: .5rem !important;
- border-bottom-left-radius: .5rem !important;
-}
-#gallery>div>.h-full {
- color: #ffaa66;
- min-height: 20rem;
-}
-.details:hover {
- text-decoration: underline;
-}
-.gr-button:focus {
- border-color: rgb(255 160 0 / var(--tw-border-opacity));
- outline: none;
- box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
- --tw-border-opacity: 1;
- --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
- --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color);
- --tw-ring-color: rgb(0 0 0 / var(--tw-ring-opacity));
- --tw-ring-opacity: .5;
-}
-#advanced-options {
- color: #ffaa66;
- margin-bottom: 20px;
-}
-.footer {
- color: #ffaa66;
- margin-bottom: 45px;
- margin-top: 35px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
-}
-.footer>p {
- color: #ffaa66;
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
-}
-.dark .logo{ filter: invert(1); }
-.dark .footer {
- border-color: #303030;
-}
-.dark .footer>p {
- background: #0b0f19;
-}
-.acknowledgments h4{
- color: #ffaa66;
- margin: 1.25em 0 .25em 0;
- font-weight: bold;
- font-size: 115%;
-}
-
diff --git a/spaces/Yuelili/RealNagrse/scripts/pytorch2onnx.py b/spaces/Yuelili/RealNagrse/scripts/pytorch2onnx.py
deleted file mode 100644
index 09d99b2e0171265e70e7507ed8e882b616b449a1..0000000000000000000000000000000000000000
--- a/spaces/Yuelili/RealNagrse/scripts/pytorch2onnx.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import argparse
-import torch
-import torch.onnx
-from basicsr.archs.rrdbnet_arch import RRDBNet
-
-
-def main(args):
- # An instance of the model
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
- if args.params:
- keyname = 'params'
- else:
- keyname = 'params_ema'
- model.load_state_dict(torch.load(args.input)[keyname])
- # set the train mode to false since we will only run the forward pass.
- model.train(False)
- model.cpu().eval()
-
- # An example input
- x = torch.rand(1, 3, 64, 64)
- # Export the model
- with torch.no_grad():
- torch_out = torch.onnx._export(model, x, args.output, opset_version=11, export_params=True)
- print(torch_out.shape)
-
-
-if __name__ == '__main__':
- """Convert pytorch model to onnx models"""
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--input', type=str, default='experiments/pretrained_models/RealESRGAN_x4plus.pth', help='Input model path')
- parser.add_argument('--output', type=str, default='realesrgan-x4.onnx', help='Output onnx path')
- parser.add_argument('--params', action='store_false', help='Use params instead of params_ema')
- args = parser.parse_args()
-
- main(args)
diff --git a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/poetry_diacritizer/models/gpt_model.py b/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/poetry_diacritizer/models/gpt_model.py
deleted file mode 100644
index 4a64aaf9e56067543a2aab17d9b20f6170b5b75f..0000000000000000000000000000000000000000
--- a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/poetry_diacritizer/models/gpt_model.py
+++ /dev/null
@@ -1,213 +0,0 @@
-"""
-OpenAI's GPT-2 ported to PyTorch.
-"""
-import math
-
-import attr
-import torch
-from torch import nn
-from torch.nn import functional as F
-import torch.utils.checkpoint
-
-
-@attr.s(auto_attribs=True, frozen=True)
-class HParams:
- n_vocab: int
- n_ctx: int
- n_embed: int
- n_hidden: int
- n_head: int
- n_layer: int
- gradient_checkpointing: bool = False
-
-
-class Model(nn.Module):
- def __init__(self, hparams: HParams):
- super().__init__()
- self.hparams = hparams
- self.wpe = nn.Embedding(hparams.n_ctx, hparams.n_embed)
- nn.init.normal_(self.wpe.weight, std=0.01)
- self.wte = nn.Embedding(hparams.n_vocab, hparams.n_embed)
- nn.init.normal_(self.wte.weight, std=0.02)
- self.blocks = nn.ModuleList(
- [Block(hparams) for _ in range(hparams.n_layer)])
- self.ln_f = Norm(self.hparams.n_hidden)
- if hparams.n_hidden != hparams.n_embed:
- self.in_proj = Conv1D(hparams.n_embed, hparams.n_hidden)
- self.out_proj = Conv1D(hparams.n_hidden, hparams.n_embed)
- else:
- self.in_proj = self.out_proj = None
-
- def forward(self, x, past=None):
- # Embedding
- past_length = 0 if past is None else past.shape[-2]
- batch_size, n_ctx = x.shape
- position = position_for(batch_size, n_ctx, past_length, x.device)
- h = self.wte(x) + self.wpe(position)
- assert h.shape == (batch_size, n_ctx, self.hparams.n_embed)
- if self.in_proj:
- h = self.in_proj(h)
- # Transformer
- presents = []
- for i, block in enumerate(self.blocks):
- if self.hparams.gradient_checkpointing:
- h, present = torch.utils.checkpoint.checkpoint(
- block, h, past[:, i] if past is not None else None)
- else:
- h, present = block(
- h, past=past[:, i] if past is not None else None)
- presents.append(present)
- h = self.ln_f(h)
- if self.out_proj:
- h = self.out_proj(h)
- # Output logits
- h_flat = h.reshape([batch_size * n_ctx, self.hparams.n_embed])
- logits = torch.matmul(h_flat, self.wte.weight.t())
- logits = logits.reshape([batch_size, n_ctx, self.hparams.n_vocab])
- return {
- 'presents': torch.stack(tuple(presents), dim=1),
- 'logits': logits,
- }
-
-
-class Block(nn.Module):
- def __init__(self, hparams: HParams):
- super().__init__()
- self.ln_1 = Norm(hparams.n_hidden)
- self.ln_2 = Norm(hparams.n_hidden)
- self.mlp = MLP(hparams.n_hidden, hparams.n_hidden * 4)
- self.attn = Attention(hparams)
-
- def forward(self, x, past):
- a, present = self.attn(self.ln_1(x), past=past)
- x = x + a
- m = self.mlp(self.ln_2(x))
- x = x + m
- return x, present
-
-
-class Norm(nn.Module):
- """ Normalize to mean = 0, std = 1, then do a diagonal affine transform.
- """
- def __init__(self, n_features, *, dim=-1, epsilon=1e-5):
- super().__init__()
- self.n_features = n_features
- self.dim = dim
- self.epsilon = epsilon
- self.g = nn.Parameter(torch.ones(n_features))
- self.b = nn.Parameter(torch.zeros(n_features))
-
- def forward(self, x):
- assert x.shape[-1] == self.n_features
- u = torch.mean(x, dim=self.dim, keepdim=True)
- xmu = x - u
- s = torch.mean(xmu * xmu, dim=self.dim, keepdim=True)
- return xmu * torch.rsqrt(s + self.epsilon) * self.g + self.b
-
-
-class MLP(nn.Module):
- def __init__(self, n_features, n_hidden):
- super().__init__()
- self.c_fc = Conv1D(n_features, n_hidden)
- self.c_proj = Conv1D(n_hidden, n_features)
-
- def forward(self, x):
- x = gelu(self.c_fc(x))
- x = self.c_proj(x)
- return x
-
-
-class Attention(nn.Module):
- def __init__(self, hparams: HParams):
- super().__init__()
- assert hparams.n_hidden % hparams.n_head == 0
- self.hparams = hparams
- self.c_attn = Conv1D(hparams.n_hidden, hparams.n_hidden * 3)
- self.c_proj = Conv1D(hparams.n_hidden, hparams.n_hidden)
-
- def forward(self, x, past):
- assert len(x.shape) == 3 # [batch, sequence, features]
- assert x.shape[-1] == self.hparams.n_hidden
- if past is not None:
- # Should be [batch, 2, heads, sequence, features], where 2 is [k, v]
- assert len(past.shape) == 5
- assert past.shape[-1] == self.hparams.n_hidden
- c = self.c_attn(x)
- q, k, v = map(self.split_heads, torch.split(c, x.shape[-1], dim=2))
- present = torch.stack([k, v], dim=1)
- if past is not None:
- pk, pv = past[:, 0], past[:, 1]
- k = torch.cat([pk, k], dim=-2)
- v = torch.cat([pv, v], dim=-2)
- a = self.multihead_attn(q, k, v)
- a = self.merge_heads(a)
- a = self.c_proj(a)
- return a, present
-
- def split_heads(self, x):
- """ From [batch, sequence, features] to
- [batch, heads, sequence, features].
- """
- return self.split_states(x, self.hparams.n_head).permute(0, 2, 1, 3)
-
- @staticmethod
- def split_states(x, n):
- """ Reshape the last dimension of x into [n, x.shape[-1]/n].
- """
- *start, m = x.shape
- return x.reshape(start + [n, m // n])
-
- def merge_heads(self, x):
- """ Reverse of split_heads.
- """
- return self.merge_states(x.permute(0, 2, 1, 3))
-
- @staticmethod
- def merge_states(x):
- """ Smash the last two dimensions of x into a single dimension.
- """
- *start, a, b = x.shape
- return x.reshape(start + [a * b])
-
- def mask_attn_weights(self, w):
- # w has shape [batch, heads, dst_sequence, src_sequence],
- # where information flows from src to dst.
- _, _, nd, ns = w.shape
- b = self.attention_mask(nd, ns, dtype=w.dtype, device=w.device)
- b = b.reshape((1, 1, nd, ns))
- w = w * b - 1e4 * (1 - b)
- return w
-
- @staticmethod
- def attention_mask(nd, ns, *, dtype, device=None):
- """ 1's in the lower triangle, counting from the lower right corner.
- Same as tf.matrix_band_part(tf.ones([nd, ns]), -1, ns-nd),
- but doesn't produce garbage on TPUs.
- """
- i = torch.arange(0, nd).unsqueeze(1)
- j = torch.arange(ns)
- return (i >= j - ns + nd).to(dtype=dtype, device=device)
-
- def multihead_attn(self, q, k, v):
- # q, k, v have shape [batch, heads, sequence, features]
- w = torch.matmul(q, k.permute(0, 1, 3, 2))
- w = w / math.sqrt(v.shape[-1])
- w = self.mask_attn_weights(w)
- w = F.softmax(w, dim=-1)
- a = torch.matmul(w, v)
- return a
-
-
-class Conv1D(nn.Linear):
- def reset_parameters(self):
- nn.init.normal_(self.weight, std=0.02)
- nn.init.zeros_(self.bias)
-
-
-def gelu(x, c=math.sqrt(2 / math.pi)):
- return 0.5 * x * (1 + torch.tanh(c * (x + 0.044715 * torch.pow(x, 3))))
-
-
-def position_for(batch_size, n_steps, past_length, device=None):
- return (torch.arange(past_length, n_steps + past_length, device=device)
- .unsqueeze(0).repeat(batch_size, 1))
diff --git a/spaces/abhaskumarsinha/MinimalGPT-Ragdoll/subword/bpe_toy.py b/spaces/abhaskumarsinha/MinimalGPT-Ragdoll/subword/bpe_toy.py
deleted file mode 100644
index 0421b255861cb56eb40bf58a8225807cc396e968..0000000000000000000000000000000000000000
--- a/spaces/abhaskumarsinha/MinimalGPT-Ragdoll/subword/bpe_toy.py
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-# Author: Rico Sennrich
-
-"""Use byte pair encoding (BPE) to learn a variable-length encoding of the vocabulary in a text.
-Unlike the original BPE, it does not compress the plain text, but can be used to reduce the vocabulary
-of a text to a configurable number of symbols, with only a small increase in the number of tokens.
-This is an (inefficient) toy implementation that shows the algorithm. For processing large datasets,
-indexing and incremental updates can be used to speed up the implementation (see learn_bpe.py).
-
-Reference:
-Rico Sennrich, Barry Haddow and Alexandra Birch (2016). Neural Machine Translation of Rare Words with Subword Units.
-Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Berlin, Germany.
-"""
-
-
-import re
-import sys
-import collections
-
-def get_stats(vocab):
- pairs = collections.defaultdict(int)
- for word, freq in vocab.items():
- symbols = word.split()
- for i in range(len(symbols)-1):
- pairs[symbols[i],symbols[i+1]] += freq
- return pairs
-
-def merge_vocab(pair, v_in):
-    v_out = {}
-    bigram_pattern = re.escape(' '.join(pair))
-    p = re.compile(r'(?<!\S)' + bigram_pattern + r'(?!\S)')
-    for word in v_in:
-        w_out = p.sub(''.join(pair), word)
-        v_out[w_out] = v_in[word]
-    return v_out
-
-vocab = {'l o w </w>' : 5, 'l o w e r </w>' : 2,
-         'n e w e s t </w>' : 6, 'w i d e s t </w>' : 3}
-num_merges = 15
-for i in range(num_merges):
- pairs = get_stats(vocab)
- try:
- best = max(pairs, key=pairs.get)
- except ValueError:
- break
- if pairs[best] < 2:
- sys.stderr.write('no pair has frequency > 1. Stopping\n')
- break
- vocab = merge_vocab(best, vocab)
- print(best)
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/backbones/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/backbones/__init__.py
deleted file mode 100644
index e54b088acf644d285ecbeb1440c414e722b9db58..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/backbones/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from .darknet import Darknet
-from .detectors_resnet import DetectoRS_ResNet
-from .detectors_resnext import DetectoRS_ResNeXt
-from .hourglass import HourglassNet
-from .hrnet import HRNet
-from .regnet import RegNet
-from .res2net import Res2Net
-from .resnest import ResNeSt
-from .resnet import ResNet, ResNetV1d
-from .resnext import ResNeXt
-from .ssd_vgg import SSDVGG
-from .trident_resnet import TridentResNet
-from .swin_transformer import SwinTransformer
-from .uniformer import UniFormer
-
-__all__ = [
- 'RegNet', 'ResNet', 'ResNetV1d', 'ResNeXt', 'SSDVGG', 'HRNet', 'Res2Net',
- 'HourglassNet', 'DetectoRS_ResNet', 'DetectoRS_ResNeXt', 'Darknet',
- 'ResNeSt', 'TridentResNet', 'SwinTransformer', 'UniFormer'
-]
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/scnet_roi_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/scnet_roi_head.py
deleted file mode 100644
index 85aaa2f0600afbdfc8b0917cb5f341740776a603..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/scnet_roi_head.py
+++ /dev/null
@@ -1,582 +0,0 @@
-import torch
-import torch.nn.functional as F
-
-from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, merge_aug_bboxes,
- merge_aug_masks, multiclass_nms)
-from ..builder import HEADS, build_head, build_roi_extractor
-from .cascade_roi_head import CascadeRoIHead
-
-
-@HEADS.register_module()
-class SCNetRoIHead(CascadeRoIHead):
- """RoIHead for `SCNet `_.
-
- Args:
- num_stages (int): number of cascade stages.
- stage_loss_weights (list): loss weight of cascade stages.
- semantic_roi_extractor (dict): config to init semantic roi extractor.
- semantic_head (dict): config to init semantic head.
- feat_relay_head (dict): config to init feature_relay_head.
- glbctx_head (dict): config to init global context head.
- """
-
- def __init__(self,
- num_stages,
- stage_loss_weights,
- semantic_roi_extractor=None,
- semantic_head=None,
- feat_relay_head=None,
- glbctx_head=None,
- **kwargs):
- super(SCNetRoIHead, self).__init__(num_stages, stage_loss_weights,
- **kwargs)
- assert self.with_bbox and self.with_mask
- assert not self.with_shared_head # shared head is not supported
-
- if semantic_head is not None:
- self.semantic_roi_extractor = build_roi_extractor(
- semantic_roi_extractor)
- self.semantic_head = build_head(semantic_head)
-
- if feat_relay_head is not None:
- self.feat_relay_head = build_head(feat_relay_head)
-
- if glbctx_head is not None:
- self.glbctx_head = build_head(glbctx_head)
-
- def init_mask_head(self, mask_roi_extractor, mask_head):
- """Initialize ``mask_head``"""
- if mask_roi_extractor is not None:
- self.mask_roi_extractor = build_roi_extractor(mask_roi_extractor)
- self.mask_head = build_head(mask_head)
-
- def init_weights(self, pretrained):
- """Initialize the weights in head.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- for i in range(self.num_stages):
- if self.with_bbox:
- self.bbox_roi_extractor[i].init_weights()
- self.bbox_head[i].init_weights()
- if self.with_mask:
- self.mask_roi_extractor.init_weights()
- self.mask_head.init_weights()
- if self.with_semantic:
- self.semantic_head.init_weights()
- if self.with_glbctx:
- self.glbctx_head.init_weights()
- if self.with_feat_relay:
- self.feat_relay_head.init_weights()
-
- @property
- def with_semantic(self):
- """bool: whether the head has semantic head"""
- return hasattr(self,
- 'semantic_head') and self.semantic_head is not None
-
- @property
- def with_feat_relay(self):
- """bool: whether the head has feature relay head"""
- return (hasattr(self, 'feat_relay_head')
- and self.feat_relay_head is not None)
-
- @property
- def with_glbctx(self):
- """bool: whether the head has global context head"""
- return hasattr(self, 'glbctx_head') and self.glbctx_head is not None
-
- def _fuse_glbctx(self, roi_feats, glbctx_feat, rois):
- """Fuse global context feats with roi feats."""
- assert roi_feats.size(0) == rois.size(0)
- img_inds = torch.unique(rois[:, 0].cpu(), sorted=True).long()
- fused_feats = torch.zeros_like(roi_feats)
- for img_id in img_inds:
- inds = (rois[:, 0] == img_id.item())
- fused_feats[inds] = roi_feats[inds] + glbctx_feat[img_id]
- return fused_feats
-
- def _slice_pos_feats(self, feats, sampling_results):
- """Get features from pos rois."""
- num_rois = [res.bboxes.size(0) for res in sampling_results]
- num_pos_rois = [res.pos_bboxes.size(0) for res in sampling_results]
- inds = torch.zeros(sum(num_rois), dtype=torch.bool)
- start = 0
- for i in range(len(num_rois)):
- start = 0 if i == 0 else start + num_rois[i - 1]
- stop = start + num_pos_rois[i]
- inds[start:stop] = 1
- sliced_feats = feats[inds]
- return sliced_feats
-
- def _bbox_forward(self,
- stage,
- x,
- rois,
- semantic_feat=None,
- glbctx_feat=None):
- """Box head forward function used in both training and testing."""
- bbox_roi_extractor = self.bbox_roi_extractor[stage]
- bbox_head = self.bbox_head[stage]
- bbox_feats = bbox_roi_extractor(
- x[:len(bbox_roi_extractor.featmap_strides)], rois)
- if self.with_semantic and semantic_feat is not None:
- bbox_semantic_feat = self.semantic_roi_extractor([semantic_feat],
- rois)
- if bbox_semantic_feat.shape[-2:] != bbox_feats.shape[-2:]:
- bbox_semantic_feat = F.adaptive_avg_pool2d(
- bbox_semantic_feat, bbox_feats.shape[-2:])
- bbox_feats += bbox_semantic_feat
- if self.with_glbctx and glbctx_feat is not None:
- bbox_feats = self._fuse_glbctx(bbox_feats, glbctx_feat, rois)
- cls_score, bbox_pred, relayed_feat = bbox_head(
- bbox_feats, return_shared_feat=True)
-
- bbox_results = dict(
- cls_score=cls_score,
- bbox_pred=bbox_pred,
- relayed_feat=relayed_feat)
- return bbox_results
-
- def _mask_forward(self,
- x,
- rois,
- semantic_feat=None,
- glbctx_feat=None,
- relayed_feat=None):
- """Mask head forward function used in both training and testing."""
- mask_feats = self.mask_roi_extractor(
- x[:self.mask_roi_extractor.num_inputs], rois)
- if self.with_semantic and semantic_feat is not None:
- mask_semantic_feat = self.semantic_roi_extractor([semantic_feat],
- rois)
- if mask_semantic_feat.shape[-2:] != mask_feats.shape[-2:]:
- mask_semantic_feat = F.adaptive_avg_pool2d(
- mask_semantic_feat, mask_feats.shape[-2:])
- mask_feats += mask_semantic_feat
- if self.with_glbctx and glbctx_feat is not None:
- mask_feats = self._fuse_glbctx(mask_feats, glbctx_feat, rois)
- if self.with_feat_relay and relayed_feat is not None:
- mask_feats = mask_feats + relayed_feat
- mask_pred = self.mask_head(mask_feats)
- mask_results = dict(mask_pred=mask_pred)
-
- return mask_results
-
- def _bbox_forward_train(self,
- stage,
- x,
- sampling_results,
- gt_bboxes,
- gt_labels,
- rcnn_train_cfg,
- semantic_feat=None,
- glbctx_feat=None):
- """Run forward function and calculate loss for box head in training."""
- bbox_head = self.bbox_head[stage]
- rois = bbox2roi([res.bboxes for res in sampling_results])
- bbox_results = self._bbox_forward(
- stage,
- x,
- rois,
- semantic_feat=semantic_feat,
- glbctx_feat=glbctx_feat)
-
- bbox_targets = bbox_head.get_targets(sampling_results, gt_bboxes,
- gt_labels, rcnn_train_cfg)
- loss_bbox = bbox_head.loss(bbox_results['cls_score'],
- bbox_results['bbox_pred'], rois,
- *bbox_targets)
-
- bbox_results.update(
- loss_bbox=loss_bbox, rois=rois, bbox_targets=bbox_targets)
- return bbox_results
-
- def _mask_forward_train(self,
- x,
- sampling_results,
- gt_masks,
- rcnn_train_cfg,
- semantic_feat=None,
- glbctx_feat=None,
- relayed_feat=None):
- """Run forward function and calculate loss for mask head in
- training."""
- pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results])
- mask_results = self._mask_forward(
- x,
- pos_rois,
- semantic_feat=semantic_feat,
- glbctx_feat=glbctx_feat,
- relayed_feat=relayed_feat)
-
- mask_targets = self.mask_head.get_targets(sampling_results, gt_masks,
- rcnn_train_cfg)
- pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results])
- loss_mask = self.mask_head.loss(mask_results['mask_pred'],
- mask_targets, pos_labels)
-
- mask_results = loss_mask
- return mask_results
-
- def forward_train(self,
- x,
- img_metas,
- proposal_list,
- gt_bboxes,
- gt_labels,
- gt_bboxes_ignore=None,
- gt_masks=None,
- gt_semantic_seg=None):
- """
- Args:
- x (list[Tensor]): list of multi-level img features.
-
- img_metas (list[dict]): list of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmdet/datasets/pipelines/formatting.py:Collect`.
-
- proposal_list (list[Tensors]): list of region proposals.
-
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-
- gt_labels (list[Tensor]): class indices corresponding to each box
-
- gt_bboxes_ignore (None, list[Tensor]): specify which bounding
- boxes can be ignored when computing the loss.
-
- gt_masks (None, Tensor) : true segmentation masks for each box
- used if the architecture supports a segmentation task.
-
- gt_semantic_seg (None, list[Tensor]): semantic segmentation masks
- used if the architecture supports semantic segmentation task.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
- losses = dict()
-
- # semantic segmentation branch
- if self.with_semantic:
- semantic_pred, semantic_feat = self.semantic_head(x)
- loss_seg = self.semantic_head.loss(semantic_pred, gt_semantic_seg)
- losses['loss_semantic_seg'] = loss_seg
- else:
- semantic_feat = None
-
- # global context branch
- if self.with_glbctx:
- mc_pred, glbctx_feat = self.glbctx_head(x)
- loss_glbctx = self.glbctx_head.loss(mc_pred, gt_labels)
- losses['loss_glbctx'] = loss_glbctx
- else:
- glbctx_feat = None
-
- for i in range(self.num_stages):
- self.current_stage = i
- rcnn_train_cfg = self.train_cfg[i]
- lw = self.stage_loss_weights[i]
-
- # assign gts and sample proposals
- sampling_results = []
- bbox_assigner = self.bbox_assigner[i]
- bbox_sampler = self.bbox_sampler[i]
- num_imgs = len(img_metas)
- if gt_bboxes_ignore is None:
- gt_bboxes_ignore = [None for _ in range(num_imgs)]
-
- for j in range(num_imgs):
- assign_result = bbox_assigner.assign(proposal_list[j],
- gt_bboxes[j],
- gt_bboxes_ignore[j],
- gt_labels[j])
- sampling_result = bbox_sampler.sample(
- assign_result,
- proposal_list[j],
- gt_bboxes[j],
- gt_labels[j],
- feats=[lvl_feat[j][None] for lvl_feat in x])
- sampling_results.append(sampling_result)
-
- bbox_results = \
- self._bbox_forward_train(
- i, x, sampling_results, gt_bboxes, gt_labels,
- rcnn_train_cfg, semantic_feat, glbctx_feat)
- roi_labels = bbox_results['bbox_targets'][0]
-
- for name, value in bbox_results['loss_bbox'].items():
- losses[f's{i}.{name}'] = (
- value * lw if 'loss' in name else value)
-
- # refine boxes
- if i < self.num_stages - 1:
- pos_is_gts = [res.pos_is_gt for res in sampling_results]
- with torch.no_grad():
- proposal_list = self.bbox_head[i].refine_bboxes(
- bbox_results['rois'], roi_labels,
- bbox_results['bbox_pred'], pos_is_gts, img_metas)
-
- if self.with_feat_relay:
- relayed_feat = self._slice_pos_feats(bbox_results['relayed_feat'],
- sampling_results)
- relayed_feat = self.feat_relay_head(relayed_feat)
- else:
- relayed_feat = None
-
- mask_results = self._mask_forward_train(x, sampling_results, gt_masks,
- rcnn_train_cfg, semantic_feat,
- glbctx_feat, relayed_feat)
- mask_lw = sum(self.stage_loss_weights)
- losses['loss_mask'] = mask_lw * mask_results['loss_mask']
-
- return losses
-
- def simple_test(self, x, proposal_list, img_metas, rescale=False):
- """Test without augmentation."""
- if self.with_semantic:
- _, semantic_feat = self.semantic_head(x)
- else:
- semantic_feat = None
-
- if self.with_glbctx:
- mc_pred, glbctx_feat = self.glbctx_head(x)
- else:
- glbctx_feat = None
-
- num_imgs = len(proposal_list)
- img_shapes = tuple(meta['img_shape'] for meta in img_metas)
- ori_shapes = tuple(meta['ori_shape'] for meta in img_metas)
- scale_factors = tuple(meta['scale_factor'] for meta in img_metas)
-
- # "ms" in variable names means multi-stage
- ms_scores = []
- rcnn_test_cfg = self.test_cfg
-
- rois = bbox2roi(proposal_list)
- for i in range(self.num_stages):
- bbox_head = self.bbox_head[i]
- bbox_results = self._bbox_forward(
- i,
- x,
- rois,
- semantic_feat=semantic_feat,
- glbctx_feat=glbctx_feat)
- # split batch bbox prediction back to each image
- cls_score = bbox_results['cls_score']
- bbox_pred = bbox_results['bbox_pred']
- num_proposals_per_img = tuple(len(p) for p in proposal_list)
- rois = rois.split(num_proposals_per_img, 0)
- cls_score = cls_score.split(num_proposals_per_img, 0)
- bbox_pred = bbox_pred.split(num_proposals_per_img, 0)
- ms_scores.append(cls_score)
-
- if i < self.num_stages - 1:
- bbox_label = [s[:, :-1].argmax(dim=1) for s in cls_score]
- rois = torch.cat([
- bbox_head.regress_by_class(rois[i], bbox_label[i],
- bbox_pred[i], img_metas[i])
- for i in range(num_imgs)
- ])
-
- # average scores of each image by stages
- cls_score = [
- sum([score[i] for score in ms_scores]) / float(len(ms_scores))
- for i in range(num_imgs)
- ]
-
- # apply bbox post-processing to each image individually
- det_bboxes = []
- det_labels = []
- for i in range(num_imgs):
- det_bbox, det_label = self.bbox_head[-1].get_bboxes(
- rois[i],
- cls_score[i],
- bbox_pred[i],
- img_shapes[i],
- scale_factors[i],
- rescale=rescale,
- cfg=rcnn_test_cfg)
- det_bboxes.append(det_bbox)
- det_labels.append(det_label)
- det_bbox_results = [
- bbox2result(det_bboxes[i], det_labels[i],
- self.bbox_head[-1].num_classes)
- for i in range(num_imgs)
- ]
-
- if self.with_mask:
- if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes):
- mask_classes = self.mask_head.num_classes
- det_segm_results = [[[] for _ in range(mask_classes)]
- for _ in range(num_imgs)]
- else:
- if rescale and not isinstance(scale_factors[0], float):
- scale_factors = [
- torch.from_numpy(scale_factor).to(det_bboxes[0].device)
- for scale_factor in scale_factors
- ]
- _bboxes = [
- det_bboxes[i][:, :4] *
- scale_factors[i] if rescale else det_bboxes[i]
- for i in range(num_imgs)
- ]
- mask_rois = bbox2roi(_bboxes)
-
- # get relay feature on mask_rois
- bbox_results = self._bbox_forward(
- -1,
- x,
- mask_rois,
- semantic_feat=semantic_feat,
- glbctx_feat=glbctx_feat)
- relayed_feat = bbox_results['relayed_feat']
- relayed_feat = self.feat_relay_head(relayed_feat)
-
- mask_results = self._mask_forward(
- x,
- mask_rois,
- semantic_feat=semantic_feat,
- glbctx_feat=glbctx_feat,
- relayed_feat=relayed_feat)
- mask_pred = mask_results['mask_pred']
-
- # split batch mask prediction back to each image
- num_bbox_per_img = tuple(len(_bbox) for _bbox in _bboxes)
- mask_preds = mask_pred.split(num_bbox_per_img, 0)
-
- # apply mask post-processing to each image individually
- det_segm_results = []
- for i in range(num_imgs):
- if det_bboxes[i].shape[0] == 0:
- det_segm_results.append(
- [[] for _ in range(self.mask_head.num_classes)])
- else:
- segm_result = self.mask_head.get_seg_masks(
- mask_preds[i], _bboxes[i], det_labels[i],
- self.test_cfg, ori_shapes[i], scale_factors[i],
- rescale)
- det_segm_results.append(segm_result)
-
- # return results
- if self.with_mask:
- return list(zip(det_bbox_results, det_segm_results))
- else:
- return det_bbox_results
-
- def aug_test(self, img_feats, proposal_list, img_metas, rescale=False):
- if self.with_semantic:
- semantic_feats = [
- self.semantic_head(feat)[1] for feat in img_feats
- ]
- else:
- semantic_feats = [None] * len(img_metas)
-
- if self.with_glbctx:
- glbctx_feats = [self.glbctx_head(feat)[1] for feat in img_feats]
- else:
- glbctx_feats = [None] * len(img_metas)
-
- rcnn_test_cfg = self.test_cfg
- aug_bboxes = []
- aug_scores = []
- for x, img_meta, semantic_feat, glbctx_feat in zip(
- img_feats, img_metas, semantic_feats, glbctx_feats):
- # only one image in the batch
- img_shape = img_meta[0]['img_shape']
- scale_factor = img_meta[0]['scale_factor']
- flip = img_meta[0]['flip']
-
- proposals = bbox_mapping(proposal_list[0][:, :4], img_shape,
- scale_factor, flip)
- # "ms" in variable names means multi-stage
- ms_scores = []
-
- rois = bbox2roi([proposals])
- for i in range(self.num_stages):
- bbox_head = self.bbox_head[i]
- bbox_results = self._bbox_forward(
- i,
- x,
- rois,
- semantic_feat=semantic_feat,
- glbctx_feat=glbctx_feat)
- ms_scores.append(bbox_results['cls_score'])
- if i < self.num_stages - 1:
- bbox_label = bbox_results['cls_score'].argmax(dim=1)
- rois = bbox_head.regress_by_class(
- rois, bbox_label, bbox_results['bbox_pred'],
- img_meta[0])
-
- cls_score = sum(ms_scores) / float(len(ms_scores))
- bboxes, scores = self.bbox_head[-1].get_bboxes(
- rois,
- cls_score,
- bbox_results['bbox_pred'],
- img_shape,
- scale_factor,
- rescale=False,
- cfg=None)
- aug_bboxes.append(bboxes)
- aug_scores.append(scores)
-
- # after merging, bboxes will be rescaled to the original image size
- merged_bboxes, merged_scores = merge_aug_bboxes(
- aug_bboxes, aug_scores, img_metas, rcnn_test_cfg)
- det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores,
- rcnn_test_cfg.score_thr,
- rcnn_test_cfg.nms,
- rcnn_test_cfg.max_per_img)
-
- det_bbox_results = bbox2result(det_bboxes, det_labels,
- self.bbox_head[-1].num_classes)
-
- if self.with_mask:
- if det_bboxes.shape[0] == 0:
- det_segm_results = [[]
- for _ in range(self.mask_head.num_classes)]
- else:
- aug_masks = []
- for x, img_meta, semantic_feat, glbctx_feat in zip(
- img_feats, img_metas, semantic_feats, glbctx_feats):
- img_shape = img_meta[0]['img_shape']
- scale_factor = img_meta[0]['scale_factor']
- flip = img_meta[0]['flip']
- _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape,
- scale_factor, flip)
- mask_rois = bbox2roi([_bboxes])
- # get relay feature on mask_rois
- bbox_results = self._bbox_forward(
- -1,
- x,
- mask_rois,
- semantic_feat=semantic_feat,
- glbctx_feat=glbctx_feat)
- relayed_feat = bbox_results['relayed_feat']
- relayed_feat = self.feat_relay_head(relayed_feat)
- mask_results = self._mask_forward(
- x,
- mask_rois,
- semantic_feat=semantic_feat,
- glbctx_feat=glbctx_feat,
- relayed_feat=relayed_feat)
- mask_pred = mask_results['mask_pred']
- aug_masks.append(mask_pred.sigmoid().cpu().numpy())
- merged_masks = merge_aug_masks(aug_masks, img_metas,
- self.test_cfg)
- ori_shape = img_metas[0][0]['ori_shape']
- det_segm_results = self.mask_head.get_seg_masks(
- merged_masks,
- det_bboxes,
- det_labels,
- rcnn_test_cfg,
- ori_shape,
- scale_factor=1.0,
- rescale=False)
- return [(det_bbox_results, det_segm_results)]
- else:
- return [det_bbox_results]
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/apis/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/apis/__init__.py
deleted file mode 100644
index 170724be38de42daf2bc1a1910e181d68818f165..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/apis/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from .inference import inference_segmentor, init_segmentor, show_result_pyplot
-from .test import multi_gpu_test, single_gpu_test
-from .train import get_root_logger, set_random_seed, train_segmentor
-
-__all__ = [
- 'get_root_logger', 'set_random_seed', 'train_segmentor', 'init_segmentor',
- 'inference_segmentor', 'multi_gpu_test', 'single_gpu_test',
- 'show_result_pyplot'
-]
diff --git a/spaces/abhishek/sketch-to-image/lib/util.py b/spaces/abhishek/sketch-to-image/lib/util.py
deleted file mode 100644
index 5471db970580cf9e437c3397190c38b3a7421cda..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/lib/util.py
+++ /dev/null
@@ -1,280 +0,0 @@
-'''
- * Copyright (c) 2023 Salesforce, Inc.
- * All rights reserved.
- * SPDX-License-Identifier: Apache License 2.0
- * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/
- * By Can Qin
- * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet
- * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala
-'''
-
-# adopted from
-# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
-# and
-# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-# and
-# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py
-#
-# thanks!
-
-
-import os
-import math
-import torch
-import torch.nn as nn
-import numpy as np
-from einops import repeat
-
-from utils import instantiate_from_config
-
-
-def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if schedule == "linear":
- betas = (
- torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2
- )
-
- elif schedule == "cosine":
- timesteps = (
- torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s
- )
- alphas = timesteps / (1 + cosine_s) * np.pi / 2
- alphas = torch.cos(alphas).pow(2)
- alphas = alphas / alphas[0]
- betas = 1 - alphas[1:] / alphas[:-1]
- betas = np.clip(betas, a_min=0, a_max=0.999)
-
- elif schedule == "sqrt_linear":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64)
- elif schedule == "sqrt":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5
- else:
- raise ValueError(f"schedule '{schedule}' unknown.")
- return betas.numpy()
-
-
-def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True):
- if ddim_discr_method == 'uniform':
- c = num_ddpm_timesteps // num_ddim_timesteps
- ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))
- elif ddim_discr_method == 'quad':
- ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int)
- else:
- raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"')
-
- # assert ddim_timesteps.shape[0] == num_ddim_timesteps
- # add one to get the final alpha values right (the ones from first scale to data during sampling)
- steps_out = ddim_timesteps + 1
- if verbose:
- print(f'Selected timesteps for ddim sampler: {steps_out}')
- return steps_out
-
-
-def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True):
- # select alphas for computing the variance schedule
- alphas = alphacums[ddim_timesteps]
- alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist())
-
- # according the the formula provided in https://arxiv.org/abs/2010.02502
- sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev))
- if verbose:
- print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}')
- print(f'For the chosen value of eta, which is {eta}, '
- f'this results in the following sigma_t schedule for ddim sampler {sigmas}')
- return sigmas, alphas, alphas_prev
-
-
-def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function,
- which defines the cumulative product of (1-beta) over time from t = [0,1].
- :param num_diffusion_timesteps: the number of betas to produce.
- :param alpha_bar: a lambda that takes an argument t from 0 to 1 and
- produces the cumulative product of (1-beta) up to that
- part of the diffusion process.
- :param max_beta: the maximum beta to use; use values lower than 1 to
- prevent singularities.
- """
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return np.array(betas)
-
-
-def extract_into_tensor(a, t, x_shape):
- b, *_ = t.shape
- out = a.gather(-1, t)
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
-
-
-def checkpoint(func, inputs, params, flag):
- """
- Evaluate a function without caching intermediate activations, allowing for
- reduced memory at the expense of extra compute in the backward pass.
- :param func: the function to evaluate.
- :param inputs: the argument sequence to pass to `func`.
- :param params: a sequence of parameters `func` depends on but does not
- explicitly take as arguments.
- :param flag: if False, disable gradient checkpointing.
- """
- if flag:
- args = tuple(inputs) + tuple(params)
- return CheckpointFunction.apply(func, len(inputs), *args)
- else:
- return func(*inputs)
-
-
-class CheckpointFunction(torch.autograd.Function):
- @staticmethod
- def forward(ctx, run_function, length, *args):
- ctx.run_function = run_function
- ctx.input_tensors = list(args[:length])
- ctx.input_params = list(args[length:])
- ctx.gpu_autocast_kwargs = {"enabled": torch.is_autocast_enabled(),
- "dtype": torch.get_autocast_gpu_dtype(),
- "cache_enabled": torch.is_autocast_cache_enabled()}
- with torch.no_grad():
- output_tensors = ctx.run_function(*ctx.input_tensors)
- return output_tensors
-
- @staticmethod
- def backward(ctx, *output_grads):
- ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
- with torch.enable_grad(), \
- torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs):
- # Fixes a bug where the first op in run_function modifies the
- # Tensor storage in place, which is not allowed for detach()'d
- # Tensors.
- shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
- output_tensors = ctx.run_function(*shallow_copies)
- input_grads = torch.autograd.grad(
- output_tensors,
- ctx.input_tensors + ctx.input_params,
- output_grads,
- allow_unused=True,
- )
- del ctx.input_tensors
- del ctx.input_params
- del output_tensors
- return (None, None) + input_grads
-
-
-def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
- """
- Create sinusoidal timestep embeddings.
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
- These may be fractional.
- :param dim: the dimension of the output.
- :param max_period: controls the minimum frequency of the embeddings.
- :return: an [N x dim] Tensor of positional embeddings.
- """
- if not repeat_only:
- half = dim // 2
- freqs = torch.exp(
- -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
- ).to(device=timesteps.device)
- args = timesteps[:, None].float() * freqs[None]
- embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
- if dim % 2:
- embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
- else:
- embedding = repeat(timesteps, 'b -> b d', d=dim)
- return embedding
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def scale_module(module, scale):
- """
- Scale the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().mul_(scale)
- return module
-
-
-def mean_flat(tensor):
- """
- Take the mean over all non-batch dimensions.
- """
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
-
-
-def normalization(channels):
- """
- Make a standard normalization layer.
- :param channels: number of input channels.
- :return: an nn.Module for normalization.
- """
- return GroupNorm32(32, channels)
-
-
-# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.
-class SiLU(nn.Module):
- def forward(self, x):
- return x * torch.sigmoid(x)
-
-
-class GroupNorm32(nn.GroupNorm):
- def forward(self, x):
- return super().forward(x.float()).type(x.dtype)
-
-def conv_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D convolution module.
- """
- if dims == 1:
- return nn.Conv1d(*args, **kwargs)
- elif dims == 2:
- return nn.Conv2d(*args, **kwargs)
- elif dims == 3:
- return nn.Conv3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-def linear(*args, **kwargs):
- """
- Create a linear module.
- """
- return nn.Linear(*args, **kwargs)
-
-
-def avg_pool_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D average pooling module.
- """
- if dims == 1:
- return nn.AvgPool1d(*args, **kwargs)
- elif dims == 2:
- return nn.AvgPool2d(*args, **kwargs)
- elif dims == 3:
- return nn.AvgPool3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-class HybridConditioner(nn.Module):
-
- def __init__(self, c_concat_config, c_crossattn_config):
- super().__init__()
- self.concat_conditioner = instantiate_from_config(c_concat_config)
- self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)
-
- def forward(self, c_concat, c_crossattn):
- c_concat = self.concat_conditioner(c_concat)
- c_crossattn = self.crossattn_conditioner(c_crossattn)
- return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]}
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
diff --git a/spaces/ai4bharat/IndicNLG/README.md b/spaces/ai4bharat/IndicNLG/README.md
deleted file mode 100644
index 64f047e727666b3ada45e161188af27d354babf8..0000000000000000000000000000000000000000
--- a/spaces/ai4bharat/IndicNLG/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: IndicNLG
-emoji: ⚡
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.1.7
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/jnas/voc1/path.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/jnas/voc1/path.sh
deleted file mode 100644
index b0ca27c615f70aa29e240222ec370f8ad4e7b45a..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/jnas/voc1/path.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-# cuda related
-export CUDA_HOME=/usr/local/cuda-10.0
-export LD_LIBRARY_PATH="${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}"
-
-# path related
-export PRJ_ROOT="${PWD}/../../.."
-if [ -e "${PRJ_ROOT}/tools/venv/bin/activate" ]; then
- # shellcheck disable=SC1090
- . "${PRJ_ROOT}/tools/venv/bin/activate"
-fi
-
-# python related
-export OMP_NUM_THREADS=1
-export PYTHONIOENCODING=UTF-8
-export MPL_BACKEND=Agg
-
-# check installation
-if ! command -v parallel-wavegan-train > /dev/null; then
- echo "Error: It seems setup is not finished." >&2
- echo "Error: Please setup your environment by following README.md" >&2
- return 1
-fi
-if ! command -v jq > /dev/null; then
- echo "Error: It seems jq is not installed." >&2
- echo "Error: Please install via \`sudo apt-get install jq\`." >&2
- echo "Error: If you do not have sudo, please download from https://stedolan.github.io/jq/download/." >&2
- return 1
-fi
-if ! command -v yq > /dev/null; then
- echo "Error: It seems yq is not installed." >&2
- echo "Error: Please install via \`pip install yq\`." >&2
- return 1
-fi
diff --git a/spaces/akhaliq/deeplab2/CONTRIBUTING.md b/spaces/akhaliq/deeplab2/CONTRIBUTING.md
deleted file mode 100644
index 939e5341e74dc2371c8b47f0e27b50581bed5f63..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/CONTRIBUTING.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# How to Contribute
-
-We'd love to accept your patches and contributions to this project. There are
-just a few small guidelines you need to follow.
-
-## Contributor License Agreement
-
-Contributions to this project must be accompanied by a Contributor License
-Agreement. You (or your employer) retain the copyright to your contribution;
-this simply gives us permission to use and redistribute your contributions as
-part of the project. Head over to <https://cla.developers.google.com/> to see
-your current agreements on file or to sign a new one.
-
-You generally only need to submit a CLA once, so if you've already submitted one
-(even if it was for a different project), you probably don't need to do it
-again.
-
-## Code reviews
-
-All submissions, including submissions by project members, require review. We
-use GitHub pull requests for this purpose. Consult
-[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
-information on using pull requests.
-
-## Community Guidelines
-
-This project follows [Google's Open Source Community
-Guidelines](https://opensource.google.com/conduct/).
diff --git a/spaces/akhaliq/lama/bin/paper_runfiles/find_best_checkpoint.py b/spaces/akhaliq/lama/bin/paper_runfiles/find_best_checkpoint.py
deleted file mode 100644
index 42f5e0f9bb1a2ea25dd9a97a58cf318e6de19532..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/lama/bin/paper_runfiles/find_best_checkpoint.py
+++ /dev/null
@@ -1,54 +0,0 @@
-#!/usr/bin/env python3
-
-
-import os
-from argparse import ArgumentParser
-
-
-def ssim_fid100_f1(metrics, fid_scale=100):
- ssim = metrics.loc['total', 'ssim']['mean']
- fid = metrics.loc['total', 'fid']['mean']
- fid_rel = max(0, fid_scale - fid) / fid_scale
- f1 = 2 * ssim * fid_rel / (ssim + fid_rel + 1e-3)
- return f1
-
-
-def find_best_checkpoint(model_list, models_dir):
- with open(model_list) as f:
- models = [m.strip() for m in f.readlines()]
- with open(f'{model_list}_best', 'w') as f:
- for model in models:
- print(model)
- best_f1 = 0
- best_epoch = 0
- best_step = 0
- with open(os.path.join(models_dir, model, 'train.log')) as fm:
- lines = fm.readlines()
- for line_index in range(len(lines)):
- line = lines[line_index]
- if 'Validation metrics after epoch' in line:
- sharp_index = line.index('#')
- cur_ep = line[sharp_index + 1:]
- comma_index = cur_ep.index(',')
- cur_ep = int(cur_ep[:comma_index])
- total_index = line.index('total ')
- step = int(line[total_index:].split()[1].strip())
- total_line = lines[line_index + 5]
- if not total_line.startswith('total'):
- continue
- words = total_line.strip().split()
- f1 = float(words[-1])
- print(f'\tEpoch: {cur_ep}, f1={f1}')
- if f1 > best_f1:
- best_f1 = f1
- best_epoch = cur_ep
- best_step = step
- f.write(f'{model}\t{best_epoch}\t{best_step}\t{best_f1}\n')
-
-
-if __name__ == '__main__':
- parser = ArgumentParser()
- parser.add_argument('model_list')
- parser.add_argument('models_dir')
- args = parser.parse_args()
- find_best_checkpoint(args.model_list, args.models_dir)
diff --git a/spaces/akhaliq/yolov7/utils/metrics.py b/spaces/akhaliq/yolov7/utils/metrics.py
deleted file mode 100644
index 666b8c7ec1c0a488eab1b4e7f2f0474973589525..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/yolov7/utils/metrics.py
+++ /dev/null
@@ -1,223 +0,0 @@
-# Model validation metrics
-
-from pathlib import Path
-
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-
-from . import general
-
-
-def fitness(x):
- # Model fitness as a weighted combination of metrics
- w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
- return (x[:, :4] * w).sum(1)
-
-
-def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=()):
- """ Compute the average precision, given the recall and precision curves.
- Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
- # Arguments
- tp: True positives (nparray, nx1 or nx10).
- conf: Objectness value from 0-1 (nparray).
- pred_cls: Predicted object classes (nparray).
- target_cls: True object classes (nparray).
- plot: Plot precision-recall curve at mAP@0.5
- save_dir: Plot save directory
- # Returns
- The average precision as computed in py-faster-rcnn.
- """
-
- # Sort by objectness
- i = np.argsort(-conf)
- tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
-
- # Find unique classes
- unique_classes = np.unique(target_cls)
- nc = unique_classes.shape[0] # number of classes, number of detections
-
- # Create Precision-Recall curve and compute AP for each class
- px, py = np.linspace(0, 1, 1000), [] # for plotting
- ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
- for ci, c in enumerate(unique_classes):
- i = pred_cls == c
- n_l = (target_cls == c).sum() # number of labels
- n_p = i.sum() # number of predictions
-
- if n_p == 0 or n_l == 0:
- continue
- else:
- # Accumulate FPs and TPs
- fpc = (1 - tp[i]).cumsum(0)
- tpc = tp[i].cumsum(0)
-
- # Recall
- recall = tpc / (n_l + 1e-16) # recall curve
- r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases
-
- # Precision
- precision = tpc / (tpc + fpc) # precision curve
- p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score
-
- # AP from recall-precision curve
- for j in range(tp.shape[1]):
- ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j])
- if plot and j == 0:
- py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5
-
- # Compute F1 (harmonic mean of precision and recall)
- f1 = 2 * p * r / (p + r + 1e-16)
- if plot:
- plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names)
- plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1')
- plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision')
- plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall')
-
- i = f1.mean(0).argmax() # max F1 index
- return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32')
-
-
-def compute_ap(recall, precision):
- """ Compute the average precision, given the recall and precision curves
- # Arguments
- recall: The recall curve (list)
- precision: The precision curve (list)
- # Returns
- Average precision, precision curve, recall curve
- """
-
- # Append sentinel values to beginning and end
- mrec = np.concatenate(([0.], recall, [recall[-1] + 0.01]))
- mpre = np.concatenate(([1.], precision, [0.]))
-
- # Compute the precision envelope
- mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
-
- # Integrate area under curve
- method = 'interp' # methods: 'continuous', 'interp'
- if method == 'interp':
- x = np.linspace(0, 1, 101) # 101-point interp (COCO)
- ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate
- else: # 'continuous'
- i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve
-
- return ap, mpre, mrec
-
-
-class ConfusionMatrix:
- # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix
- def __init__(self, nc, conf=0.25, iou_thres=0.45):
- self.matrix = np.zeros((nc + 1, nc + 1))
- self.nc = nc # number of classes
- self.conf = conf
- self.iou_thres = iou_thres
-
- def process_batch(self, detections, labels):
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- detections (Array[N, 6]), x1, y1, x2, y2, conf, class
- labels (Array[M, 5]), class, x1, y1, x2, y2
- Returns:
- None, updates confusion matrix accordingly
- """
- detections = detections[detections[:, 4] > self.conf]
- gt_classes = labels[:, 0].int()
- detection_classes = detections[:, 5].int()
- iou = general.box_iou(labels[:, 1:], detections[:, :4])
-
- x = torch.where(iou > self.iou_thres)
- if x[0].shape[0]:
- matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
- if x[0].shape[0] > 1:
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
- else:
- matches = np.zeros((0, 3))
-
- n = matches.shape[0] > 0
- m0, m1, _ = matches.transpose().astype(np.int16)
- for i, gc in enumerate(gt_classes):
- j = m0 == i
- if n and sum(j) == 1:
- self.matrix[gc, detection_classes[m1[j]]] += 1 # correct
- else:
- self.matrix[self.nc, gc] += 1 # background FP
-
- if n:
- for i, dc in enumerate(detection_classes):
- if not any(m1 == i):
- self.matrix[dc, self.nc] += 1 # background FN
-
- def matrix(self):
- return self.matrix
-
- def plot(self, save_dir='', names=()):
- try:
- import seaborn as sn
-
- array = self.matrix / (self.matrix.sum(0).reshape(1, self.nc + 1) + 1E-6) # normalize
- array[array < 0.005] = np.nan # don't annotate (would appear as 0.00)
-
- fig = plt.figure(figsize=(12, 9), tight_layout=True)
- sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size
- labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels
- sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True,
- xticklabels=names + ['background FP'] if labels else "auto",
- yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1))
- fig.axes[0].set_xlabel('True')
- fig.axes[0].set_ylabel('Predicted')
- fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
- except Exception as e:
- pass
-
- def print(self):
- for i in range(self.nc + 1):
- print(' '.join(map(str, self.matrix[i])))
-
-
-# Plots ----------------------------------------------------------------------------------------------------------------
-
-def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()):
- # Precision-recall curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
- py = np.stack(py, axis=1)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py.T):
- ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision)
- else:
- ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision)
-
- ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())
- ax.set_xlabel('Recall')
- ax.set_ylabel('Precision')
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
-
-
-def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'):
- # Metric-confidence curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py):
- ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric)
- else:
- ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric)
-
- y = py.mean(0)
- ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')
- ax.set_xlabel(xlabel)
- ax.set_ylabel(ylabel)
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
diff --git a/spaces/alan-chen-intel/dagan-demo/modules/keypoint_detector.py b/spaces/alan-chen-intel/dagan-demo/modules/keypoint_detector.py
deleted file mode 100644
index b39069195d8315460546d74d3576d09b03ec8915..0000000000000000000000000000000000000000
--- a/spaces/alan-chen-intel/dagan-demo/modules/keypoint_detector.py
+++ /dev/null
@@ -1,75 +0,0 @@
-from torch import nn
-import torch
-import torch.nn.functional as F
-from modules.util import Hourglass, make_coordinate_grid, AntiAliasInterpolation2d,Hourglass_2branch
-import pdb
-
-class KPDetector(nn.Module):
- """
-    Detect keypoints. Return keypoint position and jacobian near each keypoint.
- """
-
- def __init__(self, block_expansion, num_kp, num_channels, max_features,
- num_blocks, temperature, estimate_jacobian=False, scale_factor=1,
- single_jacobian_map=False, pad=0):
- super(KPDetector, self).__init__()
- self.predictor = Hourglass(block_expansion, in_features=num_channels,
- max_features=max_features, num_blocks=num_blocks)
-
- self.kp = nn.Conv2d(in_channels=self.predictor.out_filters, out_channels=num_kp, kernel_size=(7, 7),
- padding=pad)
-
- if estimate_jacobian:
- self.num_jacobian_maps = 1 if single_jacobian_map else num_kp
- self.jacobian = nn.Conv2d(in_channels=self.predictor.out_filters,
- out_channels=4 * self.num_jacobian_maps, kernel_size=(7, 7), padding=pad)
- self.jacobian.weight.data.zero_()
- self.jacobian.bias.data.copy_(torch.tensor([1, 0, 0, 1] * self.num_jacobian_maps, dtype=torch.float))
- else:
- self.jacobian = None
-
- self.temperature = temperature
- self.scale_factor = scale_factor
- if self.scale_factor != 1:
- self.down = AntiAliasInterpolation2d(num_channels, self.scale_factor)
-
- def gaussian2kp(self, heatmap):
- """
-        Extract the mean (expected keypoint location) from a heatmap
- """
- shape = heatmap.shape
- heatmap = heatmap.unsqueeze(-1)
- grid = make_coordinate_grid(shape[2:], heatmap.type()).unsqueeze_(0).unsqueeze_(0)
- value = (heatmap * grid).sum(dim=(2, 3))
- kp = {'value': value}
-
- return kp
-
- def forward(self, x):
- if self.scale_factor != 1:
- x = self.down(x)
- feature_map = self.predictor(x) #x bz,4,64,64
- prediction = self.kp(feature_map)
-
- final_shape = prediction.shape
- heatmap = prediction.view(final_shape[0], final_shape[1], -1)
- heatmap = F.softmax(heatmap / self.temperature, dim=2)
- heatmap = heatmap.view(*final_shape)
-
- out = self.gaussian2kp(heatmap)
-
- if self.jacobian is not None:
- jacobian_map = self.jacobian(feature_map)
- # pdb.set_trace()
- jacobian_map = jacobian_map.reshape(final_shape[0], self.num_jacobian_maps, 4, final_shape[2],
- final_shape[3])
- heatmap = heatmap.unsqueeze(2)
-
- jacobian = heatmap * jacobian_map
- jacobian = jacobian.view(final_shape[0], final_shape[1], 4, -1)
- jacobian = jacobian.sum(dim=-1)
- jacobian = jacobian.view(jacobian.shape[0], jacobian.shape[1], 2, 2)
- out['jacobian'] = jacobian
-
- return out
-
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/scripts.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/scripts.py
deleted file mode 100644
index 913912c7b8e0c2dcbf142f81991dfec0d26f4f41..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/scripts.py
+++ /dev/null
@@ -1,429 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2013-2015 Vinay Sajip.
-# Licensed to the Python Software Foundation under a contributor agreement.
-# See LICENSE.txt and CONTRIBUTORS.txt.
-#
-from io import BytesIO
-import logging
-import os
-import re
-import struct
-import sys
-
-from .compat import sysconfig, detect_encoding, ZipFile
-from .resources import finder
-from .util import (FileOperator, get_export_entry, convert_path,
- get_executable, get_platform, in_venv)
-
-logger = logging.getLogger(__name__)
-
-_DEFAULT_MANIFEST = '''
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
- <assemblyIdentity version="1.0.0.0"
- processorArchitecture="X86"
- name="%s"
- type="win32"/>
-
- <!-- Identify the application security requirements. -->
- <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
-  <security>
-   <requestedPrivileges>
-    <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
-   </requestedPrivileges>
-  </security>
- </trustInfo>
-</assembly>'''.strip()
-
-# check if Python is called on the first line with this expression
-FIRST_LINE_RE = re.compile(b'^#!.*pythonw?[0-9.]*([ \t].*)?$')
-SCRIPT_TEMPLATE = r'''# -*- coding: utf-8 -*-
-import re
-import sys
-from %(module)s import %(import_name)s
-if __name__ == '__main__':
- sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
- sys.exit(%(func)s())
-'''
-
-
-def enquote_executable(executable):
- if ' ' in executable:
- # make sure we quote only the executable in case of env
- # for example /usr/bin/env "/dir with spaces/bin/jython"
- # instead of "/usr/bin/env /dir with spaces/bin/jython"
- # otherwise whole
- if executable.startswith('/usr/bin/env '):
- env, _executable = executable.split(' ', 1)
- if ' ' in _executable and not _executable.startswith('"'):
- executable = '%s "%s"' % (env, _executable)
- else:
- if not executable.startswith('"'):
- executable = '"%s"' % executable
- return executable
-
-# Keep the old name around (for now), as there is at least one project using it!
-_enquote_executable = enquote_executable
-
-class ScriptMaker(object):
- """
- A class to copy or create scripts from source scripts or callable
- specifications.
- """
- script_template = SCRIPT_TEMPLATE
-
- executable = None # for shebangs
-
- def __init__(self, source_dir, target_dir, add_launchers=True,
- dry_run=False, fileop=None):
- self.source_dir = source_dir
- self.target_dir = target_dir
- self.add_launchers = add_launchers
- self.force = False
- self.clobber = False
- # It only makes sense to set mode bits on POSIX.
- self.set_mode = (os.name == 'posix') or (os.name == 'java' and
- os._name == 'posix')
- self.variants = set(('', 'X.Y'))
- self._fileop = fileop or FileOperator(dry_run)
-
- self._is_nt = os.name == 'nt' or (
- os.name == 'java' and os._name == 'nt')
- self.version_info = sys.version_info
-
- def _get_alternate_executable(self, executable, options):
- if options.get('gui', False) and self._is_nt: # pragma: no cover
- dn, fn = os.path.split(executable)
- fn = fn.replace('python', 'pythonw')
- executable = os.path.join(dn, fn)
- return executable
-
- if sys.platform.startswith('java'): # pragma: no cover
- def _is_shell(self, executable):
- """
- Determine if the specified executable is a script
- (contains a #! line)
- """
- try:
- with open(executable) as fp:
- return fp.read(2) == '#!'
- except (OSError, IOError):
- logger.warning('Failed to open %s', executable)
- return False
-
- def _fix_jython_executable(self, executable):
- if self._is_shell(executable):
- # Workaround for Jython is not needed on Linux systems.
- import java
-
- if java.lang.System.getProperty('os.name') == 'Linux':
- return executable
- elif executable.lower().endswith('jython.exe'):
- # Use wrapper exe for Jython on Windows
- return executable
- return '/usr/bin/env %s' % executable
-
- def _build_shebang(self, executable, post_interp):
- """
- Build a shebang line. In the simple case (on Windows, or a shebang line
- which is not too long or contains spaces) use a simple formulation for
- the shebang. Otherwise, use /bin/sh as the executable, with a contrived
- shebang which allows the script to run either under Python or sh, using
- suitable quoting. Thanks to Harald Nordgren for his input.
-
- See also: http://www.in-ulm.de/~mascheck/various/shebang/#length
- https://hg.mozilla.org/mozilla-central/file/tip/mach
- """
- if os.name != 'posix':
- simple_shebang = True
- else:
- # Add 3 for '#!' prefix and newline suffix.
- shebang_length = len(executable) + len(post_interp) + 3
- if sys.platform == 'darwin':
- max_shebang_length = 512
- else:
- max_shebang_length = 127
- simple_shebang = ((b' ' not in executable) and
- (shebang_length <= max_shebang_length))
-
- if simple_shebang:
- result = b'#!' + executable + post_interp + b'\n'
- else:
- result = b'#!/bin/sh\n'
- result += b"'''exec' " + executable + post_interp + b' "$0" "$@"\n'
- result += b"' '''"
- return result
-
- def _get_shebang(self, encoding, post_interp=b'', options=None):
- enquote = True
- if self.executable:
- executable = self.executable
- enquote = False # assume this will be taken care of
- elif not sysconfig.is_python_build():
- executable = get_executable()
- elif in_venv(): # pragma: no cover
- executable = os.path.join(sysconfig.get_path('scripts'),
- 'python%s' % sysconfig.get_config_var('EXE'))
- else: # pragma: no cover
- executable = os.path.join(
- sysconfig.get_config_var('BINDIR'),
- 'python%s%s' % (sysconfig.get_config_var('VERSION'),
- sysconfig.get_config_var('EXE')))
- if not os.path.isfile(executable):
- # for Python builds from source on Windows, no Python executables with
- # a version suffix are created, so we use python.exe
- executable = os.path.join(sysconfig.get_config_var('BINDIR'),
- 'python%s' % (sysconfig.get_config_var('EXE')))
- if options:
- executable = self._get_alternate_executable(executable, options)
-
- if sys.platform.startswith('java'): # pragma: no cover
- executable = self._fix_jython_executable(executable)
-
- # Normalise case for Windows - COMMENTED OUT
- # executable = os.path.normcase(executable)
- # N.B. The normalising operation above has been commented out: See
- # issue #124. Although paths in Windows are generally case-insensitive,
- # they aren't always. For example, a path containing a ẞ (which is a
- # LATIN CAPITAL LETTER SHARP S - U+1E9E) is normcased to ß (which is a
- # LATIN SMALL LETTER SHARP S' - U+00DF). The two are not considered by
- # Windows as equivalent in path names.
-
- # If the user didn't specify an executable, it may be necessary to
- # cater for executable paths with spaces (not uncommon on Windows)
- if enquote:
- executable = enquote_executable(executable)
- # Issue #51: don't use fsencode, since we later try to
- # check that the shebang is decodable using utf-8.
- executable = executable.encode('utf-8')
- # in case of IronPython, play safe and enable frames support
- if (sys.platform == 'cli' and '-X:Frames' not in post_interp
- and '-X:FullFrames' not in post_interp): # pragma: no cover
- post_interp += b' -X:Frames'
- shebang = self._build_shebang(executable, post_interp)
- # Python parser starts to read a script using UTF-8 until
- # it gets a #coding:xxx cookie. The shebang has to be the
- # first line of a file, the #coding:xxx cookie cannot be
- # written before. So the shebang has to be decodable from
- # UTF-8.
- try:
- shebang.decode('utf-8')
- except UnicodeDecodeError: # pragma: no cover
- raise ValueError(
- 'The shebang (%r) is not decodable from utf-8' % shebang)
- # If the script is encoded to a custom encoding (use a
- # #coding:xxx cookie), the shebang has to be decodable from
- # the script encoding too.
- if encoding != 'utf-8':
- try:
- shebang.decode(encoding)
- except UnicodeDecodeError: # pragma: no cover
- raise ValueError(
- 'The shebang (%r) is not decodable '
- 'from the script encoding (%r)' % (shebang, encoding))
- return shebang
-
- def _get_script_text(self, entry):
- return self.script_template % dict(module=entry.prefix,
- import_name=entry.suffix.split('.')[0],
- func=entry.suffix)
-
- manifest = _DEFAULT_MANIFEST
-
- def get_manifest(self, exename):
- base = os.path.basename(exename)
- return self.manifest % base
-
- def _write_script(self, names, shebang, script_bytes, filenames, ext):
- use_launcher = self.add_launchers and self._is_nt
- linesep = os.linesep.encode('utf-8')
- if not shebang.endswith(linesep):
- shebang += linesep
- if not use_launcher:
- script_bytes = shebang + script_bytes
- else: # pragma: no cover
- if ext == 'py':
- launcher = self._get_launcher('t')
- else:
- launcher = self._get_launcher('w')
- stream = BytesIO()
- with ZipFile(stream, 'w') as zf:
- zf.writestr('__main__.py', script_bytes)
- zip_data = stream.getvalue()
- script_bytes = launcher + shebang + zip_data
- for name in names:
- outname = os.path.join(self.target_dir, name)
- if use_launcher: # pragma: no cover
- n, e = os.path.splitext(outname)
- if e.startswith('.py'):
- outname = n
- outname = '%s.exe' % outname
- try:
- self._fileop.write_binary_file(outname, script_bytes)
- except Exception:
- # Failed writing an executable - it might be in use.
- logger.warning('Failed to write executable - trying to '
- 'use .deleteme logic')
- dfname = '%s.deleteme' % outname
- if os.path.exists(dfname):
- os.remove(dfname) # Not allowed to fail here
- os.rename(outname, dfname) # nor here
- self._fileop.write_binary_file(outname, script_bytes)
- logger.debug('Able to replace executable using '
- '.deleteme logic')
- try:
- os.remove(dfname)
- except Exception:
- pass # still in use - ignore error
- else:
- if self._is_nt and not outname.endswith('.' + ext): # pragma: no cover
- outname = '%s.%s' % (outname, ext)
- if os.path.exists(outname) and not self.clobber:
- logger.warning('Skipping existing file %s', outname)
- continue
- self._fileop.write_binary_file(outname, script_bytes)
- if self.set_mode:
- self._fileop.set_executable_mode([outname])
- filenames.append(outname)
-
- variant_separator = '-'
-
- def get_script_filenames(self, name):
- result = set()
- if '' in self.variants:
- result.add(name)
- if 'X' in self.variants:
- result.add('%s%s' % (name, self.version_info[0]))
- if 'X.Y' in self.variants:
- result.add('%s%s%s.%s' % (name, self.variant_separator,
- self.version_info[0], self.version_info[1]))
- return result
-
- def _make_script(self, entry, filenames, options=None):
- post_interp = b''
- if options:
- args = options.get('interpreter_args', [])
- if args:
- args = ' %s' % ' '.join(args)
- post_interp = args.encode('utf-8')
- shebang = self._get_shebang('utf-8', post_interp, options=options)
- script = self._get_script_text(entry).encode('utf-8')
- scriptnames = self.get_script_filenames(entry.name)
- if options and options.get('gui', False):
- ext = 'pyw'
- else:
- ext = 'py'
- self._write_script(scriptnames, shebang, script, filenames, ext)
-
- def _copy_script(self, script, filenames):
- adjust = False
- script = os.path.join(self.source_dir, convert_path(script))
- outname = os.path.join(self.target_dir, os.path.basename(script))
- if not self.force and not self._fileop.newer(script, outname):
- logger.debug('not copying %s (up-to-date)', script)
- return
-
- # Always open the file, but ignore failures in dry-run mode --
- # that way, we'll get accurate feedback if we can read the
- # script.
- try:
- f = open(script, 'rb')
- except IOError: # pragma: no cover
- if not self.dry_run:
- raise
- f = None
- else:
- first_line = f.readline()
- if not first_line: # pragma: no cover
- logger.warning('%s is an empty file (skipping)', script)
- return
-
- match = FIRST_LINE_RE.match(first_line.replace(b'\r\n', b'\n'))
- if match:
- adjust = True
- post_interp = match.group(1) or b''
-
- if not adjust:
- if f:
- f.close()
- self._fileop.copy_file(script, outname)
- if self.set_mode:
- self._fileop.set_executable_mode([outname])
- filenames.append(outname)
- else:
- logger.info('copying and adjusting %s -> %s', script,
- self.target_dir)
- if not self._fileop.dry_run:
- encoding, lines = detect_encoding(f.readline)
- f.seek(0)
- shebang = self._get_shebang(encoding, post_interp)
- if b'pythonw' in first_line: # pragma: no cover
- ext = 'pyw'
- else:
- ext = 'py'
- n = os.path.basename(outname)
- self._write_script([n], shebang, f.read(), filenames, ext)
- if f:
- f.close()
-
- @property
- def dry_run(self):
- return self._fileop.dry_run
-
- @dry_run.setter
- def dry_run(self, value):
- self._fileop.dry_run = value
-
- if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'): # pragma: no cover
- # Executable launcher support.
- # Launchers are from https://bitbucket.org/vinay.sajip/simple_launcher/
-
- def _get_launcher(self, kind):
- if struct.calcsize('P') == 8: # 64-bit
- bits = '64'
- else:
- bits = '32'
- platform_suffix = '-arm' if get_platform() == 'win-arm64' else ''
- name = '%s%s%s.exe' % (kind, bits, platform_suffix)
- # Issue 31: don't hardcode an absolute package name, but
- # determine it relative to the current package
- distlib_package = __name__.rsplit('.', 1)[0]
- resource = finder(distlib_package).find(name)
- if not resource:
- msg = ('Unable to find resource %s in package %s' % (name,
- distlib_package))
- raise ValueError(msg)
- return resource.bytes
-
- # Public API follows
-
- def make(self, specification, options=None):
- """
- Make a script.
-
- :param specification: The specification, which is either a valid export
- entry specification (to make a script from a
- callable) or a filename (to make a script by
- copying from a source location).
- :param options: A dictionary of options controlling script generation.
- :return: A list of all absolute pathnames written to.
- """
- filenames = []
- entry = get_export_entry(specification)
- if entry is None:
- self._copy_script(specification, filenames)
- else:
- self._make_script(entry, filenames, options=options)
- return filenames
-
- def make_multiple(self, specifications, options=None):
- """
-        Take a list of specifications and make scripts from them.
-        :param specifications: A list of specifications.
-        :param options: A dictionary of options controlling script generation.
-        :return: A list of all absolute pathnames written to.
- """
- filenames = []
- for specification in specifications:
- filenames.extend(self.make(specification, options))
- return filenames
diff --git a/spaces/ali-ghamdan/realesrgan-models/scripts/generate_meta_info.py b/spaces/ali-ghamdan/realesrgan-models/scripts/generate_meta_info.py
deleted file mode 100644
index 9c3b7a37e85f534075c50e6c33d7cca999d8b836..0000000000000000000000000000000000000000
--- a/spaces/ali-ghamdan/realesrgan-models/scripts/generate_meta_info.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import argparse
-import cv2
-import glob
-import os
-
-
-def main(args):
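-    # Each valid image is written to the meta info txt file as one line,
-    # using its path relative to the matching --root folder.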
- txt_file = open(args.meta_info, 'w')
- for folder, root in zip(args.input, args.root):
- img_paths = sorted(glob.glob(os.path.join(folder, '*')))
- for img_path in img_paths:
- status = True
- if args.check:
- # read the image once for check, as some images may have errors
-                img = None  # ensure img is defined even if cv2.imread raises
-                try:
- img = cv2.imread(img_path)
- except (IOError, OSError) as error:
- print(f'Read {img_path} error: {error}')
- status = False
- if img is None:
- status = False
- print(f'Img is None: {img_path}')
- if status:
- # get the relative path
- img_name = os.path.relpath(img_path, root)
- print(img_name)
-                txt_file.write(f'{img_name}\n')
-    txt_file.close()
-
-
-if __name__ == '__main__':
- """Generate meta info (txt file) for only Ground-Truth images.
-
- It can also generate meta info from several folders into one txt file.
- """
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--input',
- nargs='+',
- default=['datasets/DF2K/DF2K_HR', 'datasets/DF2K/DF2K_multiscale'],
- help='Input folder, can be a list')
- parser.add_argument(
- '--root',
- nargs='+',
- default=['datasets/DF2K', 'datasets/DF2K'],
- help='Folder root, should have the length as input folders')
- parser.add_argument(
- '--meta_info',
- type=str,
- default='datasets/DF2K/meta_info/meta_info_DF2Kmultiscale.txt',
- help='txt path for meta info')
- parser.add_argument('--check', action='store_true', help='Read image to check whether it is ok')
- args = parser.parse_args()
-
- assert len(args.input) == len(args.root), ('Input folder and folder root should have the same length, but got '
- f'{len(args.input)} and {len(args.root)}.')
- os.makedirs(os.path.dirname(args.meta_info), exist_ok=True)
-
- main(args)
diff --git a/spaces/allknowingroger/Image-Models-Test173/app.py b/spaces/allknowingroger/Image-Models-Test173/app.py
deleted file mode 100644
index 4b15b23ca1d159d96b21c758d4404f826be5da84..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test173/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import gradio as gr
-# import os
-# import sys
-# from pathlib import Path
-import time
-
-models =[
- "Yntec/dosmixVAE",
- "Shiva1602/my-pet-dog",
- "Nikithaa/my-pet-dog",
- "Hvijapuram22/my-pet-dog",
- "Priyakatta02/my-peacock",
- "Jayalakshmi2004/parrot-jlb",
- "Aman242526/my-pet-cockteil-bid",
- "flobbit/monster-cars-sdxl-lora",
- "Yntec/Cetus",
-]
-
-
-model_functions = {}
-model_idx = 1
-for model_path in models:
- try:
- model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False)
- except Exception as error:
- def the_fn(txt):
- return None
- model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"])
- model_idx+=1
-
-
-def send_it_idx(idx):
- def send_it_fn(prompt):
-        # model_functions is keyed by int indices, so look up with ints
-        output = (model_functions.get(idx) or model_functions.get(1))(prompt)
- return output
- return send_it_fn
-
-def get_prompts(prompt_text):
- return prompt_text
-
-def clear_it(val):
-    # reset the toggle value to 0 regardless of its current state
-    return 0
-
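-# Wired to start_box.change(..., every=1) below: once 60 seconds have passed
-# since the recorded start timestamp, zero the timer value and flip tog_box,
-# whose change handler cancels any still-pending generation jobs.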
-def all_task_end(cnt,t_stamp):
- to = t_stamp + 60
- et = time.time()
- if et > to and t_stamp != 0:
- d = gr.update(value=0)
- tog = gr.update(value=1)
- #print(f'to: {to} et: {et}')
- else:
- if cnt != 0:
- d = gr.update(value=et)
- else:
- d = gr.update(value=0)
- tog = gr.update(value=0)
- #print (f'passing: to: {to} et: {et}')
- pass
- return d, tog
-
-def all_task_start():
- print("\n\n\n\n\n\n\n")
- t = time.gmtime()
- t_stamp = time.time()
- current_time = time.strftime("%H:%M:%S", t)
- return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0)
-
-def clear_fn():
- nn = len(models)
- return tuple([None, *[None for _ in range(nn)]])
-
-
-
-with gr.Blocks(title="SD Models") as my_interface:
- with gr.Column(scale=12):
- # with gr.Row():
-            # gr.Markdown("""- Primary prompt: what you want to draw (English words, e.g. a cat; commas between words work better; click Improve to refine)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""")
- with gr.Row():
- with gr.Row(scale=6):
- primary_prompt=gr.Textbox(label="Prompt", value="")
- # real_prompt=gr.Textbox(label="Real prompt")
- with gr.Row(scale=6):
- # improve_prompts_btn=gr.Button("Improve")
- with gr.Row():
- run=gr.Button("Run",variant="primary")
- clear_btn=gr.Button("Clear")
- with gr.Row():
- sd_outputs = {}
- model_idx = 1
- for model_path in models:
- with gr.Column(scale=3, min_width=320):
- with gr.Box():
- sd_outputs[model_idx] = gr.Image(label=model_path)
- pass
- model_idx += 1
- pass
- pass
-
- with gr.Row(visible=False):
- start_box=gr.Number(interactive=False)
- end_box=gr.Number(interactive=False)
- tog_box=gr.Textbox(value=0,interactive=False)
-
- start_box.change(
- all_task_end,
- [start_box, end_box],
- [start_box, tog_box],
- every=1,
- show_progress=False)
-
- primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box])
- run.click(all_task_start, None, [start_box, end_box, tog_box])
- runs_dict = {}
- model_idx = 1
- for model_path in models:
- runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]])
- model_idx += 1
- pass
- pass
-
- # improve_prompts_btn_clicked=improve_prompts_btn.click(
- # get_prompts,
- # inputs=[primary_prompt],
- # outputs=[primary_prompt],
- # cancels=list(runs_dict.values()))
- clear_btn.click(
- clear_fn,
- None,
- [primary_prompt, *list(sd_outputs.values())],
- cancels=[*list(runs_dict.values())])
- tog_box.change(
- clear_it,
- tog_box,
- tog_box,
- cancels=[*list(runs_dict.values())])
-
-my_interface.queue(concurrency_count=600, status_update_rate=1)
-my_interface.launch(inline=True, show_api=False)
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test86/app.py b/spaces/allknowingroger/Image-Models-Test86/app.py
deleted file mode 100644
index 787bb46be319041a8db08a2f28be7ef80702f9df..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test86/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import gradio as gr
-# import os
-# import sys
-# from pathlib import Path
-import time
-
-models =[
- "stephanebhiri/lora-trained-xl-colab-stp25",
- "stephanebhiri/lora-trained-xl-colab-stp23",
- "a2a/lora-trained-xl",
- "perraju/lora-trained-xl-colab",
- "JustAIGuy/lora-trained-xl-colab_2",
- "jbilcke-hf/sdxl-starfield",
- "goofyai/3d_render_style_xl",
- "MirageML/lowpoly-cyberpunk",
- "ddPn08/subtly",
-]
-
-
-model_functions = {}
-model_idx = 1
-for model_path in models:
- try:
- model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False)
- except Exception as error:
- def the_fn(txt):
- return None
- model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"])
- model_idx+=1
-
-
-def send_it_idx(idx):
- def send_it_fn(prompt):
-        # model_functions is keyed by int indices, so look up with ints
-        output = (model_functions.get(idx) or model_functions.get(1))(prompt)
- return output
- return send_it_fn
-
-def get_prompts(prompt_text):
- return prompt_text
-
-def clear_it(val):
-    # reset the toggle value to 0 regardless of its current state
-    return 0
-
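-# Wired to start_box.change(..., every=1) below: once 60 seconds have passed
-# since the recorded start timestamp, zero the timer value and flip tog_box,
-# whose change handler cancels any still-pending generation jobs.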
-def all_task_end(cnt,t_stamp):
- to = t_stamp + 60
- et = time.time()
- if et > to and t_stamp != 0:
- d = gr.update(value=0)
- tog = gr.update(value=1)
- #print(f'to: {to} et: {et}')
- else:
- if cnt != 0:
- d = gr.update(value=et)
- else:
- d = gr.update(value=0)
- tog = gr.update(value=0)
- #print (f'passing: to: {to} et: {et}')
- pass
- return d, tog
-
-def all_task_start():
- print("\n\n\n\n\n\n\n")
- t = time.gmtime()
- t_stamp = time.time()
- current_time = time.strftime("%H:%M:%S", t)
- return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0)
-
-def clear_fn():
- nn = len(models)
- return tuple([None, *[None for _ in range(nn)]])
-
-
-
-with gr.Blocks(title="SD Models") as my_interface:
- with gr.Column(scale=12):
- # with gr.Row():
-            # gr.Markdown("""- Primary prompt: what you want to draw (English words, e.g. a cat; commas between words work better; click Improve to refine)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""")
- with gr.Row():
- with gr.Row(scale=6):
- primary_prompt=gr.Textbox(label="Prompt", value="")
- # real_prompt=gr.Textbox(label="Real prompt")
- with gr.Row(scale=6):
- # improve_prompts_btn=gr.Button("Improve")
- with gr.Row():
- run=gr.Button("Run",variant="primary")
- clear_btn=gr.Button("Clear")
- with gr.Row():
- sd_outputs = {}
- model_idx = 1
- for model_path in models:
- with gr.Column(scale=3, min_width=320):
- with gr.Box():
- sd_outputs[model_idx] = gr.Image(label=model_path)
- pass
- model_idx += 1
- pass
- pass
-
- with gr.Row(visible=False):
- start_box=gr.Number(interactive=False)
- end_box=gr.Number(interactive=False)
- tog_box=gr.Textbox(value=0,interactive=False)
-
- start_box.change(
- all_task_end,
- [start_box, end_box],
- [start_box, tog_box],
- every=1,
- show_progress=False)
-
- primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box])
- run.click(all_task_start, None, [start_box, end_box, tog_box])
- runs_dict = {}
- model_idx = 1
- for model_path in models:
- runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]])
- model_idx += 1
- pass
- pass
-
- # improve_prompts_btn_clicked=improve_prompts_btn.click(
- # get_prompts,
- # inputs=[primary_prompt],
- # outputs=[primary_prompt],
- # cancels=list(runs_dict.values()))
- clear_btn.click(
- clear_fn,
- None,
- [primary_prompt, *list(sd_outputs.values())],
- cancels=[*list(runs_dict.values())])
- tog_box.change(
- clear_it,
- tog_box,
- tog_box,
- cancels=[*list(runs_dict.values())])
-
-my_interface.queue(concurrency_count=600, status_update_rate=1)
-my_interface.launch(inline=True, show_api=False)
-
\ No newline at end of file
diff --git a/spaces/amin2809/rvc-models/infer_pack/models.py b/spaces/amin2809/rvc-models/infer_pack/models.py
deleted file mode 100644
index 96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000
--- a/spaces/amin2809/rvc-models/infer_pack/models.py
+++ /dev/null
@@ -1,982 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import Conv1d, ConvTranspose1d, Conv2d
-from torch.nn import functional as F
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack import modules, attentions, commons
-from infer_pack.commons import init_weights, get_padding
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
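-            # rad_values holds the per-frame phase increment (f0 / fs) of each
-            # harmonic; a random initial phase is added per harmonic, the
-            # increments are upsampled to sample rate, and their cumulative sum
-            # (with the cumsum_shift wrap corrections) gives the sine phase.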
-            rad_values = (f0_buf / self.sampling_rate) % 1  # taking % 1 here means the harmonic products cannot be optimized in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying % 1 here would keep the later cumsum from being optimized further
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
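-# NSF-style generator: SourceModuleHnNSF builds a harmonic source signal from
-# F0; at every upsampling stage the source is passed through a noise_convs
-# layer and added to the feature map before the residual blocks.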
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id, shape [bs, 1]
-        # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the last dim (size 1) is the time axis, broadcast over t
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the last dim (size 1) is the time axis, broadcast over t
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
-    def remove_weight_norm(self):
-        # this variant has no posterior encoder (enc_q), so only dec and flow
-        # carry weight norm
-        self.dec.remove_weight_norm()
-        self.flow.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y_lengths, ds
-    ):  # y (the spec) is no longer needed here
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the last dim (size 1) is the time axis, broadcast over t
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- z_slice, ids_slice = commons.rand_slice_segments(
- x, y_lengths, self.segment_size
- )
-
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice
-
- def infer(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
-    ):  # y (the spec) is no longer needed here
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the last dim (size 1) is the time axis, broadcast over t
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o, o
-
-
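-# Combines one waveform-scale discriminator (DiscriminatorS) with one
-# period-based discriminator (DiscriminatorP) per entry in `periods`; forward
-# returns the real/generated scores and intermediate feature maps of every branch.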
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/anasanchezf/cloome/src/clip/clip.py b/spaces/anasanchezf/cloome/src/clip/clip.py
deleted file mode 100644
index 6e55c9c588958925f65adcf8b883eb8ece70daa1..0000000000000000000000000000000000000000
--- a/spaces/anasanchezf/cloome/src/clip/clip.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Code ported from https://github.com/openai/CLIP
-
-import hashlib
-import os
-import urllib
-import warnings
-from typing import Union, List
-
-import torch
-from PIL import Image
-from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize, RandomResizedCrop, InterpolationMode, RandomCrop, RandomRotation
-from tqdm import tqdm
-
-from clip.model import build_model
-# from clip.tokenizer import SimpleTokenizer as _Tokenizer
-
-__all__ = ["available_models", "load", "tokenize"]
-# _tokenizer = _Tokenizer()
-
-_MODELS = {
- "RN50": "https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
- "RN101": "https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt",
- "RN50x4": "https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt",
- "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
-}
-
-
-class NormalizeByImage(object):
-    """Normalize a tensor image channel-wise by its own statistics.
-    Each channel of the input ``torch.*Tensor`` is normalized in place as
-    ``input[channel] = (input[channel] - input[channel].mean()) / (input[channel].std() + 1e-7)``,
-    so no external mean/std sequences are needed.
-    """
-
- def __call__(self, tensor):
- """
- Args:
- tensor (Tensor): Tensor image of size (C, H, W) to be normalized.
- Returns:
- Tensor: Normalized Tensor image.
- """
- for t in tensor:
- t.sub_(t.mean()).div_(t.std() + 1e-7)
- return tensor
-
-
-def _download(url: str, root: str = os.path.expanduser("~/.cache/clip")):
- os.makedirs(root, exist_ok=True)
- filename = os.path.basename(url)
-
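-    # The expected SHA256 digest is embedded in the download URL as the
-    # second-to-last path component (see the _MODELS entries above).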
- expected_sha256 = url.split("/")[-2]
- download_target = os.path.join(root, filename)
-
- if os.path.exists(download_target) and not os.path.isfile(download_target):
- raise RuntimeError(f"{download_target} exists and is not a regular file")
-
- if os.path.isfile(download_target):
- if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256:
- return download_target
- else:
- warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file")
-
- with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
- with tqdm(total=int(source.info().get("Content-Length")), ncols=80, unit='iB', unit_scale=True) as loop:
- while True:
- buffer = source.read(8192)
- if not buffer:
- break
-
- output.write(buffer)
- loop.update(len(buffer))
-
- if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256:
-        raise RuntimeError("Model has been downloaded but the SHA256 checksum does not match")
-
- return download_target
-
-def _convert_to_rgb(image):
- return image.convert('RGB')
-
-def _transform(n_px_tr: int, n_px_val: int, is_train: bool, normalize:str = "dataset", preprocess:str = "downsize"):
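-    # Build the torchvision pipeline: `preprocess` picks how images are brought
-    # to size (random crop, downsized random-resized crop, or rotation plus
-    # center crop for training; center crop or resize for evaluation), and
-    # `normalize` picks per-image, dataset-level, or no normalization.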
- #normalize = Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
- # print(n_px_tr)
- # print(n_px_val)
- if normalize == "img":
- normalize = NormalizeByImage()
- elif normalize == "dataset":
- normalize = Normalize((47.1314, 40.8138, 53.7692, 46.2656, 28.7243), (47.1314, 40.8138, 53.7692, 46.2656, 28.7243)) # normalize for CellPainting
- if normalize == "None":
- normalize = None
-
- if is_train:
- if preprocess == "crop":
- #resize = RandomResizedCrop(n_px_tr, scale=(0.25,0.3), ratio=(0.95, 1.05), interpolation=InterpolationMode.BICUBIC)
- resize = RandomCrop(n_px_tr)
- elif preprocess == "downsize":
- resize = RandomResizedCrop(n_px_tr, scale=(0.9, 1.0), interpolation=InterpolationMode.BICUBIC)
- elif preprocess == "rotate":
- resize = Compose([
- RandomRotation((0, 360)),
- CenterCrop(n_px_tr)
- ])
-
- else:
-        if preprocess in ("crop", "rotate"):
- resize = Compose([
- #RandomResizedCrop(n_px_tr, scale=(0.25,0.3), ratio=(0.95, 1.05), interpolation=InterpolationMode.BICUBIC)
- CenterCrop(n_px_val),
- ])
- elif preprocess == "downsize":
- resize = Compose([
- Resize(n_px_val, interpolation=InterpolationMode.BICUBIC),
- CenterCrop(n_px_val),
- ])
- if normalize:
- return Compose([
- ToTensor(),
- resize,
- normalize,
- ])
- else:
- return Compose([
- ToTensor(),
- resize,
- ])
-
-
-
-def available_models() -> List[str]:
- """Returns the names of available CLIP models"""
- return list(_MODELS.keys())
-
-
-def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit=True, is_train=False, pretrained=True):
- """Load a CLIP model
- Parameters
- ----------
- name : str
- A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict
- device : Union[str, torch.device]
- The device to put the loaded model
- jit : bool
- Whether to load the optimized JIT model (default) or more hackable non-JIT model.
- Returns
- -------
- model : torch.nn.Module
- The CLIP model
- preprocess : Callable[[PIL.Image], torch.Tensor]
- A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input
- """
- if name in _MODELS:
- model_path = _download(_MODELS[name])
- elif os.path.isfile(name):
- model_path = name
- else:
- raise RuntimeError(f"Model {name} not found; available models = {available_models()}")
-
- try:
- # loading JIT archive
- model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval()
- state_dict = None
- except RuntimeError:
- # loading saved state dict
- if jit:
- warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead")
- jit = False
- state_dict = torch.load(model_path, map_location="cpu")
-
- if not jit:
- try:
- model = build_model(state_dict or model.state_dict()).to(device)
- except KeyError:
- sd = {k[7:]: v for k,v in state_dict["state_dict"].items()}
- model = build_model(sd).to(device)
-
- if str(device) == "cpu":
- model.float()
-        # use the model's input resolution for both the train and eval sizes
-        n_px = model.visual.input_resolution
-        return model, \
-               _transform(n_px, n_px, is_train=True), \
-               _transform(n_px, n_px, is_train=False)
-
- # patch the device names
- device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[])
- device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1]
-
- def patch_device(module):
- graphs = [module.graph] if hasattr(module, "graph") else []
- if hasattr(module, "forward1"):
- graphs.append(module.forward1.graph)
-
- for graph in graphs:
- for node in graph.findAllNodes("prim::Constant"):
- if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"):
- node.copyAttributes(device_node)
-
- model.apply(patch_device)
- patch_device(model.encode_image)
- patch_device(model.encode_text)
-
- # patch dtype to float32 on CPU
- if str(device) == "cpu":
- float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[])
- float_input = list(float_holder.graph.findNode("aten::to").inputs())[1]
- float_node = float_input.node()
-
- def patch_float(module):
- graphs = [module.graph] if hasattr(module, "graph") else []
- if hasattr(module, "forward1"):
- graphs.append(module.forward1.graph)
-
- for graph in graphs:
- for node in graph.findAllNodes("aten::to"):
- inputs = list(node.inputs())
- for i in [1, 2]: # dtype can be the second or third argument to aten::to()
- if inputs[i].node()["value"] == 5:
- inputs[i].node().copyAttributes(float_node)
-
- model.apply(patch_float)
- patch_float(model.encode_image)
- patch_float(model.encode_text)
-
- model.float()
-
-    # use the model's input resolution for both the train and eval sizes
-    n_px = model.input_resolution.item()
-    return model, \
-           _transform(n_px, n_px, is_train=True), \
-           _transform(n_px, n_px, is_train=False)
-
-
-def tokenize(texts: Union[str, List[str]], context_length: int = 77) -> torch.LongTensor:
- """
- Returns the tokenized representation of given input string(s)
- Parameters
- ----------
- texts : Union[str, List[str]]
- An input string or a list of input strings to tokenize
- context_length : int
- The context length to use; all CLIP models use 77 as the context length
- Returns
- -------
- A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]
- """
- if isinstance(texts, str):
- texts = [texts]
-
-    sot_token = _tokenizer.encoder["<|startoftext|>"]
-    eot_token = _tokenizer.encoder["<|endoftext|>"]
- all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
- result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
-
- for i, tokens in enumerate(all_tokens):
- if len(tokens) > context_length: # Truncate
- tokens = tokens[:context_length]
- result[i, :len(tokens)] = torch.tensor(tokens)
-
- return result
diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/client/css/theme-toggler.css b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/client/css/theme-toggler.css
deleted file mode 100644
index b673b5920a24693e7ea15b873e46731b388ec527..0000000000000000000000000000000000000000
--- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/client/css/theme-toggler.css
+++ /dev/null
@@ -1,33 +0,0 @@
-.theme-toggler-container {
- margin: 24px 0px 8px 0px;
- justify-content: center;
-}
-
-.theme-toggler-container.checkbox input + label,
-.theme-toggler-container.checkbox input:checked + label:after {
- background: var(--colour-1);
-}
-
-.theme-toggler-container.checkbox input + label:after,
-.theme-toggler-container.checkbox input:checked + label {
- background: var(--colour-3);
-}
-
-.theme-toggler-container.checkbox span {
- font-size: 0.75rem;
-}
-
-.theme-toggler-container.checkbox label {
- width: 24px;
- height: 16px;
-}
-
-.theme-toggler-container.checkbox label:after {
- left: 2px;
- width: 10px;
- height: 10px;
-}
-
-.theme-toggler-container.checkbox input:checked + label:after {
- left: calc(100% - 2px - 10px);
-}
\ No newline at end of file
diff --git a/spaces/aniketingole92/gradiolangchainChatbotopenAI/README.md b/spaces/aniketingole92/gradiolangchainChatbotopenAI/README.md
deleted file mode 100644
index f8481f29aee0ca65271f302c08c8f1ebe7579b76..0000000000000000000000000000000000000000
--- a/spaces/aniketingole92/gradiolangchainChatbotopenAI/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: GradiolangchainChatbotopenAI
-emoji: 📈
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/annt/mrc_uit_squadv2/retro_reader/preprocess.py b/spaces/annt/mrc_uit_squadv2/retro_reader/preprocess.py
deleted file mode 100644
index fbb334bda950482c30174981305f836d8d512c04..0000000000000000000000000000000000000000
--- a/spaces/annt/mrc_uit_squadv2/retro_reader/preprocess.py
+++ /dev/null
@@ -1,301 +0,0 @@
-import numpy as np
-from .constants import (
- QUESTION_COLUMN_NAME,
- CONTEXT_COLUMN_NAME,
- ANSWER_COLUMN_NAME,
- ANSWERABLE_COLUMN_NAME,
- ID_COLUMN_NAME,
-)
-
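-# get_sketch_features builds features for the sketchy reading step (a binary
-# answerable / unanswerable classification label per span), while
-# get_intensive_features builds features for the intensive reading step
-# (start/end answer positions for span extraction, plus an is_impossible flag).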
-
-def get_sketch_features(tokenizer, mode, data_args):
-
- pad_on_right = tokenizer.padding_side == "right"
- max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)
-
- def tokenize_fn(examples):
- """Tokenize questions and contexts
- Args:
- examples (Dict): DatasetDict
- Returns:
- Dict: Tokenized examples
- """
-        # Tokenize with truncation and padding, keeping overflowing tokens via
-        # stride so each feature overlaps a little with the previous context.
-        # When overflow occurs, more samples than the requested batch size may
-        # be produced -> acts as data augmentation.
- tokenized_examples = tokenizer(
- examples[QUESTION_COLUMN_NAME if pad_on_right else CONTEXT_COLUMN_NAME],
- examples[CONTEXT_COLUMN_NAME if pad_on_right else QUESTION_COLUMN_NAME],
-            # Truncate when the context is too long
- truncation="only_second" if pad_on_right else "only_first",
- max_length=max_seq_length,
- stride=data_args.doc_stride,
-            # When overflow occurs, a mapping back to the original example index is needed
- return_overflowing_tokens=True,
- return_offsets_mapping=False,
-            # Mark the two segments of the sentence pair with token type ids 0 and 1
- return_token_type_ids=data_args.return_token_type_ids,
- padding="max_length" if data_args.pad_to_max_length else False,
- # return_tensors='pt'
- )
- return tokenized_examples
-
- def prepare_train_features(examples):
- tokenized_examples = tokenize_fn(examples)
- sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
-
- tokenized_examples["labels"] = []
-
- for i in range(len(tokenized_examples["input_ids"])):
-            # One example may be split into several spans
- sample_index = sample_mapping[i]
-
-            # Create the answerability label
- # answerable: 0, unanswerable: 1
- is_impossible = examples[ANSWERABLE_COLUMN_NAME][sample_index]
- tokenized_examples["labels"].append(0 if not is_impossible else 1)
-
- return tokenized_examples
-
- def prepare_eval_features(examples):
- tokenized_examples = tokenize_fn(examples)
- sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
-
- tokenized_examples["example_id"] = []
- tokenized_examples["labels"] = []
-
- for i in range(len(tokenized_examples["input_ids"])):
-            # One example may be split into several spans
- sample_index = sample_mapping[i]
-
- id_col = examples[ID_COLUMN_NAME][sample_index]
- tokenized_examples["example_id"].append(id_col)
-
-            # Create the answerability label
- # answerable: 0, unanswerable: 1
- is_impossible = examples[ANSWERABLE_COLUMN_NAME][sample_index]
- tokenized_examples["labels"].append(0 if not is_impossible else 1)
-
- return tokenized_examples
-
- def prepare_test_features(examples):
- tokenized_examples = tokenize_fn(examples)
- sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
-
- tokenized_examples["example_id"] = []
-
- for i in range(len(tokenized_examples["input_ids"])):
-            # One example may be split into several spans
- sample_index = sample_mapping[i]
-
- id_col = examples[ID_COLUMN_NAME][sample_index]
- tokenized_examples["example_id"].append(id_col)
-
- return tokenized_examples
-
- if mode == "train":
- get_features_fn = prepare_train_features
- elif mode == "eval":
- get_features_fn = prepare_eval_features
- elif mode == "test":
- get_features_fn = prepare_test_features
-
- return get_features_fn, True
-
-
-def get_intensive_features(tokenizer, mode, data_args):
-
- pad_on_right = tokenizer.padding_side == "right"
- max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)
- beam_based = data_args.intensive_model_type in ["xlnet", "xlm"]
-
- def tokenize_fn(examples):
- """Tokenize questions and contexts
- Args:
- examples (Dict): DatasetDict
- Returns:
- Dict: Tokenized examples
- """
-        # Tokenize with truncation and padding, keeping overflowing tokens via
-        # stride so each feature overlaps a little with the previous context.
-        # When overflow occurs, more samples than the requested batch size may
-        # be produced.
- tokenized_examples = tokenizer(
- examples[QUESTION_COLUMN_NAME if pad_on_right else CONTEXT_COLUMN_NAME],
- examples[CONTEXT_COLUMN_NAME if pad_on_right else QUESTION_COLUMN_NAME],
-            # Truncate when the context is too long
- truncation="only_second" if pad_on_right else "only_first",
- max_length=max_seq_length,
- stride=data_args.doc_stride,
-            # When overflow occurs, a mapping back to the original example index is needed
- return_overflowing_tokens=True,
-            # Return offsets that map each token to its character positions in
-            # the original text, which helps locate the answer start/end positions.
- return_offsets_mapping=True,
- # sentence pair가 입력으로 들어올 때 0과 1로 구분지음
- return_token_type_ids=data_args.return_token_type_ids,
- padding="max_length" if data_args.pad_to_max_length else False,
- # return_tensors='pt'
- )
- return tokenized_examples
-
- def prepare_train_features(examples):
- tokenized_examples = tokenize_fn(examples)
- # Since one example might give us several features if it has a long context,
- # we need a map from a feature to its corresponding example.
- # This key gives us just that.
- sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
- # The offset mappings will give us a map from token to character position in the original context
- # This will help us compute the start_positions and end_positions.
- offset_mapping = tokenized_examples.pop("offset_mapping")
-
-        # Let's label those examples!
- tokenized_examples["start_positions"] = []
- tokenized_examples["end_positions"] = []
- tokenized_examples["is_impossibles"] = []
- if beam_based:
- tokenized_examples["cls_index"] = []
- tokenized_examples["p_mask"] = []
-
- for i, offsets in enumerate(offset_mapping):
- # We will label impossible answers with the index of the CLS token.
- input_ids = tokenized_examples["input_ids"][i]
- cls_index = input_ids.index(tokenizer.cls_token_id)
-
- # Grab the sequence corresponding to that example
- # (to know what is the context and what is the question.)
- sequence_ids = tokenized_examples.sequence_ids(i)
- context_index = 1 if pad_on_right else 0
-
- # `p_mask` which indicates the tokens that can't be in answers
-            # Build the p_mask: context tokens and the cls token get 0.0, all other tokens get 1.0.
-            # The cls token gets 0.0 too (for predictions of empty answers).
-            # Inspired by XLNet.
- if beam_based:
- tokenized_examples["cls_index"].append(cls_index)
- tokenized_examples["p_mask"].append(
- [
- 0.0 if s == context_index or k == cls_index else 1.0
- for k, s in enumerate(sequence_ids)
- ]
- )
-
- # One example can give several spans,
- # this is the index of the example containing this span of text.
- sample_index = sample_mapping[i]
- answers = examples[ANSWER_COLUMN_NAME][sample_index]
- is_impossible = examples[ANSWERABLE_COLUMN_NAME][sample_index]
-
- # If no answers are given, set the cls_index as answer.
- if is_impossible or len(answers["answer_start"]) == 0:
- tokenized_examples["start_positions"].append(cls_index)
- tokenized_examples["end_positions"].append(cls_index)
- tokenized_examples["is_impossibles"].append(1.0) # unanswerable
- else:
- # Start/end character index of the answer in the text.
- start_char = answers["answer_start"][0]
- end_char = start_char + len(answers["text"][0])
-
-                # sequence_ids only takes the three values 0, 1 and None, laid out as
-                # None 0 0 ... 0 None 1 1 ... 1 None
-
- # Start token index of the current span in the text.
- token_start_index = 0
- while sequence_ids[token_start_index] != context_index:
- token_start_index += 1
-
- # End token index of the current span in the text.
- token_end_index = len(input_ids) - 1
- while sequence_ids[token_end_index] != context_index:
- token_end_index -= 1
-
- # Detect if the answer is out of the span
- # (in which case this feature is labeled with the CLS index.)
- if not (
- offsets[token_start_index][0] <= start_char and
- offsets[token_end_index][1] >= end_char
- ):
- tokenized_examples["start_positions"].append(cls_index)
- tokenized_examples["end_positions"].append(cls_index)
- tokenized_examples["is_impossibles"].append(1.0) # unanswerable
- else:
- # Otherwise move the token_start_index and token_end_index to the two ends of the answer.
- # Note: we could go after the last offset if the answer is the last word (edge case).
- while (
- token_start_index < len(offsets) and
- offsets[token_start_index][0] <= start_char
- ):
- token_start_index += 1
- tokenized_examples["start_positions"].append(token_start_index - 1)
-
- while offsets[token_end_index][1] >= end_char:
- token_end_index -= 1
- tokenized_examples["end_positions"].append(token_end_index + 1)
-
- tokenized_examples["is_impossibles"].append(0.0) # answerable
-
- return tokenized_examples
-
- def prepare_eval_features(examples):
- tokenized_examples = tokenize_fn(examples)
- # Since one example might give us several features if it has a long context,
- # we need a map from a feature to its corresponding example.
- # This key gives us just that.
- sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
-
- # For evaluation, we will need to convert our predictions to substrings of the context,
- # so we keep the corresponding example_id and we will store the offset mappings.
- tokenized_examples["example_id"] = []
-
-        # We will provide the index of the CLS token and the p_mask to the model,
- # but not the is_impossible label.
- if beam_based:
- tokenized_examples["cls_index"] = []
- tokenized_examples["p_mask"] = []
-
- for i, input_ids in enumerate(tokenized_examples["input_ids"]):
- # Find the CLS token in the input ids.
- cls_index = input_ids.index(tokenizer.cls_token_id)
-
- # Grab the sequence corresponding to that example
- # (to know what is the context and what is the question.)
- sequence_ids = tokenized_examples.sequence_ids(i)
- context_index = 1 if pad_on_right else 0
-
-            # `p_mask` indicates the tokens that can't be in answers.
-            # Build the p_mask: non-special tokens in the context get 0.0, the others get 1.0.
-            # The cls token gets 0.0 too (for predictions of empty answers).
-            # Inspired by XLNet.
- if beam_based:
- tokenized_examples["cls_index"].append(cls_index)
- tokenized_examples["p_mask"].append(
- [
- 0.0 if s == context_index or k == cls_index else 1.0
- for k, s in enumerate(sequence_ids)
- ]
- )
-
- # One example can give several spans,
- # this is the index of the example containing this span of text.
- sample_index = sample_mapping[i]
- id_col = examples[ID_COLUMN_NAME][sample_index]
- tokenized_examples["example_id"].append(id_col)
-
-            # Set to None the offset_mapping entries that are not part of the context
- # so it's easy to determine if a token position is part of the context or not.
- tokenized_examples["offset_mapping"][i] = [
- (o if sequence_ids[k] == context_index else None)
- for k, o in enumerate(tokenized_examples["offset_mapping"][i])
- ]
-
- return tokenized_examples
-
- if mode == "train":
- get_features_fn = prepare_train_features
- elif mode == "eval":
- get_features_fn = prepare_eval_features
- elif mode == "test":
- get_features_fn = prepare_eval_features
-
- return get_features_fn, True
\ No newline at end of file
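
For readers following the deleted QA preprocessing code above: a feature builder like `prepare_train_features` is normally applied through `datasets.Dataset.map`. The sketch below is only an illustrative usage, not part of the removed file; the dataset name and column handling are assumptions.

```python
# Hypothetical usage sketch (not part of the deleted file): applying the
# prepare_train_features closure above with datasets.Dataset.map.
# Assumes prepare_train_features is in scope and a KorQuAD-style dataset is used.
from datasets import load_dataset

raw_datasets = load_dataset("squad_kor_v1")        # assumed dataset name
column_names = raw_datasets["train"].column_names  # original columns to drop

train_features = raw_datasets["train"].map(
    prepare_train_features,
    batched=True,                # the closure expects batched examples
    remove_columns=column_names, # keep only model inputs and labels
    num_proc=4,                  # optional parallelism
)
print(train_features)
```
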
diff --git a/spaces/ansfarooq7/l4-project/README.md b/spaces/ansfarooq7/l4-project/README.md
deleted file mode 100644
index 40caf51a9f75c08dfd47ef2ea22d6e15314c80ed..0000000000000000000000000000000000000000
--- a/spaces/ansfarooq7/l4-project/README.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: Limerick Generation
-emoji: 🧝
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-app_file: app.py
-pinned: false
----
-# Configuration
-`title`: _string_
-Display title for the Space
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
\ No newline at end of file
diff --git a/spaces/antonovmaxim/text-generation-webui-space/modules/llamacpp_model.py b/spaces/antonovmaxim/text-generation-webui-space/modules/llamacpp_model.py
deleted file mode 100644
index 0ed33543dcf5ca61f0dddc6b3c35add9d535df59..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/modules/llamacpp_model.py
+++ /dev/null
@@ -1,86 +0,0 @@
-'''
-Based on
-https://github.com/abetlen/llama-cpp-python
-
-Documentation:
-https://abetlen.github.io/llama-cpp-python/
-'''
-
-import logging
-import re
-
-from llama_cpp import Llama, LlamaCache
-
-from modules import shared
-from modules.callbacks import Iteratorize
-
-
-class LlamaCppModel:
- def __init__(self):
- self.initialized = False
-
- def __del__(self):
- self.model.__del__()
-
- @classmethod
- def from_pretrained(self, path):
- result = self()
-
- cache_capacity = 0
- if shared.args.cache_capacity is not None:
- if 'GiB' in shared.args.cache_capacity:
- cache_capacity = int(re.sub('[a-zA-Z]', '', shared.args.cache_capacity)) * 1000 * 1000 * 1000
- elif 'MiB' in shared.args.cache_capacity:
- cache_capacity = int(re.sub('[a-zA-Z]', '', shared.args.cache_capacity)) * 1000 * 1000
- else:
- cache_capacity = int(shared.args.cache_capacity)
-
- logging.info("Cache capacity is " + str(cache_capacity) + " bytes")
-
- params = {
- 'model_path': str(path),
- 'n_ctx': 2048,
- 'seed': 0,
- 'n_threads': shared.args.threads or None,
- 'n_batch': shared.args.n_batch,
- 'use_mmap': not shared.args.no_mmap,
- 'use_mlock': shared.args.mlock,
- 'n_gpu_layers': shared.args.n_gpu_layers
- }
- self.model = Llama(**params)
- if cache_capacity > 0:
- self.model.set_cache(LlamaCache(capacity_bytes=cache_capacity))
-
- # This is ugly, but the model and the tokenizer are the same object in this library.
- return result, result
-
- def encode(self, string):
- if type(string) is str:
- string = string.encode()
- return self.model.tokenize(string)
-
- def generate(self, context="", token_count=20, temperature=1, top_p=1, top_k=50, repetition_penalty=1, callback=None):
- context = context if type(context) is str else context.decode()
- completion_chunks = self.model.create_completion(
- prompt=context,
- max_tokens=token_count,
- temperature=temperature,
- top_p=top_p,
- top_k=top_k,
- repeat_penalty=repetition_penalty,
- stream=True
- )
- output = ""
- for completion_chunk in completion_chunks:
- text = completion_chunk['choices'][0]['text']
- output += text
- if callback:
- callback(text)
- return output
-
- def generate_with_streaming(self, **kwargs):
- with Iteratorize(self.generate, kwargs, callback=None) as generator:
- reply = ''
- for token in generator:
- reply += token
- yield reply
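
A rough, hypothetical driver for the `LlamaCppModel` wrapper removed above; the model path is a placeholder, and in the real web UI the `shared.args` values are filled in by its argument parser before this class is used.

```python
# Hypothetical usage sketch of the deleted wrapper (not part of the original repo).
from modules.llamacpp_model import LlamaCppModel

# from_pretrained returns the same object twice (model and "tokenizer").
model, tokenizer = LlamaCppModel.from_pretrained("models/ggml-model-q4_0.bin")

def stream_to_stdout(chunk):
    print(chunk, end="", flush=True)   # called for each streamed text chunk

reply = model.generate(
    context="Q: What does llama.cpp do?\nA:",
    token_count=64,
    temperature=0.7,
    callback=stream_to_stdout,
)
```
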
diff --git a/spaces/apratap5/Abhay-ASRLiveSpeechRecognition-ZR/README.md b/spaces/apratap5/Abhay-ASRLiveSpeechRecognition-ZR/README.md
deleted file mode 100644
index 3f2f400c2ae7ea2c26e51a15af9693deeeae548c..0000000000000000000000000000000000000000
--- a/spaces/apratap5/Abhay-ASRLiveSpeechRecognition-ZR/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Abhay ASRLiveSpeechRecognition ZR
-emoji: ⚡
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-sdk_version: 3.8.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/argilla/argilla-streamlit-customs/my_app/pages/ui-record-creator.py b/spaces/argilla/argilla-streamlit-customs/my_app/pages/ui-record-creator.py
deleted file mode 100644
index 1595e6a0cfaa3acb4a16cb9dfe8869757d660e29..0000000000000000000000000000000000000000
--- a/spaces/argilla/argilla-streamlit-customs/my_app/pages/ui-record-creator.py
+++ /dev/null
@@ -1,117 +0,0 @@
-from ast import literal_eval
-
-import argilla as rg
-import pandas as pd
-import spacy
-import streamlit as st
-from streamlit_tags import st_tags
-from text_highlighter import text_highlighter
-from utils.commons import (
- ArgillaSingleton,
- argilla_login_flow,
- get_data_snapshot,
- get_dataset_list,
-)
-
-st.set_page_config(
- page_title="Argilla - ✍️ - Manual record creator",
- page_icon="✍️",
- layout="wide",
-)
-
-
-api_url, api_key = argilla_login_flow("✍️ Manual record creator")
-
-st.write(
- """
- This page allows you to create and annotate individual records from Argilla without using any code!
- In the background it uses `argilla.log()` and `TextClassificationRecord`, `TokenClassificationRecord`, and `Text2TextRecord`.
- """
-)
-
-nlp = spacy.blank("en")
-datasets_list = [
- f"{ds['owner']}/{ds['name']}" for ds in get_dataset_list(api_url, api_key)
-]
-dataset_argilla = st.selectbox(
- "Argilla Dataset Name", options=["other"] + datasets_list
-)
-if dataset_argilla == "other":
- ArgillaSingleton.init(api_url, api_key)
- dataset_argilla_name = st.text_input("New Dataset Name")
- labels = []
- disabled = False
- options = ["TextClassification", "TokenClassification", "Text2Text"]
-else:
- dataset_argilla_name = dataset_argilla.split("/")[-1]
- dataset_argilla_workspace = dataset_argilla.split("/")[0]
- get_data_snapshot(dataset_argilla_name, dataset_argilla_workspace)
- rg.set_workspace(dataset_argilla_workspace)
- for dataset in get_dataset_list(api_url, api_key):
- if (
- dataset["name"] == dataset_argilla_name
- and dataset["owner"] == dataset_argilla_workspace
- ):
- labels = dataset["labels"]
- dataset_type = dataset["task"]
- disabled = True
- options = [dataset_type]
- break
-
-
-if dataset_argilla_name:
- dataset_type = st.selectbox("Dataset Type", options, disabled=disabled)
- if dataset_type in ["TextClassification", "TokenClassification"]:
- labels = st_tags(label="Labels", value=labels, text="Press enter to add more")
-
- if not any(labels):
- st.warning("No labels provided")
-
- st.stop()
- if dataset_type == "TextClassification":
- multi_label = st.radio("multi label", [False, True], horizontal=True)
- else:
- multi_label = False
- text = st.text_area("Text")
-
- if text:
- if dataset_type == "TextClassification":
- if multi_label:
- annotation = st.multiselect("annotation", labels, default=labels)
- else:
- annotation = st.radio("annotation", labels, horizontal=True)
-
- record = rg.TextClassificationRecord(
- text=text, annotation=annotation, multi_label=multi_label
- )
- elif dataset_type == "TokenClassification":
- annotation = text_highlighter(
- text=text,
- labels=labels,
- )
- if annotation:
- annotation = [(an["tag"], an["start"], an["end"]) for an in annotation]
-
- tokens = [token.text for token in nlp(text)]
- record = rg.TokenClassificationRecord(
- text=text, tokens=tokens, annotation=annotation
- )
-
- elif dataset_type == "Text2Text":
- annotation = st.text_area("Annotation")
- record = rg.Text2TextRecord(text=text, annotation=annotation)
- metadata = st.text_area("Metadata", value="{}")
- metadata = literal_eval(metadata)
-
- record.metadata = metadata
- new_record = st.write(pd.DataFrame(record.dict()))
- else:
- st.warning("Please enter text")
-
- save = st.button("Save")
- if save:
- rg.log(record, dataset_argilla_name)
- st.success("Saved")
-else:
- st.warning("Please enter dataset name")
-
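
Outside Streamlit, the logging flow used by the page above reduces to a few Argilla client calls. The following is a minimal sketch under assumed credentials and a made-up dataset name, using the older `argilla` client API that the deleted page targets.

```python
# Hypothetical stand-alone sketch of the record-logging flow (placeholders throughout).
import argilla as rg

rg.init(api_url="http://localhost:6900", api_key="admin.apikey")  # assumed credentials

record = rg.TextClassificationRecord(
    text="The room was spotless and the staff were friendly.",
    annotation="positive",   # must be one of the labels configured for the dataset
    multi_label=False,
)
rg.log(record, "hotel-reviews")  # placeholder dataset name
```
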
diff --git a/spaces/arseny-chebyshev/vox-diffusion/README.md b/spaces/arseny-chebyshev/vox-diffusion/README.md
deleted file mode 100644
index a7e41fca94e51b6297f2f9c3e29aae1b78786423..0000000000000000000000000000000000000000
--- a/spaces/arseny-chebyshev/vox-diffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: vox-diffusion
-emoji: 👨🔬
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHA512.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHA512.py
deleted file mode 100644
index 20961aca993f588a0d8a7b381d92958af8dba159..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHA512.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# SelfTest/Hash/test_SHA512.py: Self-test for the SHA-512 hash function
-#
-# Written in 2008 by Dwayne C. Litzenberger
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""Self-test suite for Crypto.Hash.SHA512"""
-
-from binascii import hexlify
-
-from Crypto.Hash import SHA512
-from .common import make_hash_tests
-from Crypto.SelfTest.loader import load_test_vectors
-
-# Test vectors from various sources
-# This is a list of (expected_result, input[, description]) tuples.
-test_data_512_other = [
-
- # RFC 4634: Section Page 8.4, "Test 1"
- ('ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f', 'abc'),
-
- # RFC 4634: Section Page 8.4, "Test 2.1"
- ('8e959b75dae313da8cf4f72814fc143f8f7779c6eb9f7fa17299aeadb6889018501d289e4900f7e4331b99dec4b5433ac7d329eeb6dd26545e96e55b874be909', 'abcdefghbcdefghicdefghijdefghijkefghijklfghijklmghijklmnhijklmnoijklmnopjklmnopqklmnopqrlmnopqrsmnopqrstnopqrstu'),
-
- # RFC 4634: Section Page 8.4, "Test 3"
- ('e718483d0ce769644e2e42c7bc15b4638e1f98b13b2044285632a803afa973ebde0ff244877ea60a4cb0432ce577c31beb009c5c2c49aa2e4eadb217ad8cc09b', 'a' * 10**6, "'a' * 10**6"),
-
- # Taken from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm
- ('cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e', ''),
-
- ('af9ed2de700433b803240a552b41b5a472a6ef3fe1431a722b2063c75e9f07451f67a28e37d09cde769424c96aea6f8971389db9e1993d6c565c3c71b855723c', 'Franz jagt im komplett verwahrlosten Taxi quer durch Bayern'),
-]
-
-
-def get_tests_SHA512():
-
- test_vectors = load_test_vectors(("Hash", "SHA2"),
- "SHA512ShortMsg.rsp",
- "KAT SHA-512",
- {"len": lambda x: int(x)}) or []
-
- test_data = test_data_512_other[:]
- for tv in test_vectors:
- try:
- if tv.startswith('['):
- continue
- except AttributeError:
- pass
- if tv.len == 0:
- tv.msg = b""
- test_data.append((hexlify(tv.md), tv.msg, tv.desc))
-
- tests = make_hash_tests(SHA512, "SHA512", test_data,
- digest_size=64,
- oid="2.16.840.1.101.3.4.2.3")
- return tests
-
-
-def get_tests_SHA512_224():
-
- test_vectors = load_test_vectors(("Hash", "SHA2"),
- "SHA512_224ShortMsg.rsp",
- "KAT SHA-512/224",
- {"len": lambda x: int(x)}) or []
-
- test_data = []
- for tv in test_vectors:
- try:
- if tv.startswith('['):
- continue
- except AttributeError:
- pass
- if tv.len == 0:
- tv.msg = b""
- test_data.append((hexlify(tv.md), tv.msg, tv.desc))
-
- tests = make_hash_tests(SHA512, "SHA512/224", test_data,
- digest_size=28,
- oid="2.16.840.1.101.3.4.2.5",
- extra_params={ "truncate" : "224" })
- return tests
-
-
-def get_tests_SHA512_256():
-
- test_vectors = load_test_vectors(("Hash", "SHA2"),
- "SHA512_256ShortMsg.rsp",
- "KAT SHA-512/256",
- {"len": lambda x: int(x)}) or []
-
- test_data = []
- for tv in test_vectors:
- try:
- if tv.startswith('['):
- continue
- except AttributeError:
- pass
- if tv.len == 0:
- tv.msg = b""
- test_data.append((hexlify(tv.md), tv.msg, tv.desc))
-
- tests = make_hash_tests(SHA512, "SHA512/256", test_data,
- digest_size=32,
- oid="2.16.840.1.101.3.4.2.6",
- extra_params={ "truncate" : "256" })
- return tests
-
-
-def get_tests(config={}):
-
- tests = []
- tests += get_tests_SHA512()
- tests += get_tests_SHA512_224()
- tests += get_tests_SHA512_256()
- return tests
-
-if __name__ == '__main__':
- import unittest
- suite = lambda: unittest.TestSuite(get_tests())
- unittest.main(defaultTest='suite')
-
-# vim:set ts=4 sw=4 sts=4 expandtab:
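
As a quick sanity check of the first RFC 4634 vector listed above, hashing `b'abc'` with PyCryptodome's `SHA512` should reproduce the expected digest. A minimal sketch, assuming PyCryptodome is installed:

```python
# Verify the "abc" test vector from test_data_512_other above.
from Crypto.Hash import SHA512

digest = SHA512.new(data=b"abc").hexdigest()
assert digest == (
    "ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a"
    "2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f"
)
print("SHA-512('abc') matches the RFC 4634 vector")
```
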
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/save.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/save.py
deleted file mode 100644
index 94ddab6f7b63e469746b43b9874b4ad2079649f5..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/save.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import json
-import pathlib
-
-from .mimebundle import spec_to_mimebundle
-
-
-def write_file_or_filename(fp, content, mode="w"):
- """Write content to fp, whether fp is a string, a pathlib Path or a
- file-like object"""
- if isinstance(fp, str) or isinstance(fp, pathlib.PurePath):
- with open(fp, mode) as f:
- f.write(content)
- else:
- fp.write(content)
-
-
-def save(
- chart,
- fp,
- vega_version,
- vegaembed_version,
- format=None,
- mode=None,
- vegalite_version=None,
- embed_options=None,
- json_kwds=None,
- webdriver="chrome",
- scale_factor=1,
- **kwargs,
-):
- """Save a chart to file in a variety of formats
-
-    Supported formats are [json, html, png, svg, pdf]
-
- Parameters
- ----------
- chart : alt.Chart
- the chart instance to save
- fp : string filename, pathlib.Path or file-like object
- file to which to write the chart.
- format : string (optional)
-        the format to write: one of ['json', 'html', 'png', 'svg', 'pdf'].
- If not specified, the format will be determined from the filename.
- mode : string (optional)
-        Either 'vega' or 'vega-lite'. If not specified, then infer the mode from
-        the '$schema' property of the spec, or the ``opt`` dictionary.
-        If it's not specified in either of those places, then use 'vega-lite'.
- vega_version : string
- For html output, the version of vega.js to use
- vegalite_version : string
- For html output, the version of vegalite.js to use
- vegaembed_version : string
- For html output, the version of vegaembed.js to use
- embed_options : dict
- The vegaEmbed options dictionary. Default is {}
- (See https://github.com/vega/vega-embed for details)
- json_kwds : dict
- Additional keyword arguments are passed to the output method
- associated with the specified format.
- webdriver : string {'chrome' | 'firefox'}
- Webdriver to use for png or svg output
- scale_factor : float
- scale_factor to use to change size/resolution of png or svg output
- **kwargs :
- additional kwargs passed to spec_to_mimebundle.
- """
- if json_kwds is None:
- json_kwds = {}
-
- if embed_options is None:
- embed_options = {}
-
- if format is None:
- if isinstance(fp, str):
- format = fp.split(".")[-1]
- elif isinstance(fp, pathlib.PurePath):
- format = fp.suffix.lstrip(".")
- else:
- raise ValueError(
- "must specify file format: " "['png', 'svg', 'pdf', 'html', 'json']"
- )
-
- spec = chart.to_dict()
-
- if mode is None:
- if "mode" in embed_options:
- mode = embed_options["mode"]
- elif "$schema" in spec:
- mode = spec["$schema"].split("/")[-2]
- else:
- mode = "vega-lite"
-
- if mode not in ["vega", "vega-lite"]:
- raise ValueError("mode must be 'vega' or 'vega-lite', " "not '{}'".format(mode))
-
- if mode == "vega-lite" and vegalite_version is None:
- raise ValueError("must specify vega-lite version")
-
- if format == "json":
- json_spec = json.dumps(spec, **json_kwds)
- write_file_or_filename(fp, json_spec, mode="w")
- elif format == "html":
- mimebundle = spec_to_mimebundle(
- spec=spec,
- format=format,
- mode=mode,
- vega_version=vega_version,
- vegalite_version=vegalite_version,
- vegaembed_version=vegaembed_version,
- embed_options=embed_options,
- json_kwds=json_kwds,
- **kwargs,
- )
- write_file_or_filename(fp, mimebundle["text/html"], mode="w")
- elif format in ["png", "svg", "pdf"]:
- mimebundle = spec_to_mimebundle(
- spec=spec,
- format=format,
- mode=mode,
- vega_version=vega_version,
- vegalite_version=vegalite_version,
- vegaembed_version=vegaembed_version,
- webdriver=webdriver,
- scale_factor=scale_factor,
- **kwargs,
- )
- if format == "png":
- write_file_or_filename(fp, mimebundle["image/png"], mode="wb")
- elif format == "pdf":
- write_file_or_filename(fp, mimebundle["application/pdf"], mode="wb")
- else:
- write_file_or_filename(fp, mimebundle["image/svg+xml"], mode="w")
- else:
- raise ValueError("unrecognized format: '{}'".format(format))
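
For orientation, the removed helper is normally reached through `alt.Chart.save`, which supplies the version arguments itself; calling it directly looks roughly like the sketch below (the chart data is made up).

```python
# Hypothetical direct call to the save() helper deleted above.
import altair as alt
import pandas as pd
from altair.utils.save import save

chart = (
    alt.Chart(pd.DataFrame({"a": ["A", "B", "C"], "b": [28, 55, 43]}))
    .mark_bar()
    .encode(x="a", y="b")
)

save(
    chart,
    "chart.html",                        # format inferred from the file suffix
    vega_version=alt.VEGA_VERSION,
    vegalite_version=alt.VEGALITE_VERSION,
    vegaembed_version=alt.VEGAEMBED_VERSION,
)
```
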
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/tests/test_mimebundle.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/tests/test_mimebundle.py
deleted file mode 100644
index c893b7ce21d34a050362b3eb1aa3d89376bafbe8..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/utils/tests/test_mimebundle.py
+++ /dev/null
@@ -1,207 +0,0 @@
-import pytest
-
-import altair as alt
-from ..mimebundle import spec_to_mimebundle
-
-
-@pytest.fixture
-def require_altair_saver():
- try:
- import altair_saver # noqa: F401
- except ImportError:
- pytest.skip("altair_saver not importable; cannot run saver tests")
-
-
-@pytest.fixture
-def vegalite_spec():
- return {
- "$schema": "https://vega.github.io/schema/vega-lite/v4.json",
- "description": "A simple bar chart with embedded data.",
- "data": {
- "values": [
- {"a": "A", "b": 28},
- {"a": "B", "b": 55},
- {"a": "C", "b": 43},
- {"a": "D", "b": 91},
- {"a": "E", "b": 81},
- {"a": "F", "b": 53},
- {"a": "G", "b": 19},
- {"a": "H", "b": 87},
- {"a": "I", "b": 52},
- ]
- },
- "mark": "bar",
- "encoding": {
- "x": {"field": "a", "type": "ordinal"},
- "y": {"field": "b", "type": "quantitative"},
- },
- }
-
-
-@pytest.fixture
-def vega_spec():
- return {
- "$schema": "https://vega.github.io/schema/vega/v5.json",
- "axes": [
- {
- "aria": False,
- "domain": False,
- "grid": True,
- "gridScale": "x",
- "labels": False,
- "maxExtent": 0,
- "minExtent": 0,
- "orient": "left",
- "scale": "y",
- "tickCount": {"signal": "ceil(height/40)"},
- "ticks": False,
- "zindex": 0,
- },
- {
- "grid": False,
- "labelAlign": "right",
- "labelAngle": 270,
- "labelBaseline": "middle",
- "orient": "bottom",
- "scale": "x",
- "title": "a",
- "zindex": 0,
- },
- {
- "grid": False,
- "labelOverlap": True,
- "orient": "left",
- "scale": "y",
- "tickCount": {"signal": "ceil(height/40)"},
- "title": "b",
- "zindex": 0,
- },
- ],
- "background": "white",
- "data": [
- {
- "name": "source_0",
- "values": [
- {"a": "A", "b": 28},
- {"a": "B", "b": 55},
- {"a": "C", "b": 43},
- {"a": "D", "b": 91},
- {"a": "E", "b": 81},
- {"a": "F", "b": 53},
- {"a": "G", "b": 19},
- {"a": "H", "b": 87},
- {"a": "I", "b": 52},
- ],
- },
- {
- "name": "data_0",
- "source": "source_0",
- "transform": [
- {
- "expr": 'isValid(datum["b"]) && isFinite(+datum["b"])',
- "type": "filter",
- }
- ],
- },
- ],
- "description": "A simple bar chart with embedded data.",
- "height": 200,
- "marks": [
- {
- "encode": {
- "update": {
- "ariaRoleDescription": {"value": "bar"},
- "description": {
- "signal": '"a: " + (isValid(datum["a"]) ? datum["a"] : ""+datum["a"]) + "; b: " + (format(datum["b"], ""))'
- },
- "fill": {"value": "#4c78a8"},
- "width": {"band": 1, "scale": "x"},
- "x": {"field": "a", "scale": "x"},
- "y": {"field": "b", "scale": "y"},
- "y2": {"scale": "y", "value": 0},
- }
- },
- "from": {"data": "data_0"},
- "name": "marks",
- "style": ["bar"],
- "type": "rect",
- }
- ],
- "padding": 5,
- "scales": [
- {
- "domain": {"data": "data_0", "field": "a", "sort": True},
- "name": "x",
- "paddingInner": 0.1,
- "paddingOuter": 0.05,
- "range": {"step": {"signal": "x_step"}},
- "type": "band",
- },
- {
- "domain": {"data": "data_0", "field": "b"},
- "name": "y",
- "nice": True,
- "range": [{"signal": "height"}, 0],
- "type": "linear",
- "zero": True,
- },
- ],
- "signals": [
- {"name": "x_step", "value": 20},
- {
- "name": "width",
- "update": "bandspace(domain('x').length, 0.1, 0.05) * x_step",
- },
- ],
- "style": "cell",
- }
-
-
-def test_vegalite_to_vega_mimebundle(require_altair_saver, vegalite_spec, vega_spec):
-    # temporary fix for https://github.com/vega/vega-lite/issues/7776
- def delete_none(axes):
- for axis in axes:
- for key, value in list(axis.items()):
- if value is None:
- del axis[key]
- return axes
-
- bundle = spec_to_mimebundle(
- spec=vegalite_spec,
- format="vega",
- mode="vega-lite",
- vega_version=alt.VEGA_VERSION,
- vegalite_version=alt.VEGALITE_VERSION,
- vegaembed_version=alt.VEGAEMBED_VERSION,
- )
-
- bundle["application/vnd.vega.v5+json"]["axes"] = delete_none(
- bundle["application/vnd.vega.v5+json"]["axes"]
- )
- assert bundle == {"application/vnd.vega.v5+json": vega_spec}
-
-
-def test_spec_to_vegalite_mimebundle(vegalite_spec):
- bundle = spec_to_mimebundle(
- spec=vegalite_spec,
- mode="vega-lite",
- format="vega-lite",
- vegalite_version=alt.VEGALITE_VERSION,
- )
- assert bundle == {"application/vnd.vegalite.v4+json": vegalite_spec}
-
-
-def test_spec_to_vega_mimebundle(vega_spec):
- bundle = spec_to_mimebundle(
- spec=vega_spec, mode="vega", format="vega", vega_version=alt.VEGA_VERSION
- )
- assert bundle == {"application/vnd.vega.v5+json": vega_spec}
-
-
-def test_spec_to_json_mimebundle(vegalite_spec):
- bundle = spec_to_mimebundle(
- spec=vegalite_spec,
- mode="vega-lite",
- format="json",
- )
- assert bundle == {"application/json": vegalite_spec}
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ashercn97/AsherTesting/docs/README.md b/spaces/ashercn97/AsherTesting/docs/README.md
deleted file mode 100644
index 06b73b8468ab263a230cb44ba45a6c95f00b2ada..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/docs/README.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# text-generation-webui documentation
-
-## Table of contents
-
-* [Audio Notification](Audio-Notification.md)
-* [Chat mode](Chat-mode.md)
-* [DeepSpeed](DeepSpeed.md)
-* [Docker](Docker.md)
-* [ExLlama](ExLlama.md)
-* [Extensions](Extensions.md)
-* [FlexGen](FlexGen.md)
-* [Generation parameters](Generation-parameters.md)
-* [GPTQ models (4 bit mode)](GPTQ-models-(4-bit-mode).md)
-* [llama.cpp models](llama.cpp-models.md)
-* [LLaMA model](LLaMA-model.md)
-* [LoRA](LoRA.md)
-* [Low VRAM guide](Low-VRAM-guide.md)
-* [RWKV model](RWKV-model.md)
-* [Spell book](Spell-book.md)
-* [System requirements](System-requirements.md)
-* [Training LoRAs](Training-LoRAs.md)
-* [Windows installation guide](Windows-installation-guide.md)
-* [WSL installation guide](WSL-installation-guide.md)
diff --git a/spaces/aubmindlab/Arabic-NLP/backend/utils.py b/spaces/aubmindlab/Arabic-NLP/backend/utils.py
deleted file mode 100644
index db38742bd9d65368f533f5e8f9cc84ff2b41bac0..0000000000000000000000000000000000000000
--- a/spaces/aubmindlab/Arabic-NLP/backend/utils.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import re
-import numpy as np
-import psutil
-import os
-from tqdm.auto import tqdm
-import logging
-
-logger = logging.getLogger(__name__)
-
-
-def get_current_ram_usage():
- ram = psutil.virtual_memory()
- return ram.available / 1024 / 1024 / 1024, ram.total / 1024 / 1024 / 1024
-
-
-def download_models(models):
- for model in tqdm(models, desc="Downloading models"):
- logger.info(f"Downloading {model}")
- for i in range(0, 5):
- curr_dir = f"{model}/train_{i}/best_model/"
- os.makedirs(curr_dir, exist_ok=True)
- os.system(
- f"wget -q https://huggingface.co/researchaccount/{model}/resolve/main/train_{i}/best_model/config.json -P {curr_dir}"
- )
- os.system(
- f"wget -q https://huggingface.co/researchaccount/{model}/resolve/main/train_{i}/best_model/pytorch_model.bin -P {curr_dir}"
- )
- os.system(
- f"wget -q https://huggingface.co/researchaccount/{model}/resolve/main/train_{i}/best_model/special_tokens_map.json -P {curr_dir}"
- )
- os.system(
- f"wget -q https://huggingface.co/researchaccount/{model}/resolve/main/train_{i}/best_model/tokenizer_config.json -P {curr_dir}"
- )
- os.system(
- f"wget -q https://huggingface.co/researchaccount/{model}/resolve/main/train_{i}/best_model/training_args.bin -P {curr_dir}"
- )
- os.system(
- f"wget -q https://huggingface.co/researchaccount/{model}/resolve/main/train_{i}/best_model/vocab.txt -P {curr_dir}"
- )
-
-
-def softmax(x):
- return np.exp(x) / sum(np.exp(x))
-
-
-def ga(file):
- code = """
-
-
-
- """
-
- a = os.path.dirname(file) + "/static/index.html"
- with open(a, "r") as f:
- data = f.read()
- if len(re.findall("G-", data)) == 0:
- with open(a, "w") as ff:
- newdata = re.sub("", "" + code, data)
- ff.write(newdata)
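
A small, hypothetical driver for the two helpers above (the model name is a placeholder; the real app passes its own list of fine-tuned checkpoints):

```python
# Hypothetical usage of the deleted helpers; "example-model" is a placeholder name.
from backend.utils import download_models, get_current_ram_usage

download_models(["example-model"])               # fetches train_0 .. train_4 checkpoints
available_gb, total_gb = get_current_ram_usage()
print(f"RAM available: {available_gb:.1f} GiB of {total_gb:.1f} GiB")
```
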
diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/webui.bat b/spaces/awaawawawa/iurf7irfuyytruyyugb/webui.bat
deleted file mode 100644
index c8bfe1d5308edb844c68b9dd981a9b59bd03f98c..0000000000000000000000000000000000000000
--- a/spaces/awaawawawa/iurf7irfuyytruyyugb/webui.bat
+++ /dev/null
@@ -1,62 +0,0 @@
-@echo off
-
-if not defined PYTHON (set PYTHON=python)
-if not defined VENV_DIR (set VENV_DIR=venv)
-
-set ERROR_REPORTING=FALSE
-
-mkdir tmp 2>NUL
-
-%PYTHON% -c "" >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :start_venv
-echo Couldn't launch python
-goto :show_stdout_stderr
-
-:start_venv
-if [%VENV_DIR%] == [-] goto :skip_venv
-
-dir %VENV_DIR%\Scripts\Python.exe >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :activate_venv
-
-for /f "delims=" %%i in ('CALL %PYTHON% -c "import sys; print(sys.executable)"') do set PYTHON_FULLNAME="%%i"
-echo Creating venv in directory %VENV_DIR% using python %PYTHON_FULLNAME%
-%PYTHON_FULLNAME% -m venv %VENV_DIR% >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :activate_venv
-echo Unable to create venv in directory %VENV_DIR%
-goto :show_stdout_stderr
-
-:activate_venv
-set PYTHON="%~dp0%VENV_DIR%\Scripts\Python.exe"
-echo venv %PYTHON%
-goto :launch
-
-:skip_venv
-
-:launch
-%PYTHON% launch.py
-pause
-exit /b
-
-:show_stdout_stderr
-
-echo.
-echo exit code: %errorlevel%
-
-for /f %%i in ("tmp\stdout.txt") do set size=%%~zi
-if %size% equ 0 goto :show_stderr
-echo.
-echo stdout:
-type tmp\stdout.txt
-
-:show_stderr
-for /f %%i in ("tmp\stderr.txt") do set size=%%~zi
-if %size% equ 0 goto :endofscript
-echo.
-echo stderr:
-type tmp\stderr.txt
-
-:endofscript
-
-echo.
-echo Launch unsuccessful. Exiting.
-pause
diff --git a/spaces/awacke1/Data-Synthesizer-Synthesize-From-Multiple-Sources/README.md b/spaces/awacke1/Data-Synthesizer-Synthesize-From-Multiple-Sources/README.md
deleted file mode 100644
index adb334d1f7e115a26f293c4be7d9c547fd6077cd..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Data-Synthesizer-Synthesize-From-Multiple-Sources/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Data Synthesizer Synthesize From Multiple Sources
-emoji: ⚡
-colorFrom: gray
-colorTo: green
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/TTS-STT-Blocks/app.py b/spaces/awacke1/TTS-STT-Blocks/app.py
deleted file mode 100644
index 15ed8ec721c4864341852b0c946f4812bb390294..0000000000000000000000000000000000000000
--- a/spaces/awacke1/TTS-STT-Blocks/app.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import streamlit as st
-import datetime
-from transformers import pipeline
-import gradio as gr
-
-import tempfile
-from typing import Optional
-import numpy as np
-from TTS.utils.manage import ModelManager
-from TTS.utils.synthesizer import Synthesizer
-
-# PersistDataset -----
-import os
-import csv
-import gradio as gr
-from gradio import inputs, outputs
-import huggingface_hub
-from huggingface_hub import Repository, hf_hub_download, upload_file
-from datetime import datetime
-
-# created new dataset as awacke1/MindfulStory.csv
-DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/MindfulStory.csv"
-DATASET_REPO_ID = "awacke1/MindfulStory.csv"
-DATA_FILENAME = "MindfulStory.csv"
-DATA_FILE = os.path.join("data", DATA_FILENAME)
-HF_TOKEN = os.environ.get("HF_TOKEN")
-
-# Download dataset repo using hub download
-try:
- hf_hub_download(
- repo_id=DATASET_REPO_ID,
- filename=DATA_FILENAME,
-        cache_dir="data",  # matches the local "data" directory used for DATA_FILE
- force_filename=DATA_FILENAME
- )
-except:
- print("file not found")
-
-def AIMemory(name: str, message: str):
- if name and message:
- with open(DATA_FILE, "a") as csvfile:
- writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"])
- writer.writerow({"name": name, "message": message, "time": str(datetime.now())})
- commit_url = repo.push_to_hub()
- return {"name": name, "message": message, "time": str(datetime.now())}
-
-with open('Mindfulness.txt', 'r') as file:
- context = file.read()
-
-# Set up cloned dataset from repo for operations
-repo = Repository( local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN)
-
-# set up ASR
-asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h")
-
-# set up TTS
-MODEL_NAMES = [
- "en/ljspeech/tacotron2-DDC",
- "en/ljspeech/glow-tts",
- "en/ljspeech/speedy-speech-wn",
- "en/ljspeech/vits",
- "en/sam/tacotron-DDC",
- "fr/mai/tacotron2-DDC",
- "de/thorsten/tacotron2-DCA",
-]
-
-# Use Model Manager to load vocoders
-MODELS = {}
-manager = ModelManager()
-for MODEL_NAME in MODEL_NAMES:
- print(f"downloading {MODEL_NAME}")
- model_path, config_path, model_item = manager.download_model(f"tts_models/{MODEL_NAME}")
- vocoder_name: Optional[str] = model_item["default_vocoder"]
- vocoder_path = None
- vocoder_config_path = None
- if vocoder_name is not None:
- vocoder_path, vocoder_config_path, _ = manager.download_model(vocoder_name)
-
- synthesizer = Synthesizer(
- model_path, config_path, None, vocoder_path, vocoder_config_path,
- )
- MODELS[MODEL_NAME] = synthesizer
-
-# transcribe
-def transcribe(audio):
- text = asr(audio)["text"]
- return text
-
-#text classifier
-classifier = pipeline("text-classification")
-
-
-def speech_to_text(speech):
- text = asr(speech)["text"]
- #rMem = AIMemory("STT", text)
- return text
-
-def text_to_sentiment(text):
- sentiment = classifier(text)[0]["label"]
- #rMem = AIMemory(text, sentiment)
- return sentiment
-
-def upsert(text):
-    date_time = str(datetime.now())
- doc_ref = db.collection('Text2SpeechSentimentSave').document(date_time)
- doc_ref.set({u'firefield': 'Recognize Speech', u'first': 'https://huggingface.co/spaces/awacke1/TTS-STT-Blocks/', u'last': text, u'born': date_time,})
- saved = select('TTS-STT', date_time)
- return saved
-
-def select(collection, document):
- doc_ref = db.collection(collection).document(document)
- doc = doc_ref.get()
- docid = ("The id is: ", doc.id)
- contents = ("The contents are: ", doc.to_dict())
- return contents
-
-def selectall(text):
- docs = db.collection('Text2SpeechSentimentSave').stream()
- doclist=''
- for doc in docs:
- r=(f'{doc.id} => {doc.to_dict()}')
- doclist += r
- return doclist
-
-def tts(text: str, model_name: str):
- print(text, model_name)
- synthesizer = MODELS.get(model_name, None)
- if synthesizer is None:
- raise NameError("model not found")
- wavs = synthesizer.tts(text)
- with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp:
- synthesizer.save_wav(wavs, fp)
-
- #rMem = AIMemory("TTS", text + model_name)
-
- return fp.name
-
-demo = gr.Blocks()
-with demo:
- audio_file = gr.inputs.Audio(source="microphone", type="filepath")
- text = gr.Textbox(label="Speech to Text")
- #label = gr.Label()
- #saved = gr.Textbox(label="Saved")
- #savedAll = gr.Textbox(label="SavedAll")
- TTSchoice = gr.inputs.Radio( label="Pick a Text to Speech Model", choices=MODEL_NAMES, )
- audio = gr.Audio(label="Output", interactive=False)
-
- b1 = gr.Button("Recognize Speech")
- #b2 = gr.Button("Classify Sentiment")
- #b3 = gr.Button("Save Speech to Text")
- #b4 = gr.Button("Retrieve All")
- b5 = gr.Button("Read It Back Aloud")
-
- b1.click(speech_to_text, inputs=audio_file, outputs=text)
- #b2.click(text_to_sentiment, inputs=text, outputs=label)
- #b3.click(upsert, inputs=text, outputs=saved)
- #b4.click(selectall, inputs=text, outputs=savedAll)
- b5.click(tts, inputs=[text,TTSchoice], outputs=audio)
-
-demo.launch(share=True)
\ No newline at end of file
diff --git a/spaces/awacke1/Text-to-Image-stabilityai-stable-diffusion-2-1/README.md b/spaces/awacke1/Text-to-Image-stabilityai-stable-diffusion-2-1/README.md
deleted file mode 100644
index f144fedec8ccd81268adaf0174e4fccbb07f549d..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Text-to-Image-stabilityai-stable-diffusion-2-1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text To Image Stabilityai Stable Diffusion 2 1
-emoji: 💩
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.20.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/Text-to-Speech-facebook-fastspeech2-en-ljspeech/README.md b/spaces/awacke1/Text-to-Speech-facebook-fastspeech2-en-ljspeech/README.md
deleted file mode 100644
index a0d71f74e874568736c0f41dff1bbb8436243beb..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Text-to-Speech-facebook-fastspeech2-en-ljspeech/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text To Speech Facebook Fastspeech2 En Ljspeech
-emoji: 🔥
-colorFrom: purple
-colorTo: purple
-sdk: gradio
-sdk_version: 3.20.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/math/Vector3.js b/spaces/banana-projects/web3d/node_modules/three/src/math/Vector3.js
deleted file mode 100644
index aba02fea00a0ccfb57fd05b7a2a1f134fe072175..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/math/Vector3.js
+++ /dev/null
@@ -1,727 +0,0 @@
-import { _Math } from './Math.js';
-import { Quaternion } from './Quaternion.js';
-
-/**
- * @author mrdoob / http://mrdoob.com/
- * @author kile / http://kile.stravaganza.org/
- * @author philogb / http://blog.thejit.org/
- * @author mikael emtinger / http://gomo.se/
- * @author egraether / http://egraether.com/
- * @author WestLangley / http://github.com/WestLangley
- */
-
-function Vector3( x, y, z ) {
-
- this.x = x || 0;
- this.y = y || 0;
- this.z = z || 0;
-
-}
-
-Object.assign( Vector3.prototype, {
-
- isVector3: true,
-
- set: function ( x, y, z ) {
-
- this.x = x;
- this.y = y;
- this.z = z;
-
- return this;
-
- },
-
- setScalar: function ( scalar ) {
-
- this.x = scalar;
- this.y = scalar;
- this.z = scalar;
-
- return this;
-
- },
-
- setX: function ( x ) {
-
- this.x = x;
-
- return this;
-
- },
-
- setY: function ( y ) {
-
- this.y = y;
-
- return this;
-
- },
-
- setZ: function ( z ) {
-
- this.z = z;
-
- return this;
-
- },
-
- setComponent: function ( index, value ) {
-
- switch ( index ) {
-
- case 0: this.x = value; break;
- case 1: this.y = value; break;
- case 2: this.z = value; break;
- default: throw new Error( 'index is out of range: ' + index );
-
- }
-
- return this;
-
- },
-
- getComponent: function ( index ) {
-
- switch ( index ) {
-
- case 0: return this.x;
- case 1: return this.y;
- case 2: return this.z;
- default: throw new Error( 'index is out of range: ' + index );
-
- }
-
- },
-
- clone: function () {
-
- return new this.constructor( this.x, this.y, this.z );
-
- },
-
- copy: function ( v ) {
-
- this.x = v.x;
- this.y = v.y;
- this.z = v.z;
-
- return this;
-
- },
-
- add: function ( v, w ) {
-
- if ( w !== undefined ) {
-
- console.warn( 'THREE.Vector3: .add() now only accepts one argument. Use .addVectors( a, b ) instead.' );
- return this.addVectors( v, w );
-
- }
-
- this.x += v.x;
- this.y += v.y;
- this.z += v.z;
-
- return this;
-
- },
-
- addScalar: function ( s ) {
-
- this.x += s;
- this.y += s;
- this.z += s;
-
- return this;
-
- },
-
- addVectors: function ( a, b ) {
-
- this.x = a.x + b.x;
- this.y = a.y + b.y;
- this.z = a.z + b.z;
-
- return this;
-
- },
-
- addScaledVector: function ( v, s ) {
-
- this.x += v.x * s;
- this.y += v.y * s;
- this.z += v.z * s;
-
- return this;
-
- },
-
- sub: function ( v, w ) {
-
- if ( w !== undefined ) {
-
- console.warn( 'THREE.Vector3: .sub() now only accepts one argument. Use .subVectors( a, b ) instead.' );
- return this.subVectors( v, w );
-
- }
-
- this.x -= v.x;
- this.y -= v.y;
- this.z -= v.z;
-
- return this;
-
- },
-
- subScalar: function ( s ) {
-
- this.x -= s;
- this.y -= s;
- this.z -= s;
-
- return this;
-
- },
-
- subVectors: function ( a, b ) {
-
- this.x = a.x - b.x;
- this.y = a.y - b.y;
- this.z = a.z - b.z;
-
- return this;
-
- },
-
- multiply: function ( v, w ) {
-
- if ( w !== undefined ) {
-
- console.warn( 'THREE.Vector3: .multiply() now only accepts one argument. Use .multiplyVectors( a, b ) instead.' );
- return this.multiplyVectors( v, w );
-
- }
-
- this.x *= v.x;
- this.y *= v.y;
- this.z *= v.z;
-
- return this;
-
- },
-
- multiplyScalar: function ( scalar ) {
-
- this.x *= scalar;
- this.y *= scalar;
- this.z *= scalar;
-
- return this;
-
- },
-
- multiplyVectors: function ( a, b ) {
-
- this.x = a.x * b.x;
- this.y = a.y * b.y;
- this.z = a.z * b.z;
-
- return this;
-
- },
-
- applyEuler: function () {
-
- var quaternion = new Quaternion();
-
- return function applyEuler( euler ) {
-
- if ( ! ( euler && euler.isEuler ) ) {
-
- console.error( 'THREE.Vector3: .applyEuler() now expects an Euler rotation rather than a Vector3 and order.' );
-
- }
-
- return this.applyQuaternion( quaternion.setFromEuler( euler ) );
-
- };
-
- }(),
-
- applyAxisAngle: function () {
-
- var quaternion = new Quaternion();
-
- return function applyAxisAngle( axis, angle ) {
-
- return this.applyQuaternion( quaternion.setFromAxisAngle( axis, angle ) );
-
- };
-
- }(),
-
- applyMatrix3: function ( m ) {
-
- var x = this.x, y = this.y, z = this.z;
- var e = m.elements;
-
- this.x = e[ 0 ] * x + e[ 3 ] * y + e[ 6 ] * z;
- this.y = e[ 1 ] * x + e[ 4 ] * y + e[ 7 ] * z;
- this.z = e[ 2 ] * x + e[ 5 ] * y + e[ 8 ] * z;
-
- return this;
-
- },
-
- applyMatrix4: function ( m ) {
-
- var x = this.x, y = this.y, z = this.z;
- var e = m.elements;
-
- var w = 1 / ( e[ 3 ] * x + e[ 7 ] * y + e[ 11 ] * z + e[ 15 ] );
-
- this.x = ( e[ 0 ] * x + e[ 4 ] * y + e[ 8 ] * z + e[ 12 ] ) * w;
- this.y = ( e[ 1 ] * x + e[ 5 ] * y + e[ 9 ] * z + e[ 13 ] ) * w;
- this.z = ( e[ 2 ] * x + e[ 6 ] * y + e[ 10 ] * z + e[ 14 ] ) * w;
-
- return this;
-
- },
-
- applyQuaternion: function ( q ) {
-
- var x = this.x, y = this.y, z = this.z;
- var qx = q.x, qy = q.y, qz = q.z, qw = q.w;
-
- // calculate quat * vector
-
- var ix = qw * x + qy * z - qz * y;
- var iy = qw * y + qz * x - qx * z;
- var iz = qw * z + qx * y - qy * x;
- var iw = - qx * x - qy * y - qz * z;
-
- // calculate result * inverse quat
-
- this.x = ix * qw + iw * - qx + iy * - qz - iz * - qy;
- this.y = iy * qw + iw * - qy + iz * - qx - ix * - qz;
- this.z = iz * qw + iw * - qz + ix * - qy - iy * - qx;
-
- return this;
-
- },
-
- project: function ( camera ) {
-
- return this.applyMatrix4( camera.matrixWorldInverse ).applyMatrix4( camera.projectionMatrix );
-
- },
-
- unproject: function ( camera ) {
-
- return this.applyMatrix4( camera.projectionMatrixInverse ).applyMatrix4( camera.matrixWorld );
-
- },
-
- transformDirection: function ( m ) {
-
- // input: THREE.Matrix4 affine matrix
- // vector interpreted as a direction
-
- var x = this.x, y = this.y, z = this.z;
- var e = m.elements;
-
- this.x = e[ 0 ] * x + e[ 4 ] * y + e[ 8 ] * z;
- this.y = e[ 1 ] * x + e[ 5 ] * y + e[ 9 ] * z;
- this.z = e[ 2 ] * x + e[ 6 ] * y + e[ 10 ] * z;
-
- return this.normalize();
-
- },
-
- divide: function ( v ) {
-
- this.x /= v.x;
- this.y /= v.y;
- this.z /= v.z;
-
- return this;
-
- },
-
- divideScalar: function ( scalar ) {
-
- return this.multiplyScalar( 1 / scalar );
-
- },
-
- min: function ( v ) {
-
- this.x = Math.min( this.x, v.x );
- this.y = Math.min( this.y, v.y );
- this.z = Math.min( this.z, v.z );
-
- return this;
-
- },
-
- max: function ( v ) {
-
- this.x = Math.max( this.x, v.x );
- this.y = Math.max( this.y, v.y );
- this.z = Math.max( this.z, v.z );
-
- return this;
-
- },
-
- clamp: function ( min, max ) {
-
- // assumes min < max, componentwise
-
- this.x = Math.max( min.x, Math.min( max.x, this.x ) );
- this.y = Math.max( min.y, Math.min( max.y, this.y ) );
- this.z = Math.max( min.z, Math.min( max.z, this.z ) );
-
- return this;
-
- },
-
- clampScalar: function () {
-
- var min = new Vector3();
- var max = new Vector3();
-
- return function clampScalar( minVal, maxVal ) {
-
- min.set( minVal, minVal, minVal );
- max.set( maxVal, maxVal, maxVal );
-
- return this.clamp( min, max );
-
- };
-
- }(),
-
- clampLength: function ( min, max ) {
-
- var length = this.length();
-
- return this.divideScalar( length || 1 ).multiplyScalar( Math.max( min, Math.min( max, length ) ) );
-
- },
-
- floor: function () {
-
- this.x = Math.floor( this.x );
- this.y = Math.floor( this.y );
- this.z = Math.floor( this.z );
-
- return this;
-
- },
-
- ceil: function () {
-
- this.x = Math.ceil( this.x );
- this.y = Math.ceil( this.y );
- this.z = Math.ceil( this.z );
-
- return this;
-
- },
-
- round: function () {
-
- this.x = Math.round( this.x );
- this.y = Math.round( this.y );
- this.z = Math.round( this.z );
-
- return this;
-
- },
-
- roundToZero: function () {
-
- this.x = ( this.x < 0 ) ? Math.ceil( this.x ) : Math.floor( this.x );
- this.y = ( this.y < 0 ) ? Math.ceil( this.y ) : Math.floor( this.y );
- this.z = ( this.z < 0 ) ? Math.ceil( this.z ) : Math.floor( this.z );
-
- return this;
-
- },
-
- negate: function () {
-
- this.x = - this.x;
- this.y = - this.y;
- this.z = - this.z;
-
- return this;
-
- },
-
- dot: function ( v ) {
-
- return this.x * v.x + this.y * v.y + this.z * v.z;
-
- },
-
- // TODO lengthSquared?
-
- lengthSq: function () {
-
- return this.x * this.x + this.y * this.y + this.z * this.z;
-
- },
-
- length: function () {
-
- return Math.sqrt( this.x * this.x + this.y * this.y + this.z * this.z );
-
- },
-
- manhattanLength: function () {
-
- return Math.abs( this.x ) + Math.abs( this.y ) + Math.abs( this.z );
-
- },
-
- normalize: function () {
-
- return this.divideScalar( this.length() || 1 );
-
- },
-
- setLength: function ( length ) {
-
- return this.normalize().multiplyScalar( length );
-
- },
-
- lerp: function ( v, alpha ) {
-
- this.x += ( v.x - this.x ) * alpha;
- this.y += ( v.y - this.y ) * alpha;
- this.z += ( v.z - this.z ) * alpha;
-
- return this;
-
- },
-
- lerpVectors: function ( v1, v2, alpha ) {
-
- return this.subVectors( v2, v1 ).multiplyScalar( alpha ).add( v1 );
-
- },
-
- cross: function ( v, w ) {
-
- if ( w !== undefined ) {
-
- console.warn( 'THREE.Vector3: .cross() now only accepts one argument. Use .crossVectors( a, b ) instead.' );
- return this.crossVectors( v, w );
-
- }
-
- return this.crossVectors( this, v );
-
- },
-
- crossVectors: function ( a, b ) {
-
- var ax = a.x, ay = a.y, az = a.z;
- var bx = b.x, by = b.y, bz = b.z;
-
- this.x = ay * bz - az * by;
- this.y = az * bx - ax * bz;
- this.z = ax * by - ay * bx;
-
- return this;
-
- },
-
- projectOnVector: function ( vector ) {
-
- var scalar = vector.dot( this ) / vector.lengthSq();
-
- return this.copy( vector ).multiplyScalar( scalar );
-
- },
-
- projectOnPlane: function () {
-
- var v1 = new Vector3();
-
- return function projectOnPlane( planeNormal ) {
-
- v1.copy( this ).projectOnVector( planeNormal );
-
- return this.sub( v1 );
-
- };
-
- }(),
-
- reflect: function () {
-
- // reflect incident vector off plane orthogonal to normal
- // normal is assumed to have unit length
-
- var v1 = new Vector3();
-
- return function reflect( normal ) {
-
- return this.sub( v1.copy( normal ).multiplyScalar( 2 * this.dot( normal ) ) );
-
- };
-
- }(),
-
- angleTo: function ( v ) {
-
- var theta = this.dot( v ) / ( Math.sqrt( this.lengthSq() * v.lengthSq() ) );
-
- // clamp, to handle numerical problems
-
- return Math.acos( _Math.clamp( theta, - 1, 1 ) );
-
- },
-
- distanceTo: function ( v ) {
-
- return Math.sqrt( this.distanceToSquared( v ) );
-
- },
-
- distanceToSquared: function ( v ) {
-
- var dx = this.x - v.x, dy = this.y - v.y, dz = this.z - v.z;
-
- return dx * dx + dy * dy + dz * dz;
-
- },
-
- manhattanDistanceTo: function ( v ) {
-
- return Math.abs( this.x - v.x ) + Math.abs( this.y - v.y ) + Math.abs( this.z - v.z );
-
- },
-
- setFromSpherical: function ( s ) {
-
- return this.setFromSphericalCoords( s.radius, s.phi, s.theta );
-
- },
-
- setFromSphericalCoords: function ( radius, phi, theta ) {
-
- var sinPhiRadius = Math.sin( phi ) * radius;
-
- this.x = sinPhiRadius * Math.sin( theta );
- this.y = Math.cos( phi ) * radius;
- this.z = sinPhiRadius * Math.cos( theta );
-
- return this;
-
- },
-
- setFromCylindrical: function ( c ) {
-
- return this.setFromCylindricalCoords( c.radius, c.theta, c.y );
-
- },
-
- setFromCylindricalCoords: function ( radius, theta, y ) {
-
- this.x = radius * Math.sin( theta );
- this.y = y;
- this.z = radius * Math.cos( theta );
-
- return this;
-
- },
-
- setFromMatrixPosition: function ( m ) {
-
- var e = m.elements;
-
- this.x = e[ 12 ];
- this.y = e[ 13 ];
- this.z = e[ 14 ];
-
- return this;
-
- },
-
- setFromMatrixScale: function ( m ) {
-
- var sx = this.setFromMatrixColumn( m, 0 ).length();
- var sy = this.setFromMatrixColumn( m, 1 ).length();
- var sz = this.setFromMatrixColumn( m, 2 ).length();
-
- this.x = sx;
- this.y = sy;
- this.z = sz;
-
- return this;
-
- },
-
- setFromMatrixColumn: function ( m, index ) {
-
- return this.fromArray( m.elements, index * 4 );
-
- },
-
- equals: function ( v ) {
-
- return ( ( v.x === this.x ) && ( v.y === this.y ) && ( v.z === this.z ) );
-
- },
-
- fromArray: function ( array, offset ) {
-
- if ( offset === undefined ) offset = 0;
-
- this.x = array[ offset ];
- this.y = array[ offset + 1 ];
- this.z = array[ offset + 2 ];
-
- return this;
-
- },
-
- toArray: function ( array, offset ) {
-
- if ( array === undefined ) array = [];
- if ( offset === undefined ) offset = 0;
-
- array[ offset ] = this.x;
- array[ offset + 1 ] = this.y;
- array[ offset + 2 ] = this.z;
-
- return array;
-
- },
-
- fromBufferAttribute: function ( attribute, index, offset ) {
-
- if ( offset !== undefined ) {
-
- console.warn( 'THREE.Vector3: offset has been removed from .fromBufferAttribute().' );
-
- }
-
- this.x = attribute.getX( index );
- this.y = attribute.getY( index );
- this.z = attribute.getZ( index );
-
- return this;
-
- }
-
-} );
-
-
-export { Vector3 };
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/scenes/Fog.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/scenes/Fog.d.ts
deleted file mode 100644
index ca8b3dbddb4100588f4fe574964e5a6d0856785c..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/scenes/Fog.d.ts
+++ /dev/null
@@ -1,36 +0,0 @@
-import { Color } from './../math/Color';
-
-export interface IFog {
- name: string;
- color: Color;
- clone(): this;
- toJSON(): any;
-}
-
-/**
- * This class contains the parameters that define linear fog, i.e., that grows linearly denser with the distance.
- */
-export class Fog implements IFog {
- constructor(hex: number, near?: number, far?: number);
-
- name: string;
-
- /**
- * Fog color.
- */
- color: Color;
-
- /**
- * The minimum distance to start applying fog. Objects that are less than 'near' units from the active camera won't be affected by fog.
- */
- near: number;
-
- /**
- * The maximum distance at which fog stops being calculated and applied. Objects that are more than 'far' units away from the active camera won't be affected by fog.
- * Default is 1000.
- */
- far: number;
-
- clone(): this;
- toJSON(): any;
-}
diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220620151050.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220620151050.py
deleted file mode 100644
index c0708c1851e350e44495b182f8b1cf78d3331731..0000000000000000000000000000000000000000
--- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220620151050.py
+++ /dev/null
@@ -1,40 +0,0 @@
-#-*- coding : utf-8-*-
-import pandas as pd
-import streamlit as st
-import os,base64,subprocess
-from subprocess import STDOUT #os process manipuation
-
-@st.cache
-def gh():
- """install ghostscript on the linux machine"""
- proc = subprocess.Popen('apt-get install -y ghostscript', shell=True, stdin=None, stdout=open(os.devnull,"wb"), stderr=STDOUT, executable="/bin/bash")
- proc.wait()
-
-gh()
-
-import camelot as cam
-
-st.title("PDF Table Extractor")
-
-input_pdf = st.file_uploader(label = "", type = 'pdf')
-
-page_number = st.text_input("Enter the PDF page number that contains the table, e.g. 3", value = 1)
-
-if input_pdf is not None:
- # byte object into a PDF file
- with open("input.pdf", "wb") as f:
- base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8')
- f.write(base64.b64decode(base64_pdf))
- f.close()
-
- # read the pdf and parse it using stream
- tables = cam.read_pdf("input.pdf", pages=page_number)
- result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter')
- tables[0].to_excel(result,index=False)
- # for i in range(0,len(tables)):
- # table = tables[i].df
- # sheetname = str(i)
- # table.to_excel(result, sheetname,index=False)
-
- with open('result.xlsx','rb') as f:
-        st.download_button('Extraction complete. Click to download!', f, file_name='result.xlsx', mime="application/vnd.ms-excel")
\ No newline at end of file
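
The core of the removed app boils down to two Camelot calls plus an Excel writer; a minimal sketch assuming a local `sample.pdf` with a table on page 3:

```python
# Hypothetical condensed version of the deleted app's extraction step.
# "sample.pdf" and the page number are placeholders.
import camelot as cam
import pandas as pd

tables = cam.read_pdf("sample.pdf", pages="3")   # pages accepts strings like "1,3-5"
print(f"Found {tables.n} table(s)")

with pd.ExcelWriter("result.xlsx", engine="xlsxwriter") as writer:
    for i, table in enumerate(tables):
        table.df.to_excel(writer, sheet_name=str(i), index=False)
```
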
diff --git a/spaces/bigPear/digitalWDF/src/finetune.py b/spaces/bigPear/digitalWDF/src/finetune.py
deleted file mode 100644
index 08fe9202c3b6d31a9f7250c3689e514dcc7377e3..0000000000000000000000000000000000000000
--- a/spaces/bigPear/digitalWDF/src/finetune.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# coding=utf-8
-# Implements several parameter-efficient supervised fine-tuning method for ChatGLM.
-# This code is inspired by https://github.com/THUDM/ChatGLM-6B/blob/main/ptuning/main.py
-
-
-from utils import (
- load_pretrained,
- prepare_args,
- prepare_data,
- preprocess_data,
- plot_loss,
- Seq2SeqDataCollatorForChatGLM,
- ComputeMetrics,
- Seq2SeqTrainerForChatGLM
-)
-
-
-def main():
-
- # Prepare pretrained model and dataset
- model_args, data_args, training_args, finetuning_args = prepare_args()
- dataset = prepare_data(model_args, data_args)
- model, tokenizer = load_pretrained(model_args, training_args, finetuning_args, training_args.do_train, stage="sft")
- dataset = preprocess_data(dataset, tokenizer, data_args, training_args, stage="sft")
- data_collator = Seq2SeqDataCollatorForChatGLM(
- tokenizer=tokenizer,
- model=model,
- ignore_pad_token_for_loss=data_args.ignore_pad_token_for_loss,
- inference_mode=(not training_args.do_train)
- )
-
- # Override the decoding parameters of Seq2SeqTrainer
- training_args.generation_max_length = training_args.generation_max_length if \
- training_args.generation_max_length is not None else data_args.max_target_length
- training_args.generation_num_beams = data_args.num_beams if \
- data_args.num_beams is not None else training_args.generation_num_beams
-
- # Initialize our Trainer
- trainer = Seq2SeqTrainerForChatGLM(
- finetuning_args=finetuning_args,
- model=model,
- args=training_args,
- train_dataset=dataset if training_args.do_train else None,
- eval_dataset=dataset if training_args.do_eval else None,
- tokenizer=tokenizer,
- data_collator=data_collator,
- compute_metrics=ComputeMetrics(tokenizer) if training_args.predict_with_generate else None
- )
-
- # Keyword arguments for `model.generate`
- gen_kwargs = {
- "do_sample": True,
- "top_p": 0.7,
- "max_length": 768,
- "temperature": 0.95
- }
-
- # Training
- if training_args.do_train:
- train_result = trainer.train()
- trainer.log_metrics("train", train_result.metrics)
- trainer.save_metrics("train", train_result.metrics)
- trainer.save_state()
- trainer.save_model()
- if trainer.is_world_process_zero() and finetuning_args.plot_loss:
- plot_loss(training_args)
-
- # Evaluation
- if training_args.do_eval:
- metrics = trainer.evaluate(metric_key_prefix="eval", **gen_kwargs)
- trainer.log_metrics("eval", metrics)
- trainer.save_metrics("eval", metrics)
-
- # Predict
- if training_args.do_predict:
- predict_results = trainer.predict(dataset, metric_key_prefix="predict", **gen_kwargs)
- trainer.log_metrics("predict", predict_results.metrics)
- trainer.save_metrics("predict", predict_results.metrics)
- trainer.save_predictions(predict_results, tokenizer)
-
-
-def _mp_fn(index):
- # For xla_spawn (TPUs)
- main()
-
-
-if __name__ == "__main__":
- main()
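The gen_kwargs dictionary above is forwarded to model.generate during evaluation and prediction. As a rough illustration of what those settings mean, the same decoding parameters can be exercised against any causal language model with plain transformers (the gpt2 checkpoint and the shorter max_length below are illustrative only, not part of this training script):

# Sketch only: nucleus sampling with the same decoding parameters as gen_kwargs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        do_sample=True,     # stochastic decoding instead of greedy search
        top_p=0.7,          # nucleus sampling: keep the smallest token set with cumulative prob >= 0.7
        temperature=0.95,   # soften the distribution slightly before sampling
        max_length=64,      # the script uses 768; shortened here to keep the sketch quick
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))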
diff --git a/spaces/bingbing520/ChatGPT/modules/llama_func.py b/spaces/bingbing520/ChatGPT/modules/llama_func.py
deleted file mode 100644
index e1c513af1bf6d1569b071eb5fc0ce441d0692f83..0000000000000000000000000000000000000000
--- a/spaces/bingbing520/ChatGPT/modules/llama_func.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import os
-import logging
-import hashlib
-
-from llama_index import download_loader
-from llama_index import (
- Document,
- LLMPredictor,
- PromptHelper,
- QuestionAnswerPrompt,
- RefinePrompt,
-)
-import colorama
-import PyPDF2
-from tqdm import tqdm
-
-from modules.presets import *
-from modules.utils import *
-from modules.config import local_embedding
-
-
-def get_index_name(file_src):
- file_paths = [x.name for x in file_src]
- file_paths.sort(key=lambda x: os.path.basename(x))
-
- md5_hash = hashlib.md5()
- for file_path in file_paths:
- with open(file_path, "rb") as f:
- while chunk := f.read(8192):
- md5_hash.update(chunk)
-
- return md5_hash.hexdigest()
-
-
-def block_split(text):
- blocks = []
- while len(text) > 0:
- blocks.append(Document(text[:1000]))
- text = text[1000:]
- return blocks
-
-
-def get_documents(file_src):
- documents = []
- logging.debug("Loading documents...")
- logging.debug(f"file_src: {file_src}")
- for file in file_src:
- filepath = file.name
- filename = os.path.basename(filepath)
- file_type = os.path.splitext(filepath)[1]
- logging.info(f"loading file: {filename}")
- try:
- if file_type == ".pdf":
- logging.debug("Loading PDF...")
- try:
- from modules.pdf_func import parse_pdf
- from modules.config import advance_docs
-
- two_column = advance_docs["pdf"].get("two_column", False)
- pdftext = parse_pdf(filepath, two_column).text
-                except Exception:
- pdftext = ""
- with open(filepath, "rb") as pdfFileObj:
- pdfReader = PyPDF2.PdfReader(pdfFileObj)
- for page in tqdm(pdfReader.pages):
- pdftext += page.extract_text()
- text_raw = pdftext
- elif file_type == ".docx":
- logging.debug("Loading Word...")
- DocxReader = download_loader("DocxReader")
- loader = DocxReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".epub":
- logging.debug("Loading EPUB...")
- EpubReader = download_loader("EpubReader")
- loader = EpubReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".xlsx":
- logging.debug("Loading Excel...")
- text_list = excel_to_string(filepath)
- for elem in text_list:
- documents.append(Document(elem))
- continue
- else:
- logging.debug("Loading text file...")
- with open(filepath, "r", encoding="utf-8") as f:
- text_raw = f.read()
-        except Exception as e:
-            logging.error(f"Error loading file {filename}: {e}")
-            continue
- text = add_space(text_raw)
- # text = block_split(text)
- # documents += text
- documents += [Document(text)]
- logging.debug("Documents loaded.")
- return documents
-
-
-def construct_index(
- api_key,
- file_src,
- max_input_size=4096,
- num_outputs=5,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- embedding_limit=None,
- separator=" ",
-):
- from langchain.chat_models import ChatOpenAI
- from langchain.embeddings.huggingface import HuggingFaceEmbeddings
- from llama_index import GPTSimpleVectorIndex, ServiceContext, LangchainEmbedding, OpenAIEmbedding
-
- if api_key:
- os.environ["OPENAI_API_KEY"] = api_key
- else:
-        # Because of a silly design decision in a dependency, an API key must be present here even if it is never used
- os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx"
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
- embedding_limit = None if embedding_limit == 0 else embedding_limit
- separator = " " if separator == "" else separator
-
- prompt_helper = PromptHelper(
- max_input_size=max_input_size,
- num_output=num_outputs,
- max_chunk_overlap=max_chunk_overlap,
- embedding_limit=embedding_limit,
-        chunk_size_limit=chunk_size_limit,
- separator=separator,
- )
- index_name = get_index_name(file_src)
- if os.path.exists(f"./index/{index_name}.json"):
-        logging.info("Found a cached index file, loading it...")
- return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json")
- else:
- try:
- documents = get_documents(file_src)
- if local_embedding:
- embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2"))
- else:
- embed_model = OpenAIEmbedding()
-            logging.info("Building the index...")
- with retrieve_proxy():
- service_context = ServiceContext.from_defaults(
- prompt_helper=prompt_helper,
- chunk_size_limit=chunk_size_limit,
- embed_model=embed_model,
- )
- index = GPTSimpleVectorIndex.from_documents(
- documents, service_context=service_context
- )
-            logging.debug("Index construction complete!")
- os.makedirs("./index", exist_ok=True)
- index.save_to_disk(f"./index/{index_name}.json")
-            logging.debug("Index saved to local disk!")
- return index
-
- except Exception as e:
-            logging.error("Failed to build the index: %s", e)
- print(e)
- return None
-
-
-def add_space(text):
- punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "}
- for cn_punc, en_punc in punctuations.items():
- text = text.replace(cn_punc, en_punc)
- return text
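construct_index above caches each built index under ./index/<digest>.json, where the digest comes from get_index_name: an MD5 hash over the raw bytes of every uploaded file, so identical uploads reuse the cached index instead of re-embedding. A standalone sketch of that cache-key computation (the file paths are illustrative):

# Sketch only: the MD5-of-file-bytes cache key used by get_index_name.
import hashlib
import os

def index_name_for(paths):
    paths = sorted(paths, key=os.path.basename)
    digest = hashlib.md5()
    for path in paths:
        with open(path, "rb") as f:
            while chunk := f.read(8192):   # stream in 8 KiB chunks to avoid loading whole files
                digest.update(chunk)
    return digest.hexdigest()              # 32-character hex string, used as ./index/<digest>.json

# index_name_for(["report.pdf", "notes.docx"]) would name the cached index on disk.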
diff --git a/spaces/bioriAsaeru/text-to-voice/Advance Turbo Flasher Box Crack The Ultimate Tool for Flashing and Repairing.md b/spaces/bioriAsaeru/text-to-voice/Advance Turbo Flasher Box Crack The Ultimate Tool for Flashing and Repairing.md
deleted file mode 100644
index 341377bface047a6f775e7bfef6f3e638119a5d0..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Advance Turbo Flasher Box Crack The Ultimate Tool for Flashing and Repairing.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
 Download the latest ATF (Advance Turbo Flasher) Box setup installer for Windows PC. ATF is an all-in-one solution for servicing Nokia phones: if you have a Nokia phone and want to flash it or upgrade its firmware, the ATF Box is a great choice. Just download and install the ATF Box setup installer on your Windows computer and start flashing or servicing your Nokia phone. It is developed and uploaded by the Advance Turbo Flasher team.
 ATF Box Setup 2020 v12.70/11.70 Free Download - Allflashfiles | The Home Of Firmware. The latest turbo flasher setup file has been released and a simple download link is available. Box Name: Advance Turbo Flasher.
-
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/EthanMeteorHunterkeyserial.md b/spaces/bioriAsaeru/text-to-voice/EthanMeteorHunterkeyserial.md
deleted file mode 100644
index 14f95f38bafba578bdf76a76570a3c3a4774b9d1..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/EthanMeteorHunterkeyserial.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
- Transcribe long-form YouTube videos or uploaded video inputs!
-
- Demo uses the fine-tuned checkpoint: {DEFAULT_MODEL_NAME} to transcribe video files of arbitrary length.
-
- Efficient inference is supported by [faster-whisper](https://github.com/guillaumekln/faster-whisper) and [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- """
- )
-
- yt_link_input = gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video here", label="YouTube URL")
- download_youtube_btn = gr.Button("Download Youtube video")
- downloaded_video_output = gr.Video(label="Video file", mirror_webcam=False)
- download_youtube_btn.click(download_video_from_youtube, inputs=[yt_link_input], outputs=[downloaded_video_output])
-
- with_timestamps_input3 = gr.Checkbox(label="With timestamps?", value=True)
- video_transcribe_btn = gr.Button("Transcribe video")
- text_output_df = gr.DataFrame(
- value=default_text_output_df,
- label="Transcription",
- row_count=(0, "dynamic"),
- max_rows=10,
- wrap=True,
- overflow_row_behaviour="paginate",
- )
-
- video_transcribe_btn.click(video_transcribe, inputs=[downloaded_video_output, with_timestamps_input3], outputs=[text_output_df])
-
-# demo.launch(server_name="0.0.0.0", debug=True)
-# demo.launch(server_name="0.0.0.0", debug=True, share=True)
-demo.launch(enable_queue=True)
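The Gradio fragment above credits faster-whisper and CTranslate2 for efficient inference; presumably the video_transcribe callback wraps a call along these lines. A minimal sketch of using faster-whisper directly (the "small" model size, device, and audio path are illustrative assumptions):

# Sketch only: transcription with faster-whisper outside of the Gradio UI.
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.wav", beam_size=5)

print(f"Detected language: {info.language} (probability {info.language_probability:.2f})")
for segment in segments:
    # Each segment carries start/end timestamps in seconds plus the decoded text.
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")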
diff --git a/spaces/brjathu/HMR2.0/upload_logs.py b/spaces/brjathu/HMR2.0/upload_logs.py
deleted file mode 100644
index 8ae9460d958aa7d4168eeddc7978355bf07b0d1d..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/upload_logs.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from huggingface_hub import HfApi
-api = HfApi()
-api.upload_folder(
- folder_path="logs",
- repo_id="brjathu/HMR",
- repo_type="space",
-)
\ No newline at end of file
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/dev/run_instant_tests.sh b/spaces/brjathu/HMR2.0/vendor/detectron2/dev/run_instant_tests.sh
deleted file mode 100644
index 9fd9ba0c239d3e982c17711c9db872de3730decf..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/dev/run_instant_tests.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/bash -e
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-BIN="python tools/train_net.py"
-OUTPUT="instant_test_output"
-NUM_GPUS=2
-
-CFG_LIST=( "${@:1}" )
-if [ ${#CFG_LIST[@]} -eq 0 ]; then
- CFG_LIST=( ./configs/quick_schedules/*instant_test.yaml )
-fi
-
-echo "========================================================================"
-echo "Configs to run:"
-echo "${CFG_LIST[@]}"
-echo "========================================================================"
-
-for cfg in "${CFG_LIST[@]}"; do
- echo "========================================================================"
- echo "Running $cfg ..."
- echo "========================================================================"
- $BIN --num-gpus $NUM_GPUS --config-file "$cfg" \
- SOLVER.IMS_PER_BATCH $(($NUM_GPUS * 2)) \
- OUTPUT_DIR "$OUTPUT"
- rm -rf "$OUTPUT"
-done
-
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/evaluation/evaluator.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/evaluation/evaluator.py
deleted file mode 100644
index d5d1d789bbe4b8791aa8529518ba1b964d31daca..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/evaluation/evaluator.py
+++ /dev/null
@@ -1,421 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import contextlib
-import copy
-import io
-import itertools
-import logging
-import numpy as np
-import os
-from collections import OrderedDict
-from typing import Dict, Iterable, List, Optional
-import pycocotools.mask as mask_utils
-import torch
-from pycocotools.coco import COCO
-from tabulate import tabulate
-
-from detectron2.config import CfgNode
-from detectron2.data import MetadataCatalog
-from detectron2.evaluation import DatasetEvaluator
-from detectron2.structures import BoxMode
-from detectron2.utils.comm import gather, get_rank, is_main_process, synchronize
-from detectron2.utils.file_io import PathManager
-from detectron2.utils.logger import create_small_table
-
-from densepose.converters import ToChartResultConverter, ToMaskConverter
-from densepose.data.datasets.coco import maybe_filter_and_map_categories_cocoapi
-from densepose.structures import (
- DensePoseChartPredictorOutput,
- DensePoseEmbeddingPredictorOutput,
- quantize_densepose_chart_result,
-)
-
-from .densepose_coco_evaluation import DensePoseCocoEval, DensePoseEvalMode
-from .mesh_alignment_evaluator import MeshAlignmentEvaluator
-from .tensor_storage import (
- SingleProcessFileTensorStorage,
- SingleProcessRamTensorStorage,
- SingleProcessTensorStorage,
- SizeData,
- storage_gather,
-)
-
-
-class DensePoseCOCOEvaluator(DatasetEvaluator):
- def __init__(
- self,
- dataset_name,
- distributed,
- output_dir=None,
- evaluator_type: str = "iuv",
- min_iou_threshold: float = 0.5,
- storage: Optional[SingleProcessTensorStorage] = None,
- embedder=None,
- should_evaluate_mesh_alignment: bool = False,
- mesh_alignment_mesh_names: Optional[List[str]] = None,
- ):
- self._embedder = embedder
- self._distributed = distributed
- self._output_dir = output_dir
- self._evaluator_type = evaluator_type
- self._storage = storage
- self._should_evaluate_mesh_alignment = should_evaluate_mesh_alignment
-
- assert not (
- should_evaluate_mesh_alignment and embedder is None
- ), "Mesh alignment evaluation is activated, but no vertex embedder provided!"
- if should_evaluate_mesh_alignment:
- self._mesh_alignment_evaluator = MeshAlignmentEvaluator(
- embedder,
- mesh_alignment_mesh_names,
- )
-
- self._cpu_device = torch.device("cpu")
- self._logger = logging.getLogger(__name__)
-
- self._metadata = MetadataCatalog.get(dataset_name)
- self._min_threshold = min_iou_threshold
- json_file = PathManager.get_local_path(self._metadata.json_file)
- with contextlib.redirect_stdout(io.StringIO()):
- self._coco_api = COCO(json_file)
- maybe_filter_and_map_categories_cocoapi(dataset_name, self._coco_api)
-
- def reset(self):
- self._predictions = []
-
- def process(self, inputs, outputs):
- """
- Args:
- inputs: the inputs to a COCO model (e.g., GeneralizedRCNN).
- It is a list of dict. Each dict corresponds to an image and
- contains keys like "height", "width", "file_name", "image_id".
- outputs: the outputs of a COCO model. It is a list of dicts with key
- "instances" that contains :class:`Instances`.
- The :class:`Instances` object needs to have `densepose` field.
- """
- for input, output in zip(inputs, outputs):
- instances = output["instances"].to(self._cpu_device)
- if not instances.has("pred_densepose"):
- continue
- prediction_list = prediction_to_dict(
- instances,
- input["image_id"],
- self._embedder,
- self._metadata.class_to_mesh_name,
- self._storage is not None,
- )
- if self._storage is not None:
- for prediction_dict in prediction_list:
- dict_to_store = {}
- for field_name in self._storage.data_schema:
- dict_to_store[field_name] = prediction_dict[field_name]
- record_id = self._storage.put(dict_to_store)
- prediction_dict["record_id"] = record_id
- prediction_dict["rank"] = get_rank()
- for field_name in self._storage.data_schema:
- del prediction_dict[field_name]
- self._predictions.extend(prediction_list)
-
- def evaluate(self, img_ids=None):
- if self._distributed:
- synchronize()
- predictions = gather(self._predictions)
- predictions = list(itertools.chain(*predictions))
- else:
- predictions = self._predictions
-
- multi_storage = storage_gather(self._storage) if self._storage is not None else None
-
- if not is_main_process():
- return
- return copy.deepcopy(self._eval_predictions(predictions, multi_storage, img_ids))
-
- def _eval_predictions(self, predictions, multi_storage=None, img_ids=None):
- """
- Evaluate predictions on densepose.
- Return results with the metrics of the tasks.
- """
- self._logger.info("Preparing results for COCO format ...")
-
- if self._output_dir:
- PathManager.mkdirs(self._output_dir)
- file_path = os.path.join(self._output_dir, "coco_densepose_predictions.pth")
- with PathManager.open(file_path, "wb") as f:
- torch.save(predictions, f)
-
- self._logger.info("Evaluating predictions ...")
- res = OrderedDict()
- results_gps, results_gpsm, results_segm = _evaluate_predictions_on_coco(
- self._coco_api,
- predictions,
- multi_storage,
- self._embedder,
- class_names=self._metadata.get("thing_classes"),
- min_threshold=self._min_threshold,
- img_ids=img_ids,
- )
- res["densepose_gps"] = results_gps
- res["densepose_gpsm"] = results_gpsm
- res["densepose_segm"] = results_segm
- if self._should_evaluate_mesh_alignment:
- res["densepose_mesh_alignment"] = self._evaluate_mesh_alignment()
- return res
-
- def _evaluate_mesh_alignment(self):
- self._logger.info("Mesh alignment evaluation ...")
- mean_ge, mean_gps, per_mesh_metrics = self._mesh_alignment_evaluator.evaluate()
- results = {
- "GE": mean_ge * 100,
- "GPS": mean_gps * 100,
- }
- mesh_names = set()
- for metric_name in per_mesh_metrics:
- for mesh_name, value in per_mesh_metrics[metric_name].items():
- results[f"{metric_name}-{mesh_name}"] = value * 100
- mesh_names.add(mesh_name)
- self._print_mesh_alignment_results(results, mesh_names)
- return results
-
- def _print_mesh_alignment_results(self, results: Dict[str, float], mesh_names: Iterable[str]):
- self._logger.info("Evaluation results for densepose, mesh alignment:")
- self._logger.info(f'| {"Mesh":13s} | {"GErr":7s} | {"GPS":7s} |')
- self._logger.info("| :-----------: | :-----: | :-----: |")
- for mesh_name in mesh_names:
- ge_key = f"GE-{mesh_name}"
- ge_str = f"{results[ge_key]:.4f}" if ge_key in results else " "
- gps_key = f"GPS-{mesh_name}"
- gps_str = f"{results[gps_key]:.4f}" if gps_key in results else " "
- self._logger.info(f"| {mesh_name:13s} | {ge_str:7s} | {gps_str:7s} |")
- self._logger.info("| :-------------------------------: |")
- ge_key = "GE"
- ge_str = f"{results[ge_key]:.4f}" if ge_key in results else " "
- gps_key = "GPS"
- gps_str = f"{results[gps_key]:.4f}" if gps_key in results else " "
- self._logger.info(f'| {"MEAN":13s} | {ge_str:7s} | {gps_str:7s} |')
-
-
-def prediction_to_dict(instances, img_id, embedder, class_to_mesh_name, use_storage):
- """
- Args:
- instances (Instances): the output of the model
- img_id (str): the image id in COCO
-
- Returns:
- list[dict]: the results in densepose evaluation format
- """
- scores = instances.scores.tolist()
- classes = instances.pred_classes.tolist()
- raw_boxes_xywh = BoxMode.convert(
- instances.pred_boxes.tensor.clone(), BoxMode.XYXY_ABS, BoxMode.XYWH_ABS
- )
-
- if isinstance(instances.pred_densepose, DensePoseEmbeddingPredictorOutput):
- results_densepose = densepose_cse_predictions_to_dict(
- instances, embedder, class_to_mesh_name, use_storage
- )
- elif isinstance(instances.pred_densepose, DensePoseChartPredictorOutput):
- if not use_storage:
- results_densepose = densepose_chart_predictions_to_dict(instances)
- else:
- results_densepose = densepose_chart_predictions_to_storage_dict(instances)
-
- results = []
- for k in range(len(instances)):
- result = {
- "image_id": img_id,
- "category_id": classes[k],
- "bbox": raw_boxes_xywh[k].tolist(),
- "score": scores[k],
- }
- results.append({**result, **results_densepose[k]})
- return results
-
-
-def densepose_chart_predictions_to_dict(instances):
- segmentations = ToMaskConverter.convert(
- instances.pred_densepose, instances.pred_boxes, instances.image_size
- )
-
- results = []
- for k in range(len(instances)):
- densepose_results_quantized = quantize_densepose_chart_result(
- ToChartResultConverter.convert(instances.pred_densepose[k], instances.pred_boxes[k])
- )
- densepose_results_quantized.labels_uv_uint8 = (
- densepose_results_quantized.labels_uv_uint8.cpu()
- )
- segmentation = segmentations.tensor[k]
- segmentation_encoded = mask_utils.encode(
- np.require(segmentation.numpy(), dtype=np.uint8, requirements=["F"])
- )
- segmentation_encoded["counts"] = segmentation_encoded["counts"].decode("utf-8")
- result = {
- "densepose": densepose_results_quantized,
- "segmentation": segmentation_encoded,
- }
- results.append(result)
- return results
-
-
-def densepose_chart_predictions_to_storage_dict(instances):
- results = []
- for k in range(len(instances)):
- densepose_predictor_output = instances.pred_densepose[k]
- result = {
- "coarse_segm": densepose_predictor_output.coarse_segm.squeeze(0).cpu(),
- "fine_segm": densepose_predictor_output.fine_segm.squeeze(0).cpu(),
- "u": densepose_predictor_output.u.squeeze(0).cpu(),
- "v": densepose_predictor_output.v.squeeze(0).cpu(),
- }
- results.append(result)
- return results
-
-
-def densepose_cse_predictions_to_dict(instances, embedder, class_to_mesh_name, use_storage):
- results = []
- for k in range(len(instances)):
- cse = instances.pred_densepose[k]
- results.append(
- {
- "coarse_segm": cse.coarse_segm[0].cpu(),
- "embedding": cse.embedding[0].cpu(),
- }
- )
- return results
-
-
-def _evaluate_predictions_on_coco(
- coco_gt,
- coco_results,
- multi_storage=None,
- embedder=None,
- class_names=None,
- min_threshold: float = 0.5,
- img_ids=None,
-):
- logger = logging.getLogger(__name__)
-
- densepose_metrics = _get_densepose_metrics(min_threshold)
- if len(coco_results) == 0: # cocoapi does not handle empty results very well
-        logger.warning("No predictions from the model! Set scores to -1")
- results_gps = {metric: -1 for metric in densepose_metrics}
- results_gpsm = {metric: -1 for metric in densepose_metrics}
- results_segm = {metric: -1 for metric in densepose_metrics}
- return results_gps, results_gpsm, results_segm
-
- coco_dt = coco_gt.loadRes(coco_results)
-
- results = []
- for eval_mode_name in ["GPS", "GPSM", "IOU"]:
- eval_mode = getattr(DensePoseEvalMode, eval_mode_name)
- coco_eval = DensePoseCocoEval(
- coco_gt, coco_dt, "densepose", multi_storage, embedder, dpEvalMode=eval_mode
- )
- result = _derive_results_from_coco_eval(
- coco_eval, eval_mode_name, densepose_metrics, class_names, min_threshold, img_ids
- )
- results.append(result)
- return results
-
-
-def _get_densepose_metrics(min_threshold: float = 0.5):
- metrics = ["AP"]
- if min_threshold <= 0.201:
- metrics += ["AP20"]
- if min_threshold <= 0.301:
- metrics += ["AP30"]
- if min_threshold <= 0.401:
- metrics += ["AP40"]
- metrics.extend(["AP50", "AP75", "APm", "APl", "AR", "AR50", "AR75", "ARm", "ARl"])
- return metrics
-
-
-def _derive_results_from_coco_eval(
- coco_eval, eval_mode_name, metrics, class_names, min_threshold: float, img_ids
-):
- if img_ids is not None:
- coco_eval.params.imgIds = img_ids
- coco_eval.params.iouThrs = np.linspace(
- min_threshold, 0.95, int(np.round((0.95 - min_threshold) / 0.05)) + 1, endpoint=True
- )
- coco_eval.evaluate()
- coco_eval.accumulate()
- coco_eval.summarize()
- results = {metric: float(coco_eval.stats[idx] * 100) for idx, metric in enumerate(metrics)}
- logger = logging.getLogger(__name__)
- logger.info(
- f"Evaluation results for densepose, {eval_mode_name} metric: \n"
- + create_small_table(results)
- )
- if class_names is None or len(class_names) <= 1:
- return results
-
- # Compute per-category AP, the same way as it is done in D2
- # (see detectron2/evaluation/coco_evaluation.py):
- precisions = coco_eval.eval["precision"]
- # precision has dims (iou, recall, cls, area range, max dets)
- assert len(class_names) == precisions.shape[2]
-
- results_per_category = []
- for idx, name in enumerate(class_names):
- # area range index 0: all area ranges
- # max dets index -1: typically 100 per image
- precision = precisions[:, :, idx, 0, -1]
- precision = precision[precision > -1]
- ap = np.mean(precision) if precision.size else float("nan")
- results_per_category.append((f"{name}", float(ap * 100)))
-
- # tabulate it
- n_cols = min(6, len(results_per_category) * 2)
- results_flatten = list(itertools.chain(*results_per_category))
- results_2d = itertools.zip_longest(*[results_flatten[i::n_cols] for i in range(n_cols)])
- table = tabulate(
- results_2d,
- tablefmt="pipe",
- floatfmt=".3f",
- headers=["category", "AP"] * (n_cols // 2),
- numalign="left",
- )
- logger.info(f"Per-category {eval_mode_name} AP: \n" + table)
-
- results.update({"AP-" + name: ap for name, ap in results_per_category})
- return results
-
-
-def build_densepose_evaluator_storage(cfg: CfgNode, output_folder: str):
- storage_spec = cfg.DENSEPOSE_EVALUATION.STORAGE
- if storage_spec == "none":
- return None
- evaluator_type = cfg.DENSEPOSE_EVALUATION.TYPE
- # common output tensor sizes
- hout = cfg.MODEL.ROI_DENSEPOSE_HEAD.HEATMAP_SIZE
- wout = cfg.MODEL.ROI_DENSEPOSE_HEAD.HEATMAP_SIZE
- n_csc = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_COARSE_SEGM_CHANNELS
- # specific output tensors
- if evaluator_type == "iuv":
- n_fsc = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_PATCHES + 1
- schema = {
- "coarse_segm": SizeData(dtype="float32", shape=(n_csc, hout, wout)),
- "fine_segm": SizeData(dtype="float32", shape=(n_fsc, hout, wout)),
- "u": SizeData(dtype="float32", shape=(n_fsc, hout, wout)),
- "v": SizeData(dtype="float32", shape=(n_fsc, hout, wout)),
- }
- elif evaluator_type == "cse":
- embed_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.CSE.EMBED_SIZE
- schema = {
- "coarse_segm": SizeData(dtype="float32", shape=(n_csc, hout, wout)),
- "embedding": SizeData(dtype="float32", shape=(embed_size, hout, wout)),
- }
- else:
- raise ValueError(f"Unknown evaluator type: {evaluator_type}")
- # storage types
- if storage_spec == "ram":
- storage = SingleProcessRamTensorStorage(schema, io.BytesIO())
- elif storage_spec == "file":
- fpath = os.path.join(output_folder, f"DensePoseEvaluatorStorage.{get_rank()}.bin")
- PathManager.mkdirs(output_folder)
- storage = SingleProcessFileTensorStorage(schema, fpath, "wb")
- else:
- raise ValueError(f"Unknown storage specification: {storage_spec}")
- return storage
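_derive_results_from_coco_eval builds its IoU threshold grid from min_threshold upward in 0.05 steps, which is also what gates the extra AP20/AP30/AP40 metrics in _get_densepose_metrics. The expression is easy to sanity-check in isolation for the default min_iou_threshold of 0.5:

# Sketch only: the IoU thresholds the DensePose COCO evaluation iterates over.
import numpy as np

min_threshold = 0.5
iou_thrs = np.linspace(
    min_threshold, 0.95, int(np.round((0.95 - min_threshold) / 0.05)) + 1, endpoint=True
)
print(iou_thrs)   # [0.5  0.55 0.6  0.65 0.7  0.75 0.8  0.85 0.9  0.95]
# With min_threshold=0.2 the grid starts at 0.2 and the AP20/AP30/AP40 entries are reported as well.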
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/vis/densepose_results.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/vis/densepose_results.py
deleted file mode 100644
index ce8a7c0e207f5b3b6e755c759a59f5bed9965cef..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/vis/densepose_results.py
+++ /dev/null
@@ -1,355 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import numpy as np
-from typing import List, Optional, Tuple
-import cv2
-import torch
-
-from densepose.structures import DensePoseDataRelative
-
-from ..structures import DensePoseChartResult
-from .base import Boxes, Image, MatrixVisualizer
-
-
-class DensePoseResultsVisualizer(object):
- def visualize(
- self,
- image_bgr: Image,
- results_and_boxes_xywh: Tuple[Optional[List[DensePoseChartResult]], Optional[Boxes]],
- ) -> Image:
- densepose_result, boxes_xywh = results_and_boxes_xywh
- if densepose_result is None or boxes_xywh is None:
- return image_bgr
-
- boxes_xywh = boxes_xywh.cpu().numpy()
- context = self.create_visualization_context(image_bgr)
- for i, result in enumerate(densepose_result):
- iuv_array = torch.cat(
- (result.labels[None].type(torch.float32), result.uv * 255.0)
- ).type(torch.uint8)
- self.visualize_iuv_arr(context, iuv_array.cpu().numpy(), boxes_xywh[i])
- image_bgr = self.context_to_image_bgr(context)
- return image_bgr
-
- def create_visualization_context(self, image_bgr: Image):
- return image_bgr
-
- def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh) -> None:
- pass
-
- def context_to_image_bgr(self, context):
- return context
-
- def get_image_bgr_from_context(self, context):
- return context
-
-
-class DensePoseMaskedColormapResultsVisualizer(DensePoseResultsVisualizer):
- def __init__(
- self,
- data_extractor,
- segm_extractor,
- inplace=True,
- cmap=cv2.COLORMAP_PARULA,
- alpha=0.7,
- val_scale=1.0,
- **kwargs,
- ):
- self.mask_visualizer = MatrixVisualizer(
- inplace=inplace, cmap=cmap, val_scale=val_scale, alpha=alpha
- )
- self.data_extractor = data_extractor
- self.segm_extractor = segm_extractor
-
- def context_to_image_bgr(self, context):
- return context
-
- def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh) -> None:
- image_bgr = self.get_image_bgr_from_context(context)
- matrix = self.data_extractor(iuv_arr)
- segm = self.segm_extractor(iuv_arr)
- mask = np.zeros(matrix.shape, dtype=np.uint8)
- mask[segm > 0] = 1
- image_bgr = self.mask_visualizer.visualize(image_bgr, mask, matrix, bbox_xywh)
-
-
-def _extract_i_from_iuvarr(iuv_arr):
- return iuv_arr[0, :, :]
-
-
-def _extract_u_from_iuvarr(iuv_arr):
- return iuv_arr[1, :, :]
-
-
-def _extract_v_from_iuvarr(iuv_arr):
- return iuv_arr[2, :, :]
-
-
-class DensePoseResultsMplContourVisualizer(DensePoseResultsVisualizer):
- def __init__(self, levels=10, **kwargs):
- self.levels = levels
- self.plot_args = kwargs
-
- def create_visualization_context(self, image_bgr: Image):
- import matplotlib.pyplot as plt
- from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
-
- context = {}
- context["image_bgr"] = image_bgr
- dpi = 100
- height_inches = float(image_bgr.shape[0]) / dpi
- width_inches = float(image_bgr.shape[1]) / dpi
- fig = plt.figure(figsize=(width_inches, height_inches), dpi=dpi)
- plt.axes([0, 0, 1, 1])
- plt.axis("off")
- context["fig"] = fig
- canvas = FigureCanvas(fig)
- context["canvas"] = canvas
- extent = (0, image_bgr.shape[1], image_bgr.shape[0], 0)
- plt.imshow(image_bgr[:, :, ::-1], extent=extent)
- return context
-
- def context_to_image_bgr(self, context):
- fig = context["fig"]
- w, h = map(int, fig.get_size_inches() * fig.get_dpi())
- canvas = context["canvas"]
- canvas.draw()
-        image_1d = np.frombuffer(canvas.tostring_rgb(), dtype="uint8")
- image_rgb = image_1d.reshape(h, w, 3)
- image_bgr = image_rgb[:, :, ::-1].copy()
- return image_bgr
-
- def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh: Boxes) -> None:
- import matplotlib.pyplot as plt
-
- u = _extract_u_from_iuvarr(iuv_arr).astype(float) / 255.0
- v = _extract_v_from_iuvarr(iuv_arr).astype(float) / 255.0
- extent = (
- bbox_xywh[0],
- bbox_xywh[0] + bbox_xywh[2],
- bbox_xywh[1],
- bbox_xywh[1] + bbox_xywh[3],
- )
- plt.contour(u, self.levels, extent=extent, **self.plot_args)
- plt.contour(v, self.levels, extent=extent, **self.plot_args)
-
-
-class DensePoseResultsCustomContourVisualizer(DensePoseResultsVisualizer):
- """
- Contour visualization using marching squares
- """
-
- def __init__(self, levels=10, **kwargs):
- # TODO: colormap is hardcoded
- cmap = cv2.COLORMAP_PARULA
- if isinstance(levels, int):
- self.levels = np.linspace(0, 1, levels)
- else:
- self.levels = levels
- if "linewidths" in kwargs:
- self.linewidths = kwargs["linewidths"]
- else:
- self.linewidths = [1] * len(self.levels)
- self.plot_args = kwargs
- img_colors_bgr = cv2.applyColorMap((self.levels * 255).astype(np.uint8), cmap)
- self.level_colors_bgr = [
- [int(v) for v in img_color_bgr.ravel()] for img_color_bgr in img_colors_bgr
- ]
-
- def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh: Boxes) -> None:
- image_bgr = self.get_image_bgr_from_context(context)
- segm = _extract_i_from_iuvarr(iuv_arr)
- u = _extract_u_from_iuvarr(iuv_arr).astype(float) / 255.0
- v = _extract_v_from_iuvarr(iuv_arr).astype(float) / 255.0
- self._contours(image_bgr, u, segm, bbox_xywh)
- self._contours(image_bgr, v, segm, bbox_xywh)
-
- def _contours(self, image_bgr, arr, segm, bbox_xywh):
- for part_idx in range(1, DensePoseDataRelative.N_PART_LABELS + 1):
- mask = segm == part_idx
- if not np.any(mask):
- continue
- arr_min = np.amin(arr[mask])
- arr_max = np.amax(arr[mask])
- I, J = np.nonzero(mask)
- i0 = np.amin(I)
- i1 = np.amax(I) + 1
- j0 = np.amin(J)
- j1 = np.amax(J) + 1
- if (j1 == j0 + 1) or (i1 == i0 + 1):
- continue
- Nw = arr.shape[1] - 1
- Nh = arr.shape[0] - 1
- for level_idx, level in enumerate(self.levels):
- if (level < arr_min) or (level > arr_max):
- continue
- vp = arr[i0:i1, j0:j1] >= level
- bin_codes = vp[:-1, :-1] + vp[1:, :-1] * 2 + vp[1:, 1:] * 4 + vp[:-1, 1:] * 8
- mp = mask[i0:i1, j0:j1]
- bin_mask_codes = mp[:-1, :-1] + mp[1:, :-1] * 2 + mp[1:, 1:] * 4 + mp[:-1, 1:] * 8
- it = np.nditer(bin_codes, flags=["multi_index"])
- color_bgr = self.level_colors_bgr[level_idx]
- linewidth = self.linewidths[level_idx]
- while not it.finished:
- if (it[0] != 0) and (it[0] != 15):
- i, j = it.multi_index
- if bin_mask_codes[i, j] != 0:
- self._draw_line(
- image_bgr,
- arr,
- mask,
- level,
- color_bgr,
- linewidth,
- it[0],
- it.multi_index,
- bbox_xywh,
- Nw,
- Nh,
- (i0, j0),
- )
- it.iternext()
-
- def _draw_line(
- self,
- image_bgr,
- arr,
- mask,
- v,
- color_bgr,
- linewidth,
- bin_code,
- multi_idx,
- bbox_xywh,
- Nw,
- Nh,
- offset,
- ):
- lines = self._bin_code_2_lines(arr, v, bin_code, multi_idx, Nw, Nh, offset)
- x0, y0, w, h = bbox_xywh
- x1 = x0 + w
- y1 = y0 + h
- for line in lines:
- x0r, y0r = line[0]
- x1r, y1r = line[1]
- pt0 = (int(x0 + x0r * (x1 - x0)), int(y0 + y0r * (y1 - y0)))
- pt1 = (int(x0 + x1r * (x1 - x0)), int(y0 + y1r * (y1 - y0)))
- cv2.line(image_bgr, pt0, pt1, color_bgr, linewidth)
-
- def _bin_code_2_lines(self, arr, v, bin_code, multi_idx, Nw, Nh, offset):
- i0, j0 = offset
- i, j = multi_idx
- i += i0
- j += j0
- v0, v1, v2, v3 = arr[i, j], arr[i + 1, j], arr[i + 1, j + 1], arr[i, j + 1]
- x0i = float(j) / Nw
- y0j = float(i) / Nh
- He = 1.0 / Nh
- We = 1.0 / Nw
- if (bin_code == 1) or (bin_code == 14):
- a = (v - v0) / (v1 - v0)
- b = (v - v0) / (v3 - v0)
- pt1 = (x0i, y0j + a * He)
- pt2 = (x0i + b * We, y0j)
- return [(pt1, pt2)]
- elif (bin_code == 2) or (bin_code == 13):
- a = (v - v0) / (v1 - v0)
- b = (v - v1) / (v2 - v1)
- pt1 = (x0i, y0j + a * He)
- pt2 = (x0i + b * We, y0j + He)
- return [(pt1, pt2)]
- elif (bin_code == 3) or (bin_code == 12):
- a = (v - v0) / (v3 - v0)
- b = (v - v1) / (v2 - v1)
- pt1 = (x0i + a * We, y0j)
- pt2 = (x0i + b * We, y0j + He)
- return [(pt1, pt2)]
- elif (bin_code == 4) or (bin_code == 11):
- a = (v - v1) / (v2 - v1)
- b = (v - v3) / (v2 - v3)
- pt1 = (x0i + a * We, y0j + He)
- pt2 = (x0i + We, y0j + b * He)
- return [(pt1, pt2)]
- elif (bin_code == 6) or (bin_code == 9):
- a = (v - v0) / (v1 - v0)
- b = (v - v3) / (v2 - v3)
- pt1 = (x0i, y0j + a * He)
- pt2 = (x0i + We, y0j + b * He)
- return [(pt1, pt2)]
- elif (bin_code == 7) or (bin_code == 8):
- a = (v - v0) / (v3 - v0)
- b = (v - v3) / (v2 - v3)
- pt1 = (x0i + a * We, y0j)
- pt2 = (x0i + We, y0j + b * He)
- return [(pt1, pt2)]
- elif bin_code == 5:
- a1 = (v - v0) / (v1 - v0)
- b1 = (v - v1) / (v2 - v1)
- pt11 = (x0i, y0j + a1 * He)
- pt12 = (x0i + b1 * We, y0j + He)
- a2 = (v - v0) / (v3 - v0)
- b2 = (v - v3) / (v2 - v3)
- pt21 = (x0i + a2 * We, y0j)
- pt22 = (x0i + We, y0j + b2 * He)
- return [(pt11, pt12), (pt21, pt22)]
- elif bin_code == 10:
- a1 = (v - v0) / (v3 - v0)
- b1 = (v - v0) / (v1 - v0)
- pt11 = (x0i + a1 * We, y0j)
- pt12 = (x0i, y0j + b1 * He)
- a2 = (v - v1) / (v2 - v1)
- b2 = (v - v3) / (v2 - v3)
- pt21 = (x0i + a2 * We, y0j + He)
- pt22 = (x0i + We, y0j + b2 * He)
- return [(pt11, pt12), (pt21, pt22)]
- return []
-
-
-try:
- import matplotlib
-
- matplotlib.use("Agg")
- DensePoseResultsContourVisualizer = DensePoseResultsMplContourVisualizer
-except ModuleNotFoundError:
- logger = logging.getLogger(__name__)
- logger.warning("Could not import matplotlib, using custom contour visualizer")
- DensePoseResultsContourVisualizer = DensePoseResultsCustomContourVisualizer
-
-
-class DensePoseResultsFineSegmentationVisualizer(DensePoseMaskedColormapResultsVisualizer):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7, **kwargs):
- super(DensePoseResultsFineSegmentationVisualizer, self).__init__(
- _extract_i_from_iuvarr,
- _extract_i_from_iuvarr,
- inplace,
- cmap,
- alpha,
- val_scale=255.0 / DensePoseDataRelative.N_PART_LABELS,
- **kwargs,
- )
-
-
-class DensePoseResultsUVisualizer(DensePoseMaskedColormapResultsVisualizer):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7, **kwargs):
- super(DensePoseResultsUVisualizer, self).__init__(
- _extract_u_from_iuvarr,
- _extract_i_from_iuvarr,
- inplace,
- cmap,
- alpha,
- val_scale=1.0,
- **kwargs,
- )
-
-
-class DensePoseResultsVVisualizer(DensePoseMaskedColormapResultsVisualizer):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7, **kwargs):
- super(DensePoseResultsVVisualizer, self).__init__(
- _extract_v_from_iuvarr,
- _extract_i_from_iuvarr,
- inplace,
- cmap,
- alpha,
- val_scale=1.0,
- **kwargs,
- )
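DensePoseResultsCustomContourVisualizer._contours encodes every 2x2 cell of the thresholded U/V map as a 4-bit marching-squares code before _bin_code_2_lines turns each code into line segments. The encoding on its own, reproduced on a toy array:

# Sketch only: the 4-bit marching-squares cell codes computed in _contours.
import numpy as np

arr = np.array([
    [0.1, 0.2, 0.8],
    [0.3, 0.9, 0.7],
    [0.6, 0.4, 0.2],
])
level = 0.5
vp = arr >= level   # which corners sit above the contour level
bin_codes = vp[:-1, :-1] + vp[1:, :-1] * 2 + vp[1:, 1:] * 4 + vp[:-1, 1:] * 8
print(bin_codes)    # [[ 4 14]
                    #  [10  9]]
# Codes 0 and 15 mean the whole cell lies below/above the level (no contour crosses it);
# every other value selects one of the segment configurations handled by _bin_code_2_lines.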
diff --git a/spaces/cchaun/music_tagging/app.py b/spaces/cchaun/music_tagging/app.py
deleted file mode 100644
index ab1cab0c9b48e2ea3005cc4a8266f6f1e45809c5..0000000000000000000000000000000000000000
--- a/spaces/cchaun/music_tagging/app.py
+++ /dev/null
@@ -1,104 +0,0 @@
-# -*- coding: UTF-8 -*-
-import gradio as gr
-import torch, torchaudio
-from timeit import default_timer as timer
-from torchaudio.transforms import Resample
-from models.model import HarmonicCNN
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-
-SAMPLE_RATE = 16000
-AUDIO_LEN = 2.90
-
-model = HarmonicCNN()
-S = torch.load('models/best_model.pth', map_location=torch.device('cpu'))
-model.load_state_dict(S)
-
-LABELS = [
- "alternative",
- "ambient",
- "atmospheric",
- "chillout",
- "classical",
- "dance",
- "downtempo",
- "easylistening",
- "electronic",
- "experimental",
- "folk",
- "funk",
- "hiphop",
- "house",
- "indie",
- "instrumentalpop",
- "jazz",
- "lounge",
- "metal",
- "newage",
- "orchestral",
- "pop",
- "popfolk",
- "poprock",
- "reggae",
- "rock",
- "soundtrack",
- "techno",
- "trance",
- "triphop",
- "world",
- "acousticguitar",
- "bass",
- "computer",
- "drummachine",
- "drums",
- "electricguitar",
- "electricpiano",
- "guitar",
- "keyboard",
- "piano",
- "strings",
- "synthesizer",
- "violin",
- "voice",
- "emotional",
- "energetic",
- "film",
- "happy",
- "relaxing"
-]
-
-example_list = [
- "samples/guitar_acoustic.wav",
- "samples/guitar_electric.wav",
- "samples/piano.wav",
- "samples/violin.wav",
- "samples/flute.wav"
-]
-
-def predict(audio_path):
- start_time = timer()
- wav, sample_rate = torchaudio.load(audio_path)
- if sample_rate > SAMPLE_RATE:
- resampler = Resample(sample_rate, SAMPLE_RATE)
- wav = resampler(wav)
- if wav.shape[0] >= 2:
- wav = torch.mean(wav, dim=0)
- wav = wav.unsqueeze(0)
- model.eval()
- with torch.inference_mode():
- pred_probs = model(wav)
- pred_labels_and_probs = {LABELS[i]: float(pred_probs[0][i]) for i in range(len(LABELS))}
- pred_time = round(timer() - start_time, 5)
- return pred_labels_and_probs, pred_time
-
-
-title = "Music Tagging"
-
-demo = gr.Interface(fn=predict,
- inputs=gr.Audio(type="filepath"),
- outputs=[gr.Label(num_top_classes=10, label="Predictions"),
- gr.Number(label="Prediction time (s)")],
- examples=example_list,
- title=title)
-
-demo.launch(debug=False)
\ No newline at end of file
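predict() above normalizes whatever torchaudio loads before it reaches the HarmonicCNN: it resamples down to 16 kHz when the source rate is higher and collapses multi-channel audio to a single mono channel. That preprocessing in isolation (the WAV path is illustrative; mean with keepdim is equivalent to the app's mean followed by unsqueeze):

# Sketch only: the resample + mono downmix performed at the top of predict().
import torch
import torchaudio
from torchaudio.transforms import Resample

SAMPLE_RATE = 16000

wav, sample_rate = torchaudio.load("samples/piano.wav")
if sample_rate > SAMPLE_RATE:                       # only downsamples, mirroring the app
    wav = Resample(sample_rate, SAMPLE_RATE)(wav)
if wav.shape[0] >= 2:                               # stereo or more -> mono
    wav = torch.mean(wav, dim=0, keepdim=True)
print(wav.shape)                                    # torch.Size([1, num_samples])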
diff --git a/spaces/ceckenrode/Docker-FlanT5-TextGeneratorTranslator/static/style.css b/spaces/ceckenrode/Docker-FlanT5-TextGeneratorTranslator/static/style.css
deleted file mode 100644
index 7b50df8f6904c75f560224034d8aadd76656c6f8..0000000000000000000000000000000000000000
--- a/spaces/ceckenrode/Docker-FlanT5-TextGeneratorTranslator/static/style.css
+++ /dev/null
@@ -1,45 +0,0 @@
-body {
- --text: hsl(0 0% 15%);
- padding: 2.5rem;
- font-family: sans-serif;
- color: var(--text);
-}
-
-body.dark-theme {
- --text: hsl(0 0% 90%);
- background-color: hsl(223 39% 7%);
-}
-
-main {
- max-width: 80rem;
- text-align: center;
-}
-
-section {
- display: flex;
- flex-direction: column;
- align-items: center;
-}
-
-a {
- color: var(--text);
-}
-
-form {
- width: 30rem;
- margin: 0 auto;
-}
-
-input {
- width: 100%;
-}
-
-button {
- cursor: pointer;
-}
-
-.text-gen-output {
- min-height: 1.2rem;
- margin: 1rem;
- border: 0.5px solid grey;
-}
diff --git a/spaces/chendl/compositional_test/transformers/examples/tensorflow/question-answering/run_qa.py b/spaces/chendl/compositional_test/transformers/examples/tensorflow/question-answering/run_qa.py
deleted file mode 100644
index ef5f3b3e373a5db31d229e46f5ca9816278a972a..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/tensorflow/question-answering/run_qa.py
+++ /dev/null
@@ -1,799 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright 2020 The HuggingFace Team All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Fine-tuning the library models for question answering.
-"""
-# You can also adapt this script on your own question answering task. Pointers for this are left as comments.
-
-import json
-import logging
-import os
-import sys
-from dataclasses import dataclass, field
-from pathlib import Path
-from typing import Optional
-
-import evaluate
-import tensorflow as tf
-from datasets import load_dataset
-from utils_qa import postprocess_qa_predictions
-
-import transformers
-from transformers import (
- AutoConfig,
- AutoTokenizer,
- EvalPrediction,
- HfArgumentParser,
- PreTrainedTokenizerFast,
- PushToHubCallback,
- TFAutoModelForQuestionAnswering,
- TFTrainingArguments,
- create_optimizer,
- set_seed,
-)
-from transformers.utils import CONFIG_NAME, TF2_WEIGHTS_NAME, check_min_version, send_example_telemetry
-
-
-# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.28.0")
-
-logger = logging.getLogger(__name__)
-
-
-# region Arguments
-@dataclass
-class ModelArguments:
- """
- Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
- """
-
- model_name_or_path: str = field(
- metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
- )
- config_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
- )
- tokenizer_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
- )
- cache_dir: Optional[str] = field(
- default=None,
- metadata={"help": "Path to directory to store the pretrained models downloaded from huggingface.co"},
- )
- model_revision: str = field(
- default="main",
- metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
- )
- use_auth_token: bool = field(
- default=False,
- metadata={
- "help": (
- "Will use the token generated when running `huggingface-cli login` (necessary to use this script "
- "with private models)."
- )
- },
- )
-
-
-@dataclass
-class DataTrainingArguments:
- """
- Arguments pertaining to what data we are going to input our model for training and eval.
- """
-
- dataset_name: Optional[str] = field(
- default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
- )
- dataset_config_name: Optional[str] = field(
- default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
- )
- train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
- validation_file: Optional[str] = field(
- default=None,
- metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
- )
- test_file: Optional[str] = field(
- default=None,
- metadata={"help": "An optional input test data file to evaluate the perplexity on (a text file)."},
- )
- overwrite_cache: bool = field(
- default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
- )
- preprocessing_num_workers: Optional[int] = field(
- default=None,
- metadata={"help": "The number of processes to use for the preprocessing."},
- )
- max_seq_length: int = field(
- default=384,
- metadata={
- "help": (
- "The maximum total input sequence length after tokenization. Sequences longer "
- "than this will be truncated, sequences shorter will be padded."
- )
- },
- )
- pad_to_max_length: bool = field(
- default=False,
- metadata={
- "help": (
- "Whether to pad all samples to `max_seq_length`. If False, will pad the samples dynamically when"
- " batching to the maximum length in the batch (which can be faster on GPU but will be slower on TPU)."
- )
- },
- )
- max_train_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of training examples to this "
- "value if set."
- )
- },
- )
- max_eval_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
- "value if set."
- )
- },
- )
- max_predict_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of prediction examples to this "
- "value if set."
- )
- },
- )
- version_2_with_negative: bool = field(
- default=False, metadata={"help": "If true, some of the examples do not have an answer."}
- )
- null_score_diff_threshold: float = field(
- default=0.0,
- metadata={
- "help": (
- "The threshold used to select the null answer: if the best answer has a score that is less than "
- "the score of the null answer minus this threshold, the null answer is selected for this example. "
- "Only useful when `version_2_with_negative=True`."
- )
- },
- )
- doc_stride: int = field(
- default=128,
- metadata={"help": "When splitting up a long document into chunks, how much stride to take between chunks."},
- )
- n_best_size: int = field(
- default=20,
- metadata={"help": "The total number of n-best predictions to generate when looking for an answer."},
- )
- max_answer_length: int = field(
- default=30,
- metadata={
- "help": (
- "The maximum length of an answer that can be generated. This is needed because the start "
- "and end predictions are not conditioned on one another."
- )
- },
- )
-
- def __post_init__(self):
- if (
- self.dataset_name is None
- and self.train_file is None
- and self.validation_file is None
- and self.test_file is None
- ):
- raise ValueError("Need either a dataset name or a training/validation file/test_file.")
- else:
- if self.train_file is not None:
- extension = self.train_file.split(".")[-1]
- assert extension in ["csv", "json"], "`train_file` should be a csv or a json file."
- if self.validation_file is not None:
- extension = self.validation_file.split(".")[-1]
- assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file."
- if self.test_file is not None:
- extension = self.test_file.split(".")[-1]
- assert extension in ["csv", "json"], "`test_file` should be a csv or a json file."
-
-
-# endregion
-
-
-# region Helper classes
-class SavePretrainedCallback(tf.keras.callbacks.Callback):
- # Hugging Face models have a save_pretrained() method that saves both the weights and the necessary
- # metadata to allow them to be loaded as a pretrained model in future. This is a simple Keras callback
- # that saves the model with this method after each epoch.
- def __init__(self, output_dir, **kwargs):
- super().__init__()
- self.output_dir = output_dir
-
- def on_epoch_end(self, epoch, logs=None):
- self.model.save_pretrained(self.output_dir)
-
-
-# endregion
-
-
-def main():
- # region Argument parsing
- # See all possible arguments in src/transformers/training_args.py
- # or by passing the --help flag to this script.
- # We now keep distinct sets of args, for a cleaner separation of concerns.
-
- parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TFTrainingArguments))
- if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
- # If we pass only one argument to the script and it's the path to a json file,
- # let's parse it to get our arguments.
- model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
- else:
- model_args, data_args, training_args = parser.parse_args_into_dataclasses()
-
- # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
- # information sent is the one passed as arguments along with your Python/PyTorch versions.
- send_example_telemetry("run_qa", model_args, data_args, framework="tensorflow")
-
- output_dir = Path(training_args.output_dir)
- output_dir.mkdir(parents=True, exist_ok=True)
- # endregion
-
- # region Checkpoints
- checkpoint = None
- if len(os.listdir(training_args.output_dir)) > 0 and not training_args.overwrite_output_dir:
- if (output_dir / CONFIG_NAME).is_file() and (output_dir / TF2_WEIGHTS_NAME).is_file():
- checkpoint = output_dir
- logger.info(
- f"Checkpoint detected, resuming training from checkpoint in {training_args.output_dir}. To avoid this"
- " behavior, change the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
- )
- else:
- raise ValueError(
- f"Output directory ({training_args.output_dir}) already exists and is not empty. "
- "Use --overwrite_output_dir to continue regardless."
- )
- # endregion
-
- # region Logging
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- handlers=[logging.StreamHandler(sys.stdout)],
- )
- logger.setLevel(logging.INFO if training_args.should_log else logging.WARN)
-
- # Set the verbosity to info of the Transformers logger (on main process only):
- if training_args.should_log:
- transformers.utils.logging.set_verbosity_info()
- transformers.utils.logging.enable_default_handler()
- transformers.utils.logging.enable_explicit_format()
- logger.info(f"Training/evaluation parameters {training_args}")
- # endregion
-
- # Set seed before initializing model.
- set_seed(training_args.seed)
-
- # region Load Data
- # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
- # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
- # (the dataset will be downloaded automatically from the datasets Hub).
- #
- # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
- # 'text' is found. You can easily tweak this behavior (see below).
- #
-    # In distributed training, the load_dataset function guarantees that only one local process can concurrently
- # download the dataset.
- if data_args.dataset_name is not None:
- # Downloading and loading a dataset from the hub.
- datasets = load_dataset(
- data_args.dataset_name,
- data_args.dataset_config_name,
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- else:
- data_files = {}
- if data_args.train_file is not None:
- data_files["train"] = data_args.train_file
- extension = data_args.train_file.split(".")[-1]
-
- if data_args.validation_file is not None:
- data_files["validation"] = data_args.validation_file
- extension = data_args.validation_file.split(".")[-1]
- if data_args.test_file is not None:
- data_files["test"] = data_args.test_file
- extension = data_args.test_file.split(".")[-1]
- datasets = load_dataset(
- extension,
- data_files=data_files,
- field="data",
- cache_dir=model_args.cache_dir,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
- # https://huggingface.co/docs/datasets/loading_datasets.html.
- # endregion
-
- # region Load pretrained model and tokenizer
- #
- # Distributed training:
- # The .from_pretrained methods guarantee that only one local process can concurrently
- # download model & vocab.
- config = AutoConfig.from_pretrained(
- model_args.config_name if model_args.config_name else model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- tokenizer = AutoTokenizer.from_pretrained(
- model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- use_fast=True,
- revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- # endregion
-
- # region Tokenizer check: this script requires a fast tokenizer.
- if not isinstance(tokenizer, PreTrainedTokenizerFast):
- raise ValueError(
- "This example script only works for models that have a fast tokenizer. Checkout the big table of models at"
- " https://huggingface.co/transformers/index.html#supported-frameworks to find the model types that meet"
- " this requirement"
- )
- # endregion
-
- # region Preprocessing the datasets
- # Preprocessing is slightly different for training and evaluation.
- if training_args.do_train:
- column_names = datasets["train"].column_names
- elif training_args.do_eval:
- column_names = datasets["validation"].column_names
- else:
- column_names = datasets["test"].column_names
- question_column_name = "question" if "question" in column_names else column_names[0]
- context_column_name = "context" if "context" in column_names else column_names[1]
- answer_column_name = "answers" if "answers" in column_names else column_names[2]
-
- # Padding side determines if we do (question|context) or (context|question).
- pad_on_right = tokenizer.padding_side == "right"
-
- if data_args.max_seq_length > tokenizer.model_max_length:
- logger.warning(
-            f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the "
-            f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}."
- )
- max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)
-
- if data_args.pad_to_max_length or isinstance(training_args.strategy, tf.distribute.TPUStrategy):
- logger.info("Padding all batches to max length because argument was set or we're on TPU.")
- padding = "max_length"
- else:
- padding = False
-
- # Training preprocessing
- def prepare_train_features(examples):
- # Some of the questions have lots of whitespace on the left, which is not useful and will make the
-        # truncation of the context fail (the tokenized question will take a lot of space). So we remove that
- # left whitespace
- examples[question_column_name] = [q.lstrip() for q in examples[question_column_name]]
-
- # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
-        # in one example possibly giving several features when a context is long, each of those features having a
- # context that overlaps a bit the context of the previous feature.
- tokenized_examples = tokenizer(
- examples[question_column_name if pad_on_right else context_column_name],
- examples[context_column_name if pad_on_right else question_column_name],
- truncation="only_second" if pad_on_right else "only_first",
- max_length=max_seq_length,
- stride=data_args.doc_stride,
- return_overflowing_tokens=True,
- return_offsets_mapping=True,
- padding=padding,
- )
-
- # Since one example might give us several features if it has a long context, we need a map from a feature to
- # its corresponding example. This key gives us just that.
- sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
- # The offset mappings will give us a map from token to character position in the original context. This will
- # help us compute the start_positions and end_positions.
- offset_mapping = tokenized_examples.pop("offset_mapping")
-
- # Let's label those examples!
- tokenized_examples["start_positions"] = []
- tokenized_examples["end_positions"] = []
-
- for i, offsets in enumerate(offset_mapping):
- # We will label impossible answers with the index of the CLS token.
- input_ids = tokenized_examples["input_ids"][i]
- cls_index = input_ids.index(tokenizer.cls_token_id)
-
- # Grab the sequence corresponding to that example (to know what is the context and what is the question).
- sequence_ids = tokenized_examples.sequence_ids(i)
-
- # One example can give several spans, this is the index of the example containing this span of text.
- sample_index = sample_mapping[i]
- answers = examples[answer_column_name][sample_index]
- # If no answers are given, set the cls_index as answer.
- if len(answers["answer_start"]) == 0:
- tokenized_examples["start_positions"].append(cls_index)
- tokenized_examples["end_positions"].append(cls_index)
- else:
- # Start/end character index of the answer in the text.
- start_char = answers["answer_start"][0]
- end_char = start_char + len(answers["text"][0])
-
- # Start token index of the current span in the text.
- token_start_index = 0
- while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
- token_start_index += 1
-
- # End token index of the current span in the text.
- token_end_index = len(input_ids) - 1
- while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
- token_end_index -= 1
-
- # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
- if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
- tokenized_examples["start_positions"].append(cls_index)
- tokenized_examples["end_positions"].append(cls_index)
- else:
- # Otherwise move the token_start_index and token_end_index to the two ends of the answer.
- # Note: we could go after the last offset if the answer is the last word (edge case).
- while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
- token_start_index += 1
- tokenized_examples["start_positions"].append(token_start_index - 1)
- while offsets[token_end_index][1] >= end_char:
- token_end_index -= 1
- tokenized_examples["end_positions"].append(token_end_index + 1)
-
- return tokenized_examples
-
- processed_datasets = {}
- if training_args.do_train:
- if "train" not in datasets:
- raise ValueError("--do_train requires a train dataset")
- train_dataset = datasets["train"]
- if data_args.max_train_samples is not None:
-            # We will select samples from the whole data if the argument is specified
- max_train_samples = min(len(train_dataset), data_args.max_train_samples)
- train_dataset = train_dataset.select(range(max_train_samples))
- # Create train feature from dataset
- train_dataset = train_dataset.map(
- prepare_train_features,
- batched=True,
- num_proc=data_args.preprocessing_num_workers,
- remove_columns=column_names,
- load_from_cache_file=not data_args.overwrite_cache,
- )
- if data_args.max_train_samples is not None:
-            # The number of samples might increase during feature creation, so we select only the specified max samples
- max_train_samples = min(len(train_dataset), data_args.max_train_samples)
- train_dataset = train_dataset.select(range(max_train_samples))
- processed_datasets["train"] = train_dataset
-
- # Validation preprocessing
- def prepare_validation_features(examples):
- # Some of the questions have lots of whitespace on the left, which is not useful and will make the
-        # truncation of the context fail (the tokenized question will take a lot of space), so we remove that
-        # left whitespace.
- examples[question_column_name] = [q.lstrip() for q in examples[question_column_name]]
-
- # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
-        # in one example possibly giving several features when a context is long, each of those features having a
-        # context that overlaps a bit with the context of the previous feature.
- tokenized_examples = tokenizer(
- examples[question_column_name if pad_on_right else context_column_name],
- examples[context_column_name if pad_on_right else question_column_name],
- truncation="only_second" if pad_on_right else "only_first",
- max_length=max_seq_length,
- stride=data_args.doc_stride,
- return_overflowing_tokens=True,
- return_offsets_mapping=True,
- padding=padding,
- )
-
- # Since one example might give us several features if it has a long context, we need a map from a feature to
- # its corresponding example. This key gives us just that.
- sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
-
- # For evaluation, we will need to convert our predictions to substrings of the context, so we keep the
- # corresponding example_id and we will store the offset mappings.
- tokenized_examples["example_id"] = []
-
- for i in range(len(tokenized_examples["input_ids"])):
- # Grab the sequence corresponding to that example (to know what is the context and what is the question).
- sequence_ids = tokenized_examples.sequence_ids(i)
- context_index = 1 if pad_on_right else 0
-
- # One example can give several spans, this is the index of the example containing this span of text.
- sample_index = sample_mapping[i]
- tokenized_examples["example_id"].append(examples["id"][sample_index])
-
- # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
- # position is part of the context or not.
- tokenized_examples["offset_mapping"][i] = [
- (o if sequence_ids[k] == context_index else None)
- for k, o in enumerate(tokenized_examples["offset_mapping"][i])
- ]
-
- return tokenized_examples
-
- if training_args.do_eval:
- if "validation" not in datasets:
- raise ValueError("--do_eval requires a validation dataset")
- eval_examples = datasets["validation"]
- if data_args.max_eval_samples is not None:
-            # We will select samples from the whole data
- max_eval_samples = min(len(eval_examples), data_args.max_eval_samples)
- eval_examples = eval_examples.select(range(max_eval_samples))
- # Validation Feature Creation
- eval_dataset = eval_examples.map(
- prepare_validation_features,
- batched=True,
- num_proc=data_args.preprocessing_num_workers,
- remove_columns=column_names,
- load_from_cache_file=not data_args.overwrite_cache,
- )
- if data_args.max_eval_samples is not None:
-            # During feature creation the number of samples might increase, so we select only the required samples again
- max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples)
- eval_dataset = eval_dataset.select(range(max_eval_samples))
- processed_datasets["validation"] = eval_dataset
-
- if training_args.do_predict:
- if "test" not in datasets:
- raise ValueError("--do_predict requires a test dataset")
- predict_examples = datasets["test"]
- if data_args.max_predict_samples is not None:
-            # We will select samples from the whole data
- predict_examples = predict_examples.select(range(data_args.max_predict_samples))
- # Predict Feature Creation
- predict_dataset = predict_examples.map(
- prepare_validation_features,
- batched=True,
- num_proc=data_args.preprocessing_num_workers,
- remove_columns=column_names,
- load_from_cache_file=not data_args.overwrite_cache,
- )
- if data_args.max_predict_samples is not None:
-            # During feature creation the number of samples might increase, so we select only the required samples again
- max_predict_samples = min(len(predict_dataset), data_args.max_predict_samples)
- predict_dataset = predict_dataset.select(range(max_predict_samples))
- processed_datasets["test"] = predict_dataset
- # endregion
-
- # region Metrics and Post-processing:
- def post_processing_function(examples, features, predictions, stage="eval"):
- # Post-processing: we match the start logits and end logits to answers in the original context.
- predictions = postprocess_qa_predictions(
- examples=examples,
- features=features,
- predictions=predictions,
- version_2_with_negative=data_args.version_2_with_negative,
- n_best_size=data_args.n_best_size,
- max_answer_length=data_args.max_answer_length,
- null_score_diff_threshold=data_args.null_score_diff_threshold,
- output_dir=training_args.output_dir,
- prefix=stage,
- )
- # Format the result to the format the metric expects.
- if data_args.version_2_with_negative:
- formatted_predictions = [
- {"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in predictions.items()
- ]
- else:
- formatted_predictions = [{"id": k, "prediction_text": v} for k, v in predictions.items()]
-
- references = [{"id": ex["id"], "answers": ex[answer_column_name]} for ex in examples]
- return EvalPrediction(predictions=formatted_predictions, label_ids=references)
-
- metric = evaluate.load("squad_v2" if data_args.version_2_with_negative else "squad")
-
- def compute_metrics(p: EvalPrediction):
- return metric.compute(predictions=p.predictions, references=p.label_ids)
-
- # endregion
-
- with training_args.strategy.scope():
- dataset_options = tf.data.Options()
- dataset_options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
- num_replicas = training_args.strategy.num_replicas_in_sync
-
- # region Load model and prepare datasets
- if checkpoint is None:
- model_path = model_args.model_name_or_path
- else:
- model_path = checkpoint
- model = TFAutoModelForQuestionAnswering.from_pretrained(
- model_path,
- config=config,
- cache_dir=model_args.cache_dir,
- revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None,
- )
- if training_args.do_train:
- training_dataset = model.prepare_tf_dataset(
- processed_datasets["train"],
- shuffle=True,
- batch_size=training_args.per_device_train_batch_size * num_replicas,
- tokenizer=tokenizer,
- )
-
- training_dataset = training_dataset.with_options(dataset_options)
-
- num_train_steps = len(training_dataset) * training_args.num_train_epochs
- if training_args.warmup_steps > 0:
- num_warmup_steps = training_args.warmup_steps
- elif training_args.warmup_ratio > 0:
- num_warmup_steps = int(num_train_steps * training_args.warmup_ratio)
- else:
- num_warmup_steps = 0
-
- optimizer, schedule = create_optimizer(
- init_lr=training_args.learning_rate,
-                num_train_steps=num_train_steps,
- num_warmup_steps=num_warmup_steps,
- adam_beta1=training_args.adam_beta1,
- adam_beta2=training_args.adam_beta2,
- adam_epsilon=training_args.adam_epsilon,
- weight_decay_rate=training_args.weight_decay,
- adam_global_clipnorm=training_args.max_grad_norm,
- )
-
- # no user-specified loss = will use the model internal loss
- model.compile(optimizer=optimizer, jit_compile=training_args.xla, metrics=["accuracy"])
-
- else:
- model.compile(optimizer=None, jit_compile=training_args.xla, metrics=["accuracy"])
- training_dataset = None
-
- if training_args.do_eval:
- eval_dataset = model.prepare_tf_dataset(
- processed_datasets["validation"],
- shuffle=False,
-                batch_size=training_args.per_device_eval_batch_size * num_replicas,
- tokenizer=tokenizer,
- )
- eval_dataset = eval_dataset.with_options(dataset_options)
- else:
- eval_dataset = None
-
- if training_args.do_predict:
- predict_dataset = model.prepare_tf_dataset(
- processed_datasets["test"],
- shuffle=False,
- batch_size=training_args.per_device_eval_batch_size * num_replicas,
- tokenizer=tokenizer,
- )
- predict_dataset = predict_dataset.with_options(dataset_options)
- else:
- predict_dataset = None
-
- # endregion
-
- # region Preparing push_to_hub and model card
- push_to_hub_model_id = training_args.push_to_hub_model_id
- model_name = model_args.model_name_or_path.split("/")[-1]
- if not push_to_hub_model_id:
- if data_args.dataset_name is not None:
- push_to_hub_model_id = f"{model_name}-finetuned-{data_args.dataset_name}"
- else:
- push_to_hub_model_id = f"{model_name}-finetuned-question-answering"
-
- model_card_kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "question-answering"}
- if data_args.dataset_name is not None:
- model_card_kwargs["dataset_tags"] = data_args.dataset_name
- if data_args.dataset_config_name is not None:
- model_card_kwargs["dataset_args"] = data_args.dataset_config_name
- model_card_kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}"
- else:
- model_card_kwargs["dataset"] = data_args.dataset_name
-
- if training_args.push_to_hub:
- callbacks = [
- PushToHubCallback(
- output_dir=training_args.output_dir,
- hub_model_id=push_to_hub_model_id,
- hub_token=training_args.push_to_hub_token,
- tokenizer=tokenizer,
- **model_card_kwargs,
- )
- ]
- else:
- callbacks = []
- # endregion
-
- # region Training and Evaluation
-
- if training_args.do_train:
-            # Note that the validation and test datasets have been processed differently from the
-            # training dataset in this example, and so they don't have the same label structure.
- # As such, we don't pass them directly to Keras, but instead get model predictions to evaluate
- # after training.
- model.fit(training_dataset, epochs=int(training_args.num_train_epochs), callbacks=callbacks)
-
- if training_args.do_eval:
- logger.info("*** Evaluation ***")
-
- # In this example, we compute advanced metrics at the end of training, but
- # if you'd like to compute metrics every epoch that are too complex to be written as
- # standard Keras metrics, you can use our KerasMetricCallback. See
- # https://huggingface.co/docs/transformers/main/en/main_classes/keras_callbacks
-
- eval_predictions = model.predict(eval_dataset)
- if isinstance(eval_predictions.start_logits, tf.RaggedTensor):
- # If predictions are RaggedTensor, we densify them. Since they are logits, padding with 0 is a bad idea!
- # The reason is that a logit of 0 can often end up as quite a high probability value, sometimes even
- # the highest probability in a sample. Instead, we use a large negative value, which ensures that the
- # padding positions are correctly masked.
- eval_start_logits = eval_predictions.start_logits.to_tensor(default_value=-1000).numpy()
- eval_end_logits = eval_predictions.end_logits.to_tensor(default_value=-1000).numpy()
- else:
- eval_start_logits = eval_predictions.start_logits
- eval_end_logits = eval_predictions.end_logits
-
- post_processed_eval = post_processing_function(
- datasets["validation"],
- processed_datasets["validation"],
- (eval_start_logits, eval_end_logits),
- )
- metrics = compute_metrics(post_processed_eval)
-            logger.info("Evaluation metrics:")
-            for metric, value in metrics.items():
-                logger.info(f"{metric}: {value:.3f}")
- if training_args.output_dir is not None:
- output_eval_file = os.path.join(training_args.output_dir, "all_results.json")
- with open(output_eval_file, "w") as writer:
- writer.write(json.dumps(metrics))
- # endregion
-
- # region Prediction
- if training_args.do_predict:
- logger.info("*** Predict ***")
-
- test_predictions = model.predict(predict_dataset)
- if isinstance(test_predictions.start_logits, tf.RaggedTensor):
- # If predictions are RaggedTensor, we densify them. Since they are logits, padding with 0 is a bad idea!
- # The reason is that a logit of 0 can often end up as quite a high probability value, sometimes even
- # the highest probability in a sample. Instead, we use a large negative value, which ensures that the
- # padding positions are correctly masked.
- test_start_logits = test_predictions.start_logits.to_tensor(default_value=-1000).numpy()
- test_end_logits = test_predictions.end_logits.to_tensor(default_value=-1000).numpy()
- else:
- test_start_logits = test_predictions.start_logits
- test_end_logits = test_predictions.end_logits
- post_processed_test = post_processing_function(
- datasets["test"],
- processed_datasets["test"],
- (test_start_logits, test_end_logits),
- )
- metrics = compute_metrics(post_processed_test)
-
-            logger.info("Test metrics:")
-            for metric, value in metrics.items():
-                logger.info(f"{metric}: {value:.3f}")
- # endregion
-
- if training_args.output_dir is not None and not training_args.push_to_hub:
- # If we're not pushing to hub, at least save a local copy when we're done
- model.save_pretrained(training_args.output_dir)
-
-
-if __name__ == "__main__":
- main()
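The span-labeling loop in `prepare_train_features` above is the subtle part of this deleted script: it walks the tokenizer's offset mapping to turn an answer's character span into token start/end positions. As a standalone sketch of the same idea (the checkpoint name and the example question/context below are arbitrary assumptions, not taken from the script), assuming the `transformers` library with a fast tokenizer is installed:

    # Map an answer's character span onto token indices via a fast tokenizer's offset mapping.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    question = "Where is the Eiffel Tower?"
    context = "The Eiffel Tower is located in Paris, France."
    answer = "Paris"
    start_char = context.index(answer)
    end_char = start_char + len(answer)

    enc = tokenizer(question, context, truncation="only_second", max_length=64, return_offsets_mapping=True)
    sequence_ids = enc.sequence_ids()  # None for special tokens, 0 for the question, 1 for the context

    start_token = end_token = None
    for idx, (offset, seq_id) in enumerate(zip(enc["offset_mapping"], sequence_ids)):
        if seq_id != 1:  # only consider context tokens
            continue
        if start_token is None and offset[0] <= start_char < offset[1]:
            start_token = idx
        if offset[0] < end_char <= offset[1]:
            end_token = idx
    print(start_token, end_token)  # token span covering "Paris"
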
diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/benchmark/benchmark.py b/spaces/chendl/compositional_test/transformers/src/transformers/benchmark/benchmark.py
deleted file mode 100644
index 3c5c877a454e63e9472ad80ea75d155be346a887..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/src/transformers/benchmark/benchmark.py
+++ /dev/null
@@ -1,271 +0,0 @@
-# coding=utf-8
-# Copyright 2018 The HuggingFace Inc. team.
-# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
- Benchmarking the library on inference and training in PyTorch.
-"""
-
-
-import timeit
-from typing import Callable, Optional, Tuple
-
-from ..configuration_utils import PretrainedConfig
-from ..models.auto.modeling_auto import MODEL_MAPPING, MODEL_WITH_LM_HEAD_MAPPING
-from ..utils import is_py3nvml_available, is_torch_available, logging
-from .benchmark_utils import (
- Benchmark,
- Memory,
- MemorySummary,
- measure_peak_memory_cpu,
- start_memory_tracing,
- stop_memory_tracing,
-)
-
-
-if is_torch_available():
- import torch
-
- from .benchmark_args import PyTorchBenchmarkArguments
-
-
-if is_py3nvml_available():
- import py3nvml.py3nvml as nvml
-
-
-logger = logging.get_logger(__name__)
-
-
-class PyTorchBenchmark(Benchmark):
- args: PyTorchBenchmarkArguments
- configs: PretrainedConfig
- framework: str = "PyTorch"
-
- @property
- def framework_version(self):
- return torch.__version__
-
- def _inference_speed(self, model_name: str, batch_size: int, sequence_length: int) -> float:
- _inference = self._prepare_inference_func(model_name, batch_size, sequence_length)
- return self._measure_speed(_inference)
-
- def _inference_memory(
- self, model_name: str, batch_size: int, sequence_length: int
-    ) -> Tuple[Memory, Optional[MemorySummary]]:
- _inference = self._prepare_inference_func(model_name, batch_size, sequence_length)
- return self._measure_memory(_inference)
-
- def _train_speed(self, model_name: str, batch_size: int, sequence_length: int) -> float:
- _train = self._prepare_train_func(model_name, batch_size, sequence_length)
- return self._measure_speed(_train)
-
- def _train_memory(
- self, model_name: str, batch_size: int, sequence_length: int
-    ) -> Tuple[Memory, Optional[MemorySummary]]:
- _train = self._prepare_train_func(model_name, batch_size, sequence_length)
- return self._measure_memory(_train)
-
- def _prepare_inference_func(self, model_name: str, batch_size: int, sequence_length: int) -> Callable[[], None]:
- config = self.config_dict[model_name]
-
- if self.args.torchscript:
- config.torchscript = True
-
- has_model_class_in_config = (
- hasattr(config, "architectures")
- and isinstance(config.architectures, list)
- and len(config.architectures) > 0
- )
- if not self.args.only_pretrain_model and has_model_class_in_config:
- try:
- model_class = config.architectures[0]
- transformers_module = __import__("transformers", fromlist=[model_class])
- model_cls = getattr(transformers_module, model_class)
- model = model_cls(config)
- except ImportError:
- raise ImportError(
- f"{model_class} does not exist. If you just want to test the pretrained model, you might want to"
- " set `--only_pretrain_model` or `args.only_pretrain_model=True`."
- )
- else:
- model = MODEL_MAPPING[config.__class__](config)
-
- model.eval()
- model.to(self.args.device)
-
- # encoder-decoder has vocab size saved differently
- vocab_size = config.vocab_size if hasattr(config, "vocab_size") else config.encoder.vocab_size
- input_ids = torch.randint(vocab_size, (batch_size, sequence_length), dtype=torch.long, device=self.args.device)
-
- if self.args.fp16:
-            logger.info("Running inference in Mixed Precision...")
- if not self.args.is_gpu:
- raise ValueError("Mixed precision is possible only for GPU.")
- # amp seems to have memory leaks so that memory usage
- # is measured using .half() for now https://github.com/NVIDIA/apex/issues/439
- model.half()
-
- if self.args.torchscript:
- with torch.no_grad():
- inference_model = torch.jit.trace(model, input_ids)
- else:
- inference_model = model
-
- def encoder_decoder_forward():
- with torch.no_grad():
- outputs = inference_model(input_ids, decoder_input_ids=input_ids)
- return outputs
-
- def encoder_forward():
- with torch.no_grad():
- outputs = inference_model(input_ids)
- return outputs
-
- _forward = encoder_decoder_forward if config.is_encoder_decoder else encoder_forward
- return _forward
-
- def _prepare_train_func(self, model_name: str, batch_size: int, sequence_length: int) -> Callable[[], None]:
- config = self.config_dict[model_name]
-
- has_model_class_in_config = (
- hasattr(config, "architectures")
- and isinstance(config.architectures, list)
- and len(config.architectures) > 0
- )
- if not self.args.only_pretrain_model and has_model_class_in_config:
- try:
- model_class = config.architectures[0]
- transformers_module = __import__("transformers", fromlist=[model_class])
- model_cls = getattr(transformers_module, model_class)
- model = model_cls(config)
- except ImportError:
- raise ImportError(
- f"{model_class} does not exist. If you just want to test the pretrained model, you might want to"
- " set `--only_pretrain_model` or `args.only_pretrain_model=True`."
- )
- else:
- model = MODEL_WITH_LM_HEAD_MAPPING[config.__class__](config)
-
- if self.args.torchscript:
- raise NotImplementedError("Training for torchscript is currently not implemented")
- else:
- train_model = model
-
- model.train()
- model.to(self.args.device)
-
- # encoder-decoder has vocab size saved differently
- vocab_size = config.vocab_size if hasattr(config, "vocab_size") else config.encoder.vocab_size
- input_ids = torch.randint(vocab_size, (batch_size, sequence_length), dtype=torch.long, device=self.args.device)
-
- if self.args.fp16:
- logger.info("Running training in Mixed Precision...")
- if not self.args.is_gpu:
- raise ValueError("Mixed precision is possible only for GPU.")
-
- # amp seems to have memory leaks so that memory usage
- # is measured using .half() for now https://github.com/NVIDIA/apex/issues/439
- model.half()
-
-        def compute_loss_and_backprop_encoder():
- loss = train_model(input_ids, labels=input_ids)[0]
- loss.backward()
- return loss
-
-        def compute_loss_and_backprop_encoder_decoder():
- loss = train_model(input_ids, decoder_input_ids=input_ids, labels=input_ids)[0]
- loss.backward()
- return loss
-
-        _train = (
-            compute_loss_and_backprop_encoder_decoder
-            if config.is_encoder_decoder
-            else compute_loss_and_backprop_encoder
-        )
- return _train
-
- def _measure_speed(self, func) -> float:
- try:
- if self.args.is_tpu or self.args.torchscript:
-                # run the model an additional 5 times to stabilize compilation for tpu and torchscript
- logger.info("Do inference on TPU or torchscript. Running model 5 times to stabilize compilation")
- timeit.repeat(
- func,
- repeat=1,
- number=5,
- )
-
- # as written in https://docs.python.org/2/library/timeit.html#timeit.Timer.repeat, min should be taken rather than the average
- runtimes = timeit.repeat(
- func,
- repeat=self.args.repeat,
- number=10,
- )
-
- if self.args.is_tpu and self.args.torch_xla_tpu_print_metrics:
- import torch_xla.debug.metrics as met
-
- self.print_fn(met.metrics_report())
-
- return min(runtimes) / 10.0
- except RuntimeError as e:
- self.print_fn(f"Doesn't fit on GPU. {e}")
- return "N/A"
-
-    def _measure_memory(self, func: Callable[[], None]) -> Tuple[Memory, Optional[MemorySummary]]:
- try:
- if self.args.trace_memory_line_by_line:
- trace = start_memory_tracing("transformers")
-
- if self.args.is_tpu:
- # tpu
- raise NotImplementedError(
- "Memory Benchmarking is currently not implemented for TPU. Please disable memory benchmarking with"
- " `--no-memory` or `args.memory=False`"
- )
- elif self.args.is_gpu:
- if not is_py3nvml_available():
- logger.warning(
- "py3nvml not installed, we won't log GPU memory usage. "
- "Install py3nvml (pip install py3nvml) to log information about GPU."
- )
- memory = "N/A"
- else:
- logger.info(
- "Measuring total GPU usage on GPU device. Make sure to not have additional processes running"
- " on the same GPU."
- )
- # init nvml
- nvml.nvmlInit()
- func()
- handle = nvml.nvmlDeviceGetHandleByIndex(self.args.device_idx)
- meminfo = nvml.nvmlDeviceGetMemoryInfo(handle)
- max_bytes_in_use = meminfo.used
- memory = Memory(max_bytes_in_use)
- # shutdown nvml
- nvml.nvmlShutdown()
- else:
- # cpu
- memory_bytes = measure_peak_memory_cpu(func)
- memory = Memory(memory_bytes) if isinstance(memory_bytes, int) else memory_bytes
-
- if self.args.trace_memory_line_by_line:
- summary = stop_memory_tracing(trace)
- else:
- summary = None
-
- return memory, summary
- except RuntimeError as e:
- self.print_fn(f"Doesn't fit on GPU. {e}")
- return "N/A", None
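`_measure_speed` above times the prepared callable with `timeit.repeat` and reports the minimum runtime rather than the mean, as the timeit documentation recommends. A minimal self-contained sketch of that pattern (the dummy workload below is an arbitrary stand-in for a model forward pass):

    import timeit

    def dummy_inference():
        # Stand-in workload; in the benchmark this would be a model forward pass.
        return sum(i * i for i in range(10_000))

    # Run the callable 10 times per measurement, repeat the measurement 3 times,
    # and report the best (minimum) per-call time, which is less sensitive to
    # background noise than the average.
    runtimes = timeit.repeat(dummy_inference, repeat=3, number=10)
    print(f"best per-call time: {min(runtimes) / 10.0:.6f}s")
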
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/PalmImagePlugin.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/PalmImagePlugin.py
deleted file mode 100644
index a88a907917dce5dace64fd1e38df86246c8e0305..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/PalmImagePlugin.py
+++ /dev/null
@@ -1,225 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-
-##
-# Image plugin for Palm pixmap images (output only).
-##
-
-from . import Image, ImageFile
-from ._binary import o8
-from ._binary import o16be as o16b
-
-# fmt: off
-_Palm8BitColormapValues = (
- (255, 255, 255), (255, 204, 255), (255, 153, 255), (255, 102, 255),
- (255, 51, 255), (255, 0, 255), (255, 255, 204), (255, 204, 204),
- (255, 153, 204), (255, 102, 204), (255, 51, 204), (255, 0, 204),
- (255, 255, 153), (255, 204, 153), (255, 153, 153), (255, 102, 153),
- (255, 51, 153), (255, 0, 153), (204, 255, 255), (204, 204, 255),
- (204, 153, 255), (204, 102, 255), (204, 51, 255), (204, 0, 255),
- (204, 255, 204), (204, 204, 204), (204, 153, 204), (204, 102, 204),
- (204, 51, 204), (204, 0, 204), (204, 255, 153), (204, 204, 153),
- (204, 153, 153), (204, 102, 153), (204, 51, 153), (204, 0, 153),
- (153, 255, 255), (153, 204, 255), (153, 153, 255), (153, 102, 255),
- (153, 51, 255), (153, 0, 255), (153, 255, 204), (153, 204, 204),
- (153, 153, 204), (153, 102, 204), (153, 51, 204), (153, 0, 204),
- (153, 255, 153), (153, 204, 153), (153, 153, 153), (153, 102, 153),
- (153, 51, 153), (153, 0, 153), (102, 255, 255), (102, 204, 255),
- (102, 153, 255), (102, 102, 255), (102, 51, 255), (102, 0, 255),
- (102, 255, 204), (102, 204, 204), (102, 153, 204), (102, 102, 204),
- (102, 51, 204), (102, 0, 204), (102, 255, 153), (102, 204, 153),
- (102, 153, 153), (102, 102, 153), (102, 51, 153), (102, 0, 153),
- (51, 255, 255), (51, 204, 255), (51, 153, 255), (51, 102, 255),
- (51, 51, 255), (51, 0, 255), (51, 255, 204), (51, 204, 204),
- (51, 153, 204), (51, 102, 204), (51, 51, 204), (51, 0, 204),
- (51, 255, 153), (51, 204, 153), (51, 153, 153), (51, 102, 153),
- (51, 51, 153), (51, 0, 153), (0, 255, 255), (0, 204, 255),
- (0, 153, 255), (0, 102, 255), (0, 51, 255), (0, 0, 255),
- (0, 255, 204), (0, 204, 204), (0, 153, 204), (0, 102, 204),
- (0, 51, 204), (0, 0, 204), (0, 255, 153), (0, 204, 153),
- (0, 153, 153), (0, 102, 153), (0, 51, 153), (0, 0, 153),
- (255, 255, 102), (255, 204, 102), (255, 153, 102), (255, 102, 102),
- (255, 51, 102), (255, 0, 102), (255, 255, 51), (255, 204, 51),
- (255, 153, 51), (255, 102, 51), (255, 51, 51), (255, 0, 51),
- (255, 255, 0), (255, 204, 0), (255, 153, 0), (255, 102, 0),
- (255, 51, 0), (255, 0, 0), (204, 255, 102), (204, 204, 102),
- (204, 153, 102), (204, 102, 102), (204, 51, 102), (204, 0, 102),
- (204, 255, 51), (204, 204, 51), (204, 153, 51), (204, 102, 51),
- (204, 51, 51), (204, 0, 51), (204, 255, 0), (204, 204, 0),
- (204, 153, 0), (204, 102, 0), (204, 51, 0), (204, 0, 0),
- (153, 255, 102), (153, 204, 102), (153, 153, 102), (153, 102, 102),
- (153, 51, 102), (153, 0, 102), (153, 255, 51), (153, 204, 51),
- (153, 153, 51), (153, 102, 51), (153, 51, 51), (153, 0, 51),
- (153, 255, 0), (153, 204, 0), (153, 153, 0), (153, 102, 0),
- (153, 51, 0), (153, 0, 0), (102, 255, 102), (102, 204, 102),
- (102, 153, 102), (102, 102, 102), (102, 51, 102), (102, 0, 102),
- (102, 255, 51), (102, 204, 51), (102, 153, 51), (102, 102, 51),
- (102, 51, 51), (102, 0, 51), (102, 255, 0), (102, 204, 0),
- (102, 153, 0), (102, 102, 0), (102, 51, 0), (102, 0, 0),
- (51, 255, 102), (51, 204, 102), (51, 153, 102), (51, 102, 102),
- (51, 51, 102), (51, 0, 102), (51, 255, 51), (51, 204, 51),
- (51, 153, 51), (51, 102, 51), (51, 51, 51), (51, 0, 51),
- (51, 255, 0), (51, 204, 0), (51, 153, 0), (51, 102, 0),
- (51, 51, 0), (51, 0, 0), (0, 255, 102), (0, 204, 102),
- (0, 153, 102), (0, 102, 102), (0, 51, 102), (0, 0, 102),
- (0, 255, 51), (0, 204, 51), (0, 153, 51), (0, 102, 51),
- (0, 51, 51), (0, 0, 51), (0, 255, 0), (0, 204, 0),
- (0, 153, 0), (0, 102, 0), (0, 51, 0), (17, 17, 17),
- (34, 34, 34), (68, 68, 68), (85, 85, 85), (119, 119, 119),
- (136, 136, 136), (170, 170, 170), (187, 187, 187), (221, 221, 221),
- (238, 238, 238), (192, 192, 192), (128, 0, 0), (128, 0, 128),
- (0, 128, 0), (0, 128, 128), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
- (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0))
-# fmt: on
-
-
-# so build a prototype image to be used for palette resampling
-def build_prototype_image():
- image = Image.new("L", (1, len(_Palm8BitColormapValues)))
- image.putdata(list(range(len(_Palm8BitColormapValues))))
- palettedata = ()
- for colormapValue in _Palm8BitColormapValues:
- palettedata += colormapValue
- palettedata += (0, 0, 0) * (256 - len(_Palm8BitColormapValues))
- image.putpalette(palettedata)
- return image
-
-
-Palm8BitColormapImage = build_prototype_image()
-
-# OK, we now have in Palm8BitColormapImage,
-# a "P"-mode image with the right palette
-#
-# --------------------------------------------------------------------
-
-_FLAGS = {"custom-colormap": 0x4000, "is-compressed": 0x8000, "has-transparent": 0x2000}
-
-_COMPRESSION_TYPES = {"none": 0xFF, "rle": 0x01, "scanline": 0x00}
-
-
-#
-# --------------------------------------------------------------------
-
-##
-# (Internal) Image save plugin for the Palm format.
-
-
-def _save(im, fp, filename):
- if im.mode == "P":
- # we assume this is a color Palm image with the standard colormap,
- # unless the "info" dict has a "custom-colormap" field
-
- rawmode = "P"
- bpp = 8
- version = 1
-
- elif im.mode == "L":
- if im.encoderinfo.get("bpp") in (1, 2, 4):
- # this is 8-bit grayscale, so we shift it to get the high-order bits,
- # and invert it because
- # Palm does greyscale from white (0) to black (1)
- bpp = im.encoderinfo["bpp"]
- im = im.point(
- lambda x, shift=8 - bpp, maxval=(1 << bpp) - 1: maxval - (x >> shift)
- )
- elif im.info.get("bpp") in (1, 2, 4):
- # here we assume that even though the inherent mode is 8-bit grayscale,
- # only the lower bpp bits are significant.
- # We invert them to match the Palm.
- bpp = im.info["bpp"]
- im = im.point(lambda x, maxval=(1 << bpp) - 1: maxval - (x & maxval))
- else:
- msg = f"cannot write mode {im.mode} as Palm"
- raise OSError(msg)
-
- # we ignore the palette here
- im.mode = "P"
- rawmode = "P;" + str(bpp)
- version = 1
-
- elif im.mode == "1":
- # monochrome -- write it inverted, as is the Palm standard
- rawmode = "1;I"
- bpp = 1
- version = 0
-
- else:
- msg = f"cannot write mode {im.mode} as Palm"
- raise OSError(msg)
-
- #
- # make sure image data is available
- im.load()
-
- # write header
-
- cols = im.size[0]
- rows = im.size[1]
-
- rowbytes = int((cols + (16 // bpp - 1)) / (16 // bpp)) * 2
- transparent_index = 0
- compression_type = _COMPRESSION_TYPES["none"]
-
- flags = 0
- if im.mode == "P" and "custom-colormap" in im.info:
- flags = flags & _FLAGS["custom-colormap"]
- colormapsize = 4 * 256 + 2
- colormapmode = im.palette.mode
- colormap = im.getdata().getpalette()
- else:
- colormapsize = 0
-
- if "offset" in im.info:
- offset = (rowbytes * rows + 16 + 3 + colormapsize) // 4
- else:
- offset = 0
-
- fp.write(o16b(cols) + o16b(rows) + o16b(rowbytes) + o16b(flags))
- fp.write(o8(bpp))
- fp.write(o8(version))
- fp.write(o16b(offset))
- fp.write(o8(transparent_index))
- fp.write(o8(compression_type))
- fp.write(o16b(0)) # reserved by Palm
-
- # now write colormap if necessary
-
- if colormapsize > 0:
- fp.write(o16b(256))
- for i in range(256):
- fp.write(o8(i))
- if colormapmode == "RGB":
- fp.write(
- o8(colormap[3 * i])
- + o8(colormap[3 * i + 1])
- + o8(colormap[3 * i + 2])
- )
- elif colormapmode == "RGBA":
- fp.write(
- o8(colormap[4 * i])
- + o8(colormap[4 * i + 1])
- + o8(colormap[4 * i + 2])
- )
-
- # now convert data to raw form
- ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, rowbytes, 1))])
-
- if hasattr(fp, "flush"):
- fp.flush()
-
-
-#
-# --------------------------------------------------------------------
-
-Image.register_save("Palm", _save)
-
-Image.register_extension("Palm", ".palm")
-
-Image.register_mime("Palm", "image/palm")
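As a quick usage sketch, assuming Pillow is installed and ships this (output-only) plugin, saving a 1-bit image exercises the monochrome branch above (rawmode "1;I", version 0):

    from PIL import Image

    im = Image.new("1", (32, 32), 1)  # small all-white bilevel image
    im.save("example.palm")           # the ".palm" extension selects the Palm save handler
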
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/buffer.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/buffer.py
deleted file mode 100644
index b50b9bb678226947a5dbc57b648bb7e99858c2a1..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/buffer.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import sys
-import array
-from typing import Any, Iterable
-
-from clickhouse_connect.driver.exceptions import StreamCompleteException
-from clickhouse_connect.driver.types import ByteSource
-
-must_swap = sys.byteorder == 'big'
-
-
-class ResponseBuffer(ByteSource):
- slots = 'slice_sz', 'buf_loc', 'end', 'gen', 'buffer', 'slice'
-
- def __init__(self, source):
- self.slice_sz = 4096
- self.buf_loc = 0
- self.buf_sz = 0
- self.source = source
- self.gen = source.gen
- self.buffer = bytes()
-
- def read_bytes(self, sz: int):
- if self.buf_loc + sz <= self.buf_sz:
- self.buf_loc += sz
- return self.buffer[self.buf_loc - sz: self.buf_loc]
- # Create a temporary buffer that bridges two or more source chunks
- bridge = bytearray(self.buffer[self.buf_loc: self.buf_sz])
- self.buf_loc = 0
- self.buf_sz = 0
- while len(bridge) < sz:
- chunk = next(self.gen, None)
- if not chunk:
- raise StreamCompleteException
- x = len(chunk)
- if len(bridge) + x <= sz:
- bridge.extend(chunk)
- else:
- tail = sz - len(bridge)
- bridge.extend(chunk[:tail])
- self.buffer = chunk
- self.buf_sz = x
- self.buf_loc = tail
- return bridge
-
- def read_byte(self) -> int:
- if self.buf_loc < self.buf_sz:
- self.buf_loc += 1
- return self.buffer[self.buf_loc - 1]
- self.buf_sz = 0
- self.buf_loc = 0
- chunk = next(self.gen, None)
- if not chunk:
- raise StreamCompleteException
- x = len(chunk)
- if x > 1:
- self.buffer = chunk
- self.buf_loc = 1
- self.buf_sz = x
- return chunk[0]
-
- def read_leb128(self) -> int:
- sz = 0
- shift = 0
- while True:
- b = self.read_byte()
- sz += ((b & 0x7f) << shift)
- if (b & 0x80) == 0:
- return sz
- shift += 7
-
- def read_leb128_str(self) -> str:
- sz = self.read_leb128()
- return self.read_bytes(sz).decode()
-
- def read_uint64(self) -> int:
- return int.from_bytes(self.read_bytes(8), 'little', signed=False)
-
- def read_str_col(self,
- num_rows: int,
- encoding: str,
- nullable: bool = False,
- null_obj: Any = None) -> Iterable[str]:
- column = []
- app = column.append
- null_map = self.read_bytes(num_rows) if nullable else None
- for ix in range(num_rows):
- sz = 0
- shift = 0
- while True:
- b = self.read_byte()
- sz += ((b & 0x7f) << shift)
- if (b & 0x80) == 0:
- break
- shift += 7
- x = self.read_bytes(sz)
- if null_map and null_map[ix]:
- app(null_obj)
- elif encoding:
- try:
- app(x.decode(encoding))
- except UnicodeDecodeError:
- app(x.hex())
- else:
- app(x)
- return column
-
- def read_bytes_col(self, sz: int, num_rows: int) -> Iterable[bytes]:
- source = self.read_bytes(sz * num_rows)
- return [bytes(source[x:x+sz]) for x in range(0, sz * num_rows, sz)]
-
- def read_fixed_str_col(self, sz: int, num_rows: int, encoding: str) -> Iterable[str]:
- source = self.read_bytes(sz * num_rows)
- column = []
- app = column.append
- for ix in range(0, sz * num_rows, sz):
- try:
- app(str(source[ix: ix + sz], encoding).rstrip('\x00'))
- except UnicodeDecodeError:
- app(source[ix: ix + sz].hex())
- return column
-
- def read_array(self, array_type: str, num_rows: int) -> Iterable[Any]:
- column = array.array(array_type)
- sz = column.itemsize * num_rows
- b = self.read_bytes(sz)
- column.frombytes(b)
- if must_swap:
- column.byteswap()
- return column
-
- @property
- def last_message(self):
- if len(self.buffer) == 0:
- return None
- return self.buffer.decode()
-
- def close(self):
- if self.source:
- self.source.close()
- self.source = None
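`read_leb128` above decodes the LEB128 variable-length integers used by the protocol: each byte carries 7 data bits, and the high bit signals that another byte follows. A minimal self-contained sketch of the same rule, independent of the deleted module, round-tripping an arbitrary value:

    def encode_leb128(value: int) -> bytes:
        out = bytearray()
        while True:
            b = value & 0x7F
            value >>= 7
            if value:
                out.append(b | 0x80)  # high bit set: more bytes follow
            else:
                out.append(b)
                return bytes(out)

    def decode_leb128(data: bytes) -> int:
        result = 0
        shift = 0
        for b in data:
            result |= (b & 0x7F) << shift
            if (b & 0x80) == 0:
                break
            shift += 7
        return result

    assert decode_leb128(encode_leb128(300)) == 300
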
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/loggingTools.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/loggingTools.py
deleted file mode 100644
index 78704f5a9aa4811db98aa3132ed3f12ee0853ee2..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/loggingTools.py
+++ /dev/null
@@ -1,543 +0,0 @@
-import sys
-import logging
-import timeit
-from functools import wraps
-from collections.abc import Mapping, Callable
-import warnings
-from logging import PercentStyle
-
-
-# default logging level used by Timer class
-TIME_LEVEL = logging.DEBUG
-
-# per-level format strings used by the default formatter
-# (the level name is not printed for INFO and DEBUG messages)
-DEFAULT_FORMATS = {
- "*": "%(levelname)s: %(message)s",
- "INFO": "%(message)s",
- "DEBUG": "%(message)s",
-}
-
-
-class LevelFormatter(logging.Formatter):
- """Log formatter with level-specific formatting.
-
-    Formatter class which optionally takes a dict of logging levels to
-    format strings, allowing you to customise the appearance of log records
-    for specific levels.
-
-
- Attributes:
- fmt: A dictionary mapping logging levels to format strings.
- The ``*`` key identifies the default format string.
- datefmt: As per py:class:`logging.Formatter`
- style: As per py:class:`logging.Formatter`
-
- >>> import sys
- >>> handler = logging.StreamHandler(sys.stdout)
- >>> formatter = LevelFormatter(
- ... fmt={
- ... '*': '[%(levelname)s] %(message)s',
- ... 'DEBUG': '%(name)s [%(levelname)s] %(message)s',
- ... 'INFO': '%(message)s',
- ... })
- >>> handler.setFormatter(formatter)
- >>> log = logging.getLogger('test')
- >>> log.setLevel(logging.DEBUG)
- >>> log.addHandler(handler)
- >>> log.debug('this uses a custom format string')
- test [DEBUG] this uses a custom format string
- >>> log.info('this also uses a custom format string')
- this also uses a custom format string
- >>> log.warning("this one uses the default format string")
- [WARNING] this one uses the default format string
- """
-
- def __init__(self, fmt=None, datefmt=None, style="%"):
- if style != "%":
- raise ValueError(
- "only '%' percent style is supported in both python 2 and 3"
- )
- if fmt is None:
- fmt = DEFAULT_FORMATS
- if isinstance(fmt, str):
- default_format = fmt
- custom_formats = {}
- elif isinstance(fmt, Mapping):
- custom_formats = dict(fmt)
- default_format = custom_formats.pop("*", None)
- else:
- raise TypeError("fmt must be a str or a dict of str: %r" % fmt)
- super(LevelFormatter, self).__init__(default_format, datefmt)
- self.default_format = self._fmt
- self.custom_formats = {}
- for level, fmt in custom_formats.items():
- level = logging._checkLevel(level)
- self.custom_formats[level] = fmt
-
- def format(self, record):
- if self.custom_formats:
- fmt = self.custom_formats.get(record.levelno, self.default_format)
- if self._fmt != fmt:
- self._fmt = fmt
- # for python >= 3.2, _style needs to be set if _fmt changes
- if PercentStyle:
- self._style = PercentStyle(fmt)
- return super(LevelFormatter, self).format(record)
-
-
-def configLogger(**kwargs):
-    """A more sophisticated logging system configuration manager.
-
- This is more or less the same as :py:func:`logging.basicConfig`,
- with some additional options and defaults.
-
- The default behaviour is to create a ``StreamHandler`` which writes to
- sys.stderr, set a formatter using the ``DEFAULT_FORMATS`` strings, and add
- the handler to the top-level library logger ("fontTools").
-
- A number of optional keyword arguments may be specified, which can alter
- the default behaviour.
-
- Args:
-
- logger: Specifies the logger name or a Logger instance to be
- configured. (Defaults to "fontTools" logger). Unlike ``basicConfig``,
- this function can be called multiple times to reconfigure a logger.
- If the logger or any of its children already exists before the call is
- made, they will be reset before the new configuration is applied.
- filename: Specifies that a ``FileHandler`` be created, using the
- specified filename, rather than a ``StreamHandler``.
- filemode: Specifies the mode to open the file, if filename is
- specified. (If filemode is unspecified, it defaults to ``a``).
- format: Use the specified format string for the handler. This
- argument also accepts a dictionary of format strings keyed by
- level name, to allow customising the records appearance for
- specific levels. The special ``'*'`` key is for 'any other' level.
- datefmt: Use the specified date/time format.
- level: Set the logger level to the specified level.
- stream: Use the specified stream to initialize the StreamHandler. Note
- that this argument is incompatible with ``filename`` - if both
- are present, ``stream`` is ignored.
- handlers: If specified, this should be an iterable of already created
- handlers, which will be added to the logger. Any handler in the
- list which does not have a formatter assigned will be assigned the
- formatter created in this function.
- filters: If specified, this should be an iterable of already created
- filters. If the ``handlers`` do not already have filters assigned,
- these filters will be added to them.
- propagate: All loggers have a ``propagate`` attribute which determines
- whether to continue searching for handlers up the logging hierarchy.
- If not provided, the "propagate" attribute will be set to ``False``.
- """
- # using kwargs to enforce keyword-only arguments in py2.
- handlers = kwargs.pop("handlers", None)
- if handlers is None:
- if "stream" in kwargs and "filename" in kwargs:
- raise ValueError(
- "'stream' and 'filename' should not be " "specified together"
- )
- else:
- if "stream" in kwargs or "filename" in kwargs:
- raise ValueError(
- "'stream' or 'filename' should not be "
- "specified together with 'handlers'"
- )
- if handlers is None:
- filename = kwargs.pop("filename", None)
- mode = kwargs.pop("filemode", "a")
- if filename:
- h = logging.FileHandler(filename, mode)
- else:
- stream = kwargs.pop("stream", None)
- h = logging.StreamHandler(stream)
- handlers = [h]
- # By default, the top-level library logger is configured.
- logger = kwargs.pop("logger", "fontTools")
- if not logger or isinstance(logger, str):
- # empty "" or None means the 'root' logger
- logger = logging.getLogger(logger)
- # before (re)configuring, reset named logger and its children (if exist)
- _resetExistingLoggers(parent=logger.name)
- # use DEFAULT_FORMATS if 'format' is None
- fs = kwargs.pop("format", None)
- dfs = kwargs.pop("datefmt", None)
- # XXX: '%' is the only format style supported on both py2 and 3
- style = kwargs.pop("style", "%")
- fmt = LevelFormatter(fs, dfs, style)
- filters = kwargs.pop("filters", [])
- for h in handlers:
- if h.formatter is None:
- h.setFormatter(fmt)
- if not h.filters:
- for f in filters:
- h.addFilter(f)
- logger.addHandler(h)
- if logger.name != "root":
- # stop searching up the hierarchy for handlers
- logger.propagate = kwargs.pop("propagate", False)
- # set a custom severity level
- level = kwargs.pop("level", None)
- if level is not None:
- logger.setLevel(level)
- if kwargs:
- keys = ", ".join(kwargs.keys())
- raise ValueError("Unrecognised argument(s): %s" % keys)
-
-
-def _resetExistingLoggers(parent="root"):
- """Reset the logger named 'parent' and all its children to their initial
- state, if they already exist in the current configuration.
- """
- root = logging.root
- # get sorted list of all existing loggers
- existing = sorted(root.manager.loggerDict.keys())
- if parent == "root":
- # all the existing loggers are children of 'root'
- loggers_to_reset = [parent] + existing
- elif parent not in existing:
- # nothing to do
- return
- elif parent in existing:
- loggers_to_reset = [parent]
- # collect children, starting with the entry after parent name
- i = existing.index(parent) + 1
- prefixed = parent + "."
- pflen = len(prefixed)
- num_existing = len(existing)
- while i < num_existing:
- if existing[i][:pflen] == prefixed:
- loggers_to_reset.append(existing[i])
- i += 1
- for name in loggers_to_reset:
- if name == "root":
- root.setLevel(logging.WARNING)
- for h in root.handlers[:]:
- root.removeHandler(h)
- for f in root.filters[:]:
-                root.removeFilter(f)
- root.disabled = False
- else:
- logger = root.manager.loggerDict[name]
- logger.level = logging.NOTSET
- logger.handlers = []
- logger.filters = []
- logger.propagate = True
- logger.disabled = False
-
-
-class Timer(object):
- """Keeps track of overall time and split/lap times.
-
- >>> import time
- >>> timer = Timer()
- >>> time.sleep(0.01)
- >>> print("First lap:", timer.split())
- First lap: ...
- >>> time.sleep(0.02)
- >>> print("Second lap:", timer.split())
- Second lap: ...
- >>> print("Overall time:", timer.time())
- Overall time: ...
-
- Can be used as a context manager inside with-statements.
-
- >>> with Timer() as t:
- ... time.sleep(0.01)
- >>> print("%0.3f seconds" % t.elapsed)
- 0... seconds
-
- If initialised with a logger, it can log the elapsed time automatically
- upon exiting the with-statement.
-
- >>> import logging
- >>> log = logging.getLogger("my-fancy-timer-logger")
- >>> configLogger(logger=log, level="DEBUG", format="%(message)s", stream=sys.stdout)
- >>> with Timer(log, 'do something'):
- ... time.sleep(0.01)
- Took ... to do something
-
- The same Timer instance, holding a reference to a logger, can be reused
- in multiple with-statements, optionally with different messages or levels.
-
- >>> timer = Timer(log)
- >>> with timer():
- ... time.sleep(0.01)
- elapsed time: ...s
- >>> with timer('redo it', level=logging.INFO):
- ... time.sleep(0.02)
- Took ... to redo it
-
- It can also be used as a function decorator to log the time elapsed to run
- the decorated function.
-
- >>> @timer()
- ... def test1():
- ... time.sleep(0.01)
- >>> @timer('run test 2', level=logging.INFO)
- ... def test2():
- ... time.sleep(0.02)
- >>> test1()
- Took ... to run 'test1'
- >>> test2()
- Took ... to run test 2
- """
-
-    # timeit.default_timer chooses the most accurate clock for each platform
- _time = timeit.default_timer
- default_msg = "elapsed time: %(time).3fs"
- default_format = "Took %(time).3fs to %(msg)s"
-
- def __init__(self, logger=None, msg=None, level=None, start=None):
- self.reset(start)
- if logger is None:
- for arg in ("msg", "level"):
- if locals().get(arg) is not None:
- raise ValueError("'%s' can't be specified without a 'logger'" % arg)
- self.logger = logger
- self.level = level if level is not None else TIME_LEVEL
- self.msg = msg
-
- def reset(self, start=None):
- """Reset timer to 'start_time' or the current time."""
- if start is None:
- self.start = self._time()
- else:
- self.start = start
- self.last = self.start
- self.elapsed = 0.0
-
- def time(self):
- """Return the overall time (in seconds) since the timer started."""
- return self._time() - self.start
-
- def split(self):
- """Split and return the lap time (in seconds) in between splits."""
- current = self._time()
- self.elapsed = current - self.last
- self.last = current
- return self.elapsed
-
- def formatTime(self, msg, time):
- """Format 'time' value in 'msg' and return formatted string.
- If 'msg' contains a '%(time)' format string, try to use that.
- Otherwise, use the predefined 'default_format'.
- If 'msg' is empty or None, fall back to 'default_msg'.
- """
- if not msg:
- msg = self.default_msg
- if msg.find("%(time)") < 0:
- msg = self.default_format % {"msg": msg, "time": time}
- else:
- try:
- msg = msg % {"time": time}
- except (KeyError, ValueError):
- pass # skip if the format string is malformed
- return msg
-
- def __enter__(self):
- """Start a new lap"""
- self.last = self._time()
- self.elapsed = 0.0
- return self
-
- def __exit__(self, exc_type, exc_value, traceback):
- """End the current lap. If timer has a logger, log the time elapsed,
- using the format string in self.msg (or the default one).
- """
- time = self.split()
- if self.logger is None or exc_type:
- # if there's no logger attached, or if any exception occurred in
- # the with-statement, exit without logging the time
- return
- message = self.formatTime(self.msg, time)
- # Allow log handlers to see the individual parts to facilitate things
- # like a server accumulating aggregate stats.
- msg_parts = {"msg": self.msg, "time": time}
- self.logger.log(self.level, message, msg_parts)
-
- def __call__(self, func_or_msg=None, **kwargs):
- """If the first argument is a function, return a decorator which runs
- the wrapped function inside Timer's context manager.
- Otherwise, treat the first argument as a 'msg' string and return an updated
- Timer instance, referencing the same logger.
- A 'level' keyword can also be passed to override self.level.
- """
- if isinstance(func_or_msg, Callable):
- func = func_or_msg
- # use the function name when no explicit 'msg' is provided
- if not self.msg:
- self.msg = "run '%s'" % func.__name__
-
- @wraps(func)
- def wrapper(*args, **kwds):
- with self:
- return func(*args, **kwds)
-
- return wrapper
- else:
- msg = func_or_msg or kwargs.get("msg")
- level = kwargs.get("level", self.level)
- return self.__class__(self.logger, msg, level)
-
- def __float__(self):
- return self.elapsed
-
- def __int__(self):
- return int(self.elapsed)
-
- def __str__(self):
- return "%.3f" % self.elapsed
-
-
-class ChannelsFilter(logging.Filter):
- """Provides a hierarchical filter for log entries based on channel names.
-
- Filters out records emitted from a list of enabled channel names,
- including their children. It works the same as the ``logging.Filter``
- class, but allows the user to specify multiple channel names.
-
- >>> import sys
- >>> handler = logging.StreamHandler(sys.stdout)
- >>> handler.setFormatter(logging.Formatter("%(message)s"))
- >>> filter = ChannelsFilter("A.B", "C.D")
- >>> handler.addFilter(filter)
- >>> root = logging.getLogger()
- >>> root.addHandler(handler)
- >>> root.setLevel(level=logging.DEBUG)
- >>> logging.getLogger('A.B').debug('this record passes through')
- this record passes through
- >>> logging.getLogger('A.B.C').debug('records from children also pass')
- records from children also pass
- >>> logging.getLogger('C.D').debug('this one as well')
- this one as well
- >>> logging.getLogger('A.B.').debug('also this one')
- also this one
- >>> logging.getLogger('A.F').debug('but this one does not!')
- >>> logging.getLogger('C.DE').debug('neither this one!')
- """
-
- def __init__(self, *names):
- self.names = names
- self.num = len(names)
- self.lengths = {n: len(n) for n in names}
-
- def filter(self, record):
- if self.num == 0:
- return True
- for name in self.names:
- nlen = self.lengths[name]
- if name == record.name:
- return True
- elif record.name.find(name, 0, nlen) == 0 and record.name[nlen] == ".":
- return True
- return False
-
-
-class CapturingLogHandler(logging.Handler):
- def __init__(self, logger, level):
- super(CapturingLogHandler, self).__init__(level=level)
- self.records = []
- if isinstance(logger, str):
- self.logger = logging.getLogger(logger)
- else:
- self.logger = logger
-
- def __enter__(self):
- self.original_disabled = self.logger.disabled
- self.original_level = self.logger.level
- self.original_propagate = self.logger.propagate
-
- self.logger.addHandler(self)
- self.logger.setLevel(self.level)
- self.logger.disabled = False
- self.logger.propagate = False
-
- return self
-
- def __exit__(self, type, value, traceback):
- self.logger.removeHandler(self)
- self.logger.setLevel(self.original_level)
- self.logger.disabled = self.original_disabled
- self.logger.propagate = self.original_propagate
-
- return self
-
- def emit(self, record):
- self.records.append(record)
-
- def assertRegex(self, regexp, msg=None):
- import re
-
- pattern = re.compile(regexp)
- for r in self.records:
- if pattern.search(r.getMessage()):
- return True
- if msg is None:
- msg = "Pattern '%s' not found in logger records" % regexp
- assert 0, msg
-
-
-class LogMixin(object):
- """Mixin class that adds logging functionality to another class.
-
- You can define a new class that subclasses from ``LogMixin`` as well as
- other base classes through multiple inheritance.
- All instances of that class will have a ``log`` property that returns
-    a ``logging.Logger`` named after their respective ``<module>.<class>``.
-
- For example:
-
- >>> class BaseClass(object):
- ... pass
- >>> class MyClass(LogMixin, BaseClass):
- ... pass
- >>> a = MyClass()
- >>> isinstance(a.log, logging.Logger)
- True
- >>> print(a.log.name)
- fontTools.misc.loggingTools.MyClass
- >>> class AnotherClass(MyClass):
- ... pass
- >>> b = AnotherClass()
- >>> isinstance(b.log, logging.Logger)
- True
- >>> print(b.log.name)
- fontTools.misc.loggingTools.AnotherClass
- """
-
- @property
- def log(self):
- if not hasattr(self, "_log"):
- name = ".".join((self.__class__.__module__, self.__class__.__name__))
- self._log = logging.getLogger(name)
- return self._log
-
-
-def deprecateArgument(name, msg, category=UserWarning):
- """Raise a warning about deprecated function argument 'name'."""
- warnings.warn("%r is deprecated; %s" % (name, msg), category=category, stacklevel=3)
-
-
-def deprecateFunction(msg, category=UserWarning):
- """Decorator to raise a warning when a deprecated function is called."""
-
- def decorator(func):
- @wraps(func)
- def wrapper(*args, **kwargs):
- warnings.warn(
- "%r is deprecated; %s" % (func.__name__, msg),
- category=category,
- stacklevel=2,
- )
- return func(*args, **kwargs)
-
- return wrapper
-
- return decorator
-
-
-if __name__ == "__main__":
- import doctest
-
- sys.exit(doctest.testmod(optionflags=doctest.ELLIPSIS).failed)
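The docstrings above already carry doctest examples for LevelFormatter, Timer and ChannelsFilter; configLogger itself has none. A brief usage sketch, assuming fontTools is installed (the logger name and messages are illustrative only):

    import sys
    import logging
    from fontTools.misc.loggingTools import configLogger, Timer

    # Configure the top-level "fontTools" logger with per-level format strings.
    configLogger(
        level="DEBUG",
        stream=sys.stdout,
        format={"*": "[%(levelname)s] %(message)s", "INFO": "%(message)s"},
    )

    log = logging.getLogger("fontTools.demo")
    log.debug("debug messages fall back to the '*' format")

    # Timer logs how long the with-block took, here at INFO level.
    with Timer(log, "do a trivial amount of work", level=logging.INFO):
        sum(range(1000))
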
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/mtiLib/__main__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/mtiLib/__main__.py
deleted file mode 100644
index 29c802bcc83b3ca35bbd0e6521f47a368b5f9092..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/mtiLib/__main__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-import sys
-from fontTools.mtiLib import main
-
-if __name__ == "__main__":
- sys.exit(main())
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/util/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/util/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/cihyFjudo/fairness-paper-search/Astroboy movie free download hd What critics and audiences are saying.md b/spaces/cihyFjudo/fairness-paper-search/Astroboy movie free download hd What critics and audiences are saying.md
deleted file mode 100644
index 0db4d15b5d7889d989416486b1d4e9158ed55c8d..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Astroboy movie free download hd What critics and audiences are saying.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Choose your favorite from thousands of beautiful vertical pictures Astro Boy in the highest quality, click download to your phone or computer. Now you can set a new wallpaper for your screen saver or lock screen. All Astro Boy wallpapers are free and can be downloaded in any popular resolutions: 2160x3840, 1440x2560, 1366x768, 1080x1920, 1024x600, 960x544, 800x1280, 800x600, 720x1280, 540x960, 480x854, 480x800, 360x640, 320x480, 320x240, 240x400, etc. . both to a computer and to a mobile phone via mob.org. The catalog is constantly updated with new beautiful photos Astro Boy" and original pictures.
Attention! All wallpapers of Astro Boy on the site were found freely distributed on the Internet or downloaded by our users and are presented for informational purposes only. By downloading free pictures Astro Boy to your phone on our website, you agree to review and remove the screensaver from your phone.
-
Every Tuesday, Sony drops a bunch of new stuff onto the PlayStation Network. Those with a PlayStation 3, Vita or PSP can download these goodies, which include PSN games, movies, themes and more. While the Official PlayStation Blog outlines these updates in full each week, we thought we'd help truncate the good news into something more digestible.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Digi 003 Driver Mac The Best Way to Connect Your Hardware and Software.md b/spaces/cihyFjudo/fairness-paper-search/Download Digi 003 Driver Mac The Best Way to Connect Your Hardware and Software.md
deleted file mode 100644
index 0cbaed7ac3565fc433d9dede8c765389b5ef8b83..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Download Digi 003 Driver Mac The Best Way to Connect Your Hardware and Software.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
You cannot install PTLE 7 on your Mac since it is 100% not compatible, and you don't have to in order to get LX to work with your hardware. Download this installer and you are good to go. Here it is: download the 11.0.0 driver.
-
Edit - I am using a Mac Pro Tower so I can't speak to the latest iMac, but a thunderbolt to firewire adapter should do the trick in that case. As a precaution you could install the latest 002/003 drivers and make sure you can access the device that way before dropping the money for Logic.
DriverGuide maintains an extensive archive of Windows drivers available for free download. We employ a team from around the world that adds hundreds of new drivers to our site every day. How to Install Drivers: Once you download your new driver, you need to install it. To install a driver in Windows, you will need to use a built-in utility called Device Manager. It allows you to see all of the devices recognized by your system and the drivers associated with them.
-
Many device drivers are not updated through the Microsoft Windows Update service. If you are having trouble finding the right driver, stop searching and fix driver problems faster with the Automatic Driver Update Utility. Automatic updates could save you hours of time.
-
The Driver Update Utility automatically finds, downloads and installs the right driver for your hardware and operating system. It will Update all of your drivers in just a few clicks, and even backup your drivers before making any changes.
-
Many computer problems are caused by missing or outdated device drivers, especially in Windows 11. If your desktop or laptop is running slow, or keeps crashing or hanging, there is a good chance that updating your drivers will fix the problem.
-
Mac OS 10.4 (Tiger) does not included Stuffit Expander. Mac downloads (.bin .hqx .sea .sit .sitx) require Stuffit Expander or other decoding utility. Newer Mac downloads require Stuffit Expander version 5.1.2 or higher. Download the free Aladdin Stuffit Expander for Mac (included with Mac OS X 10.0-10.3, but not with 10.4).
-
A download form is required to access some Pro Tools downloads. Completion of the download form is not related to registration of the software, hardware, or any other product. For help with plug-in downloads, please see Download Help FAQ #1.
I did what you suggested in the previous post and all seemed to be going well. Then I installed your driver for the 003 rack and again everything seemed to be working great. At the end, however, I got this message:
-
Hi Damo, Yes, i Currently have Ubuntu 12.04 running on a 2007 mac mini with a dual boot of osx. And for the most part the 003 Rack is working great. I did some troubleshooting and have discovered that the problem is playback channel 1. If I send all the audio through Playback channel 2 in Ardour then the sound is crystal clear. Of course then I am only hearing the sound through the right headphone speaker. When I engage channel 1 on the master channel strip as the output in order to get a stereo sound it hisses and crackles with each sound input. This occurs for live monitoring from any channel (1-4) and even with prerecorded sounds. Again, playback channel 2 works great but when playback channel 1 is activated I get hiss and crackle in both ears. is there some kind of interference occurring? Could it have anything to do with Ubuntu or is it a problem within the internal routing of jack/Ardour/003 driver? I am still figuring out how to configure a loopback sound device. Do you have any recommendations on a good walkthrough for this available online? Thank you again! -Lucas
-
TIDE: It sounds like your internal sound card is being selected in JACK instead of the 003, perhaps you need to select the correct hw:X device in the settings. Assuming you have the driver installed correctly, that is all I can suggest. Good luck.
-
hi damo, I just managed successfully to install a digi 002r with your driver. thanks a lot. The only thing I am wondering about, is that Ardour or Jack is crashing after a while. Ardour freezes and tells me, that it is not able to reconnect to server. In qjackctl I disabled the dbus server and I tried to kill and restart jack but it refuses to work. the only thing I can do, is to restart my Laptop. Now I am wondering about deinstalling the Ffado repos, but I am not sure, if I am messing up the whole system. Maybe you can give me some advice. Greetz Tim
-
I hate to beat a dead horse, but I need direction from someone who has the knowledge to incorporate both Logic and the Digi 002. I have been doing some massive researching and saw many users able to make it work. But the posts that I have been reading are more than 2-3 years old, and I need slightly more recent posts.
-
I have attempted to do it myself and have failed miserably. Long story short, I have switched over to Logic and I am done with Pro Tools. I can't afford the upgrades and such. But what I would like to see happen is to salvage the Digi 002 for as long as I can. So I am not sure what I am doing wrong.
-
Long answer: If AVID's core audio drivers aren't compatible with your system then there's nothing you can do. If they are listed as compatible with your version of macOS but don't appear in Logic's preferences, then they're either not properly installed, or something is wrong with the drivers, and you need to contact AVID about it.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/How Eugene Tejada Alleged Scandal.flvl Exposed His Dark Secret of Killing a Supermarket Supervisor.md b/spaces/cihyFjudo/fairness-paper-search/How Eugene Tejada Alleged Scandal.flvl Exposed His Dark Secret of Killing a Supermarket Supervisor.md
deleted file mode 100644
index 23b2fa83be65366a899b4131e4f8053af8098f88..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/How Eugene Tejada Alleged Scandal.flvl Exposed His Dark Secret of Killing a Supermarket Supervisor.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/colakin/video-generater/public/ffmpeg/doc/texidep.pl b/spaces/colakin/video-generater/public/ffmpeg/doc/texidep.pl
deleted file mode 100644
index 099690378e6911de871cbd3ca0c90a67de56154b..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/doc/texidep.pl
+++ /dev/null
@@ -1,32 +0,0 @@
-#! /usr/bin/env perl
-
-# This script will print the dependency of a Texinfo file to stdout.
-# texidep.pl
-
-use warnings;
-use strict;
-
-die unless @ARGV == 3;
-
-my ($src_path, $root, $target) = @ARGV;
-
-sub print_deps {
- my ($file, $deps) = @_;
- $deps->{$file} = 1;
-
- open(my $fh, "<", "$file") or die "Cannot open file '$file': $!";
- while (<$fh>) {
- if (my ($i) = /^\@(?:verbatim)?include\s+(\S+)/) {
- die "Circular dependency found in file $root\n" if exists $deps->{"doc/$1"};
- print "$target: doc/$1\n";
-
- # skip looking for config.texi dependencies, since it has
- # none, and is not located in the source tree
- if ("$1" ne "config.texi") {
- print_deps("$src_path/doc/$1", {%$deps});
- }
- }
- }
-}
-
-print_deps($root, {});
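The deleted texidep.pl walks a Texinfo file, follows @include / @verbatiminclude directives recursively, and prints make-style "target: dependency" lines, bailing out on circular includes. A rough Python sketch of the same scanning logic, under the same file-layout assumptions as the script (not a drop-in replacement):

import re
import sys

INCLUDE_RE = re.compile(r"^@(?:verbatim)?include\s+(\S+)")

def print_deps(path, target, src_path, seen):
    # record the current file so a later include of it is flagged as circular
    seen = seen | {path}
    with open(path) as fh:
        for line in fh:
            m = INCLUDE_RE.match(line)
            if not m:
                continue
            dep = m.group(1)
            if f"{src_path}/doc/{dep}" in seen:
                sys.exit(f"Circular dependency found while processing {path}")
            print(f"{target}: doc/{dep}")
            # config.texi is generated at build time, so don't try to open it
            if dep != "config.texi":
                print_deps(f"{src_path}/doc/{dep}", target, src_path, seen)

if __name__ == "__main__":
    if len(sys.argv) != 4:
        sys.exit("usage: texidep.py <src_path> <root.texi> <target>")
    src_path, root, target = sys.argv[1:]
    print_deps(root, target, src_path, set())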
diff --git a/spaces/colakin/video-generater/public/ffmpeg/ffbuild/pkgconfig_generate.sh b/spaces/colakin/video-generater/public/ffmpeg/ffbuild/pkgconfig_generate.sh
deleted file mode 100644
index e5de6716d28b5367bab75cc7efa68566a930755c..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/ffbuild/pkgconfig_generate.sh
+++ /dev/null
@@ -1,62 +0,0 @@
-#!/bin/sh
-
-. ffbuild/config.sh
-
-if test "$shared" = "yes"; then
- shared=true
-else
- shared=false
-fi
-
-shortname=$1
-name=lib${shortname}
-fullname=${name}${build_suffix}
-comment=$2
-libs=$(eval echo \$extralibs_${shortname})
-deps=$(eval echo \$${shortname}_deps)
-
-for dep in $deps; do
- depname=lib${dep}
- fulldepname=${depname}${build_suffix}
- . ${depname}/${depname}.version
- depversion=$(eval echo \$${depname}_VERSION)
- requires="$requires ${fulldepname} >= ${depversion}, "
-done
-requires=${requires%, }
-
-version=$(grep ${name}_VERSION= $name/${name}.version | cut -d= -f2)
-
-cat <<EOF > $name/$fullname.pc
-prefix=$prefix
-exec_prefix=\${prefix}
-libdir=$libdir
-includedir=$incdir
-
-Name: $fullname
-Description: $comment
-Version: $version
-Requires: $($shared || echo $requires)
-Requires.private: $($shared && echo $requires)
-Conflicts:
-Libs: -L\${libdir} $rpath -l${fullname#lib} $($shared || echo $libs)
-Libs.private: $($shared && echo $libs)
-Cflags: -I\${includedir}
-EOF
-
-mkdir -p doc/examples/pc-uninstalled
-includedir=${source_path}
-[ "$includedir" = . ] && includedir="\${pcfiledir}/../../.."
- cat <<EOF > doc/examples/pc-uninstalled/${name}-uninstalled.pc
-prefix=
-exec_prefix=
-libdir=\${pcfiledir}/../../../$name
-includedir=${source_path}
-
-Name: $fullname
-Description: $comment
-Version: $version
-Requires: $requires
-Conflicts:
-Libs: -L\${libdir} -Wl,-rpath,\${libdir} -l${fullname#lib} $($shared || echo $libs)
-Cflags: -I\${includedir}
-EOF
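The deleted pkgconfig_generate.sh writes one lib${shortname}.pc file per FFmpeg library (plus an -uninstalled variant for the examples), which consumers then read through pkg-config. A minimal sketch of querying the generated files, assuming pkg-config is installed and PKG_CONFIG_PATH points at the directory holding the .pc files:

import subprocess

def pc_query(lib, flag):
    """Return pkg-config output for one library/flag pair, e.g. ('libavcodec', '--cflags')."""
    out = subprocess.run(["pkg-config", flag, lib],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

for lib in ("libavcodec", "libavutil"):      # names follow the lib${shortname} pattern above
    print(lib, "version:", pc_query(lib, "--modversion"))
    print(lib, "cflags :", pc_query(lib, "--cflags"))
    print(lib, "libs   :", pc_query(lib, "--libs"))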
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc_quantization_misc.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc_quantization_misc.h
deleted file mode 100644
index c789754f4f1221a4cbb64dab2d433735e52049b9..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc_quantization_misc.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * AAC encoder quantization
- * Copyright (C) 2015 Claudio Freire
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * AAC encoder quantization misc reusable function templates
- * @author Claudio Freire ( klaussfreire gmail com )
- */
-
-#ifndef AVCODEC_AACENC_QUANTIZATION_MISC_H
-#define AVCODEC_AACENC_QUANTIZATION_MISC_H
-
-static inline float quantize_band_cost_cached(struct AACEncContext *s, int w, int g, const float *in,
- const float *scaled, int size, int scale_idx,
- int cb, const float lambda, const float uplim,
- int *bits, float *energy, int rtz)
-{
- AACQuantizeBandCostCacheEntry *entry;
- av_assert1(scale_idx >= 0 && scale_idx < 256);
- entry = &s->quantize_band_cost_cache[scale_idx][w*16+g];
- if (entry->generation != s->quantize_band_cost_cache_generation || entry->cb != cb || entry->rtz != rtz) {
- entry->rd = quantize_band_cost(s, in, scaled, size, scale_idx,
- cb, lambda, uplim, &entry->bits, &entry->energy);
- entry->cb = cb;
- entry->rtz = rtz;
- entry->generation = s->quantize_band_cost_cache_generation;
- }
- if (bits)
- *bits = entry->bits;
- if (energy)
- *energy = entry->energy;
- return entry->rd;
-}
-
-#endif /* AVCODEC_AACENC_QUANTIZATION_MISC_H */
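quantize_band_cost_cached in the deleted header memoizes the expensive quantize_band_cost call per (scale index, window*16+group) slot and invalidates stale entries by comparing a generation counter rather than clearing the table. A minimal Python sketch of that generation-counter caching pattern (the class, slot layout, and toy cost function are illustrative only, not part of the encoder):

class GenerationCache:
    """Memoize an expensive function per slot; bumping `generation` lazily invalidates all entries."""

    def __init__(self, nslots, fn):
        self.fn = fn
        self.generation = 0
        # each entry remembers the generation and key it was computed for
        self.entries = [{"generation": -1, "key": None, "value": None} for _ in range(nslots)]

    def invalidate_all(self):
        self.generation += 1          # O(1): old entries simply stop matching

    def get(self, slot, *key):
        e = self.entries[slot]
        if e["generation"] != self.generation or e["key"] != key:
            e["value"] = self.fn(*key)      # recompute only on a miss or a stale entry
            e["key"] = key
            e["generation"] = self.generation
        return e["value"]

# toy usage with a fake "cost" function
cache = GenerationCache(nslots=4, fn=lambda scale_idx, cb: scale_idx * 10 + cb)
print(cache.get(0, 3, 1))   # computed
print(cache.get(0, 3, 1))   # served from the cache
cache.invalidate_all()      # everything becomes stale at once
print(cache.get(0, 3, 1))   # recomputed

Bumping a counter keeps invalidation O(1), which is useful whenever the whole cache must be reset frequently instead of being cleared entry by entry.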
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libspeexenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libspeexenc.c
deleted file mode 100644
index 9fdb247863b424a3c0333696e327612fe2c63eff..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libspeexenc.c
+++ /dev/null
@@ -1,366 +0,0 @@
-/*
- * Copyright (C) 2009 Justin Ruggles
- * Copyright (c) 2009 Xuggle Incorporated
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * libspeex Speex audio encoder
- *
- * Usage Guide
- * This explains the values that need to be set prior to initialization in
- * order to control various encoding parameters.
- *
- * Channels
- * Speex only supports mono or stereo, so avctx->ch_layout.nb_channels must
- * be set to 1 or 2.
- *
- * Sample Rate / Encoding Mode
- * Speex has 3 modes, each of which uses a specific sample rate.
- * narrowband : 8 kHz
- * wideband : 16 kHz
- * ultra-wideband : 32 kHz
- * avctx->sample_rate must be set to one of these 3 values. This will be
- * used to set the encoding mode.
- *
- * Rate Control
- * VBR mode is turned on by setting AV_CODEC_FLAG_QSCALE in avctx->flags.
- * avctx->global_quality is used to set the encoding quality.
- * For CBR mode, avctx->bit_rate can be used to set the constant bitrate.
- * Alternatively, the 'cbr_quality' option can be set from 0 to 10 to set
- * a constant bitrate based on quality.
- * For ABR mode, set avctx->bit_rate and set the 'abr' option to 1.
- * Approx. Bitrate Range:
- * narrowband : 2400 - 25600 bps
- * wideband : 4000 - 43200 bps
- * ultra-wideband : 4400 - 45200 bps
- *
- * Complexity
- * Encoding complexity is controlled by setting avctx->compression_level.
- * The valid range is 0 to 10. A higher setting gives generally better
- * quality at the expense of encoding speed. This does not affect the
- * bit rate.
- *
- * Frames-per-Packet
- * The encoder defaults to using 1 frame-per-packet. However, it is
- * sometimes desirable to use multiple frames-per-packet to reduce the
- * amount of container overhead. This can be done by setting the
- * 'frames_per_packet' option to a value 1 to 8.
- *
- *
- * Optional features
- * Speex encoder supports several optional features, which can be useful
- * for some conditions.
- *
- * Voice Activity Detection
- * When enabled, voice activity detection detects whether the audio
- * being encoded is speech or silence/background noise. VAD is always
- * implicitly activated when encoding in VBR, so the option is only useful
- * in non-VBR operation. In this case, Speex detects non-speech periods and
- * encodes them with just enough bits to reproduce the background noise.
- *
- * Discontinuous Transmission (DTX)
- * DTX is an addition to VAD/VBR operation, that makes it possible to stop transmitting
- * completely when the background noise is stationary.
- * In file-based operation only 5 bits are used for such frames.
- */
-
-#include <speex/speex.h>
-#include <speex/speex_header.h>
-#include <speex/speex_stereo.h>
-
-#include "libavutil/channel_layout.h"
-#include "libavutil/common.h"
-#include "libavutil/opt.h"
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "encode.h"
-#include "audio_frame_queue.h"
-
-/* TODO: Think about converting abr, vad, dtx and such flags to a bit field */
-typedef struct LibSpeexEncContext {
- AVClass *class; ///< AVClass for private options
- SpeexBits bits; ///< libspeex bitwriter context
- SpeexHeader header; ///< libspeex header struct
- void *enc_state; ///< libspeex encoder state
- int frames_per_packet; ///< number of frames to encode in each packet
- float vbr_quality; ///< VBR quality 0.0 to 10.0
- int cbr_quality; ///< CBR quality 0 to 10
- int abr; ///< flag to enable ABR
- int vad; ///< flag to enable VAD
- int dtx; ///< flag to enable DTX
- int pkt_frame_count; ///< frame count for the current packet
- AudioFrameQueue afq; ///< frame queue
-} LibSpeexEncContext;
-
-static av_cold void print_enc_params(AVCodecContext *avctx,
- LibSpeexEncContext *s)
-{
- const char *mode_str = "unknown";
-
- av_log(avctx, AV_LOG_DEBUG, "channels: %d\n", avctx->ch_layout.nb_channels);
- switch (s->header.mode) {
- case SPEEX_MODEID_NB: mode_str = "narrowband"; break;
- case SPEEX_MODEID_WB: mode_str = "wideband"; break;
- case SPEEX_MODEID_UWB: mode_str = "ultra-wideband"; break;
- }
- av_log(avctx, AV_LOG_DEBUG, "mode: %s\n", mode_str);
- if (s->header.vbr) {
- av_log(avctx, AV_LOG_DEBUG, "rate control: VBR\n");
- av_log(avctx, AV_LOG_DEBUG, " quality: %f\n", s->vbr_quality);
- } else if (s->abr) {
- av_log(avctx, AV_LOG_DEBUG, "rate control: ABR\n");
- av_log(avctx, AV_LOG_DEBUG, " bitrate: %"PRId64" bps\n", avctx->bit_rate);
- } else {
- av_log(avctx, AV_LOG_DEBUG, "rate control: CBR\n");
- av_log(avctx, AV_LOG_DEBUG, " bitrate: %"PRId64" bps\n", avctx->bit_rate);
- }
- av_log(avctx, AV_LOG_DEBUG, "complexity: %d\n",
- avctx->compression_level);
- av_log(avctx, AV_LOG_DEBUG, "frame size: %d samples\n",
- avctx->frame_size);
- av_log(avctx, AV_LOG_DEBUG, "frames per packet: %d\n",
- s->frames_per_packet);
- av_log(avctx, AV_LOG_DEBUG, "packet size: %d\n",
- avctx->frame_size * s->frames_per_packet);
- av_log(avctx, AV_LOG_DEBUG, "voice activity detection: %d\n", s->vad);
- av_log(avctx, AV_LOG_DEBUG, "discontinuous transmission: %d\n", s->dtx);
-}
-
-static av_cold int encode_init(AVCodecContext *avctx)
-{
- LibSpeexEncContext *s = avctx->priv_data;
- int channels = avctx->ch_layout.nb_channels;
- const SpeexMode *mode;
- uint8_t *header_data;
- int header_size;
- int32_t complexity;
-
- /* sample rate and encoding mode */
- switch (avctx->sample_rate) {
- case 8000: mode = speex_lib_get_mode(SPEEX_MODEID_NB); break;
- case 16000: mode = speex_lib_get_mode(SPEEX_MODEID_WB); break;
- case 32000: mode = speex_lib_get_mode(SPEEX_MODEID_UWB); break;
- default:
- av_log(avctx, AV_LOG_ERROR, "Sample rate of %d Hz is not supported. "
- "Resample to 8, 16, or 32 kHz.\n", avctx->sample_rate);
- return AVERROR(EINVAL);
- }
-
- /* initialize libspeex */
- s->enc_state = speex_encoder_init(mode);
- if (!s->enc_state) {
- av_log(avctx, AV_LOG_ERROR, "Error initializing libspeex\n");
- return -1;
- }
- speex_init_header(&s->header, avctx->sample_rate, channels, mode);
-
- /* rate control method and parameters */
- if (avctx->flags & AV_CODEC_FLAG_QSCALE) {
- /* VBR */
- s->header.vbr = 1;
- s->vad = 1; /* VAD is always implicitly activated for VBR */
- speex_encoder_ctl(s->enc_state, SPEEX_SET_VBR, &s->header.vbr);
- s->vbr_quality = av_clipf(avctx->global_quality / (float)FF_QP2LAMBDA,
- 0.0f, 10.0f);
- speex_encoder_ctl(s->enc_state, SPEEX_SET_VBR_QUALITY, &s->vbr_quality);
- } else {
- s->header.bitrate = avctx->bit_rate;
- if (avctx->bit_rate > 0) {
- /* CBR or ABR by bitrate */
- if (s->abr) {
- speex_encoder_ctl(s->enc_state, SPEEX_SET_ABR,
- &s->header.bitrate);
- speex_encoder_ctl(s->enc_state, SPEEX_GET_ABR,
- &s->header.bitrate);
- } else {
- speex_encoder_ctl(s->enc_state, SPEEX_SET_BITRATE,
- &s->header.bitrate);
- speex_encoder_ctl(s->enc_state, SPEEX_GET_BITRATE,
- &s->header.bitrate);
- }
- } else {
- /* CBR by quality */
- speex_encoder_ctl(s->enc_state, SPEEX_SET_QUALITY,
- &s->cbr_quality);
- speex_encoder_ctl(s->enc_state, SPEEX_GET_BITRATE,
- &s->header.bitrate);
- }
- /* stereo side information adds about 800 bps to the base bitrate */
- /* TODO: this should be calculated exactly */
- avctx->bit_rate = s->header.bitrate + (channels == 2 ? 800 : 0);
- }
-
- /* VAD is activated with VBR or can be turned on by itself */
- if (s->vad)
- speex_encoder_ctl(s->enc_state, SPEEX_SET_VAD, &s->vad);
-
- /* Activating Discontinuous Transmission */
- if (s->dtx) {
- speex_encoder_ctl(s->enc_state, SPEEX_SET_DTX, &s->dtx);
- if (!(s->abr || s->vad || s->header.vbr))
- av_log(avctx, AV_LOG_WARNING, "DTX is not much of use without ABR, VAD or VBR\n");
- }
-
- /* set encoding complexity */
- if (avctx->compression_level > FF_COMPRESSION_DEFAULT) {
- complexity = av_clip(avctx->compression_level, 0, 10);
- speex_encoder_ctl(s->enc_state, SPEEX_SET_COMPLEXITY, &complexity);
- }
- speex_encoder_ctl(s->enc_state, SPEEX_GET_COMPLEXITY, &complexity);
- avctx->compression_level = complexity;
-
- /* set packet size */
- avctx->frame_size = s->header.frame_size;
- s->header.frames_per_packet = s->frames_per_packet;
-
- /* set encoding delay */
- speex_encoder_ctl(s->enc_state, SPEEX_GET_LOOKAHEAD, &avctx->initial_padding);
- ff_af_queue_init(avctx, &s->afq);
-
- /* create header packet bytes from header struct */
- /* note: libspeex allocates the memory for header_data, which is freed
- below with speex_header_free() */
- header_data = speex_header_to_packet(&s->header, &header_size);
-
- /* allocate extradata */
- avctx->extradata = av_malloc(header_size + AV_INPUT_BUFFER_PADDING_SIZE);
- if (!avctx->extradata) {
- speex_header_free(header_data);
- speex_encoder_destroy(s->enc_state);
- av_log(avctx, AV_LOG_ERROR, "memory allocation error\n");
- return AVERROR(ENOMEM);
- }
-
- /* copy header packet to extradata */
- memcpy(avctx->extradata, header_data, header_size);
- avctx->extradata_size = header_size;
- speex_header_free(header_data);
-
- /* init libspeex bitwriter */
- speex_bits_init(&s->bits);
-
- print_enc_params(avctx, s);
- return 0;
-}
-
-static int encode_frame(AVCodecContext *avctx, AVPacket *avpkt,
- const AVFrame *frame, int *got_packet_ptr)
-{
- LibSpeexEncContext *s = avctx->priv_data;
- int16_t *samples = frame ? (int16_t *)frame->data[0] : NULL;
- int ret;
-
- if (samples) {
- /* encode Speex frame */
- if (avctx->ch_layout.nb_channels == 2)
- speex_encode_stereo_int(samples, s->header.frame_size, &s->bits);
- speex_encode_int(s->enc_state, samples, &s->bits);
- s->pkt_frame_count++;
- if ((ret = ff_af_queue_add(&s->afq, frame)) < 0)
- return ret;
- } else {
- /* handle end-of-stream */
- if (!s->pkt_frame_count)
- return 0;
- /* add extra terminator codes for unused frames in last packet */
- while (s->pkt_frame_count < s->frames_per_packet) {
- speex_bits_pack(&s->bits, 15, 5);
- s->pkt_frame_count++;
- }
- }
-
- /* write output if all frames for the packet have been encoded */
- if (s->pkt_frame_count == s->frames_per_packet) {
- s->pkt_frame_count = 0;
- if ((ret = ff_alloc_packet(avctx, avpkt, speex_bits_nbytes(&s->bits))) < 0)
- return ret;
- ret = speex_bits_write(&s->bits, avpkt->data, avpkt->size);
- speex_bits_reset(&s->bits);
-
- /* Get the next frame pts/duration */
- ff_af_queue_remove(&s->afq, s->frames_per_packet * avctx->frame_size,
- &avpkt->pts, &avpkt->duration);
-
- avpkt->size = ret;
- *got_packet_ptr = 1;
- return 0;
- }
- return 0;
-}
-
-static av_cold int encode_close(AVCodecContext *avctx)
-{
- LibSpeexEncContext *s = avctx->priv_data;
-
- speex_bits_destroy(&s->bits);
- speex_encoder_destroy(s->enc_state);
-
- ff_af_queue_close(&s->afq);
-
- return 0;
-}
-
-#define OFFSET(x) offsetof(LibSpeexEncContext, x)
-#define AE AV_OPT_FLAG_AUDIO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
-static const AVOption options[] = {
- { "abr", "Use average bit rate", OFFSET(abr), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, AE },
- { "cbr_quality", "Set quality value (0 to 10) for CBR", OFFSET(cbr_quality), AV_OPT_TYPE_INT, { .i64 = 8 }, 0, 10, AE },
- { "frames_per_packet", "Number of frames to encode in each packet", OFFSET(frames_per_packet), AV_OPT_TYPE_INT, { .i64 = 1 }, 1, 8, AE },
- { "vad", "Voice Activity Detection", OFFSET(vad), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, AE },
- { "dtx", "Discontinuous Transmission", OFFSET(dtx), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, AE },
- { NULL },
-};
-
-static const AVClass speex_class = {
- .class_name = "libspeex",
- .item_name = av_default_item_name,
- .option = options,
- .version = LIBAVUTIL_VERSION_INT,
-};
-
-static const FFCodecDefault defaults[] = {
- { "b", "0" },
- { "compression_level", "3" },
- { NULL },
-};
-
-const FFCodec ff_libspeex_encoder = {
- .p.name = "libspeex",
- CODEC_LONG_NAME("libspeex Speex"),
- .p.type = AVMEDIA_TYPE_AUDIO,
- .p.id = AV_CODEC_ID_SPEEX,
- .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY,
- .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE,
- .priv_data_size = sizeof(LibSpeexEncContext),
- .init = encode_init,
- FF_CODEC_ENCODE_CB(encode_frame),
- .close = encode_close,
- .p.sample_fmts = (const enum AVSampleFormat[]){ AV_SAMPLE_FMT_S16,
- AV_SAMPLE_FMT_NONE },
- CODEC_OLD_CHANNEL_LAYOUTS(AV_CH_LAYOUT_MONO, AV_CH_LAYOUT_STEREO)
- .p.ch_layouts = (const AVChannelLayout[]) { AV_CHANNEL_LAYOUT_MONO,
- AV_CHANNEL_LAYOUT_STEREO,
- { 0 },
- },
- .p.supported_samplerates = (const int[]){ 8000, 16000, 32000, 0 },
- .p.priv_class = &speex_class,
- .defaults = defaults,
- .p.wrapper_name = "libspeex",
-};
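The Usage Guide comment at the top of the deleted wrapper explains how the sample rate selects the Speex mode and what the abr, cbr_quality, frames_per_packet, vad, and dtx private options do. A sketch of exercising those options from the command line through the ffmpeg binary, assuming a build configured with --enable-libspeex and a local input.wav (file names are placeholders):

import subprocess

# Encode mono 16 kHz audio (wideband mode) to Speex-in-Ogg via the libspeex wrapper.
cmd = [
    "ffmpeg", "-y", "-i", "input.wav",
    "-ar", "16000", "-ac", "1",        # 16 kHz selects the wideband mode per the guide above
    "-c:a", "libspeex",
    "-q:a", "6",                       # qscale flag -> VBR at roughly quality 6
    "-vad", "1", "-dtx", "1",          # private options from the AVOption table above
    "-frames_per_packet", "2",
    "output.ogg",
]
subprocess.run(cmd, check=True)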
diff --git a/spaces/coldlarry/lr_pdf/app.py b/spaces/coldlarry/lr_pdf/app.py
deleted file mode 100644
index 3ad4eb057cdfd93c1df0f3a3cfefbe176f2abce4..0000000000000000000000000000000000000000
--- a/spaces/coldlarry/lr_pdf/app.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import gradio as gr
-import openai
-# from gpt_reader.pdf_reader import PaperReader
-# from gpt_reader.prompt import BASE_POINTS
-from Document_QA import QA
-from Document_QA import create_embeddings
-from Document_QA import Paper
-from PyPDF2 import PdfReader
-
-class GUI:
- def __init__(self):
- self.api_key = ""
- self.session = ""
-        self.all_embedding = None
-        self.tokens = 0
-    # load the PDF and create embeddings for all of its text
- def pdf_init(self, api_key, pdf_path):
- openai.api_key = api_key
- pdf_reader = PdfReader(pdf_path.name)
- paper = Paper(pdf_reader)
- all_texts = paper.get_texts()
- self.all_embedding, self.tokens = create_embeddings(all_texts)
- print("全部文本消耗 {} tokens".format(self.tokens))
-
- def get_answer(self, question):
- qa = QA(self.all_embedding)
- answer,context = qa(question)
- return answer.strip()
-
-with gr.Blocks() as demo:
- gr.Markdown(
- """
- # CHATGPT-PAPER-READER
- [点击此处以支付 $5 成为我们的会员](https://checkout.stripe.com/c/pay/cs_live_a1TwwqhUpsfstnbyiAvbMoXvMzoaII5vskE8tz1cIsMSYUt9hJvoHK2qOK#fidkdWxOYHwnPyd1blppbHNgWjA0TlZXUHNAck9nTWNdXVc1TDRxTXIzQGo9b383N11yfDBhMzBvZ0pAMlNURDBBVWpiMHJObkhkSUZQSktwaWZ9S1dqUzFRRDw0f1dSa0dAQmp%2FYk5TS2tQNTVHa1F1RlVvPCcpJ3VpbGtuQH11anZgYUxhJz8nZEBQZko9MWRMPDxEYUNOZkhIJ3gl)
- """)
-
- with gr.Tab("Upload PDF File"):
- pdf_input = gr.File(label="PDF File")
- api_input = gr.Textbox(label="OpenAI API Key")
- #result = gr.Textbox(label="PDF Summary")
- upload_button = gr.Button("Start Analyse")
- with gr.Tab("Ask question about your PDF"):
- question_input = gr.Textbox(label="Your Question", placeholder="Authors of this paper?")
- answer = gr.Textbox(label="Answer")
- ask_button = gr.Button("Ask")
-
- app = GUI()
- upload_button.click(fn=app.pdf_init, inputs=[api_input, pdf_input])
- ask_button.click(app.get_answer, inputs=question_input, outputs=answer)
-
-if __name__ == "__main__":
- demo.title = "CHATGPT-PAPER-READER"
- demo.launch() # add "share=True" to share CHATGPT-PAPER-READER app on Internet.
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Template Bendera Aqiqah Word Gratis - Desain Cantik dan Menarik.md b/spaces/congsaPfin/Manga-OCR/logs/Download Template Bendera Aqiqah Word Gratis - Desain Cantik dan Menarik.md
deleted file mode 100644
index 3c84ace22f1573fed7c63e37414874f94125f735..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Template Bendera Aqiqah Word Gratis - Desain Cantik dan Menarik.md
+++ /dev/null
@@ -1,196 +0,0 @@
-
-
How to Download a Template for Bendera Aqiqah Word
If you are expecting or have recently welcomed a new baby into your Muslim family, you may want to celebrate their birth with a bendera aqiqah word. Bendera aqiqah is a flag or banner that is used to announce the name of your child and express your gratitude to Allah for this blessing. It is part of the Islamic tradition of aqiqah, which is a welcoming ceremony that involves sacrificing an animal, shaving the baby's head, and giving charity.
In this article, I will show you how to download a template for bendera aqiqah word, which you can use to create your own personalized flag or banner. I will also give you some tips and examples on how to edit and print your template. By the end of this article, you will be able to make a beautiful and unique bendera aqiqah word for your baby.
-
What You Need to Download a Template for Bendera Aqiqah Word
-
Before you start downloading a template for bendera aqiqah word, you will need the following things:
-
-
A computer or laptop with an internet connection.
-
A software program that can open and edit word documents, such as Microsoft Word, Google Docs, or LibreOffice Writer.
-
A printer or a printing service that can print your template.
-
Some paper, scissors, and tape or glue to make your flag or banner.
-
-
Once you have these things ready, you can proceed to the next step.
-
Where to Find a Template for Bendera Aqiqah Word
-
There are many sources and websites where you can find and download free or paid templates for bendera aqiqah word. Here are some of them:
-
-
Bendera Aqiqah: This website offers various designs and styles of bendera aqiqah word templates that you can download for free. You can also request a custom design for a small fee.
-
Canva: This website is a popular online graphic design tool that allows you to create and edit your own bendera aqiqah word templates. You can choose from hundreds of templates and customize them with your own text, images, colors, fonts, and more. You can download your template as a PDF or JPG file for free or upgrade to a premium account for more features.
-
Etsy: This website is an online marketplace where you can buy and sell handmade and vintage goods. You can find many sellers who offer bendera aqiqah word templates that you can download and print. You can also contact them if you want a custom design or a physical product.
-
-
These are just some examples of where you can find a template for bendera aqiqah word. You can also search on Google or Pinterest for more options.
-
How to Choose a Template for Bendera Aqiqah Word
-
When choosing a template for bendera aqiqah word, you should consider the following factors:
-
-
The size and shape of your flag or banner: Depending on how much space you have and how you want to display your flag or banner, you should choose a template that fits your needs. For example, if you want to hang it on a wall or a door, you may want a rectangular or triangular shape. If you want to attach it to a pole or a string, you may want a square or circular shape.
-
The design and style of your flag or banner: Depending on your personal taste and the theme of your baby's name, you should choose a template that matches your preferences. For example, if you want a simple and elegant look, you may want a template that has minimal text and colors. If you want a colorful and festive look, you may want a template that has more text and images.
-
The quality and resolution of your template: Depending on how clear and sharp you want your flag or banner to look, you should choose a template that has high quality and resolution. For example, if you want to print your template on a large scale, you may want a template that has at least 300 dpi (dots per inch) or higher. If you want to print your template on a small scale, you may want a template that has at least 150 dpi or higher.
-
-
By considering these factors, you will be able to choose a template that suits your needs and preferences.
-
* Download template bendera aqiqah word gratis
-* Cara membuat bendera aqiqah dengan word
-* Download desain bendera aqiqah format PSD
-* Contoh bendera aqiqah yang menarik dan mudah dibuat
-* Tips memilih template bendera aqiqah yang sesuai dengan tema
-* Download template bendera aqiqah word edit
-* Cara mencetak bendera aqiqah dari word
-* Download template bendera aqiqah word islami
-* Inspirasi desain bendera aqiqah dari Canva
-* Download template bendera aqiqah word simple dan elegan
-* Cara menambahkan foto dan nama bayi pada bendera aqiqah word
-* Download template bendera aqiqah word unik dan lucu
-* Tutorial membuat bendera aqiqah dengan word dan photoshop
-* Download template bendera aqiqah word modern dan minimalis
-* Contoh kata-kata ucapan pada bendera aqiqah word
-* Download template bendera aqiqah word berwarna-warni
-* Cara menghias bendera aqiqah dengan pita dan tali
-* Download template bendera aqiqah word klasik dan vintage
-* Ide desain bendera aqiqah dengan ilustrasi dan gambar
-* Download template bendera aqiqah word marhaban ya ukhti/baby girl/baby boy
-* Cara mengatur ukuran dan margin pada bendera aqiqah word
-* Download template bendera aqiqah word floral dan boho
-* Tips memilih warna dan font pada bendera aqiqah word
-* Download template bendera aqiqah word bergaya kartun dan animasi
-* Cara membuat bendera aqiqah dengan word online
-
How to Download a Template for Bendera Aqiqah Word
-
Once you have chosen your source and template for bendera aqiqah word, you can download it to your computer or laptop by following these steps:
-
-
Go to the website where you found your template and click on the download button or link.
-
Select the file format that you want to download, such as DOC, DOCX, PDF, or JPG.
-
Choose the location where you want to save your template, such as your desktop or a folder.
-
Wait for the download to finish and check if your template is complete and correct.
-
-
If you encounter any problems or errors during the download process, you can try the following solutions:
-
-
Refresh the website or try a different browser.
-
Check your internet connection and speed.
-
Clear your cache and cookies.
-
Contact the website owner or customer service for assistance.
-
-
After you have successfully downloaded your template, you can proceed to the next step.
-
How to Edit a Template for Bendera Aqiqah Word
-
After you have downloaded your template for bendera aqiqah word, you can edit it using Microsoft Word or other software that can open and edit word documents. You can customize and personalize your template by changing the text, color, image, size, and shape of your flag or banner. Here are some tips on how to do that:
-
How to Change the Text
-
To change the text on your template, you can follow these steps:
-
-
Open your template with Microsoft Word or other software.
-
Select the text that you want to change and type in your own text.
-
Adjust the font style, size, alignment, and spacing of your text as needed.
-
Save your changes and preview your template.
-
-
You can change the text on your template to include the following information:
-
-
The name of your baby in Arabic and English.
-
The date of birth of your baby in Hijri and Gregorian calendars.
-
The names of the parents of your baby.
-
A short prayer or dua for your baby.
-
Any other message or greeting that you want to add.
-
-
How to Change the Color
-
To change the color scheme of your template, you can follow these steps:
-
-
Open your template with Microsoft Word or other software.
-
Select the element that you want to change the color of, such as the background, font, border, etc.
-
Choose a color from the color palette or use a custom color picker.
-
Save your changes and preview your template.
-
-
You can change the color scheme of your template to match the following factors:
-
-
The gender of your baby: You can use pink for a girl, blue for a boy, or neutral colors for either.
-
The theme of your baby's name: You can use colors that reflect the meaning or origin of your baby's name. For example, if your baby's name is Nur (light), you can use bright colors like yellow or white. If your baby's name is Zara (star), you can use dark colors like black or purple.
-
Your personal preference: You can use colors that suit your taste and style. For example, if you like warm colors, you can use red, orange, or brown. If you like cool colors, you can use green, blue, or purple.
-
-
How to Change the Image
-
To change or add an image on your template, you can follow these steps:
-
-
Open your template with Microsoft Word or other software.
-
Select the image that you want to change or insert a new image from your computer or online sources.
-
Resize, crop, rotate, or flip your image as needed.
-
Save your changes and preview your template.
-
-
You can change or add an image on your template to include the following types of images:
-
-
A photo of your baby: You can use a cute and clear photo of your baby that shows their face and features. You can also use a photo of them with their parents or siblings. Make sure that the photo is appropriate and respectful for an Islamic occasion.
-
An Islamic symbol: You can use an image that represents Islam or aqiqah, such as a crescent moon and star, a mosque, a Quran, a sheep, etc. You can also use an image that has an Islamic calligraphy or art style. Make sure that the image is authentic and accurate for and shape that is appropriate and proportional. For example, if you want to hang it on a wall or a door, you may want a size and shape that covers the area well. If you want to attach it to a pole or a string, you may want a size and shape that is easy to handle and hang.
-
The design and style of your flag or banner: Depending on the design and style of your template, you should choose a size and shape that enhances and complements it. For example, if you have a lot of text and images on your template, you may want a size and shape that allows enough space and visibility for them. If you have a simple and minimal template, you may want a size and shape that adds some interest and contrast to it.
-
Your personal preference: Depending on your personal preference, you should choose a size and shape that suits your taste and style. For example, if you like a traditional and classic look, you may want a size and shape that is rectangular or triangular. If you like a modern and creative look, you may want a size and shape that is square or circular.
-
-
How to Print Your Template for Bendera Aqiqah Word
-
After you have edited your template for bendera aqiqah word, you can print it using your printer or a printing service. Here are some tips on how to do that:
-
How to Choose the Paper Type and Quality
-
To choose the best paper type and quality for your template, you should consider the following factors:
-
-
The durability and longevity of your flag or banner: Depending on how long you want to use and keep your flag or banner, you should choose a paper type and quality that is durable and long-lasting. For example, if you want to use your flag or banner for a one-time event, you may choose a paper type and quality that is cheap and disposable. If you want to use your flag or banner for a long time or keep it as a souvenir, you may choose a paper type and quality that is sturdy and resistant.
-
The appearance and feel of your flag or banner: Depending on how you want your flag or banner to look and feel, you should choose a paper type and quality that is appropriate and attractive. For example, if you want your flag or banner to have a glossy and shiny look, you may choose a paper type and quality that is glossy, such as photo paper or coated paper. If you want your flag or banner to have a matte and smooth look, you may choose a paper type and quality that is matte, such as cardstock or uncoated paper.
-
The cost and availability of your paper type and quality: Depending on your budget and resources, you should choose a paper type and quality that is affordable and accessible. For example, if you have a low budget and limited resources, you may choose a paper type and quality that is cheap and common, such as copy paper or printer paper. If you have a high budget and ample resources, you may choose a paper type and quality that is expensive and rare, such as specialty paper or fabric paper.
-
-
How to Choose the Printing Option and Format
-
To choose the best printing option and format for your template, you should consider the following factors:
-
-
The quality and resolution of your print: Depending on how clear and sharp you want your print to be, you should choose a printing option and format that is high quality and resolution. For example, if you want your print to be very clear and sharp, you should choose a printing option and format that is color or black-and-white, single-sided or double-sided, PDF or JPG, etc.
-
The size and shape of your print: Depending on the size and shape of your template, you should choose a printing option and format that is appropriate and proportional. For example, if your template is rectangular or triangular, you should choose a printing option and format that is A4 or letter size. If your template is square or circular, you should choose a printing option and format that is A5 or half letter size.
-
The cost and convenience of your print: Depending on your budget and time, you should choose a printing option and format that is affordable and convenient. For example, if you have a low budget and limited time, you may choose to print your template using your own printer at home. If you have a high budget and ample time, you may choose to print your template using a professional printing service online or offline.
-
-
How to Cut and Fold Your Template
-
To cut and fold your template into a flag or banner shape, you can follow these steps:
-
-
Print your template on the paper type and quality that you chose.
-
Cut out your template along the edges or the guidelines using scissors or a cutter.
-
Fold your template in half along the middle line or the crease using a ruler or a bone folder.
-
Glue or tape the two sides of your template together along the edges or the margins.
-
Punch holes on the corners or the sides of your template using a hole puncher or a needle.
-
Insert a string or a ribbon through the holes to make a loop or a knot.
-
Hang or attach your flag or banner to the desired location using nails, hooks, clips, etc.
-
-
Examples of Bendera Aqiqah Designs
-
To give you some inspiration or reference for your bendera aqiqah word, here are some examples of bendera aqiqah designs that you can use:
-
-
Image
Description
-
This is a pink bendera aqiqah with flowers that is suitable for a girl. It has the name of the baby in Arabic and English, the date of birth in Hijri and Gregorian calendars, the names of the parents, and a short dua. It also has an image of a flower on each corner. It has a rectangular shape and a glossy paper type.
-
This is a blue bendera aqiqah with stars that is suitable for a boy. It has the name of the baby in Arabic and English, the date of birth in Hijri and Gregorian calendars, the names of the parents, and a short dua. It also has an image of a star on each corner. It has a triangular shape and a matte paper type.
-
This is a green bendera aqiqah with mosque that is suitable for either gender. It has the name of the baby in Arabic and English, the date of birth in Hijri and Gregorian calendars, the names of the parents, and a short dua. It also has an image of a mosque on the center. It has a square shape and a cardstock paper type.
-
-
Conclusion
-
In conclusion, downloading a template for bendera aqiqah word is a simple and convenient way to create your own flag or banner for your baby's birth celebration. You can find and download various templates from different sources and websites, and edit them according to your needs and preferences. You can also print them using your printer or a printing service, and cut and fold them into a flag or banner shape. By following the tips and examples in this article, you will be able to make a beautiful and unique bendera aqiqah word for your baby.
-
FAQs
-
Here are some common questions that people may have about bendera aqiqah word:
-
-
What is the meaning and significance of bendera aqiqah word?
-
Bendera aqiqah word is a flag or banner that is used to celebrate the birth of a Muslim child and announce their name. It is part of the Islamic tradition of aqiqah, which is a welcoming ceremony that involves sacrificing an animal, shaving the baby's head, and giving charity. Bendera aqiqah word is a way of expressing gratitude to Allah for this blessing and sharing it with others.
-
What are the benefits of using a template for bendera aqiqah word?
-
Using a template for bendera aqiqah word has many benefits, such as:
-
-
It saves you time and effort from designing your own flag or banner from scratch.
-
It gives you access to various designs and styles that you can choose from.
-
It allows you to customize and personalize your flag or banner with your own text, images, colors, fonts, etc.
-
It ensures that your flag or banner is consistent and professional-looking.
-
It makes your flag or banner more attractive and memorable.
-
-
How can I make my bendera aqiqah word more creative and original?
-
You can make your bendera aqiqah word more creative and original by:
-
-
Using your own photos or images that are meaningful and relevant to you and your baby.
-
Using colors that match your baby's gender, name, or personality.
-
Using fonts that are easy to read and reflect your style.
-
Using shapes that are different from the usual ones, such as oval, hexagon, or heart.
-
Adding some embellishments or decorations to your flag or banner, such as ribbons, beads, stickers, etc.
-
-
How can I display my bendera aqiqah word?
-
You can display your bendera aqiqah word in various ways, such as:
-
-
Hanging it on a wall, a door, a window, or a ceiling.
-
Attaching it to a pole, a string, or a wire.
-
Placing it on a table, a shelf, or a frame.
-
Giving it as a gift, a souvenir, or a keepsake.
-
-
Where can I learn more about bendera aqiqah word?
-
You can learn more about bendera aqiqah word by:
-
-
Reading some articles or books about Islamic traditions and ceremonies.
-
Watching some videos or tutorials on how to make bendera aqiqah word.
-
Visiting some websites or blogs that showcase bendera aqiqah word examples and ideas.
-
Asking some friends or family members who have experience with bendera aqiqah word.
-
-
I hope you enjoyed this article and learned something new. If you have any questions or comments, please feel free to share them below. Thank you for reading and happy bendera aqiqah word making!
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Haunted Dorm Mod APK Android 1 Full Version Free Download.md b/spaces/congsaPfin/Manga-OCR/logs/Haunted Dorm Mod APK Android 1 Full Version Free Download.md
deleted file mode 100644
index 7d317d614048787207d9bdc15e87c94fbbcffbb9..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Haunted Dorm Mod APK Android 1 Full Version Free Download.md
+++ /dev/null
@@ -1,128 +0,0 @@
-
-
Download Haunted Dorm Mod APK Android 1: A Spooky Tower Defense Game
-
Do you love horror games and tower defense games? If yes, then you should try Haunted Dorm Mod APK Android 1, a game that combines both genres in a fun and challenging way. In this game, you enter a dorm that is haunted by ghosts, zombies, and other creepy creatures. But don't worry, you have some help from your friends and some weapons to defend yourself. In this article, we will tell you everything you need to know about Haunted Dorm Mod APK Android 1, including its features, how to download and install it, why you should play it, and some tips and tricks to help you win.
Haunted Dorm Mod APK Android 1 is a modified version of the original game Haunted Dorm, which is available on Google Play Store. The mod version has some advantages over the original version, such as unlimited money and no ads. The game is developed by MGSS Studio, a developer that specializes in horror games. The game has a rating of 4.5 out of 5 stars on Play Mods, a website that provides modded games for Android devices.
-
Features of Haunted Dorm Mod APK Android 1
-
Haunted Dorm Mod APK Android 1 has many features that make it an enjoyable and thrilling game. Here are some of them:
-
Unlimited money
-
One of the best features of Haunted Dorm Mod APK Android 1 is that it gives you unlimited money to buy weapons, upgrades, and items. You can use the money to improve your defense and offense, as well as to unlock new characters and levels. You don't have to worry about running out of money or watching ads to earn more.
-
No ads
-
Another great feature of Haunted Dorm Mod APK Android 1 is that it removes all the annoying ads that interrupt your gameplay. You can play the game without any distractions or interruptions. You can also enjoy the game without spending any real money on in-app purchases or subscriptions.
-
haunted dorm mod apk android 1 unlimited money
-haunted dorm mod apk android 1 latest version
-haunted dorm mod apk android 1 free download
-haunted dorm mod apk android 1 no ads
-haunted dorm mod apk android 1 offline
-haunted dorm mod apk android 1 gameplay
-haunted dorm mod apk android 1 review
-haunted dorm mod apk android 1 cheats
-haunted dorm mod apk android 1 hack
-haunted dorm mod apk android 1 tips
-haunted dorm mod apk android 1 guide
-haunted dorm mod apk android 1 walkthrough
-haunted dorm mod apk android 1 trailer
-haunted dorm mod apk android 1 features
-haunted dorm mod apk android 1 update
-haunted dorm mod apk android 1 download link
-haunted dorm mod apk android 1 install
-haunted dorm mod apk android 1 requirements
-haunted dorm mod apk android 1 size
-haunted dorm mod apk android 1 rating
-haunted dorm mod apk android 1 best tower defense game
-haunted dorm mod apk android 1 horror game
-haunted dorm mod apk android 1 strategy game
-haunted dorm mod apk android 1 fun game
-haunted dorm mod apk android 1 addictive game
-haunted dorm mod apk android 1 how to play
-haunted dorm mod apk android 1 how to win
-haunted dorm mod apk android 1 how to get more money
-haunted dorm mod apk android 1 how to unlock new levels
-haunted dorm mod apk android 1 how to upgrade towers
-haunted dorm mod apk android 1 how to defeat bosses
-haunted dorm mod apk android 1 how to survive the night
-haunted dorm mod apk android 1 how to escape the dorm
-haunted dorm mod apk android 1 how to solve puzzles
-haunted dorm mod apk android 1 how to find secrets
-haunted dorm mod apk android 1 what is the story
-haunted dorm mod apk android 1 who are the characters
-haunted dorm mod apk android 1 where is the setting
-haunted dorm mod apk android 1 when is the release date
-haunted dorm mod apk android 1 why is it popular
-download Haunted Dorm MOD APK v2.5.4 For Android[^2^]
-download Haunted Dorm MOD APK v2.5.4 For Android free[^2^]
-download Haunted Dorm MOD APK v2.5.4 For Android unlimited money[^2^]
-download Haunted Dorm MOD APK v2.5.4 For Android latest version[^2^]
-download Haunted Dorm MOD APK v2.5.4 For Android no ads[^2^]
-download Haunted Dorm MOD APK v2.5.4 For Android offline[^2^]
-download Haunted Dorm MOD APK v2.5.4 For Android gameplay[^2^]
-download Haunted Dorm MOD APK v2.5.4 For Android review[^2^]
-
Tower defense gameplay
-
The core gameplay of Haunted Dorm Mod APK Android 1 is tower defense, which means that you have to protect your base from waves of enemies. You can place different types of weapons and traps along the path that the enemies take to reach your base. You can also use your friends as allies to help you fight off the enemies. You have to strategize and plan your defense carefully, as each enemy has different strengths and weaknesses.
-
Horror theme
-
The game has a horror theme that adds to the excitement and challenge of the game. The game has a dark and spooky atmosphere, with eerie sounds and music. The enemies are also scary and creepy, such as ghosts, zombies, vampires, werewolves, clowns, dolls, and more. The game will keep you on the edge of your seat as you try to survive the night in the haunted dorm.
-
Multiple levels and modes
-
The game has multiple levels and modes that offer variety and replay value. The game has over 100 levels that increase in difficulty as you progress. Each level has different objectives, enemies, and layouts. The game also has different modes, such as survival mode,
challenge mode, and boss mode. Each mode has different rules and rewards. You can also customize your game settings, such as the difficulty level, the number of waves, and the time limit.
-
How to download and install Haunted Dorm Mod APK Android 1?
-
If you want to download and install Haunted Dorm Mod APK Android 1 on your Android device, you can follow these simple steps:
-
Step 1: Enable unknown sources
-
Before you can install any modded game on your device, you need to enable unknown sources in your security settings. This will allow you to install apps from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Step 2: Download the APK file
-
Next, you need to download the APK file of Haunted Dorm Mod APK Android 1 from a reliable source. You can use the link below to download the file from Play Mods, a website that provides modded games for Android devices. The file size is about 70 MB, so make sure you have enough storage space on your device.
Step 3: Install the APK file
After you have downloaded the APK file, you need to install it on your device. To do this, locate the file in your file manager and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on Install and wait for the process to finish.
-
Step 4: Enjoy the game
-
Once the installation is complete, you can launch the game from your app drawer or home screen. You can now enjoy playing Haunted Dorm Mod APK Android 1 with unlimited money and no ads.
-
Why should you play Haunted Dorm Mod APK Android 1?
-
Haunted Dorm Mod APK Android 1 is a game that will appeal to fans of horror games and tower defense games. It has many reasons why you should play it, such as:
-
Pros and cons of Haunted Dorm Mod APK Android 1
-
Like any game, Haunted Dorm Mod APK Android 1 has its pros and cons. Here are some of them:
-
Pros
-
-
The game has unlimited money and no ads, which makes it more enjoyable and convenient.
-
The game has tower defense gameplay, which is fun and challenging.
-
The game has a horror theme, which adds to the excitement and thrill of the game.
-
The game has multiple levels and modes, which offer variety and replay value.
-
The game has high-quality graphics and sound effects, which create a realistic and immersive experience.
-
-
Cons
-
-
The game may be too scary or violent for some players, especially younger ones.
-
The game may have some bugs or glitches, which may affect the performance or gameplay.
-
The game may require a stable internet connection, which may not be available for some players.
-
-
Tips and tricks for playing Haunted Dorm Mod APK Android 1
-
If you want to play Haunted Dorm Mod APK Android 1 better, you can use these tips and tricks:
-
-
Use different types of weapons and traps to deal with different types of enemies. For example, use flamethrowers for zombies, crossbows for vampires, and salt for ghosts.
-
Upgrade your weapons and traps regularly to increase their damage and range. You can also buy new weapons and traps with your unlimited money.
-
Use your friends as allies to help you fight off the enemies. You can also switch between different characters to use their special abilities.
-
Use the pause button to plan your strategy and place your weapons and traps carefully. You can also use the zoom button to see the whole map.
-
Complete the objectives of each level to earn more rewards and unlock new levels and modes. You can also replay the levels to improve your score and rank.
-
-
Conclusion
-
In conclusion, Haunted Dorm Mod APK Android 1 is a game that combines horror and tower defense in a fun and challenging way. It has many features that make it an enjoyable and thrilling game, such as unlimited money, no ads, tower defense gameplay, horror theme, multiple levels and modes, high-quality graphics and sound effects, and more. It also has some drawbacks, such as being too scary or violent for some players, having some bugs or glitches, and requiring a stable internet connection, which may not be available for some players. However, if you are a fan of horror games and tower defense games, you should definitely give Haunted Dorm Mod APK Android 1 a try. You can download and install it easily by following the steps we provided in this article. You can also use the tips and tricks we shared to play the game better. We hope you enjoy playing Haunted Dorm Mod APK Android 1 and have a spooky time.
-
FAQs
-
Here are some frequently asked questions about Haunted Dorm Mod APK Android 1:
-
-
Is Haunted Dorm Mod APK Android 1 safe to download and install?
-
Yes, Haunted Dorm Mod APK Android 1 is safe to download and install, as long as you use a reliable source like Play Mods. The modded game does not contain any viruses or malware that can harm your device or data. However, you should always scan the file before installing it, just to be safe.
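A virus scan is the main safeguard, but if the download page happens to publish a SHA-256 hash for the file, you can also verify that the download was not corrupted or tampered with before installing it. The Kotlin sketch below uses only standard Java security classes; the expected hash value is a placeholder, since Play Mods may not publish one.

```kotlin
import java.io.File
import java.security.MessageDigest

// Compares the downloaded APK's SHA-256 hash with a value published by the
// source. This is an integrity check, not a substitute for a malware scan.
fun apkMatchesChecksum(apkFile: File, expectedHex: String): Boolean {
    val digest = MessageDigest.getInstance("SHA-256")
    apkFile.inputStream().use { input ->
        val buffer = ByteArray(8192)
        while (true) {
            val read = input.read(buffer)
            if (read == -1) break
            digest.update(buffer, 0, read)
        }
    }
    val actualHex = digest.digest().joinToString("") { "%02x".format(it) }
    return actualHex.equals(expectedHex, ignoreCase = true)
}
```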
-
Is Haunted Dorm Mod APK Android 1 compatible with my device?
-
Haunted Dorm Mod APK Android 1 is compatible with most Android devices that run on Android 4.4 or higher. However, some devices may not support the game due to hardware or software limitations. You can check the compatibility of your device by visiting the Play Mods website and reading the game description and requirements.
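Android 4.4 corresponds to API level 19 (KITKAT), so if you are building anything of your own around the game, the version part of that check is a one-liner in Kotlin. This is purely an illustration, not code from the game or from Play Mods.

```kotlin
import android.os.Build

// True when the device runs Android 4.4 (API 19) or newer. Hardware
// limitations still have to be checked against the game's listing.
fun meetsMinimumAndroidVersion(): Boolean =
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT
```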
-
How can I update Haunted Dorm Mod APK Android 1?
-
Haunted Dorm Mod APK Android 1 is updated regularly by the developer to fix bugs and add new features. You can check for updates by visiting the Play Mods website and downloading the latest version of the game. You can also enable notifications on the website to get notified when a new update is available.
-
How can I contact the developer of Haunted Dorm Mod APK Android 1?
-
If you have any questions, feedback, or suggestions for the developer of Haunted Dorm Mod APK Android 1, you can contact them by visiting their Facebook page. You can also leave a comment or review on the Play Mods website to share your thoughts and opinions about the game.
-
What are some other games like Haunted Dorm Mod APK Android 1?
-
If you like Haunted Dorm Mod APK Android 1, you may also like some other games that are similar in genre or theme. Here are some of them:
-
-
Zombie Defense: A tower defense game where you have to fight off hordes of zombies with various weapons and traps.
-
Granny: A horror game where you have to escape from a house that is haunted by a creepy old lady.
-
Plants vs Zombies: A tower defense game where you have to use plants to defend your garden from zombies.
-
FNAF (Five Nights at Freddy's): A horror game where you have to survive five nights at a pizzeria haunted by animatronic animals.
-
Bloons TD: A tower defense game where you place monkey towers and other defenses to pop waves of balloons.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Snrsz Para Kazann ve Trafikte Makas Atn Xtreme Motorbikes APK Hileli 1.5 ndir.md b/spaces/congsaPfin/Manga-OCR/logs/Snrsz Para Kazann ve Trafikte Makas Atn Xtreme Motorbikes APK Hileli 1.5 ndir.md
deleted file mode 100644
index ffbf5f3664a769152f96d7d27afb70f0d27fdafb..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Snrsz Para Kazann ve Trafikte Makas Atn Xtreme Motorbikes APK Hileli 1.5 ndir.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-
Xtreme Motorbikes APK Hile 1.5: A Fun and Exciting Motorcycle Game
-
Do you love riding motorcycles and performing stunts? Do you want to experience the thrill and adrenaline of racing on the streets? If yes, then you should try Xtreme Motorbikes APK Hile 1.5, a fun and exciting motorcycle game that will keep you hooked for hours.
Xtreme Motorbikes APK Hile 1.5 is a modified version of the original Xtreme Motorbikes game, which is a realistic and immersive motorcycle simulator that lets you ride different bikes in various environments and scenarios. You can customize your bike, choose your outfit, and challenge yourself with different missions and modes.
-
The features of Xtreme Motorbikes APK Hile 1.5
-
Some of the features that make Xtreme Motorbikes APK Hile 1.5 stand out from other motorcycle games are:
-
-
It has unlimited money, which means you can buy any bike, upgrade any part, and unlock any item without worrying about the cost.
-
It has realistic graphics, physics, and sound effects, which make you feel like you are riding a real bike.
-
It has a variety of bikes, from classic to modern, from street to off-road, from sport to chopper.
-
It has a variety of environments, from urban to rural, from day to night, from sunny to rainy.
-
It has a variety of modes, from free ride to career, from time trial to traffic, from stunt to chase.
-
It has a simple and intuitive control system, which lets you steer, accelerate, brake, and perform tricks with ease.
-
-
The benefits of Xtreme Motorbikes APK Hile 1.5
-
Some of the benefits that you can enjoy by playing Xtreme Motorbikes APK Hile 1.5 are:
-
-
You can have fun and excitement by riding different bikes in different situations and performing amazing stunts.
-
You can improve your skills and reflexes by mastering the controls and gameplay of the game.
-
You can express your creativity and personality by customizing your bike and outfit according to your preference.
-
You can compete with yourself and others by completing missions and modes and earning achievements and rewards.
-
-
How to download and install Xtreme Motorbikes APK Hile 1.5?
-
If you are interested in playing Xtreme Motorbikes APK Hile 1.5, you need to download and install it on your Android device. Here are the steps to do so:
-
The steps to download and install Xtreme Motorbikes APK Hile 1.5
-
-
Go to the download website and click on the download button to get the apk file of Xtreme Motorbikes APK Hile 1.5.
-
Once the download is finished, locate the apk file on your device and tap on it to start the installation process.
-
Follow the instructions on the screen and allow the necessary permissions to install the game.
-
After the installation is done, you can launch the game and enjoy playing Xtreme Motorbikes APK Hile 1.5.
-
-
The precautions to take before downloading and installing Xtreme Motorbikes APK Hile 1.5
-
Before you download and install Xtreme Motorbikes APK Hile 1.5, you should take some precautions to avoid any problems or risks. Here are some of them:
-
-
You should make sure that your device has enough storage space and battery life to download and install the game.
-
You should check that your device is compatible with the game and meets its minimum requirements.
-
You should enable the unknown sources option in your device settings, which allows you to install apps from sources other than the Google Play Store (a quick programmatic check for this permission, together with free storage space, is sketched after this list).
-
You should scan the apk file with a reliable antivirus software before installing it, to ensure that it is free from any malware or viruses.
-
You should backup your data and files before installing the game, in case something goes wrong or you want to uninstall the game later.
-
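As a rough illustration of the first and third precautions, here is a small Kotlin sketch that checks for free storage space and for permission to install apps from this source. The 100 MB threshold is only an illustrative guess, and the permission check assumes REQUEST_INSTALL_PACKAGES is declared in the manifest; this is a sketch, not part of the game itself.

```kotlin
import android.content.Context
import android.os.Build

// Checks two of the precautions above: enough free space for the download
// and permission to install packages from this app (Android 8.0+).
fun preInstallChecks(context: Context, requiredBytes: Long = 100L * 1024 * 1024): Boolean {
    val enoughSpace = context.filesDir.usableSpace >= requiredBytes
    val canInstall = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        context.packageManager.canRequestPackageInstalls()
    } else {
        true // before Android 8.0 "unknown sources" was a single system-wide setting
    }
    return enoughSpace && canInstall
}
```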
-
How to play Xtreme Motorbikes APK Hile 1.5?
-
Now that you have downloaded and installed Xtreme Motorbikes APK Hile 1.5, you are ready to play it. Here are some tips on how to play the game:
-
The controls and gameplay of Xtreme Motorbikes APK Hile 1.5
-
The controls and gameplay of Xtreme Motorbikes APK Hile 1.5 are simple and intuitive. You can use the following buttons on the screen to control your bike:
-
-
The left and right arrows to steer your bike left and right.
-
The up and down arrows to accelerate and brake your bike.
-
The nitro button to boost your speed for a short time.
-
The stunt button to perform tricks in the air.
-
-
The gameplay of Xtreme Motorbikes APK Hile 1.5 is realistic and immersive. You can choose from different bikes, environments, and modes, and complete various missions and challenges. You can also customize your bike and outfit, and earn money and rewards by playing the game.
-
The tips and tricks to master Xtreme Motorbikes APK Hile 1.5
-
If you want to master Xtreme Motorbikes APK Hile 1.5, you need to practice and improve your skills. Here are some tips and tricks that can help you:
-
-
Try different bikes and find the one that suits your style and preference.
-
Upgrade your bike parts to improve its performance and durability.
-
Use the nitro wisely, as it can help you gain speed and distance, but it also consumes fuel quickly.
-
Perform stunts in the air to earn extra points and money, but be careful not to crash or land badly.
-
Avoid obstacles and traffic on the road, as they can slow you down or damage your bike.
-
Follow the instructions and objectives of each mission and mode, as they can vary depending on the difficulty and scenario.
-
-
Conclusion
-
Xtreme Motorbikes APK Hile 1.5 is a fun and exciting motorcycle game that will give you a realistic and immersive riding experience. You can enjoy unlimited money; realistic graphics, physics, and sound effects; a variety of bikes, environments, and modes; a simple and intuitive control system; a customizable bike and outfit; competitive gameplay with missions and rewards; and much more. If you love motorcycles and stunts, you should definitely try Xtreme Motorbikes APK Hile 1.5.
-
FAQs
-
Here are some frequently asked questions about Xtreme Motorbikes APK Hile 1.5:
-
-
What is the difference between Xtreme Motorbikes APK Hile 1.5 and Xtreme Motorbikes?
-
Xtreme Motorbikes APK Hile 1.5 is a modified version of Xtreme Motorbikes that has unlimited money, which means you can buy any bike, upgrade any part, and unlock any item without worrying about the cost.
-
Is Xtreme Motorbikes APK Hile 1.5 safe to download and install?
-
Xtreme Motorbikes APK Hile 1.5 should be safe to download and install as long as you get it from a trusted source and scan the apk file with a reliable antivirus before installing it. You should avoid downloading it from unknown websites other than the Google Play Store, as they may contain malware or viruses that can harm your device or data.
-
How can I get more money and rewards in Xtreme Motorbikes APK Hile 1.5?
-
You can get more money and rewards in Xtreme Motorbikes APK Hile 1.5 by completing missions and modes, performing stunts, avoiding crashes, and playing regularly. You can also use the unlimited money feature to buy anything you want in the game.
-
Can I play Xtreme Motorbikes APK Hile 1.5 offline?
-
Yes, you can play Xtreme Motorbikes APK Hile 1.5 offline, as it does not require an internet connection to run. However, you may need an internet connection to download and install the game, and to access some features and updates.
-
Can I play Xtreme Motorbikes APK Hile 1.5 with friends?
-
Yes, you can play Xtreme Motorbikes APK Hile 1.5 with friends, as it has a multiplayer mode that lets you race and compete with other players online. You can also share your achievements and screenshots with your friends on social media.
-
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Calcul code ccp.rar 0.01mb Download and Use This Handy App for CCP Users.md b/spaces/contluForse/HuggingGPT/assets/Calcul code ccp.rar 0.01mb Download and Use This Handy App for CCP Users.md
deleted file mode 100644
index 546112ddd06eec19b96ed1390ea5fb3349541e99..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Calcul code ccp.rar 0.01mb Download and Use This Handy App for CCP Users.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Ekattor School Management System Pro V3.0 Nulled Crack !!HOT!!ing.md b/spaces/contluForse/HuggingGPT/assets/Ekattor School Management System Pro V3.0 Nulled Crack !!HOT!!ing.md
deleted file mode 100644
index 7ee4bf33b46e395171676801831f9de21c695783..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Ekattor School Management System Pro V3.0 Nulled Crack !!HOT!!ing.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
ekattor school management system pro v3.0 nulled cracking
-
-Download these worlds or request more on our Pre-order the new book from ... Version Mobile Tutorial Free Download Premium Access Full Hack ... control scheme, higher resolution graphics, and a much smoother framerate. ... We have the ever popular Final Fantasy Sonic series as well as all of the Sonic RPG Episodes. 1fdad05405
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Hdenvironmentsetup 11.md b/spaces/diacanFperku/AutoGPT/Hdenvironmentsetup 11.md
deleted file mode 100644
index 985b5723792dc9f3e202f46b5488fc12fa3734cf..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Hdenvironmentsetup 11.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
How to Set Up Your HD Environment in 11 Easy Steps
-
If you want to create stunning high-definition (HD) graphics for your projects, you need to set up your HD environment properly. HD environment is the combination of hardware, software, and settings that allow you to produce and display HD images and videos. In this article, we will show you how to set up your HD environment in 11 easy steps.
Choose the right monitor. The first step is to choose a monitor that supports HD resolution. HD resolution is typically 1920 x 1080 pixels or higher. You can check the resolution of your monitor by right-clicking on your desktop and selecting Display settings. Look for the option that says Resolution and choose the highest one available. If your monitor does not support HD resolution, you may need to upgrade to a new one.
-
Adjust the brightness and contrast. The next step is to adjust the brightness and contrast of your monitor to optimize the quality of your HD images and videos. You can do this by using the buttons or menu on your monitor or by using the Display settings on your computer. You want to make sure that the brightness and contrast are not too high or too low, as this can affect the colors and details of your HD content.
-
Calibrate the colors. The third step is to calibrate the colors of your monitor to ensure that they are accurate and consistent. You can do this by using a color calibration tool or software that comes with your monitor or by downloading a free online tool such as Calibrize. You want to make sure that the colors of your monitor match the colors of your HD content and that they are not too warm or too cool.
-
Select the right graphics card. The fourth step is to select a graphics card that can handle HD graphics. A graphics card is a device that processes and outputs the images and videos on your monitor. You can check the specifications of your graphics card by right-clicking on your desktop and selecting Device Manager. Look for the option that says Display adapters and click on it. You should see the name and model of your graphics card. If your graphics card does not support HD graphics, you may need to upgrade to a new one.
-
Update the drivers. The fifth step is to update the drivers of your graphics card to ensure that they are compatible with your HD content. Drivers are software that allow your graphics card to communicate with your computer and monitor. You can update the drivers of your graphics card by visiting the manufacturer's website and downloading the latest version. You should also check for Windows updates regularly, as they may include driver updates as well.
-
Choose the right software. The sixth step is to choose the right software for creating and editing your HD content. There are many software options available for different purposes, such as photo editing, video editing, animation, gaming, etc. You should choose the software that suits your needs and preferences and that supports HD resolution. Some examples of popular software for HD content are Photoshop, Premiere Pro, After Effects, Blender, Unity, etc.
-
Adjust the settings. The seventh step is to adjust the settings of your software to optimize the quality of your HD content. You should look for options that allow you to set the resolution, frame rate, bit rate, color depth, compression, etc. of your HD content. You should also look for options that allow you to preview and render your HD content in real time. You want to make sure that the settings are not too high or too low, as this can affect the performance and quality of your HD content.
-
Save and export. The eighth step is to save and export your HD content in a suitable format. You should choose a format that preserves the quality of your HD content and that is compatible with your intended platform or device. Some examples of common formats for HD content are JPEG, PNG, MP4, MOV, AVI, etc. You should also choose a file name and location that are easy to remember and access.
-
Transfer and upload. The ninth step is to transfer and upload your HD content to your desired platform or device. You can do this by using a USB cable, a memory card, a cloud service, an online platform, etc. You should make sure that the transfer and upload process is fast and secure and that it does not alter the quality of your HD content.
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/IronyOfNightmareDownload.md b/spaces/diacanFperku/AutoGPT/IronyOfNightmareDownload.md
deleted file mode 100644
index 47e93aecbafffc248080023556f2f9f2445fb84f..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/IronyOfNightmareDownload.md
+++ /dev/null
@@ -1,6 +0,0 @@
-