diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Clip Paint Studio Cost.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Clip Paint Studio Cost.md
deleted file mode 100644
index 8b05f0fdb4d52d44e9a8a2ef79f78401c09b6e15..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Clip Paint Studio Cost.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
How Much Does Clip Paint Studio Cost and Is It Worth It?
-
Clip Paint Studio (also known as Clip Studio Paint or Manga Studio) is a powerful and versatile software for creating digital art, comics, and animation. It offers a wide range of features and tools to help you bring your creative vision to life. But how much does it cost and is it worth the investment? Here are some things to consider before you buy Clip Paint Studio.
-
Clip Paint Studio Pricing Plans
-
Clip Paint Studio has two main versions: Pro and EX. The Pro version is designed for basic illustration and comic creation, while the EX version has more advanced features for professional comic and animation production. You can compare the features of each version on the official Clip Studio Paint website.
The pricing plans for Clip Paint Studio vary depending on the device and the payment method. You can choose to buy a perpetual license or a monthly subscription. Here are the current prices as of May 2023:
-
| Device | Pro License | EX License | Pro Subscription | EX Subscription |
| --- | --- | --- | --- | --- |
| Windows/Mac | $49.99 (one-time) | $219.00 (one-time) | $4.49/month or $24.99/year | $8.99/month or $71.99/year |
| iPad/iPhone | N/A | N/A | $4.49/month or $24.99/year | $8.99/month or $71.99/year |
| Android/Galaxy | N/A | N/A | $0.99/month or $9.99/year (first 6 months free) | $2.49/month or $24.99/year (first 6 months free) |
| Chromebook | N/A | N/A | $0.99/month or $9.99/year (first 3 months free) | $2.49/month or $24.99/year (first 3 months free) |
-
Note: These prices are subject to change and may vary by region. You can check the latest prices on the official website.
-
-
Clip Paint Studio Benefits and Drawbacks
-
-
Clip Paint Studio is a popular choice among artists and creators for many reasons. Here are some of the benefits and drawbacks of using Clip Paint Studio:
-
-
Benefits:
-
-
-
-
It has a user-friendly interface and customizable workspace.
-
-
It supports various file formats and devices.
-
-
It has a large and active community of users and resources.
-
-
It has a rich collection of brushes, pens, textures, materials, and assets.
-
-
It has powerful tools for drawing, coloring, editing, vectoring, and animating.
-
-
It has smart features for comic and manga creation, such as panel layout, perspective rulers, word balloons, and 3D models.
-
-
It has frequent updates and improvements based on user feedback.
-
-
It offers a free trial and a money-back guarantee.
-
-
-
-
Drawbacks:
-
-
-
-
It can be overwhelming for beginners or casual users.
-
-
It can be expensive for some users, especially the EX version.
-
-
It can have compatibility issues with some devices or software.
-
-
It can have bugs or glitches sometimes.
-
-
It can have limited support or documentation for some languages or regions.
-
-
-
-
Is Clip Paint Studio Worth It?
-
-
The answer to this question depends on your needs, preferences, budget, and goals as an artist or creator. Clip Paint Studio is great software for anyone who wants to create digital art, comics, or animation with high quality and efficiency. However, it may not be the best option for beginners or casual users who only need simple tools, or for those on a tight budget who find the EX version too expensive.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/CRACK ThunderSoft Folder Password Lock Pro 11.0.0 Multilingual Full Wi BEST.md b/spaces/1gistliPinn/ChatGPT4/Examples/CRACK ThunderSoft Folder Password Lock Pro 11.0.0 Multilingual Full Wi BEST.md
deleted file mode 100644
index 4b570f7d3ad00b4863d1e93e908df75497af621b..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/CRACK ThunderSoft Folder Password Lock Pro 11.0.0 Multilingual Full Wi BEST.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
If your computer has an older security system that you no longer want to install an. 0, Code : 110-8601. How to activate the function to display all letters in the password? How to active. CODE : 03-2944. PROGRAM2 - Unlock Folder Password Protection Using File Encryptor PRO 10. . CODE : 03-2944. 0 crack. 23-12-2017 06-20-2014.. .
-
Hidden files are not encrypted because the file name is not encrypted. Upon infection the cyber criminals will initially try to clear up the malicious files using the registry cleaner.
-
CRACK ThunderSoft Folder Password Lock Pro 11.0.0 Multilingual Full Wi
You can download and install the folder password unlock without the need for registration. However, it seems to have some performance issues and doesn't seem very secure. While it is now easier to download and install the said software, there are more complex and security related concerns one might be faced with.
-
To access your encrypted files you need a decryption program. The program is called CRACK ThunderSoft, and it is described as the most sophisticated ransomware on the market. The ransomware's origin is unknown; it may have been created by a hacker group in the past, since it appears to have been used by other groups. It is claimed to be the first ransomware that can perform a forensic investigation on a computer system by breaking through all kinds of layers.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Blockudoku A Relaxing and Stimulating Block Puzzle Game for Everyone - Indir and Experience.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Blockudoku A Relaxing and Stimulating Block Puzzle Game for Everyone - Indir and Experience.md
deleted file mode 100644
index b363723a097cf6c8d0bec06cecd5c035e7ba7f13..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Blockudoku A Relaxing and Stimulating Block Puzzle Game for Everyone - Indir and Experience.md
+++ /dev/null
@@ -1,147 +0,0 @@
-
-
Block Puzzle Indir: How to Download and Play the Best Block Puzzle Games
-
Do you love playing puzzle games that challenge your brain and keep you entertained? If so, you might want to try block puzzle games, which are a popular genre of puzzle games that combine elements of Tetris, Sudoku, and jigsaw puzzles. Block puzzle games are simple yet addictive games that require you to fit different shapes of blocks on a grid, either horizontally or vertically, to clear lines or squares. They are fun, relaxing, and rewarding games that can improve your cognitive skills, mental health, and mood.
But how can you download and play block puzzle games? What are the best block puzzle games available? And what are some tips and tricks to master them? In this article, we will answer these questions and more. We will explain what block puzzle games are, how they originated and evolved, what benefits they offer, what features they have, how to download and play them, and how to improve your skills. By the end of this article, you will be ready to enjoy block puzzle games like a champ!
-
What are Block Puzzle Games?
-
Block puzzle games are a type of puzzle game that involves moving and rotating different shapes of blocks on a grid or board. The goal is to fill up the grid with blocks without leaving any gaps or spaces. Depending on the game mode, you may have to clear horizontal or vertical lines, 3x3 squares, or other patterns by placing blocks of the same color or type. You may also have to deal with obstacles, power-ups, timers, or other challenges. The game ends when there is no more space for new blocks or when you reach a certain score or level.
-
The Origin and Evolution of Block Puzzle Games
-
Block puzzle games have a long history that dates back to the 19th century. One of the earliest examples of block puzzle games is Tangram, a Chinese game that consists of seven flat shapes that can be arranged into various figures. Another example is Pentominoes, a game invented by American mathematician Solomon W. Golomb in 1953, which uses 12 shapes made of five squares each.
However, the most influential and famous block puzzle game is Tetris, which was created by Soviet engineer Alexey Pajitnov in 1984. Tetris is a game that involves falling blocks of different shapes that can be rotated and moved sideways to fit into a rectangular grid. The game became a worldwide phenomenon and inspired many variations and spin-offs.
-
Since then, block puzzle games have evolved and diversified into many subgenres and formats. Some examples of modern block puzzle games are Blockudoku, Blockscapes, Woodoku, Block Blast Adventure Master, Blocks: Block Puzzle Games, and many more. These games offer different features, themes, modes, graphics, sounds, and challenges that appeal to different tastes and preferences.
-
The Benefits of Playing Block Puzzle Games
-
Playing block puzzle games is not only fun but also beneficial for your brain and well-being. Here are some of the benefits of playing block puzzle games:
-
-
They improve your cognitive skills such as memory, attention, concentration, logic, problem-solving, spatial perception, visual analysis, and synthesis.
-
They enhance your motor skills such as hand-eye coordination, fine motor control, reaction time, and dexterity.
-
They reduce stress and anxiety by providing a relaxing and satisfying activity that distracts you from negative thoughts and emotions.
-
They boost your mood and self-esteem by giving you a sense of achievement and reward when you complete a puzzle or beat a high score.
-
They stimulate your creativity and imagination by allowing you to create different shapes and patterns with blocks.
The Features of Block Puzzle Games
-
Block puzzle games have various features that make them appealing and enjoyable for players of all ages and backgrounds. Some of the common features of block puzzle games are:
-
-
They have simple and intuitive controls that are easy to learn and use. You can usually move and rotate the blocks with a swipe, a tap, or a drag on your screen.
-
They have colorful and attractive graphics that create a pleasant and stimulating visual experience. You can choose from different themes and styles that suit your mood and preference.
-
They have soothing and catchy sounds that enhance the gameplay and create a relaxing and immersive atmosphere. You can listen to different music and sound effects that match the theme and mood of the game.
-
They have various modes and levels that offer different challenges and goals. You can play classic mode, arcade mode, time mode, endless mode, or other modes that test your skills and strategy. You can also progress through different levels of difficulty and complexity that keep you engaged and motivated.
-
They have leaderboards and achievements that allow you to compete with yourself and others. You can track your progress and performance, compare your scores and rankings with other players, and unlock new achievements and rewards.
-
-
How to Download and Play Block Puzzle Games?
-
If you are interested in playing block puzzle games, you may wonder how to download and play them on your device. Here are some steps that you can follow to enjoy block puzzle games:
-
Choosing the Right Platform and Device
-
The first step is to choose the right platform and device for playing block puzzle games. Block puzzle games are available on various platforms such as web browsers, desktop computers, laptops, tablets, smartphones, consoles, or smart TVs. You can choose the platform that is most convenient and accessible for you.
-
However, some platforms may have more options and features than others. For example, web browsers may have limited graphics and sounds, while smartphones may have smaller screens and batteries. Therefore, you should consider the pros and cons of each platform before choosing one.
-
The most popular platform for playing block puzzle games is smartphones, as they are portable, versatile, and easy to use. You can download block puzzle games from various app stores such as Google Play Store, Apple App Store, Amazon Appstore, or Samsung Galaxy Store. You can also play block puzzle games online without downloading them by visiting websites such as Block Puzzle Online or Block Puzzle Games.
-
Finding the Best Block Puzzle Games
-
The next step is to find the best block puzzle games that suit your taste and preference. There are hundreds of block puzzle games available on different platforms, so you may feel overwhelmed by the choices. However, you can narrow down your options by using some criteria such as:
-
-
The genre and theme of the game. You can choose from classic block puzzle games, wood block puzzle games, jewel block puzzle games, candy block puzzle games, or other themes that appeal to you.
-
The features and modes of the game. You can choose from block puzzle games that have different features such as power-ups, obstacles, timers, hints, or other challenges. You can also choose from block puzzle games that have different modes such as classic mode, arcade mode, time mode, endless mode, or other modes that offer different goals.
-
The ratings and reviews of the game. You can check the ratings and reviews of block puzzle games on app stores or websites to see what other players think about them. You can look for block puzzle games that have high ratings, positive reviews, or large numbers of downloads.
-
The recommendations and suggestions of the game. You can ask for recommendations and suggestions from your friends, family, or other players who play block puzzle games. You can also look for recommendations and suggestions from online sources such as blogs, forums, social media, or YouTube videos.
-
-
Installing and Launching the Games
-
The third step is to install and launch the block puzzle games on your device. If you download block puzzle games from app stores or websites, you need to follow the instructions on how to install them on your device. You may need to grant some permissions or accept some terms and conditions before installing them.
-
If you play block puzzle games online without downloading them, you need to visit the websites that host them on your web browser. You may need to enable some settings or plugins such as Flash Player or JavaScript before playing them.
-
Once you install or access the block puzzle games on your device, you need to launch them by tapping or clicking on their icons or links. You may need to wait for some loading time before the game starts.
-
Learning the Rules and Controls
-
The fourth step is to learn the rules and controls of the block puzzle games. Each block puzzle game may have different rules and controls, so you need to read the instructions or tutorials before playing. You can usually find them on the main menu, the settings, or the help section of the game, or look for online guides or videos that explain how to play. The basic rules and controls of block puzzle games are listed below (a small sketch of the line-clearing logic follows the list):
- You need to move and rotate the blocks that appear on the top or the side of the screen to fit them into the grid or board.
- You can use your finger, your mouse, or your keyboard: swipe, tap, drag, click, or press the arrow keys to control the blocks.
- You need to fill up the grid with blocks without leaving any gaps or spaces. You can place the blocks horizontally or vertically, depending on the game mode.
- You need to clear lines or squares by placing blocks of the same color or type. You can clear horizontal or vertical lines, 3x3 squares, or other patterns, depending on the game mode.
- You need to avoid filling up the grid with blocks that cannot be cleared. If there is no more space for new blocks, the game is over.
- You score points by clearing lines or squares. The more lines or squares you clear at once, the more points you get. You may also get bonus points for clearing special blocks, using power-ups, or completing achievements.
- You need to reach a certain score or level to win the game or advance to the next stage. You may also have a time limit or a move limit to complete the game or stage.
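To make the clearing rule concrete, here is a minimal Python sketch of how a game might detect and clear completed rows and columns on a square grid. It only illustrates the rule described above and is not code from any particular block puzzle game; the grid size, function name, and scoring formula are assumptions made up for the example.

```python
# Illustrative line-clearing check for a block puzzle grid (names and sizes are made up).
# A cell holds 1 if occupied and 0 if empty; full rows and columns are cleared together.

def clear_full_lines(grid):
    size = len(grid)
    full_rows = [r for r in range(size) if all(grid[r][c] for c in range(size))]
    full_cols = [c for c in range(size) if all(grid[r][c] for r in range(size))]

    for r in full_rows:          # empty every cell of each completed row
        for c in range(size):
            grid[r][c] = 0
    for c in full_cols:          # empty every cell of each completed column
        for r in range(size):
            grid[r][c] = 0

    lines = len(full_rows) + len(full_cols)
    return 10 * lines * lines    # clearing several lines at once scores more than one by one


# Example: a 4x4 grid whose top row is complete scores 10 points.
grid = [
    [1, 1, 1, 1],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]
print(clear_full_lines(grid))  # -> 10
```

A real game would also check 3x3 squares or other patterns, depending on the mode, but the idea is the same: scan for completed shapes, empty them, and reward bigger simultaneous clears.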
Applying Some Tips and Tricks
-
The fifth and final step is to apply some tips and tricks to improve your skills and enjoyment of block puzzle games. Here are some tips and tricks that you can use:
-
-
Plan ahead and think strategically. Before placing a block, look at the grid and see where it would fit best. Try to create as many lines or squares as possible with each block. Avoid placing blocks randomly or impulsively.
-
Use power-ups wisely. Some block puzzle games have power-ups that can help you clear more blocks, change the shape or color of blocks, remove obstacles, or extend time. Use them when you are stuck or when you want to boost your score.
-
Practice regularly and challenge yourself. The more you play block puzzle games, the better you will get at them. Practice different modes and levels to improve your speed, accuracy, and strategy. Challenge yourself by playing harder levels, setting higher goals, or competing with other players.
-
Have fun and relax. Block puzzle games are meant to be fun and relaxing, not stressful or frustrating. Don't worry too much about your score or performance. Enjoy the process of solving puzzles and creating shapes with blocks. Take breaks when you feel tired or bored.
-
-
Conclusion
-
Block puzzle games are a great way to spend your free time and exercise your brain. They are simple yet addictive games that require you to fit different shapes of blocks on a grid, either horizontally or vertically, to clear lines or squares. They have various benefits, features, modes, and challenges that make them appealing and enjoyable for players of all ages and backgrounds.
-
Summary of the Main Points
-
In this article, we have covered:
-
-
What are block puzzle games and how they originated and evolved.
-
What benefits they offer for your cognitive skills, motor skills, stress relief, mood enhancement, and creativity.
-
What features they have such as graphics, sounds, modes, levels, leaderboards, and achievements.
-
How to download and play them on different platforms and devices.
-
How to learn the rules and controls of different block puzzle games.
-
How to apply some tips and tricks to improve your skills and enjoyment of block puzzle games.
-
-
Call to Action
-
If you are interested in playing block puzzle games, don't hesitate to download them from app stores or websites today. You can also play them online without downloading them by visiting websites such as Block Puzzle Online or Block Puzzle Games. You will find a wide range of block puzzle games that suit your taste and preference.
-
Block puzzle games are fun, relaxing, and rewarding games that can improve your brain and well-being. They are easy to learn and play but hard to master and put down. They are perfect for killing time, relieving stress, boosting mood, stimulating creativity, and challenging yourself.
-
So what are you waiting for? Download a block puzzle game now and start playing!
-
FAQs
-
Here are some frequently asked questions (FAQs) about block puzzle games:
-
| Question | Answer |
| --- | --- |
| What is the difference between block puzzle games and Tetris? | Tetris is a specific block puzzle game that involves falling blocks of four squares each that can be rotated and moved sideways to fit into a rectangular grid. Block puzzle games are a broader genre of puzzle games that involve different shapes of blocks that can be moved and rotated to fit into various grids or boards. |
| What are some of the best block puzzle games for Android and iOS? | Some of the best block puzzle games for Android and iOS are Blockudoku, Blockscapes, Woodoku, Block Blast Adventure Master, Blocks: Block Puzzle Games, and many more. You can download them from Google Play Store or Apple App Store. |
| How can I play block puzzle games online without downloading them? | You can play block puzzle games online without downloading them by visiting websites such as Block Puzzle Online or Block Puzzle Games. You can choose from different block puzzle games and play them on your web browser. |
| How can I improve my skills and strategy in block puzzle games? | You can improve your skills and strategy in block puzzle games by practicing regularly, challenging yourself, planning ahead, thinking strategically, using power-ups wisely, and having fun. |
| Are block puzzle games suitable for children? | Yes, block puzzle games are suitable for children as they are simple, fun, and educational. They can help children develop their cognitive skills, motor skills, creativity, and concentration. However, parents should supervise their children's screen time and game choices. |
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dark Bitcoin Miner Pro V7.0 and Join the Crypto Revolution..md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dark Bitcoin Miner Pro V7.0 and Join the Crypto Revolution..md
deleted file mode 100644
index 74f8475ea43f93c76b904d4fd100d934c83594bc..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dark Bitcoin Miner Pro V7.0 and Join the Crypto Revolution..md
+++ /dev/null
@@ -1,96 +0,0 @@
-
-
Dark Bitcoin Miner Pro V7.0 Free Download: What You Need to Know
-
Bitcoin mining is a process of creating new bitcoins by solving complex mathematical problems using specialized hardware and software.
-
There are many types of bitcoin mining software available on the market, but not all of them are legitimate, safe, or effective.
One of the most popular and controversial bitcoin mining software is dark bitcoin miner pro v7.0, which claims to be the fastest and most efficient bitcoin miner ever created.
But what is dark bitcoin miner pro v7.0, why is it so popular, and what are the risks of downloading it?
-
In this article, we will answer these questions and more, and provide you with some alternatives to dark bitcoin miner pro v7.0 that are safer and more reliable.
-
What is Dark Bitcoin Miner Pro V7.0?
-
Dark bitcoin miner pro v7.0 is a bitcoin mining software that claims to be able to mine bitcoins using any device, such as CPU, GPU, ASIC, or FPGA.
-
It also claims to be compatible with various algorithms, such as SHA-256, Scrypt, X11, Ethash, and Equihash, and to support multiple cryptocurrencies, such as Bitcoin, Litecoin, Dash, Ethereum, and Zcash.
-
How Does Dark Bitcoin Miner Pro V7.0 Work?
-
Dark bitcoin miner pro v7.0 works by using the device's processing power to solve complex mathematical problems that verify transactions on the blockchain.
-
For every problem solved, the miner receives a reward in the form of newly created bitcoins or other cryptocurrencies.
-
The more processing power the device has, the faster and more efficient the mining process is.
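In real bitcoin mining, the "complex mathematical problem" is a hash puzzle: miners repeatedly hash a block header with different nonces until the result falls below a difficulty target. The toy Python sketch below illustrates only that general idea; it is not part of dark bitcoin miner pro v7.0, and the block data, difficulty, and function name are invented for the example.

```python
# Toy proof-of-work: search for a nonce whose SHA-256 hash starts with `difficulty` zero hex digits.
# Real bitcoin mining double-hashes an 80-byte block header against a far harder target,
# so this only illustrates why more hashing power finds a valid block sooner.

import hashlib

def mine(block_data: str, difficulty: int = 4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest   # "problem solved": a real miner would now claim the block reward
        nonce += 1

nonce, digest = mine("example block header")
print(f"nonce={nonce} hash={digest}")
```

Because each attempt is an independent hash, a device that can compute more hashes per second simply reaches a winning nonce sooner, which is why processing power matters so much.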
-
What are the Features of Dark Bitcoin Miner Pro V7.0?
-
Some of the features of dark bitcoin miner pro v7.0 are:
-
-
-
High speed: Dark bitcoin miner pro v7.0 claims to be able to mine bitcoins at a rate of up to 1 BTC per day, depending on the device and the algorithm used.
-
Low power consumption: Dark bitcoin miner pro v7.0 claims to be able to mine bitcoins using only 10% of the device's power consumption, saving energy and money.
-
Compatibility: Dark bitcoin miner pro v7.0 claims to be compatible with any device that has a processor, such as laptops, desktops, smartphones, tablets, or even smart TVs.
-
Versatility: Dark bitcoin miner pro v7.0 claims to be able to mine any cryptocurrency that uses any algorithm, such as Bitcoin, Litecoin, Dash, Ethereum, or Zcash.
-
User-friendly: Dark bitcoin miner pro v7.0 claims to be easy to install and use, with a simple interface and automatic settings.
-
-
Why is Dark Bitcoin Miner Pro V7.0 Popular?
-
Dark bitcoin miner pro v7.0 is popular because it appeals to many people who want to mine bitcoins without investing in expensive and complicated hardware or software.
-
Many beginners and enthusiasts who are interested in bitcoin mining are attracted by the promises of dark bitcoin miner pro v7.0, such as high speed, low power consumption, compatibility, versatility, and user-friendliness.
-
They also believe that dark bitcoin miner pro v7.0 is a free and easy way to earn bitcoins without any risk or effort.
-
How to Download Dark Bitcoin Miner Pro V7.0?
-
Dark bitcoin miner pro v7.0 is not available on any official or reputable website or platform.
-
The only way to download dark bitcoin miner pro v7.0 is through unofficial and unverified sources, such as file-sharing websites, GitHub repositories, or Telegram channels.
-
These sources are often unreliable and unsafe, as they may contain viruses, malware, spyware, or other harmful programs that can infect your device or steal your data.
-
How to Install and Use Dark Bitcoin Miner Pro V7.0?
-
If you decide to download dark bitcoin miner pro v7.0 from one of these sources, you will need to follow these steps to install and use it:
-
-
Disable your antivirus program: Dark bitcoin miner pro v7.0 is detected as a malicious program by most antivirus programs, so you will need to disable your antivirus program before downloading or running it.
-
Extract the rar file: Dark bitcoin miner pro v7.0 is usually compressed in a rar file that you will need to extract using a program like WinRAR or 7-Zip.
-
Run the exe file: After extracting the rar file, you will find an exe file that you will need to run as administrator by right-clicking on it and selecting "Run as administrator".
-
Configure the settings: After running the exe file, you will see a window that will allow you to configure the settings of dark bitcoin miner pro v7.0, such as the algorithm, the cryptocurrency, the wallet address, the mining pool, and the mining speed.
-
Start mining: After configuring the settings, you will need to click on the "Start" button to start mining bitcoins or other cryptocurrencies with dark bitcoin miner pro v7.0.
-
-
What are the Risks of Downloading Dark Bitcoin Miner Pro V7.0?
-
Downloading dark bitcoin miner pro v7.0 is not only illegal, but also very risky.
-
There are many dangers of downloading dark bitcoin miner pro v7.0, such as:
-
How to Detect and Remove Malware from Dark Bitcoin Miner Pro V7.0?
-
One of the most common and serious dangers of downloading dark bitcoin miner pro v7.0 is malware infection.
-
Malware is a malicious software that can harm your device or data in various ways, such as deleting or encrypting your files, stealing your passwords or personal information, spying on your online activities, or hijacking your resources.
-
Dark bitcoin miner pro v7.0 may contain malware that can infect your device when you download or run it, or even when you extract the rar file.
-
To detect and remove malware from dark bitcoin miner pro v7.0, you will need to follow these steps:
-
-
Use a malware scanner: A malware scanner is a program that can scan your device for any signs of malware infection, such as suspicious files, processes, or registry entries. You can use a reputable and reliable malware scanner, such as Malwarebytes, to scan your device and remove any malware that it finds.
-
Delete suspicious files: If you suspect that dark bitcoin miner pro v7.0 has infected your device with malware, you should delete any suspicious files that are related to it, such as the rar file, the exe file, or any other files that have been created or modified by it.
-
Restore your system: If deleting suspicious files does not solve the problem, you may need to restore your system to a previous state before you downloaded or ran dark bitcoin miner pro v7.0. You can use a system restore point or a backup to restore your system and undo any changes that dark bitcoin miner pro v7.0 may have made.
-
-
How to Avoid Legal Issues from Using Dark Bitcoin Miner Pro V7.0?
-
Another danger of downloading dark bitcoin miner pro v7.0 is legal issues.
-
Legal issues are the problems that may arise from breaking the law by using dark bitcoin miner pro v7.0, such as violating the intellectual property rights of the original developers of the software, infringing the terms and conditions of the mining pools or platforms that you use, or engaging in illegal or fraudulent activities with the cryptocurrencies that you mine.
-
To avoid legal issues from using dark bitcoin miner pro v7.0, you will need to follow these precautions:
-
-
Check the local laws: Before downloading or using dark bitcoin miner pro v7.0, you should check the local laws of your country or region regarding bitcoin mining and cryptocurrency transactions. Some countries or regions may have strict regulations or prohibitions on these activities, and you may face legal consequences if you violate them.
-
Use a VPN: A VPN is a virtual private network that can hide your IP address and encrypt your online traffic, making it harder for anyone to track or monitor your online activities. You can use a VPN to protect your privacy and anonymity when using dark bitcoin miner pro v7.0, and to bypass any geo-restrictions or censorship that may prevent you from accessing certain websites or platforms.
-
Do not disclose personal information: When using dark bitcoin miner pro v7.0, you should not disclose any personal information that can identify you or link you to your activities, such as your name, email address, phone number, bank account number, or social media accounts. You should also avoid using the same wallet address for different transactions, and use a mixer service to anonymize your transactions.
-
-
What are the Alternatives to Dark Bitcoin Miner Pro V7.0?
-
If you want to mine bitcoins or other cryptocurrencies without risking your device, data, or reputation, you should avoid downloading dark bitcoin miner pro v7.0 and look for some alternatives that are safer and more reliable.
-
Some of the alternatives to dark bitcoin miner pro v7.0 are:
-
How to Choose the Best Alternative to Dark Bitcoin Miner Pro V7.0?
-
To choose the best alternative to dark bitcoin miner pro v7.0, you should consider some criteria that can help you evaluate the quality and suitability of the software, such as:
-
-
Security: The software should be secure and free from any malware, spyware, or viruses that can harm your device or data.
-
Performance: The software should be fast and efficient, and able to mine bitcoins or other cryptocurrencies at a reasonable rate and with minimal power consumption.
-
Cost: The software should be affordable and transparent, and not charge any hidden fees or commissions for using it.
-
Reputation: The software should be reputable and trustworthy, and have positive reviews and feedback from other users and experts.
-
-
How to Compare the Alternatives to Dark Bitcoin Miner Pro V7.0?
-
To compare the alternatives to dark bitcoin miner pro v7.0 based on the criteria mentioned above, you can use a table like this one:
| Software | Security | Performance | Cost | Reputation |
| -------- | -------- | ----------- | ---- | ---------- |
| CGMiner | High: Open-source and widely used by miners. | High: Supports various devices and algorithms. | Low: Free to download and use. | High: One of the oldest and most popular mining programs. |
| NiceHash | Medium: Has been hacked in the past, but has improved its security measures. | Medium: Depends on the market demand and supply of hashing power. | Medium: Charges a small fee for using its service. | Medium: Has a large user base and good customer support. |
| Genesis Mining | High: Uses advanced encryption and security protocols. | Low: Limited by the contracts and plans available. | High: Requires an upfront payment and a maintenance fee. | High: One of the leading cloud mining providers with a good reputation. |
| Slush Pool | High: Uses a secure connection and a unique voting system. | Medium: Depends on the pool size and the difficulty level. | Low: Charges a 2% fee for using its service. | High: The first and one of the largest mining pools in the world. |
Conclusion
-
In conclusion, dark bitcoin miner pro v7.0 is a bitcoin mining software that claims to be able to mine bitcoins using any device, algorithm, or cryptocurrency.
-
However, dark bitcoin miner pro v7.0 is also illegal, risky, and unreliable, as it may contain malware, steal your data, damage your device, or cause legal issues.
-
Therefore, you should avoid downloading dark bitcoin miner pro v7.0 and look for some alternatives that are safer and more reliable, such as legitimate mining software, cloud mining services, or mining pools.
-
FAQs
-
Here are some frequently asked questions related to the topic of this article:
-
-
Is dark bitcoin miner pro v7.0 a scam?
-
Yes, dark bitcoin miner pro v7.0 is a scam that tries to lure unsuspecting users into downloading malware or giving away their personal information.
-
How much can I earn with dark bitcoin miner pro v7.0?
-
You cannot earn anything with dark bitcoin miner pro v7.0, as it does not actually mine bitcoins or other cryptocurrencies.
-
Is dark bitcoin miner pro v7.0 safe to use?
-
No, dark bitcoin miner pro v7.0 is not safe to use, as it may infect your device with malware, steal your data, damage your device, or cause legal issues.
-
What are the best devices for dark bitcoin miner pro v7.0?
-
There are no best devices for dark bitcoin miner pro v7.0, as it does not work on any device.
-
How can I contact the developers of dark bitcoin miner pro v7.0?
-
You cannot contact the developers of dark bitcoin miner pro v7.0, as they are anonymous and untraceable.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download GTA 3 1.1 Ultimate Trainer V3 5 and Unlock All Features in Grand Theft Auto III.md b/spaces/1phancelerku/anime-remove-background/Download GTA 3 1.1 Ultimate Trainer V3 5 and Unlock All Features in Grand Theft Auto III.md
deleted file mode 100644
index f4568b56335d64d97ac783e1a751d8adba7416fc..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download GTA 3 1.1 Ultimate Trainer V3 5 and Unlock All Features in Grand Theft Auto III.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
How to Download GTA 3 1.1 Ultimate Trainer v3 5
-
If you are a fan of Grand Theft Auto III, you may have heard of GTA 3 1.1 Ultimate Trainer v3 5, a powerful tool that allows you to customize and enhance your gameplay experience. With this trainer, you can access dozens of cheats and options that will make your game more fun and exciting. In this article, we will show you how to download, install, and use GTA 3 1.1 Ultimate Trainer v3 5.
-
What is GTA 3 1.1 Ultimate Trainer v3 5?
-
GTA 3 1.1 Ultimate Trainer v3 5 is a mod for Grand Theft Auto III that adds a menu with various cheats and options that you can activate or deactivate at any time during the game. You can use this trainer to change your character's appearance, spawn vehicles and weapons, manipulate the weather and time, increase your health and money, and much more.
GameFront is a website that offers free downloads of mods, patches, demos, and other files for various games. You can download GTA 3 1.1 Ultimate Trainer v3 5 from its download page on GameFront.
The file size is 222 KB and the download speed depends on your internet connection. To download the file, you need to click on the "Download Now" button and wait for the countdown to finish. Then, you need to click on the "Download" button again and save the file to your PC.
MegaGames is another website that offers free downloads of mods, patches, trainers, and other files for various games. You can download GTA 3 1.1 Ultimate Trainer v3 5 from its download page on MegaGames.
The file size is 223 KB and the download speed depends on your internet connection. To download the file, you need to click on the "Download" button and save the file to your PC.
-
-
How to Install GTA 3 1.1 Ultimate Trainer v3 5?
-
After you have downloaded GTA 3 1.1 Ultimate Trainer v3 5 from one of the websites above, you need to install it on your PC. The installation process is very simple and straightforward. Here are the steps you need to follow:
-
Extract the files to your GTA 3 folder
-
The file you have downloaded is a ZIP file that contains several files and folders. You need to extract them to your GTA 3 folder, which is usually located at C:\Program Files\Rockstar Games\GTAIII. To do this, you need to use WinRAR or any other program that can extract ZIP files. Right-click on the ZIP file and select "Extract Here" or "Extract to GTA_3_Ultimate_Trainer_v3_5". This will create a new folder with the same name as the ZIP file. Open this folder and copy all the files and folders inside it to your GTA 3 folder. You may need to overwrite some existing files, so make sure you have a backup of your original files in case something goes wrong.
-
Run the trainer and select the options you want
-
After you have copied the files to your GTA 3 folder, you can run the trainer by double-clicking on the GTA_III_Ultimate_Trainer_v35.exe file. This will open a window with a menu that shows all the cheats and options available in the trainer. You can use your mouse or keyboard to navigate through the menu and select the options you want. You can also customize the hotkeys for each option by clicking on the "Hotkeys" button at the bottom of the window. You can save your settings by clicking on the "Save Settings" button at the top of the window.
-
How to Use GTA 3 1.1 Ultimate Trainer v3 5?
-
Once you have installed and run GTA 3 1.1 Ultimate Trainer v3 5, you can use it to enhance your gameplay experience in Grand Theft Auto III. Here are some tips on how to use it:
-
Press F12 to activate the trainer
-
The trainer works in the background while you play GTA 3. To activate it, you need to press F12 on your keyboard. This will bring up a small window at the top left corner of your screen that shows the status of the trainer and some information about your game. You can press F12 again to hide this window.
-
Use the keyboard shortcuts to toggle the cheats
-
To use any of the cheats or options in the trainer, you need to press the corresponding keyboard shortcut that you have assigned in the menu. For example, if you want to activate infinite health, you need to press H on your keyboard. You will hear a sound and see a message on your screen that confirms that the cheat is activated or deactivated. You can also see which cheats are active by looking at the small window that appears when you press F12.
Tips and Tricks for GTA 3 1.1 Ultimate Trainer v3 5
-
GTA 3 1.1 Ultimate Trainer v3 5 is a great tool that can make your game more enjoyable and easier, but it also comes with some risks and limitations. Here are some tips and tricks that can help you use it safely and effectively:
-
Save your game before using the trainer
-
Using the trainer can sometimes cause glitches or crashes in your game, especially if you use too many cheats at once or change some settings that are not compatible with your game version. To avoid losing your progress or corrupting your save files, you should always save your game before using the trainer. You can use the savegame editor in the trainer to create multiple save slots and backup your saves.
-
Be careful with some cheats that may cause glitches or crashes
-
Some of the cheats and options in the trainer may have unintended consequences or side effects that can affect your game performance or stability. For example, using the flying cars cheat may cause your car to fly out of the map or get stuck in the air. Using the super speed cheat may make your game run too fast or slow down. Using the no police cheat may prevent you from completing some missions that require you to get a wanted level. You should use these cheats with caution and turn them off when you don't need them.
-
Conclusion
-
GTA 3 1.1 Ultimate Trainer v3 5 is a mod that adds a menu with various cheats and options that you can use to customize and enhance your gameplay experience in Grand Theft Auto III. You can download it from various websites that host mods for GTA 3, and install it by extracting the files to your GTA 3 folder. You can activate it by pressing F12 and use the keyboard shortcuts to toggle the cheats and options. You can also use the menu to change your hotkeys, edit your save files, take screenshots, teleport, spawn vehicles and weapons, change your skin, weather, stats, garage, and missions. However, you should also be careful with some cheats that may cause glitches or crashes in your game, and save your game before using the trainer.
-
FAQs
-
Here are some of the frequently asked questions about GTA 3 1.1 Ultimate Trainer v3 5:
-
Q: Does GTA 3 1.1 Ultimate Trainer v3 5 work with other mods?
-
A: GTA 3 1.1 Ultimate Trainer v3 5 may work with some other mods that do not modify the same files or features as the trainer. However, it may also cause conflicts or compatibility issues with some mods that do modify the same files or features as the trainer. You should always check the readme files or descriptions of the mods you want to use with the trainer, and make sure they are compatible with each other.
-
Q: Does GTA 3 1.1 Ultimate Trainer v3 5 work with Steam version of GTA 3?
-
A: GTA 3 1.1 Ultimate Trainer v3 5 works with Steam version of GTA 3, but you need to downgrade your game to version 1.1 first. The Steam version of GTA 3 is version 1.0, which is not compatible with the trainer. You can use a patch or a tool to downgrade your game to version 1.1, which you can find online.
-
Q: How do I uninstall GTA 3 1.1 Ultimate Trainer v3 5?
-
A: To uninstall GTA 3 1.1 Ultimate Trainer v3 5, you need to delete all the files and folders that you have copied to your GTA 3 folder when you installed the trainer. You may also need to restore your original files if you have overwritten them with the trainer files.
-
Q: Where can I find more information about GTA 3 1.1 Ultimate Trainer v3 5?
-
A: You can find more information about GTA 3 1.1 Ultimate Trainer v3 5 on the websites where you downloaded it from, or on the forums or communities dedicated to GTA mods. You can also contact the author of the trainer, LithJoe, if you have any questions or feedback.
-
Q: Is GTA 3 1.1 Ultimate Trainer v3 5 safe to use?
-
A: GTA 3 1.1 Ultimate Trainer v3 5 is safe to use as long as you download it from a trusted source and scan it for viruses or malware before installing it on your PC. However, you should also be aware that using any mod or trainer may affect your game performance or stability, and that using some cheats or options may be considered cheating or unfair by some players or online servers. You should use the trainer at your own risk and discretion, and respect the rules and preferences of other players and servers.
-
I hope this article has helped you learn how to download, install, and use GTA 3 1.1 Ultimate Trainer v3 5. If you have any comments or questions, feel free to leave them below. Happy gaming!
-
-
\ No newline at end of file
diff --git a/spaces/3mrology/Chameleon_Text2Img_Generation_Demo/README.md b/spaces/3mrology/Chameleon_Text2Img_Generation_Demo/README.md
deleted file mode 100644
index 36a1c0a304d7b86460fc5494e0eb129324aa2687..0000000000000000000000000000000000000000
--- a/spaces/3mrology/Chameleon_Text2Img_Generation_Demo/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Chameleon Text2Image Demo
-emoji: 🦎
-colorFrom: green
-colorTo: purple
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: huggingface-projects/magic-diffusion
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/7hao/bingo/tests/parse.ts b/spaces/7hao/bingo/tests/parse.ts
deleted file mode 100644
index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/tests/parse.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import { promises as fs } from 'fs'
-import { join } from 'path'
-import { parseHeadersFromCurl } from '@/lib/utils'
-
-(async () => {
- const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8')
- const headers = parseHeadersFromCurl(content)
- console.log(headers)
-
- const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8')
- const cmdHeaders = parseHeadersFromCurl(cmdContent)
- console.log(cmdHeaders)
-})()
diff --git a/spaces/801artistry/RVC801/mdx.py b/spaces/801artistry/RVC801/mdx.py
deleted file mode 100644
index 4cc7c08b37bc371294f2f82b3382424a5455b7c2..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/mdx.py
+++ /dev/null
@@ -1,228 +0,0 @@
-import torch
-import onnxruntime as ort
-from tqdm import tqdm
-import warnings
-import numpy as np
-import hashlib
-import queue
-import threading
-
-warnings.filterwarnings("ignore")
-
-class MDX_Model:
- def __init__(self, device, dim_f, dim_t, n_fft, hop=1024, stem_name=None, compensation=1.000):
- self.dim_f = dim_f
- self.dim_t = dim_t
- self.dim_c = 4
- self.n_fft = n_fft
- self.hop = hop
- self.stem_name = stem_name
- self.compensation = compensation
-
- self.n_bins = self.n_fft//2+1
- self.chunk_size = hop * (self.dim_t-1)
- self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(device)
-
- out_c = self.dim_c
-
- self.freq_pad = torch.zeros([1, out_c, self.n_bins-self.dim_f, self.dim_t]).to(device)
-
- def stft(self, x):
- x = x.reshape([-1, self.chunk_size])
- x = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True, return_complex=True)
- x = torch.view_as_real(x)
- x = x.permute([0,3,1,2])
- x = x.reshape([-1,2,2,self.n_bins,self.dim_t]).reshape([-1,4,self.n_bins,self.dim_t])
- return x[:,:,:self.dim_f]
-
- def istft(self, x, freq_pad=None):
- freq_pad = self.freq_pad.repeat([x.shape[0],1,1,1]) if freq_pad is None else freq_pad
- x = torch.cat([x, freq_pad], -2)
- # c = 4*2 if self.target_name=='*' else 2
- x = x.reshape([-1,2,2,self.n_bins,self.dim_t]).reshape([-1,2,self.n_bins,self.dim_t])
- x = x.permute([0,2,3,1])
- x = x.contiguous()
- x = torch.view_as_complex(x)
- x = torch.istft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True)
- return x.reshape([-1,2,self.chunk_size])
-
-
-class MDX:
-
- DEFAULT_SR = 44100
- # Unit: seconds
- DEFAULT_CHUNK_SIZE = 0 * DEFAULT_SR
- DEFAULT_MARGIN_SIZE = 1 * DEFAULT_SR
-
- DEFAULT_PROCESSOR = 0
-
- def __init__(self, model_path:str, params:MDX_Model, processor=DEFAULT_PROCESSOR):
-
- # Set the device and the provider (CPU or CUDA)
- self.device = torch.device(f'cuda:{processor}') if processor >= 0 else torch.device('cpu')
- self.provider = ['CUDAExecutionProvider'] if processor >= 0 else ['CPUExecutionProvider']
-
- self.model = params
-
- # Load the ONNX model using ONNX Runtime
- self.ort = ort.InferenceSession(model_path, providers=self.provider)
- # Preload the model for faster performance
- self.ort.run(None, {'input':torch.rand(1, 4, params.dim_f, params.dim_t).numpy()})
- self.process = lambda spec:self.ort.run(None, {'input': spec.cpu().numpy()})[0]
-
- self.prog = None
-
- @staticmethod
- def get_hash(model_path):
- try:
- with open(model_path, 'rb') as f:
- f.seek(- 10000 * 1024, 2)
- model_hash = hashlib.md5(f.read()).hexdigest()
- except:
- model_hash = hashlib.md5(open(model_path,'rb').read()).hexdigest()
-
- return model_hash
-
- @staticmethod
- def segment(wave, combine=True, chunk_size=DEFAULT_CHUNK_SIZE, margin_size=DEFAULT_MARGIN_SIZE):
- """
- Segment or join segmented wave array
-
- Args:
- wave: (np.array) Wave array to be segmented or joined
- combine: (bool) If True, combines segmented wave array. If False, segments wave array.
- chunk_size: (int) Size of each segment (in samples)
- margin_size: (int) Size of margin between segments (in samples)
-
- Returns:
- numpy array: Segmented or joined wave array
- """
-
- if combine:
- processed_wave = None # Initializing as None instead of [] for later numpy array concatenation
- for segment_count, segment in enumerate(wave):
- start = 0 if segment_count == 0 else margin_size
- end = None if segment_count == len(wave)-1 else -margin_size
- if margin_size == 0:
- end = None
- if processed_wave is None: # Create array for first segment
- processed_wave = segment[:, start:end]
- else: # Concatenate to existing array for subsequent segments
- processed_wave = np.concatenate((processed_wave, segment[:, start:end]), axis=-1)
-
- else:
- processed_wave = []
- sample_count = wave.shape[-1]
-
- if chunk_size <= 0 or chunk_size > sample_count:
- chunk_size = sample_count
-
- if margin_size > chunk_size:
- margin_size = chunk_size
-
- for segment_count, skip in enumerate(range(0, sample_count, chunk_size)):
-
- margin = 0 if segment_count == 0 else margin_size
- end = min(skip+chunk_size+margin_size, sample_count)
- start = skip-margin
-
- cut = wave[:,start:end].copy()
- processed_wave.append(cut)
-
- if end == sample_count:
- break
-
- return processed_wave
-
- def pad_wave(self, wave):
- """
- Pad the wave array to match the required chunk size
-
- Args:
- wave: (np.array) Wave array to be padded
-
- Returns:
- tuple: (padded_wave, pad, trim)
- - padded_wave: Padded wave array
- - pad: Number of samples that were padded
- - trim: Number of samples that were trimmed
- """
- n_sample = wave.shape[1]
- trim = self.model.n_fft//2
- gen_size = self.model.chunk_size-2*trim
- pad = gen_size - n_sample%gen_size
-
- # Padded wave
- wave_p = np.concatenate((np.zeros((2,trim)), wave, np.zeros((2,pad)), np.zeros((2,trim))), 1)
-
- mix_waves = []
- for i in range(0, n_sample+pad, gen_size):
- waves = np.array(wave_p[:, i:i+self.model.chunk_size])
- mix_waves.append(waves)
-
- mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(self.device)
-
- return mix_waves, pad, trim
-
- def _process_wave(self, mix_waves, trim, pad, q:queue.Queue, _id:int):
- """
- Process each wave segment in a multi-threaded environment
-
- Args:
- mix_waves: (torch.Tensor) Wave segments to be processed
- trim: (int) Number of samples trimmed during padding
- pad: (int) Number of samples padded during padding
- q: (queue.Queue) Queue to hold the processed wave segments
- _id: (int) Identifier of the processed wave segment
-
- Returns:
- numpy array: Processed wave segment
- """
- mix_waves = mix_waves.split(1)
- with torch.no_grad():
- pw = []
- for mix_wave in mix_waves:
- self.prog.update()
- spec = self.model.stft(mix_wave)
- processed_spec = torch.tensor(self.process(spec))
- processed_wav = self.model.istft(processed_spec.to(self.device))
- processed_wav = processed_wav[:,:,trim:-trim].transpose(0,1).reshape(2, -1).cpu().numpy()
- pw.append(processed_wav)
- processed_signal = np.concatenate(pw, axis=-1)[:, :-pad]
- q.put({_id:processed_signal})
- return processed_signal
-
- def process_wave(self, wave:np.array, mt_threads=1):
- """
- Process the wave array in a multi-threaded environment
-
- Args:
- wave: (np.array) Wave array to be processed
- mt_threads: (int) Number of threads to be used for processing
-
- Returns:
- numpy array: Processed wave array
- """
- self.prog = tqdm(total=0)
- chunk = wave.shape[-1]//mt_threads
- waves = self.segment(wave, False, chunk)
-
- # Create a queue to hold the processed wave segments
- q = queue.Queue()
- threads = []
- for c, batch in enumerate(waves):
- mix_waves, pad, trim = self.pad_wave(batch)
- self.prog.total = len(mix_waves)*mt_threads
- thread = threading.Thread(target=self._process_wave, args=(mix_waves, trim, pad, q, c))
- thread.start()
- threads.append(thread)
- for thread in threads:
- thread.join()
- self.prog.close()
-
- processed_batches = []
- while not q.empty():
- processed_batches.append(q.get())
- processed_batches = [list(wave.values())[0] for wave in sorted(processed_batches, key=lambda d: list(d.keys())[0])]
- assert len(processed_batches) == len(waves), 'Incomplete processed batches, please reduce batch size!'
- return self.segment(processed_batches, True, chunk)
\ No newline at end of file
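For readers skimming the deleted mdx.py above, here is a minimal sketch of how the MDX_Model and MDX classes could be driven. The ONNX path, the dim_f/dim_t/n_fft values, and the use of soundfile for audio I/O are assumptions for illustration only; real values come from the specific MDX-Net checkpoint, and this snippet is not taken from the repository.

```python
# Hypothetical driver for the MDX_Model / MDX classes above (not from this repository).
# dim_f / dim_t / n_fft and the ONNX filename are placeholders; real values depend on
# the specific MDX-Net checkpoint being loaded.

import soundfile as sf
import torch

from mdx import MDX, MDX_Model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
params = MDX_Model(device, dim_f=3072, dim_t=256, n_fft=6144)      # assumed checkpoint dims

mdx = MDX("assets/mdx/UVR-MDX-NET-example.onnx", params,
          processor=0 if device.type == "cuda" else -1)

wave, sr = sf.read("mix.wav", dtype="float32")   # stereo file: shape (n_samples, 2)
wave = wave.T                                    # process_wave expects (2, n_samples)

separated = mdx.process_wave(wave, mt_threads=2) # numpy array, shape (2, n_samples)
sf.write("separated_stem.wav", separated.T, sr)
```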
diff --git a/spaces/801artistry/RVC801/tools/infer_batch_rvc.py b/spaces/801artistry/RVC801/tools/infer_batch_rvc.py
deleted file mode 100644
index 763d17f14877a2ce35f750202e91356c1f24270f..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/tools/infer_batch_rvc.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import argparse
-import os
-import sys
-
-print("Command-line arguments:", sys.argv)
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-import sys
-
-import tqdm as tq
-from dotenv import load_dotenv
-from scipy.io import wavfile
-
-from configs.config import Config
-from infer.modules.vc.modules import VC
-
-
-def arg_parse() -> tuple:
- parser = argparse.ArgumentParser()
- parser.add_argument("--f0up_key", type=int, default=0)
- parser.add_argument("--input_path", type=str, help="input path")
- parser.add_argument("--index_path", type=str, help="index path")
- parser.add_argument("--f0method", type=str, default="harvest", help="harvest or pm")
- parser.add_argument("--opt_path", type=str, help="opt path")
- parser.add_argument("--model_name", type=str, help="store in assets/weight_root")
- parser.add_argument("--index_rate", type=float, default=0.66, help="index rate")
- parser.add_argument("--device", type=str, help="device")
- parser.add_argument("--is_half", type=bool, help="use half -> True")
- parser.add_argument("--filter_radius", type=int, default=3, help="filter radius")
- parser.add_argument("--resample_sr", type=int, default=0, help="resample sr")
- parser.add_argument("--rms_mix_rate", type=float, default=1, help="rms mix rate")
- parser.add_argument("--protect", type=float, default=0.33, help="protect")
-
- args = parser.parse_args()
- sys.argv = sys.argv[:1]
-
- return args
-
-
-def main():
- load_dotenv()
- args = arg_parse()
- config = Config()
- config.device = args.device if args.device else config.device
- config.is_half = args.is_half if args.is_half else config.is_half
- vc = VC(config)
- vc.get_vc(args.model_name)
- audios = os.listdir(args.input_path)
- for file in tq.tqdm(audios):
- if file.endswith(".wav"):
- file_path = os.path.join(args.input_path, file)
- _, wav_opt = vc.vc_single(
- 0,
- file_path,
- args.f0up_key,
- None,
- args.f0method,
- args.index_path,
- None,
- args.index_rate,
- args.filter_radius,
- args.resample_sr,
- args.rms_mix_rate,
- args.protect,
- )
- out_path = os.path.join(args.opt_path, file)
- wavfile.write(out_path, wav_opt[0], wav_opt[1])
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/AIConsultant/MusicGen/scripts/mos.py b/spaces/AIConsultant/MusicGen/scripts/mos.py
deleted file mode 100644
index a711c9ece23e72ed3a07032c7834ef7c56ab4f11..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/scripts/mos.py
+++ /dev/null
@@ -1,286 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-"""
-Run this script from the root of the repo. Make sure Flask is installed, then:
-
- FLASK_DEBUG=1 FLASK_APP=scripts.mos flask run -p 4567
- # or if you have gunicorn
- gunicorn -w 4 -b 127.0.0.1:8895 -t 120 'scripts.mos:app' --access-logfile -
-
-"""
-from collections import defaultdict
-from functools import wraps
-from hashlib import sha1
-import json
-import math
-from pathlib import Path
-import random
-import typing as tp
-
-from flask import Flask, redirect, render_template, request, session, url_for
-
-from audiocraft import train
-from audiocraft.utils.samples.manager import get_samples_for_xps
-
-
-SAMPLES_PER_PAGE = 8
-MAX_RATING = 5
-storage = Path(train.main.dora.dir / 'mos_storage')
-storage.mkdir(exist_ok=True)
-surveys = storage / 'surveys'
-surveys.mkdir(exist_ok=True)
-magma_root = Path(train.__file__).parent.parent
-app = Flask('mos', static_folder=str(magma_root / 'scripts/static'),
- template_folder=str(magma_root / 'scripts/templates'))
-app.secret_key = b'audiocraft makes the best songs'
-
-
-def normalize_path(path: Path):
-    """Just to make paths a bit nicer, make them relative to the Dora root dir.
-    """
- path = path.resolve()
- dora_dir = train.main.dora.dir.resolve() / 'xps'
- return path.relative_to(dora_dir)
-
-
-def get_full_path(normalized_path: Path):
- """Revert `normalize_path`.
- """
- return train.main.dora.dir.resolve() / 'xps' / normalized_path
-
-
-def get_signature(xps: tp.List[str]):
- """Return a signature for a list of XP signatures.
- """
- return sha1(json.dumps(xps).encode()).hexdigest()[:10]
-
-
-def ensure_logged(func):
- """Ensure user is logged in.
- """
- @wraps(func)
- def _wrapped(*args, **kwargs):
- user = session.get('user')
- if user is None:
- return redirect(url_for('login', redirect_to=request.url))
- return func(*args, **kwargs)
- return _wrapped
-
-
-@app.route('/login', methods=['GET', 'POST'])
-def login():
- """Login user if not already, then redirect.
- """
- user = session.get('user')
- if user is None:
- error = None
- if request.method == 'POST':
- user = request.form['user']
- if not user:
- error = 'User cannot be empty'
- if user is None or error:
- return render_template('login.html', error=error)
- assert user
- session['user'] = user
- redirect_to = request.args.get('redirect_to')
- if redirect_to is None:
- redirect_to = url_for('index')
- return redirect(redirect_to)
-
-
-@app.route('/', methods=['GET', 'POST'])
-@ensure_logged
-def index():
- """Offer to create a new study.
- """
- errors = []
- if request.method == 'POST':
- xps_or_grids = [part.strip() for part in request.form['xps'].split()]
- xps = set()
- for xp_or_grid in xps_or_grids:
- xp_path = train.main.dora.dir / 'xps' / xp_or_grid
- if xp_path.exists():
- xps.add(xp_or_grid)
- continue
- grid_path = train.main.dora.dir / 'grids' / xp_or_grid
- if grid_path.exists():
- for child in grid_path.iterdir():
- if child.is_symlink():
- xps.add(child.name)
- continue
- errors.append(f'{xp_or_grid} is neither an XP nor a grid!')
- assert xps or errors
- blind = 'true' if request.form.get('blind') == 'on' else 'false'
- xps = list(xps)
- if not errors:
- signature = get_signature(xps)
- manifest = {
- 'xps': xps,
- }
- survey_path = surveys / signature
- survey_path.mkdir(exist_ok=True)
- with open(survey_path / 'manifest.json', 'w') as f:
- json.dump(manifest, f, indent=2)
- return redirect(url_for('survey', blind=blind, signature=signature))
- return render_template('index.html', errors=errors)
-
-
-@app.route('/survey/<signature>', methods=['GET', 'POST'])
-@ensure_logged
-def survey(signature):
- success = request.args.get('success', False)
- seed = int(request.args.get('seed', 4321))
- blind = request.args.get('blind', 'false') in ['true', 'on', 'True']
- exclude_prompted = request.args.get('exclude_prompted', 'false') in ['true', 'on', 'True']
- exclude_unprompted = request.args.get('exclude_unprompted', 'false') in ['true', 'on', 'True']
- max_epoch = int(request.args.get('max_epoch', '-1'))
- survey_path = surveys / signature
- assert survey_path.exists(), survey_path
-
- user = session['user']
- result_folder = survey_path / 'results'
- result_folder.mkdir(exist_ok=True)
- result_file = result_folder / f'{user}_{seed}.json'
-
- with open(survey_path / 'manifest.json') as f:
- manifest = json.load(f)
-
- xps = [train.main.get_xp_from_sig(xp) for xp in manifest['xps']]
- names, ref_name = train.main.get_names(xps)
-
- samples_kwargs = {
- 'exclude_prompted': exclude_prompted,
- 'exclude_unprompted': exclude_unprompted,
- 'max_epoch': max_epoch,
- }
- matched_samples = get_samples_for_xps(xps, epoch=-1, **samples_kwargs) # fetch latest epoch
- models_by_id = {
- id: [{
- 'xp': xps[idx],
- 'xp_name': names[idx],
- 'model_id': f'{xps[idx].sig}-{sample.id}',
- 'sample': sample,
- 'is_prompted': sample.prompt is not None,
- 'errors': [],
- } for idx, sample in enumerate(samples)]
- for id, samples in matched_samples.items()
- }
- experiments = [
- {'xp': xp, 'name': names[idx], 'epoch': list(matched_samples.values())[0][idx].epoch}
- for idx, xp in enumerate(xps)
- ]
-
- keys = list(matched_samples.keys())
- keys.sort()
- rng = random.Random(seed)
- rng.shuffle(keys)
- model_ids = keys[:SAMPLES_PER_PAGE]
-
- if blind:
- for key in model_ids:
- rng.shuffle(models_by_id[key])
-
- ok = True
- if request.method == 'POST':
- all_samples_results = []
- for id in model_ids:
- models = models_by_id[id]
- result = {
- 'id': id,
- 'is_prompted': models[0]['is_prompted'],
- 'models': {}
- }
- all_samples_results.append(result)
- for model in models:
- rating = request.form[model['model_id']]
- if rating:
- rating = int(rating)
- assert rating <= MAX_RATING and rating >= 1
- result['models'][model['xp'].sig] = rating
- model['rating'] = rating
- else:
- ok = False
- model['errors'].append('Please rate this model.')
- if ok:
- result = {
- 'results': all_samples_results,
- 'seed': seed,
- 'user': user,
- 'blind': blind,
- 'exclude_prompted': exclude_prompted,
- 'exclude_unprompted': exclude_unprompted,
- }
- print(result)
- with open(result_file, 'w') as f:
- json.dump(result, f)
- seed = seed + 1
- return redirect(url_for(
- 'survey', signature=signature, blind=blind, seed=seed,
- exclude_prompted=exclude_prompted, exclude_unprompted=exclude_unprompted,
- max_epoch=max_epoch, success=True))
-
- ratings = list(range(1, MAX_RATING + 1))
- return render_template(
- 'survey.html', ratings=ratings, blind=blind, seed=seed, signature=signature, success=success,
- exclude_prompted=exclude_prompted, exclude_unprompted=exclude_unprompted, max_epoch=max_epoch,
- experiments=experiments, models_by_id=models_by_id, model_ids=model_ids, errors=[],
- ref_name=ref_name, already_filled=result_file.exists())
-
-
-@app.route('/audio/<path:path>')
-def audio(path: str):
- full_path = Path('/') / path
- assert full_path.suffix in [".mp3", ".wav"]
- return full_path.read_bytes(), {'Content-Type': 'audio/mpeg'}
-
-
-def mean(x):
- return sum(x) / len(x)
-
-
-def std(x):
- m = mean(x)
- return math.sqrt(sum((i - m)**2 for i in x) / len(x))
-
-
-@app.route('/results/<signature>')
-@ensure_logged
-def results(signature):
-
- survey_path = surveys / signature
- assert survey_path.exists(), survey_path
- result_folder = survey_path / 'results'
- result_folder.mkdir(exist_ok=True)
-
- # ratings per model, then per user.
- ratings_per_model = defaultdict(list)
- users = []
- for result_file in result_folder.iterdir():
- if result_file.suffix != '.json':
- continue
- with open(result_file) as f:
- results = json.load(f)
- users.append(results['user'])
- for result in results['results']:
- for sig, rating in result['models'].items():
- ratings_per_model[sig].append(rating)
-
- fmt = '{:.2f}'
- models = []
- for model in sorted(ratings_per_model.keys()):
- ratings = ratings_per_model[model]
-
- models.append({
- 'sig': model,
- 'samples': len(ratings),
- 'mean_rating': fmt.format(mean(ratings)),
-            # 1.96 is the z-score of a 95% confidence interval
-            # under a normal (gaussian) approximation of the mean rating.
- 'std_rating': fmt.format(1.96 * std(ratings) / len(ratings)**0.5),
- })
- return render_template('results.html', signature=signature, models=models, users=users)
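
The `std_rating` above is the half-width of an approximate 95% confidence interval for the mean rating, 1.96 · σ / √n, using the population-style `std` helper defined earlier. A short worked example with made-up ratings (the numbers are illustrative only):

```python
import math

def mean(x):
    return sum(x) / len(x)

def std(x):
    m = mean(x)
    return math.sqrt(sum((i - m) ** 2 for i in x) / len(x))

ratings = [4, 5, 3, 4, 4, 5, 2, 4]  # illustrative MOS ratings for one model
half_width = 1.96 * std(ratings) / len(ratings) ** 0.5
print(f"{mean(ratings):.2f} +/- {half_width:.2f}")  # 3.88 +/- 0.64
```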
diff --git a/spaces/AIFILMS/StyleGANEX/models/bisenet/model.py b/spaces/AIFILMS/StyleGANEX/models/bisenet/model.py
deleted file mode 100644
index 1d2a16ca7533c7b92c600c4dddb89f5f68191d4f..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/models/bisenet/model.py
+++ /dev/null
@@ -1,283 +0,0 @@
-#!/usr/bin/python
-# -*- encoding: utf-8 -*-
-
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision
-
-from models.bisenet.resnet import Resnet18
-# from modules.bn import InPlaceABNSync as BatchNorm2d
-
-
-class ConvBNReLU(nn.Module):
- def __init__(self, in_chan, out_chan, ks=3, stride=1, padding=1, *args, **kwargs):
- super(ConvBNReLU, self).__init__()
- self.conv = nn.Conv2d(in_chan,
- out_chan,
- kernel_size = ks,
- stride = stride,
- padding = padding,
- bias = False)
- self.bn = nn.BatchNorm2d(out_chan)
- self.init_weight()
-
- def forward(self, x):
- x = self.conv(x)
- x = F.relu(self.bn(x))
- return x
-
- def init_weight(self):
- for ly in self.children():
- if isinstance(ly, nn.Conv2d):
- nn.init.kaiming_normal_(ly.weight, a=1)
- if not ly.bias is None: nn.init.constant_(ly.bias, 0)
-
-class BiSeNetOutput(nn.Module):
- def __init__(self, in_chan, mid_chan, n_classes, *args, **kwargs):
- super(BiSeNetOutput, self).__init__()
- self.conv = ConvBNReLU(in_chan, mid_chan, ks=3, stride=1, padding=1)
- self.conv_out = nn.Conv2d(mid_chan, n_classes, kernel_size=1, bias=False)
- self.init_weight()
-
- def forward(self, x):
- x = self.conv(x)
- x = self.conv_out(x)
- return x
-
- def init_weight(self):
- for ly in self.children():
- if isinstance(ly, nn.Conv2d):
- nn.init.kaiming_normal_(ly.weight, a=1)
- if not ly.bias is None: nn.init.constant_(ly.bias, 0)
-
- def get_params(self):
- wd_params, nowd_params = [], []
- for name, module in self.named_modules():
- if isinstance(module, nn.Linear) or isinstance(module, nn.Conv2d):
- wd_params.append(module.weight)
- if not module.bias is None:
- nowd_params.append(module.bias)
- elif isinstance(module, nn.BatchNorm2d):
- nowd_params += list(module.parameters())
- return wd_params, nowd_params
-
-
-class AttentionRefinementModule(nn.Module):
- def __init__(self, in_chan, out_chan, *args, **kwargs):
- super(AttentionRefinementModule, self).__init__()
- self.conv = ConvBNReLU(in_chan, out_chan, ks=3, stride=1, padding=1)
- self.conv_atten = nn.Conv2d(out_chan, out_chan, kernel_size= 1, bias=False)
- self.bn_atten = nn.BatchNorm2d(out_chan)
- self.sigmoid_atten = nn.Sigmoid()
- self.init_weight()
-
- def forward(self, x):
- feat = self.conv(x)
- atten = F.avg_pool2d(feat, feat.size()[2:])
- atten = self.conv_atten(atten)
- atten = self.bn_atten(atten)
- atten = self.sigmoid_atten(atten)
- out = torch.mul(feat, atten)
- return out
-
- def init_weight(self):
- for ly in self.children():
- if isinstance(ly, nn.Conv2d):
- nn.init.kaiming_normal_(ly.weight, a=1)
- if not ly.bias is None: nn.init.constant_(ly.bias, 0)
-
-
-class ContextPath(nn.Module):
- def __init__(self, *args, **kwargs):
- super(ContextPath, self).__init__()
- self.resnet = Resnet18()
- self.arm16 = AttentionRefinementModule(256, 128)
- self.arm32 = AttentionRefinementModule(512, 128)
- self.conv_head32 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1)
- self.conv_head16 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1)
- self.conv_avg = ConvBNReLU(512, 128, ks=1, stride=1, padding=0)
-
- self.init_weight()
-
- def forward(self, x):
- H0, W0 = x.size()[2:]
- feat8, feat16, feat32 = self.resnet(x)
- H8, W8 = feat8.size()[2:]
- H16, W16 = feat16.size()[2:]
- H32, W32 = feat32.size()[2:]
-
- avg = F.avg_pool2d(feat32, feat32.size()[2:])
- avg = self.conv_avg(avg)
- avg_up = F.interpolate(avg, (H32, W32), mode='nearest')
-
- feat32_arm = self.arm32(feat32)
- feat32_sum = feat32_arm + avg_up
- feat32_up = F.interpolate(feat32_sum, (H16, W16), mode='nearest')
- feat32_up = self.conv_head32(feat32_up)
-
- feat16_arm = self.arm16(feat16)
- feat16_sum = feat16_arm + feat32_up
- feat16_up = F.interpolate(feat16_sum, (H8, W8), mode='nearest')
- feat16_up = self.conv_head16(feat16_up)
-
- return feat8, feat16_up, feat32_up # x8, x8, x16
-
- def init_weight(self):
- for ly in self.children():
- if isinstance(ly, nn.Conv2d):
- nn.init.kaiming_normal_(ly.weight, a=1)
- if not ly.bias is None: nn.init.constant_(ly.bias, 0)
-
- def get_params(self):
- wd_params, nowd_params = [], []
- for name, module in self.named_modules():
- if isinstance(module, (nn.Linear, nn.Conv2d)):
- wd_params.append(module.weight)
- if not module.bias is None:
- nowd_params.append(module.bias)
- elif isinstance(module, nn.BatchNorm2d):
- nowd_params += list(module.parameters())
- return wd_params, nowd_params
-
-
-### This is not used, since I replace this with the resnet feature with the same size
-class SpatialPath(nn.Module):
- def __init__(self, *args, **kwargs):
- super(SpatialPath, self).__init__()
- self.conv1 = ConvBNReLU(3, 64, ks=7, stride=2, padding=3)
- self.conv2 = ConvBNReLU(64, 64, ks=3, stride=2, padding=1)
- self.conv3 = ConvBNReLU(64, 64, ks=3, stride=2, padding=1)
- self.conv_out = ConvBNReLU(64, 128, ks=1, stride=1, padding=0)
- self.init_weight()
-
- def forward(self, x):
- feat = self.conv1(x)
- feat = self.conv2(feat)
- feat = self.conv3(feat)
- feat = self.conv_out(feat)
- return feat
-
- def init_weight(self):
- for ly in self.children():
- if isinstance(ly, nn.Conv2d):
- nn.init.kaiming_normal_(ly.weight, a=1)
- if not ly.bias is None: nn.init.constant_(ly.bias, 0)
-
- def get_params(self):
- wd_params, nowd_params = [], []
- for name, module in self.named_modules():
- if isinstance(module, nn.Linear) or isinstance(module, nn.Conv2d):
- wd_params.append(module.weight)
- if not module.bias is None:
- nowd_params.append(module.bias)
- elif isinstance(module, nn.BatchNorm2d):
- nowd_params += list(module.parameters())
- return wd_params, nowd_params
-
-
-class FeatureFusionModule(nn.Module):
- def __init__(self, in_chan, out_chan, *args, **kwargs):
- super(FeatureFusionModule, self).__init__()
- self.convblk = ConvBNReLU(in_chan, out_chan, ks=1, stride=1, padding=0)
- self.conv1 = nn.Conv2d(out_chan,
- out_chan//4,
- kernel_size = 1,
- stride = 1,
- padding = 0,
- bias = False)
- self.conv2 = nn.Conv2d(out_chan//4,
- out_chan,
- kernel_size = 1,
- stride = 1,
- padding = 0,
- bias = False)
- self.relu = nn.ReLU(inplace=True)
- self.sigmoid = nn.Sigmoid()
- self.init_weight()
-
- def forward(self, fsp, fcp):
- fcat = torch.cat([fsp, fcp], dim=1)
- feat = self.convblk(fcat)
- atten = F.avg_pool2d(feat, feat.size()[2:])
- atten = self.conv1(atten)
- atten = self.relu(atten)
- atten = self.conv2(atten)
- atten = self.sigmoid(atten)
- feat_atten = torch.mul(feat, atten)
- feat_out = feat_atten + feat
- return feat_out
-
- def init_weight(self):
- for ly in self.children():
- if isinstance(ly, nn.Conv2d):
- nn.init.kaiming_normal_(ly.weight, a=1)
- if not ly.bias is None: nn.init.constant_(ly.bias, 0)
-
- def get_params(self):
- wd_params, nowd_params = [], []
- for name, module in self.named_modules():
- if isinstance(module, nn.Linear) or isinstance(module, nn.Conv2d):
- wd_params.append(module.weight)
- if not module.bias is None:
- nowd_params.append(module.bias)
- elif isinstance(module, nn.BatchNorm2d):
- nowd_params += list(module.parameters())
- return wd_params, nowd_params
-
-
-class BiSeNet(nn.Module):
- def __init__(self, n_classes, *args, **kwargs):
- super(BiSeNet, self).__init__()
- self.cp = ContextPath()
- ## here self.sp is deleted
- self.ffm = FeatureFusionModule(256, 256)
- self.conv_out = BiSeNetOutput(256, 256, n_classes)
- self.conv_out16 = BiSeNetOutput(128, 64, n_classes)
- self.conv_out32 = BiSeNetOutput(128, 64, n_classes)
- self.init_weight()
-
- def forward(self, x):
- H, W = x.size()[2:]
- feat_res8, feat_cp8, feat_cp16 = self.cp(x) # here return res3b1 feature
- feat_sp = feat_res8 # use res3b1 feature to replace spatial path feature
- feat_fuse = self.ffm(feat_sp, feat_cp8)
-
- feat_out = self.conv_out(feat_fuse)
- feat_out16 = self.conv_out16(feat_cp8)
- feat_out32 = self.conv_out32(feat_cp16)
-
- feat_out = F.interpolate(feat_out, (H, W), mode='bilinear', align_corners=True)
- feat_out16 = F.interpolate(feat_out16, (H, W), mode='bilinear', align_corners=True)
- feat_out32 = F.interpolate(feat_out32, (H, W), mode='bilinear', align_corners=True)
- return feat_out, feat_out16, feat_out32
-
- def init_weight(self):
- for ly in self.children():
- if isinstance(ly, nn.Conv2d):
- nn.init.kaiming_normal_(ly.weight, a=1)
- if not ly.bias is None: nn.init.constant_(ly.bias, 0)
-
- def get_params(self):
- wd_params, nowd_params, lr_mul_wd_params, lr_mul_nowd_params = [], [], [], []
- for name, child in self.named_children():
- child_wd_params, child_nowd_params = child.get_params()
- if isinstance(child, FeatureFusionModule) or isinstance(child, BiSeNetOutput):
- lr_mul_wd_params += child_wd_params
- lr_mul_nowd_params += child_nowd_params
- else:
- wd_params += child_wd_params
- nowd_params += child_nowd_params
- return wd_params, nowd_params, lr_mul_wd_params, lr_mul_nowd_params
-
-
-if __name__ == "__main__":
- net = BiSeNet(19)
- net.cuda()
- net.eval()
- in_ten = torch.randn(16, 3, 640, 480).cuda()
- out, out16, out32 = net(in_ten)
- print(out.shape)
-
- net.get_params()
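
`get_params` above splits parameters into weight-decay and no-weight-decay groups (conv/linear weights vs. biases and BatchNorm parameters) so a trainer can build optimizer param groups. The following is a minimal sketch of how such a split is typically consumed, on a toy model rather than BiSeNet itself; the learning rate and decay values are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(), nn.Conv2d(8, 4, 1))

wd_params, nowd_params = [], []
for module in model.modules():
    if isinstance(module, (nn.Linear, nn.Conv2d)):
        wd_params.append(module.weight)
        if module.bias is not None:
            nowd_params.append(module.bias)  # biases: no weight decay
    elif isinstance(module, nn.BatchNorm2d):
        nowd_params += list(module.parameters())  # BN affine params: no weight decay

optimizer = torch.optim.SGD(
    [{"params": wd_params, "weight_decay": 5e-4},
     {"params": nowd_params, "weight_decay": 0.0}],
    lr=1e-2, momentum=0.9)
print(len(wd_params), "decayed tensors,", len(nowd_params), "non-decayed tensors")
```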
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/run.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/run.py
deleted file mode 100644
index 7778a333d2e9b53e28bab6f93f0abf1c3540a079..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/run.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import os
-
-os.environ["OMP_NUM_THREADS"] = "1"
-
-from text_to_speech.utils.commons.hparams import hparams, set_hparams
-import importlib
-
-
-def run_task():
- assert hparams['task_cls'] != ''
- pkg = ".".join(hparams["task_cls"].split(".")[:-1])
- cls_name = hparams["task_cls"].split(".")[-1]
- task_cls = getattr(importlib.import_module(pkg), cls_name)
- task_cls.start()
-
-
-if __name__ == '__main__':
- set_hparams()
- run_task()
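
`run_task` resolves the task class dynamically from the dotted path in `hparams['task_cls']` via `importlib`. A self-contained sketch of that resolution step, using a standard-library class as the target; a real config would point at a task class such as `tasks.tts.fs2.FastSpeech2Task`, which is only a hypothetical example here.

```python
import importlib

def resolve_class(dotted_path: str):
    # "package.module.ClassName" -> import package.module, then fetch ClassName.
    module_path, cls_name = dotted_path.rsplit(".", 1)
    return getattr(importlib.import_module(module_path), cls_name)

OrderedDict = resolve_class("collections.OrderedDict")
print(OrderedDict([("a", 1), ("b", 2)]))  # OrderedDict([('a', 1), ('b', 2)])
```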
diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/conv.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/conv.py
deleted file mode 100644
index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000
--- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/conv.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-import typing as tp
-import warnings
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.nn.utils import spectral_norm, weight_norm
-
-
-CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm',
- 'time_group_norm'])
-
-
-def apply_parametrization_norm(module: nn.Module, norm: str = 'none'):
- assert norm in CONV_NORMALIZATIONS
- if norm == 'weight_norm':
- return weight_norm(module)
- elif norm == 'spectral_norm':
- return spectral_norm(module)
- else:
-        # We already checked that norm is in CONV_NORMALIZATIONS, so any other
-        # choice doesn't need reparametrization.
- return module
-
-
-def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs):
- """Return the proper normalization module. If causal is True, this will ensure the returned
- module is causal, or return an error if the normalization doesn't support causal evaluation.
- """
- assert norm in CONV_NORMALIZATIONS
- if norm == 'time_group_norm':
- if causal:
- raise ValueError("GroupNorm doesn't support causal evaluation.")
- assert isinstance(module, nn.modules.conv._ConvNd)
- return nn.GroupNorm(1, module.out_channels, **norm_kwargs)
- else:
- return nn.Identity()
-
-
-def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int,
- padding_total: int = 0) -> int:
- """See `pad_for_conv1d`.
- """
- length = x.shape[-1]
- n_frames = (length - kernel_size + padding_total) / stride + 1
- ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
- return ideal_length - length
-
-
-def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0):
- """Pad for a convolution to make sure that the last window is full.
- Extra padding is added at the end. This is required to ensure that we can rebuild
- an output of the same length, as otherwise, even with padding, some time steps
- might get removed.
- For instance, with total padding = 4, kernel size = 4, stride = 2:
- 0 0 1 2 3 4 5 0 0 # (0s are padding)
- 1 2 3 # (output frames of a convolution, last 0 is never used)
- 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding)
- 1 2 3 4 # once you removed padding, we are missing one time step !
- """
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- return F.pad(x, (0, extra_padding))
-
-
-def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.):
- """Tiny wrapper around F.pad, just to allow for reflect padding on small input.
- If this is the case, we insert extra 0 padding to the right before the reflection happen.
- """
- length = x.shape[-1]
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- if mode == 'reflect':
- max_pad = max(padding_left, padding_right)
- extra_pad = 0
- if length <= max_pad:
- extra_pad = max_pad - length + 1
- x = F.pad(x, (0, extra_pad))
- padded = F.pad(x, paddings, mode, value)
- end = padded.shape[-1] - extra_pad
- return padded[..., :end]
- else:
- return F.pad(x, paddings, mode, value)
-
-
-def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]):
- """Remove padding from x, handling properly zero padding. Only for 1d!
- """
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- assert (padding_left + padding_right) <= x.shape[-1]
- end = x.shape[-1] - padding_right
- return x[..., padding_left: end]
-
-
-class NormConv1d(nn.Module):
- """Wrapper around Conv1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConv2d(nn.Module):
- """Wrapper around Conv2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose1d(nn.Module):
- """Wrapper around ConvTranspose1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose2d(nn.Module):
- """Wrapper around ConvTranspose2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs)
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class StreamableConv1d(nn.Module):
- """Conv1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, dilation: int = 1,
- groups: int = 1, bias: bool = True, causal: bool = False,
- norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {},
- pad_mode: str = 'reflect'):
- super().__init__()
- # warn user on unusual setup between dilation and stride
- if stride > 1 and dilation > 1:
- warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1'
- f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).')
- self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride,
- dilation=dilation, groups=groups, bias=bias, causal=causal,
- norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.pad_mode = pad_mode
-
- def forward(self, x):
- B, C, T = x.shape
- kernel_size = self.conv.conv.kernel_size[0]
- stride = self.conv.conv.stride[0]
- dilation = self.conv.conv.dilation[0]
- kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations
- padding_total = kernel_size - stride
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- if self.causal:
- # Left padding for causal
- x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode)
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode)
- return self.conv(x)
-
-
-class StreamableConvTranspose1d(nn.Module):
- """ConvTranspose1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, causal: bool = False,
- norm: str = 'none', trim_right_ratio: float = 1.,
- norm_kwargs: tp.Dict[str, tp.Any] = {}):
- super().__init__()
- self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride,
- causal=causal, norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.trim_right_ratio = trim_right_ratio
- assert self.causal or self.trim_right_ratio == 1., \
- "`trim_right_ratio` != 1.0 only makes sense for causal convolutions"
- assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1.
-
- def forward(self, x):
- kernel_size = self.convtr.convtr.kernel_size[0]
- stride = self.convtr.convtr.stride[0]
- padding_total = kernel_size - stride
-
- y = self.convtr(x)
-
- # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be
- # removed at the very end, when keeping only the right length for the output,
- # as removing it here would require also passing the length at the matching layer
- # in the encoder.
- if self.causal:
- # Trim the padding on the right according to the specified ratio
- # if trim_right_ratio = 1.0, trim everything from right
- padding_right = math.ceil(padding_total * self.trim_right_ratio)
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- return y
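
`get_extra_padding_for_conv1d` computes how much right padding makes the last convolution window complete, so the signal length can be rebuilt exactly after the matching transposed convolution. A quick numeric check of that formula on plain integers (the values are chosen purely for illustration):

```python
import math

def extra_padding(length: int, kernel_size: int, stride: int, padding_total: int) -> int:
    # Same arithmetic as get_extra_padding_for_conv1d above, without tensors.
    n_frames = (length - kernel_size + padding_total) / stride + 1
    ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
    return ideal_length - length

# length 6, kernel 4, stride 2, total padding 2 -> exactly 3 frames, nothing extra needed.
print(extra_padding(6, 4, 2, 2))  # 0
# length 7 gives a fractional frame count (3.5), so one extra sample is padded on the right.
print(extra_padding(7, 4, 2, 2))  # 1
```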
diff --git a/spaces/Abubakari/Sepsis-prediction-streamlit-app/app.py b/spaces/Abubakari/Sepsis-prediction-streamlit-app/app.py
deleted file mode 100644
index 67735b5ceba65ecf18e1e14f20773efedcb483bf..0000000000000000000000000000000000000000
--- a/spaces/Abubakari/Sepsis-prediction-streamlit-app/app.py
+++ /dev/null
@@ -1,154 +0,0 @@
-import streamlit as st
-import pandas as pd
-import joblib
-import matplotlib.pyplot as plt
-import time
-import base64
-
-# Load the pre-trained numerical imputer, scaler, and model using joblib
-num_imputer = joblib.load('numerical_imputer.joblib')
-scaler = joblib.load('scaler.joblib')
-model = joblib.load('Final_model.joblib')
-
-# Define a function to preprocess the input data
-def preprocess_input_data(input_data):
- input_data_df = pd.DataFrame(input_data, columns=['PRG', 'PL', 'PR', 'SK', 'TS', 'M11', 'BD2', 'Age', 'Insurance'])
- num_columns = input_data_df.select_dtypes(include='number').columns
-
- input_data_imputed_num = num_imputer.transform(input_data_df[num_columns])
- input_scaled_df = pd.DataFrame(scaler.transform(input_data_imputed_num), columns=num_columns)
-
- return input_scaled_df
-
-
-# Define a function to make the sepsis prediction
-def predict_sepsis(input_data):
- input_scaled_df = preprocess_input_data(input_data)
- prediction = model.predict(input_scaled_df)[0]
- probabilities = model.predict_proba(input_scaled_df)[0]
- sepsis_status = "Positive" if prediction == 1 else "Negative"
-
-    status_icon = "✔" if prediction == 1 else "✘"  # checkmark icon for a positive sepsis prediction, 'X' icon for a negative one
- sepsis_explanation = "Sepsis is a life-threatening condition caused by an infection. A positive prediction suggests that the patient might be exhibiting sepsis symptoms and requires immediate medical attention." if prediction == 1 else "Sepsis is a life-threatening condition caused by an infection. A negative prediction suggests that the patient is not currently exhibiting sepsis symptoms."
-
- output_df = pd.DataFrame(input_data, columns=['PRG', 'PL', 'PR', 'SK', 'TS', 'M11', 'BD2', 'Age', 'Insurance'])
- output_df['Prediction'] = sepsis_status
- output_df['Negative Probability'] = probabilities[0]
- output_df['Positive Probability'] = probabilities[1]
-
- return output_df, probabilities, status_icon, sepsis_explanation
-
-# Create a Streamlit app
-def main():
- st.title('Sepsis Prediction App')
-
- st.image("Strealit_.jpg")
-
- # How to use
- st.sidebar.title('How to Use')
- st.sidebar.markdown('1. Adjust the input parameters on the left sidebar.')
- st.sidebar.markdown('2. Click the "Predict" button to initiate the prediction.')
- st.sidebar.markdown('3. The app will simulate a prediction process with a progress bar.')
- st.sidebar.markdown('4. Once the prediction is complete, the results will be displayed below.')
-
-
- st.sidebar.title('Input Parameters')
-
- # Input parameter explanations
- st.sidebar.markdown('**PRG:** Plasma Glucose')
- PRG = st.sidebar.number_input('PRG', value=0.0)
-
- st.sidebar.markdown('**PL:** Blood Work Result 1')
- PL = st.sidebar.number_input('PL', value=0.0)
-
- st.sidebar.markdown('**PR:** Blood Pressure Measured')
- PR = st.sidebar.number_input('PR', value=0.0)
-
- st.sidebar.markdown('**SK:** Blood Work Result 2')
- SK = st.sidebar.number_input('SK', value=0.0)
-
- st.sidebar.markdown('**TS:** Blood Work Result 3')
- TS = st.sidebar.number_input('TS', value=0.0)
-
- st.sidebar.markdown('**M11:** BMI')
- M11 = st.sidebar.number_input('M11', value=0.0)
-
- st.sidebar.markdown('**BD2:** Blood Work Result 4')
- BD2 = st.sidebar.number_input('BD2', value=0.0)
-
- st.sidebar.markdown('**Age:** What is the Age of the Patient: ')
- Age = st.sidebar.number_input('Age', value=0.0)
-
- st.sidebar.markdown('**Insurance:** Does the patient have Insurance?')
- insurance_options = {0: 'NO', 1: 'YES'}
- Insurance = st.sidebar.radio('Insurance', list(insurance_options.keys()), format_func=lambda x: insurance_options[x])
-
-
- input_data = [[PRG, PL, PR, SK, TS, M11, BD2, Age, Insurance]]
-
- if st.sidebar.button('Predict'):
- with st.spinner("Predicting..."):
- # Simulate a long-running process
- progress_bar = st.progress(0)
- step = 20 # A big step will reduce the execution time
- for i in range(0, 100, step):
- time.sleep(0.1)
- progress_bar.progress(i + step)
-
- output_df, probabilities, status_icon, sepsis_explanation = predict_sepsis(input_data)
-
- st.subheader('Prediction Result')
- prediction_text = "Positive" if status_icon == "✔" else "Negative"
- st.markdown(f"Prediction: **{prediction_text}**")
- st.markdown(f"{status_icon} {sepsis_explanation}")
- st.write(output_df)
-
- # Add a download button for output_df
- csv = output_df.to_csv(index=False)
- b64 = base64.b64encode(csv.encode()).decode()
-            href = f'<a href="data:file/csv;base64,{b64}" download="output.csv">Download Output CSV</a>'  # data-URI download link (anchor tag reconstructed; filename illustrative)
- st.markdown(href, unsafe_allow_html=True)
-
-
- # Plot the probabilities
- fig, ax = plt.subplots()
- ax.bar(['Negative', 'Positive'], probabilities)
- ax.set_xlabel('Sepsis Status')
- ax.set_ylabel('Probability')
- ax.set_title('Sepsis Prediction Probabilities')
- st.pyplot(fig)
-
- # Print feature importance
- if hasattr(model, 'coef_'):
- feature_importances = model.coef_[0]
- feature_names = ['PRG', 'PL', 'PR', 'SK', 'TS', 'M11', 'BD2', 'Age', 'Insurance']
-
- importance_df = pd.DataFrame({'Feature': feature_names, 'Importance': feature_importances})
- importance_df = importance_df.sort_values('Importance', ascending=False)
-
- st.subheader('Feature Importance')
- fig, ax = plt.subplots()
- bars = ax.bar(importance_df['Feature'], importance_df['Importance'])
- ax.set_xlabel('Feature')
- ax.set_ylabel('Importance')
- ax.set_title('Feature Importance')
- ax.tick_params(axis='x', rotation=45)
-
- # Add data labels to the bars
- for bar in bars:
- height = bar.get_height()
- ax.annotate(f'{height:.2f}', xy=(bar.get_x() + bar.get_width() / 2, height),
- xytext=(0, 3), # 3 points vertical offset
- textcoords="offset points",
- ha='center', va='bottom')
- st.pyplot(fig)
-
- else:
- st.write('Feature importance is not available for this model.')
-
- #st.subheader('Sepsis Explanation')
- #st.markdown(f"{status_icon} {sepsis_explanation}")
-
-
-if __name__ == '__main__':
- main()
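
The download link in the app embeds the CSV directly in the page as a base64 data URI; the exact HTML of that line appears to have been stripped in this dump, so the reconstruction above and the sketch below use an illustrative filename. A minimal standalone version of the trick, which in Streamlit would be rendered with `st.markdown(href, unsafe_allow_html=True)`:

```python
import base64

import pandas as pd

df = pd.DataFrame({"Prediction": ["Negative"], "Positive Probability": [0.12]})
csv = df.to_csv(index=False)
b64 = base64.b64encode(csv.encode()).decode()
# Data URI + download attribute: the browser saves the link target as a file.
href = f'<a href="data:file/csv;base64,{b64}" download="output.csv">Download Output CSV</a>'
print(href[:70] + "...")
```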
diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/midas/midas/__init__.py b/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/midas/midas/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Adapting/YouTube-Downloader/app.py b/spaces/Adapting/YouTube-Downloader/app.py
deleted file mode 100644
index 631b245f5f7428f46839a636e560989ccad433ba..0000000000000000000000000000000000000000
--- a/spaces/Adapting/YouTube-Downloader/app.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import streamlit as st
-import tube as tb
-
-tb.clear_cache()
-
-
-md = '''
-# YouTube Downloader
-'''
-
-st.markdown(md)
-
-
-url = st.text_input(
- placeholder="https://www.youtube.com/",
-    label='**Enter the URL of the YouTube video:**',
- key='title'
-)
-
-
-
-if url is not None and 'http' in url:  # covers both http and https URLs
- tb.download_yt(url)
-
-
-
-
-
-
-
-
-
diff --git a/spaces/Aditya9790/yolo7-object-tracking/LICENSE.md b/spaces/Aditya9790/yolo7-object-tracking/LICENSE.md
deleted file mode 100644
index f288702d2fa16d3cdf0035b15a9fcbc552cd88e7..0000000000000000000000000000000000000000
--- a/spaces/Aditya9790/yolo7-object-tracking/LICENSE.md
+++ /dev/null
@@ -1,674 +0,0 @@
- GNU GENERAL PUBLIC LICENSE
- Version 3, 29 June 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc.
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
- Preamble
-
- The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
-
- The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom to
-share and change all versions of a program--to make sure it remains free
-software for all its users. We, the Free Software Foundation, use the
-GNU General Public License for most of our software; it applies also to
-any other work released this way by its authors. You can apply it to
-your programs, too.
-
- When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
- To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you have
-certain responsibilities if you distribute copies of the software, or if
-you modify it: responsibilities to respect the freedom of others.
-
- For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
-
- Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
-
- For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
-
- Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the manufacturer
-can do so. This is fundamentally incompatible with the aim of
-protecting users' freedom to change the software. The systematic
-pattern of such abuse occurs in the area of products for individuals to
-use, which is precisely where it is most unacceptable. Therefore, we
-have designed this version of the GPL to prohibit the practice for those
-products. If such problems arise substantially in other domains, we
-stand ready to extend this provision to those domains in future versions
-of the GPL, as needed to protect the freedom of users.
-
- Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish to
-avoid the special danger that patents applied to a free program could
-make it effectively proprietary. To prevent this, the GPL assures that
-patents cannot be used to render the program non-free.
-
- The precise terms and conditions for copying, distribution and
-modification follow.
-
- TERMS AND CONDITIONS
-
- 0. Definitions.
-
- "This License" refers to version 3 of the GNU General Public License.
-
- "Copyright" also means copyright-like laws that apply to other kinds of
-works, such as semiconductor masks.
-
- "The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
- To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of an
-exact copy. The resulting work is called a "modified version" of the
-earlier work or a work "based on" the earlier work.
-
- A "covered work" means either the unmodified Program or a work based
-on the Program.
-
- To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
- To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user through
-a computer network, with no transfer of a copy, is not conveying.
-
- An interactive user interface displays "Appropriate Legal Notices"
-to the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
-
- 1. Source Code.
-
- The "source code" for a work means the preferred form of the work
-for making modifications to it. "Object code" means any non-source
-form of a work.
-
- A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
- The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
- The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for
-the work, and the source code for shared libraries and dynamically
-linked subprograms that the work is specifically designed to require,
-such as by intimate data communication or control flow between those
-subprograms and other parts of the work.
-
- The Corresponding Source need not include anything that users
-can regenerate automatically from other parts of the Corresponding
-Source.
-
- The Corresponding Source for a work in source code form is that
-same work.
-
- 2. Basic Permissions.
-
- All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
- You may make, run and propagate covered works that you do not
-convey, without conditions so long as your license otherwise remains
-in force. You may convey covered works to others for the sole purpose
-of having them make modifications exclusively for you, or provide you
-with facilities for running those works, provided that you comply with
-the terms of this License in conveying all material for which you do
-not control copyright. Those thus making or running the covered works
-for you must do so exclusively on your behalf, under your direction
-and control, on terms that prohibit them from making any copies of
-your copyrighted material outside their relationship with you.
-
- Conveying under any other circumstances is permitted solely under
-the conditions stated below. Sublicensing is not allowed; section 10
-makes it unnecessary.
-
- 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-
- No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
-
- When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such circumvention
-is effected by exercising rights under this License with respect to
-the covered work, and you disclaim any intention to limit operation or
-modification of the work as a means of enforcing, against the work's
-users, your or third parties' legal rights to forbid circumvention of
-technological measures.
-
- 4. Conveying Verbatim Copies.
-
- You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
-
- You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
-
- 5. Conveying Modified Source Versions.
-
- You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these conditions:
-
- a) The work must carry prominent notices stating that you modified
- it, and giving a relevant date.
-
- b) The work must carry prominent notices stating that it is
- released under this License and any conditions added under section
- 7. This requirement modifies the requirement in section 4 to
- "keep intact all notices".
-
- c) You must license the entire work, as a whole, under this
- License to anyone who comes into possession of a copy. This
- License will therefore apply, along with any applicable section 7
- additional terms, to the whole of the work, and all its parts,
- regardless of how they are packaged. This License gives no
- permission to license the work in any other way, but it does not
- invalidate such permission if you have separately received it.
-
- d) If the work has interactive user interfaces, each must display
- Appropriate Legal Notices; however, if the Program has interactive
- interfaces that do not display Appropriate Legal Notices, your
- work need not make them do so.
-
- A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
-
- 6. Conveying Non-Source Forms.
-
- You may convey a covered work in object code form under the terms
-of sections 4 and 5, provided that you also convey the
-machine-readable Corresponding Source under the terms of this License,
-in one of these ways:
-
- a) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by the
- Corresponding Source fixed on a durable physical medium
- customarily used for software interchange.
-
- b) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by a
- written offer, valid for at least three years and valid for as
- long as you offer spare parts or customer support for that product
- model, to give anyone who possesses the object code either (1) a
- copy of the Corresponding Source for all the software in the
- product that is covered by this License, on a durable physical
- medium customarily used for software interchange, for a price no
- more than your reasonable cost of physically performing this
- conveying of source, or (2) access to copy the
- Corresponding Source from a network server at no charge.
-
- c) Convey individual copies of the object code with a copy of the
- written offer to provide the Corresponding Source. This
- alternative is allowed only occasionally and noncommercially, and
- only if you received the object code with such an offer, in accord
- with subsection 6b.
-
- d) Convey the object code by offering access from a designated
- place (gratis or for a charge), and offer equivalent access to the
- Corresponding Source in the same way through the same place at no
- further charge. You need not require recipients to copy the
- Corresponding Source along with the object code. If the place to
- copy the object code is a network server, the Corresponding Source
- may be on a different server (operated by you or a third party)
- that supports equivalent copying facilities, provided you maintain
- clear directions next to the object code saying where to find the
- Corresponding Source. Regardless of what server hosts the
- Corresponding Source, you remain obligated to ensure that it is
- available for as long as needed to satisfy these requirements.
-
- e) Convey the object code using peer-to-peer transmission, provided
- you inform other peers where the object code and Corresponding
- Source of the work are being offered to the general public at no
- charge under subsection 6d.
-
- A separable portion of the object code, whose source code is excluded
-from the Corresponding Source as a System Library, need not be
-included in conveying the object code work.
-
- A "User Product" is either (1) a "consumer product", which means any
-tangible personal property which is normally used for personal, family,
-or household purposes, or (2) anything designed or sold for incorporation
-into a dwelling. In determining whether a product is a consumer product,
-doubtful cases shall be resolved in favor of coverage. For a particular
-product received by a particular user, "normally used" refers to a
-typical or common use of that class of product, regardless of the status
-of the particular user or of the way in which the particular user
-actually uses, or expects or is expected to use, the product. A product
-is a consumer product regardless of whether the product has substantial
-commercial, industrial or non-consumer uses, unless such uses represent
-the only significant mode of use of the product.
-
- "Installation Information" for a User Product means any methods,
-procedures, authorization keys, or other information required to install
-and execute modified versions of a covered work in that User Product from
-a modified version of its Corresponding Source. The information must
-suffice to ensure that the continued functioning of the modified object
-code is in no case prevented or interfered with solely because
-modification has been made.
-
- If you convey an object code work under this section in, or with, or
-specifically for use in, a User Product, and the conveying occurs as
-part of a transaction in which the right of possession and use of the
-User Product is transferred to the recipient in perpetuity or for a
-fixed term (regardless of how the transaction is characterized), the
-Corresponding Source conveyed under this section must be accompanied
-by the Installation Information. But this requirement does not apply
-if neither you nor any third party retains the ability to install
-modified object code on the User Product (for example, the work has
-been installed in ROM).
-
- The requirement to provide Installation Information does not include a
-requirement to continue to provide support service, warranty, or updates
-for a work that has been modified or installed by the recipient, or for
-the User Product in which it has been modified or installed. Access to a
-network may be denied when the modification itself materially and
-adversely affects the operation of the network or violates the rules and
-protocols for communication across the network.
-
- Corresponding Source conveyed, and Installation Information provided,
-in accord with this section must be in a format that is publicly
-documented (and with an implementation available to the public in
-source code form), and must require no special password or key for
-unpacking, reading or copying.
-
- 7. Additional Terms.
-
- "Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
- When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
- Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders of
-that material) supplement the terms of this License with terms:
-
- a) Disclaiming warranty or limiting liability differently from the
- terms of sections 15 and 16 of this License; or
-
- b) Requiring preservation of specified reasonable legal notices or
- author attributions in that material or in the Appropriate Legal
- Notices displayed by works containing it; or
-
- c) Prohibiting misrepresentation of the origin of that material, or
- requiring that modified versions of such material be marked in
- reasonable ways as different from the original version; or
-
- d) Limiting the use for publicity purposes of names of licensors or
- authors of the material; or
-
- e) Declining to grant rights under trademark law for use of some
- trade names, trademarks, or service marks; or
-
- f) Requiring indemnification of licensors and authors of that
- material by anyone who conveys the material (or modified versions of
- it) with contractual assumptions of liability to the recipient, for
- any liability that these contractual assumptions directly impose on
- those licensors and authors.
-
- All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
- If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
- Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions;
-the above requirements apply either way.
-
- 8. Termination.
-
- You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
- However, if you cease all violation of this License, then your
-license from a particular copyright holder is reinstated (a)
-provisionally, unless and until the copyright holder explicitly and
-finally terminates your license, and (b) permanently, if the copyright
-holder fails to notify you of the violation by some reasonable means
-prior to 60 days after the cessation.
-
- Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
- Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
-
- 9. Acceptance Not Required for Having Copies.
-
- You are not required to accept this License in order to receive or
-run a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
-
- 10. Automatic Licensing of Downstream Recipients.
-
- Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
- An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that
-transaction who receives a copy of the work also receives whatever
-licenses to the work the party's predecessor in interest had or could
-give under the previous paragraph, plus a right to possession of the
-Corresponding Source of the work from the predecessor in interest, if
-the predecessor has it or can get it with reasonable efforts.
-
- You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
-
- 11. Patents.
-
- A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
- A contributor's "essential patent claims" are all patent claims
-owned or controlled by the contributor, whether already acquired or
-hereafter acquired, that would be infringed by some manner, permitted
-by this License, of making, using, or selling its contributor version,
-but do not include claims that would be infringed only as a
-consequence of further modification of the contributor version. For
-purposes of this definition, "control" includes the right to grant
-patent sublicenses in a manner consistent with the requirements of
-this License.
-
- Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
- In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
- If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
- If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
- A patent license is "discriminatory" if it does not include within
-the scope of its coverage, prohibits the exercise of, or is
-conditioned on the non-exercise of one or more of the rights that are
-specifically granted under this License. You may not convey a covered
-work if you are a party to an arrangement with a third party that is
-in the business of distributing software, under which you make payment
-to the third party based on the extent of your activity of conveying
-the work, and under which the third party grants, to any of the
-parties who would receive the covered work from you, a discriminatory
-patent license (a) in connection with copies of the covered work
-conveyed by you (or copies made from those copies), or (b) primarily
-for and in connection with specific products or compilations that
-contain the covered work, unless you entered into that arrangement,
-or that patent license was granted, prior to 28 March 2007.
-
- Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
-
- 12. No Surrender of Others' Freedom.
-
- If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you may
-not convey it at all. For example, if you agree to terms that obligate you
-to collect a royalty for further conveying from those to whom you convey
-the Program, the only way you could satisfy both those terms and this
-License would be to refrain entirely from conveying the Program.
-
- 13. Use with the GNU Affero General Public License.
-
- Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
-
- 14. Revised Versions of this License.
-
- The Free Software Foundation may publish revised and/or new versions of
-the GNU General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
- Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU General
-Public License "or any later version" applies to it, you have the
-option of following the terms and conditions either of that numbered
-version or of any later version published by the Free Software
-Foundation. If the Program does not specify a version number of the
-GNU General Public License, you may choose any version ever published
-by the Free Software Foundation.
-
- If the Program specifies that a proxy can decide which future
-versions of the GNU General Public License can be used, that proxy's
-public statement of acceptance of a version permanently authorizes you
-to choose that version for the Program.
-
- Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
-
- 15. Disclaimer of Warranty.
-
- THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
-OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
-IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
-ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
- 16. Limitation of Liability.
-
- IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
-THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
-GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
-USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
-PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
-EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
-SUCH DAMAGES.
-
- 17. Interpretation of Sections 15 and 16.
-
- If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
- END OF TERMS AND CONDITIONS
-
- How to Apply These Terms to Your New Programs
-
- If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
- To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-    <one line to give the program's name and a brief idea of what it does.>
-    Copyright (C) <year>  <name of author>
-
- This program is free software: you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation, either version 3 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
-    along with this program. If not, see <https://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
- If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
-
-    <program>  Copyright (C) <year>  <name of author>
- This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
- This is free software, and you are welcome to redistribute it
- under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, your program's commands
-might be different; for a GUI interface, you would use an "about box".
-
- You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU GPL, see
-<https://www.gnu.org/licenses/>.
-
- The GNU General Public License does not permit incorporating your program
-into proprietary programs. If your program is a subroutine library, you
-may consider it more useful to permit linking proprietary applications with
-the library. If this is what you want to do, use the GNU Lesser General
-Public License instead of this License. But first, please read
-<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/filedropzone/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/filedropzone/Factory.d.ts
deleted file mode 100644
index ecb4da5b9d4dd004be9f728d8316d82f15e853e6..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/filedropzone/Factory.d.ts
+++ /dev/null
@@ -1,5 +0,0 @@
-import FileDropZone from './FileDropZone.js';
-
-export default function (
- config?: FileDropZone.IConfig
-): FileDropZone;
diff --git a/spaces/AlexWang/lama/models/ade20k/segm_lib/nn/modules/__init__.py b/spaces/AlexWang/lama/models/ade20k/segm_lib/nn/modules/__init__.py
deleted file mode 100644
index bc8709d92c610b36e0bcbd7da20c1eb41dc8cfcf..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/models/ade20k/segm_lib/nn/modules/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : __init__.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-from .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d
-from .replicate import DataParallelWithCallback, patch_replication_callback
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/instaboost/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/instaboost/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco.py
deleted file mode 100644
index 0acd088a469e682011a90b770efa51116f6c42ca..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/instaboost/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './mask_rcnn_r50_fpn_instaboost_4x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_64x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=64,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/res2net.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/res2net.py
deleted file mode 100644
index 7901b7f2fa29741d72328bdbdbf92fc4d5c5f847..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/res2net.py
+++ /dev/null
@@ -1,351 +0,0 @@
-import math
-
-import torch
-import torch.nn as nn
-import torch.utils.checkpoint as cp
-from mmcv.cnn import (build_conv_layer, build_norm_layer, constant_init,
- kaiming_init)
-from mmcv.runner import load_checkpoint
-from torch.nn.modules.batchnorm import _BatchNorm
-
-from mmdet.utils import get_root_logger
-from ..builder import BACKBONES
-from .resnet import Bottleneck as _Bottleneck
-from .resnet import ResNet
-
-
-class Bottle2neck(_Bottleneck):
- expansion = 4
-
- def __init__(self,
- inplanes,
- planes,
- scales=4,
- base_width=26,
- base_channels=64,
- stage_type='normal',
- **kwargs):
- """Bottle2neck block for Res2Net.
-
- If style is "pytorch", the stride-two layer is the 3x3 conv layer, if
- it is "caffe", the stride-two layer is the first 1x1 conv layer.
- """
- super(Bottle2neck, self).__init__(inplanes, planes, **kwargs)
- assert scales > 1, 'Res2Net degenerates to ResNet when scales = 1.'
- width = int(math.floor(self.planes * (base_width / base_channels)))
-
- self.norm1_name, norm1 = build_norm_layer(
- self.norm_cfg, width * scales, postfix=1)
- self.norm3_name, norm3 = build_norm_layer(
- self.norm_cfg, self.planes * self.expansion, postfix=3)
-
- self.conv1 = build_conv_layer(
- self.conv_cfg,
- self.inplanes,
- width * scales,
- kernel_size=1,
- stride=self.conv1_stride,
- bias=False)
- self.add_module(self.norm1_name, norm1)
-
- if stage_type == 'stage' and self.conv2_stride != 1:
- self.pool = nn.AvgPool2d(
- kernel_size=3, stride=self.conv2_stride, padding=1)
- convs = []
- bns = []
-
- fallback_on_stride = False
- if self.with_dcn:
- fallback_on_stride = self.dcn.pop('fallback_on_stride', False)
- if not self.with_dcn or fallback_on_stride:
- for i in range(scales - 1):
- convs.append(
- build_conv_layer(
- self.conv_cfg,
- width,
- width,
- kernel_size=3,
- stride=self.conv2_stride,
- padding=self.dilation,
- dilation=self.dilation,
- bias=False))
- bns.append(
- build_norm_layer(self.norm_cfg, width, postfix=i + 1)[1])
- self.convs = nn.ModuleList(convs)
- self.bns = nn.ModuleList(bns)
- else:
- assert self.conv_cfg is None, 'conv_cfg must be None for DCN'
- for i in range(scales - 1):
- convs.append(
- build_conv_layer(
- self.dcn,
- width,
- width,
- kernel_size=3,
- stride=self.conv2_stride,
- padding=self.dilation,
- dilation=self.dilation,
- bias=False))
- bns.append(
- build_norm_layer(self.norm_cfg, width, postfix=i + 1)[1])
- self.convs = nn.ModuleList(convs)
- self.bns = nn.ModuleList(bns)
-
- self.conv3 = build_conv_layer(
- self.conv_cfg,
- width * scales,
- self.planes * self.expansion,
- kernel_size=1,
- bias=False)
- self.add_module(self.norm3_name, norm3)
-
- self.stage_type = stage_type
- self.scales = scales
- self.width = width
- delattr(self, 'conv2')
- delattr(self, self.norm2_name)
-
- def forward(self, x):
- """Forward function."""
-
- def _inner_forward(x):
- identity = x
-
- out = self.conv1(x)
- out = self.norm1(out)
- out = self.relu(out)
-
- if self.with_plugins:
- out = self.forward_plugin(out, self.after_conv1_plugin_names)
-
- spx = torch.split(out, self.width, 1)
- sp = self.convs[0](spx[0].contiguous())
- sp = self.relu(self.bns[0](sp))
- out = sp
- for i in range(1, self.scales - 1):
- if self.stage_type == 'stage':
- sp = spx[i]
- else:
- sp = sp + spx[i]
- sp = self.convs[i](sp.contiguous())
- sp = self.relu(self.bns[i](sp))
- out = torch.cat((out, sp), 1)
-
- if self.stage_type == 'normal' or self.conv2_stride == 1:
- out = torch.cat((out, spx[self.scales - 1]), 1)
- elif self.stage_type == 'stage':
- out = torch.cat((out, self.pool(spx[self.scales - 1])), 1)
-
- if self.with_plugins:
- out = self.forward_plugin(out, self.after_conv2_plugin_names)
-
- out = self.conv3(out)
- out = self.norm3(out)
-
- if self.with_plugins:
- out = self.forward_plugin(out, self.after_conv3_plugin_names)
-
- if self.downsample is not None:
- identity = self.downsample(x)
-
- out += identity
-
- return out
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(_inner_forward, x)
- else:
- out = _inner_forward(x)
-
- out = self.relu(out)
-
- return out
-
-
-class Res2Layer(nn.Sequential):
- """Res2Layer to build Res2Net style backbone.
-
- Args:
- block (nn.Module): block used to build ResLayer.
- inplanes (int): inplanes of block.
- planes (int): planes of block.
- num_blocks (int): number of blocks.
- stride (int): stride of the first block. Default: 1
- avg_down (bool): Use AvgPool instead of stride conv when
- downsampling in the bottle2neck. Default: False
- conv_cfg (dict): dictionary to construct and config conv layer.
- Default: None
- norm_cfg (dict): dictionary to construct and config norm layer.
- Default: dict(type='BN')
- scales (int): Scales used in Res2Net. Default: 4
- base_width (int): Basic width of each scale. Default: 26
- """
-
- def __init__(self,
- block,
- inplanes,
- planes,
- num_blocks,
- stride=1,
- avg_down=True,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- scales=4,
- base_width=26,
- **kwargs):
- self.block = block
-
- downsample = None
- if stride != 1 or inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.AvgPool2d(
- kernel_size=stride,
- stride=stride,
- ceil_mode=True,
- count_include_pad=False),
- build_conv_layer(
- conv_cfg,
- inplanes,
- planes * block.expansion,
- kernel_size=1,
- stride=1,
- bias=False),
- build_norm_layer(norm_cfg, planes * block.expansion)[1],
- )
-
- layers = []
- layers.append(
- block(
- inplanes=inplanes,
- planes=planes,
- stride=stride,
- downsample=downsample,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- scales=scales,
- base_width=base_width,
- stage_type='stage',
- **kwargs))
- inplanes = planes * block.expansion
- for i in range(1, num_blocks):
- layers.append(
- block(
- inplanes=inplanes,
- planes=planes,
- stride=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- scales=scales,
- base_width=base_width,
- **kwargs))
- super(Res2Layer, self).__init__(*layers)
-
-
-@BACKBONES.register_module()
-class Res2Net(ResNet):
- """Res2Net backbone.
-
- Args:
- scales (int): Scales used in Res2Net. Default: 4
- base_width (int): Basic width of each scale. Default: 26
- depth (int): Depth of res2net, from {50, 101, 152}.
- in_channels (int): Number of input image channels. Default: 3.
- num_stages (int): Res2net stages. Default: 4.
- strides (Sequence[int]): Strides of the first block of each stage.
- dilations (Sequence[int]): Dilation of each stage.
- out_indices (Sequence[int]): Output from which stages.
- style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two
- layer is the 3x3 conv layer, otherwise the stride-two layer is
- the first 1x1 conv layer.
- deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv
- avg_down (bool): Use AvgPool instead of stride conv when
- downsampling in the bottle2neck.
- frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
- -1 means not freezing any parameters.
- norm_cfg (dict): Dictionary to construct and config norm layer.
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only.
- plugins (list[dict]): List of plugins for stages, each dict contains:
-
- - cfg (dict, required): Cfg dict to build plugin.
- - position (str, required): Position inside block to insert
- plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'.
- - stages (tuple[bool], optional): Stages to apply plugin, length
- should be same as 'num_stages'.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed.
- zero_init_residual (bool): Whether to use zero init for last norm layer
- in resblocks to let them behave as identity.
-
- Example:
- >>> from mmdet.models import Res2Net
- >>> import torch
- >>> self = Res2Net(depth=50, scales=4, base_width=26)
- >>> self.eval()
- >>> inputs = torch.rand(1, 3, 32, 32)
- >>> level_outputs = self.forward(inputs)
- >>> for level_out in level_outputs:
- ... print(tuple(level_out.shape))
- (1, 256, 8, 8)
- (1, 512, 4, 4)
- (1, 1024, 2, 2)
- (1, 2048, 1, 1)
- """
-
- arch_settings = {
- 50: (Bottle2neck, (3, 4, 6, 3)),
- 101: (Bottle2neck, (3, 4, 23, 3)),
- 152: (Bottle2neck, (3, 8, 36, 3))
- }
-
- def __init__(self,
- scales=4,
- base_width=26,
- style='pytorch',
- deep_stem=True,
- avg_down=True,
- **kwargs):
- self.scales = scales
- self.base_width = base_width
- super(Res2Net, self).__init__(
- style='pytorch', deep_stem=True, avg_down=True, **kwargs)
-
- def make_res_layer(self, **kwargs):
- return Res2Layer(
- scales=self.scales,
- base_width=self.base_width,
- base_channels=self.base_channels,
- **kwargs)
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
- constant_init(m, 1)
-
- if self.dcn is not None:
- for m in self.modules():
- if isinstance(m, Bottle2neck):
- # dcn in Res2Net bottle2neck is in ModuleList
- for n in m.convs:
- if hasattr(n, 'conv_offset'):
- constant_init(n.conv_offset, 0)
-
- if self.zero_init_residual:
- for m in self.modules():
- if isinstance(m, Bottle2neck):
- constant_init(m.norm3, 0)
- else:
- raise TypeError('pretrained must be a str or None')
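
The heart of `Bottle2neck.forward` above is the Res2Net split-and-sum: the 1x1 conv output is split channel-wise into `scales` chunks, every chunk except the last passes through its own 3x3 conv, and each scale is fed the previous scale's output before convolution. The following stripped-down sketch (plain PyTorch; BN/ReLU and the DCN/plugin branches omitted, sizes invented) shows just that data flow:

```python
import torch
import torch.nn as nn

scales, width = 4, 26
# one 3x3 conv per scale except the last chunk, mirroring self.convs in Bottle2neck
convs = nn.ModuleList([nn.Conv2d(width, width, 3, padding=1, bias=False)
                       for _ in range(scales - 1)])

x = torch.randn(1, width * scales, 8, 8)   # stands in for the 1x1 conv1 output
spx = torch.split(x, width, dim=1)         # one chunk per scale
sp = convs[0](spx[0])
out = sp
for i in range(1, scales - 1):
    sp = convs[i](sp + spx[i])             # 'normal' stage_type: add the previous scale's output
    out = torch.cat((out, sp), dim=1)
out = torch.cat((out, spx[-1]), dim=1)     # the last chunk is passed through untouched
print(out.shape)                           # torch.Size([1, 104, 8, 8]) -> fed to the 1x1 conv3
```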
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/nonlocal_r50-d8.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/nonlocal_r50-d8.py
deleted file mode 100644
index 5674a39854cafd1f2e363bac99c58ccae62f24da..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/nonlocal_r50-d8.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='NLHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- dropout_ratio=0.1,
- reduction=2,
- use_scale=True,
- mode='embedded_gaussian',
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/README.md b/spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/README.md
deleted file mode 100644
index da0924ac60f0a16a17fe4705e0edbf5aad962a82..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/README.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# Non-local Neural Networks
-
-## Introduction
-
-
-
-```latex
-@inproceedings{wang2018non,
- title={Non-local neural networks},
- author={Wang, Xiaolong and Girshick, Ross and Gupta, Abhinav and He, Kaiming},
- booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
- pages={7794--7803},
- year={2018}
-}
-```
-
-## Results and models
-
-### Cityscapes
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| -------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------- | ----------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| NonLocal | R-50-D8 | 512x1024 | 40000 | 7.4 | 2.72 | 78.24 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x1024_40k_cityscapes/nonlocal_r50-d8_512x1024_40k_cityscapes_20200605_210748-c75e81e3.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x1024_40k_cityscapes/nonlocal_r50-d8_512x1024_40k_cityscapes_20200605_210748.log.json) |
-| NonLocal | R-101-D8 | 512x1024 | 40000 | 10.9 | 1.95 | 78.66 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x1024_40k_cityscapes/nonlocal_r101-d8_512x1024_40k_cityscapes_20200605_210748-d63729fa.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x1024_40k_cityscapes/nonlocal_r101-d8_512x1024_40k_cityscapes_20200605_210748.log.json) |
-| NonLocal | R-50-D8 | 769x769 | 40000 | 8.9 | 1.52 | 78.33 | 79.92 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_769x769_40k_cityscapes/nonlocal_r50-d8_769x769_40k_cityscapes_20200530_045243-82ef6749.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_769x769_40k_cityscapes/nonlocal_r50-d8_769x769_40k_cityscapes_20200530_045243.log.json) |
-| NonLocal | R-101-D8 | 769x769 | 40000 | 12.8 | 1.05 | 78.57 | 80.29 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_769x769_40k_cityscapes/nonlocal_r101-d8_769x769_40k_cityscapes_20200530_045348-8fe9a9dc.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_769x769_40k_cityscapes/nonlocal_r101-d8_769x769_40k_cityscapes_20200530_045348.log.json) |
-| NonLocal | R-50-D8 | 512x1024 | 80000 | - | - | 78.01 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x1024_80k_cityscapes/nonlocal_r50-d8_512x1024_80k_cityscapes_20200607_193518-d6839fae.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x1024_80k_cityscapes/nonlocal_r50-d8_512x1024_80k_cityscapes_20200607_193518.log.json) |
-| NonLocal | R-101-D8 | 512x1024 | 80000 | - | - | 78.93 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x1024_80k_cityscapes/nonlocal_r101-d8_512x1024_80k_cityscapes_20200607_183411-32700183.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x1024_80k_cityscapes/nonlocal_r101-d8_512x1024_80k_cityscapes_20200607_183411.log.json) |
-| NonLocal | R-50-D8 | 769x769 | 80000 | - | - | 79.05 | 80.68 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_769x769_80k_cityscapes/nonlocal_r50-d8_769x769_80k_cityscapes_20200607_193506-1f9792f6.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_769x769_80k_cityscapes/nonlocal_r50-d8_769x769_80k_cityscapes_20200607_193506.log.json) |
-| NonLocal | R-101-D8 | 769x769 | 80000 | - | - | 79.40 | 80.85 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_769x769_80k_cityscapes/nonlocal_r101-d8_769x769_80k_cityscapes_20200607_183428-0e1fa4f9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_769x769_80k_cityscapes/nonlocal_r101-d8_769x769_80k_cityscapes_20200607_183428.log.json) |
-
-### ADE20K
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| -------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| NonLocal | R-50-D8 | 512x512 | 80000 | 9.1 | 21.37 | 40.75 | 42.05 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r50-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x512_80k_ade20k/nonlocal_r50-d8_512x512_80k_ade20k_20200615_015801-5ae0aa33.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x512_80k_ade20k/nonlocal_r50-d8_512x512_80k_ade20k_20200615_015801.log.json) |
-| NonLocal | R-101-D8 | 512x512 | 80000 | 12.6 | 13.97 | 42.90 | 44.27 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x512_80k_ade20k/nonlocal_r101-d8_512x512_80k_ade20k_20200615_015758-24105919.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x512_80k_ade20k/nonlocal_r101-d8_512x512_80k_ade20k_20200615_015758.log.json) |
-| NonLocal | R-50-D8 | 512x512 | 160000 | - | - | 42.03 | 43.04 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x512_160k_ade20k/nonlocal_r50-d8_512x512_160k_ade20k_20200616_005410-baef45e3.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x512_160k_ade20k/nonlocal_r50-d8_512x512_160k_ade20k_20200616_005410.log.json) |
-| NonLocal | R-101-D8 | 512x512 | 160000 | - | - | 43.36 | 44.83 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x512_160k_ade20k/nonlocal_r101-d8_512x512_160k_ade20k_20200616_003422-affd0f8d.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x512_160k_ade20k/nonlocal_r101-d8_512x512_160k_ade20k_20200616_003422.log.json) |
-
-### Pascal VOC 2012 + Aug
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| -------- | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | -------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| NonLocal | R-50-D8 | 512x512 | 20000 | 6.4 | 21.21 | 76.20 | 77.12 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r50-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x512_20k_voc12aug/nonlocal_r50-d8_512x512_20k_voc12aug_20200617_222613-07f2a57c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x512_20k_voc12aug/nonlocal_r50-d8_512x512_20k_voc12aug_20200617_222613.log.json) |
-| NonLocal | R-101-D8 | 512x512 | 20000 | 9.8 | 14.01 | 78.15 | 78.86 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r101-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x512_20k_voc12aug/nonlocal_r101-d8_512x512_20k_voc12aug_20200617_222615-948c68ab.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x512_20k_voc12aug/nonlocal_r101-d8_512x512_20k_voc12aug_20200617_222615.log.json) |
-| NonLocal | R-50-D8 | 512x512 | 40000 | - | - | 76.65 | 77.47 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r50-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x512_40k_voc12aug/nonlocal_r50-d8_512x512_40k_voc12aug_20200614_000028-0139d4a9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r50-d8_512x512_40k_voc12aug/nonlocal_r50-d8_512x512_40k_voc12aug_20200614_000028.log.json) |
-| NonLocal | R-101-D8 | 512x512 | 40000 | - | - | 78.27 | 79.12 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nonlocal_net/nonlocal_r101-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x512_40k_voc12aug/nonlocal_r101-d8_512x512_40k_voc12aug_20200614_000028-7e5ff470.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/nonlocal_net/nonlocal_r101-d8_512x512_40k_voc12aug/nonlocal_r101-d8_512x512_40k_voc12aug_20200614_000028.log.json) |
diff --git a/spaces/AnimaLab/bias-test-gpt-pairs/openAI_manager.py b/spaces/AnimaLab/bias-test-gpt-pairs/openAI_manager.py
deleted file mode 100644
index 3f996b00204f4113ac301a77aa891455bc913bc0..0000000000000000000000000000000000000000
--- a/spaces/AnimaLab/bias-test-gpt-pairs/openAI_manager.py
+++ /dev/null
@@ -1,191 +0,0 @@
-import openai
-import backoff
-import json
-import re
-import random
-
-import mgr_bias_scoring as bt_mgr
-
-def initOpenAI(key):
- openai.api_key = key
-
- # list models
- models = openai.Model.list()
-
- return models
-
-# construct prompts from example_shots
-def examples_to_prompt(example_shots, kwd_pair):
- prompt = ""
- for shot in example_shots:
- prompt += "Keywords: "+', '.join(shot['Keywords'])+" ## Sentence: "+ \
- shot['Sentence']+" ##\n"
- prompt += f"Keywords: {kwd_pair[0]}, {kwd_pair[1]} ## Sentence: "
- return prompt
-
-def genChatGPT(model_name, kwd_pair, bias_spec, num2gen, numTries, temperature=0.8):
- t1, t2, a1, a2 = bt_mgr.get_words(bias_spec)
- att_terms_str = ','.join([f"'{t}'" for t in random.sample(a1+a2, min(8, len(a1+a2)))])
- t_terms_str = ','.join([f"'{t}'" for t in random.sample(t1+t2, min(8, len(t1+t2)))])
-
- # find out which social group the generator term belongs to
- grp_term = kwd_pair[0]
- if grp_term in t1:
- grp_term_idx = t1.index(grp_term)
- grp_term_pair = [grp_term, t2[grp_term_idx]]
- else:
- grp_term_idx = t2.index(grp_term)
- grp_term_pair = [grp_term, t1[grp_term_idx]]
-
- # construct prompt
- #instruction = f"Write a sentence including terms \"{kwd_pair[0]}\" and \"{kwd_pair[1]}\"."# Use examples as guide for the type of sentences to write."
- #prompt = examples_to_prompt(example_shots, kwd_pair)
- instruction = f"Write a sentence including target term \"{kwd_pair[0]}\" and attribute term \"{kwd_pair[1]}\".\n \
-Other target terms in this context are: {t_terms_str}. Use them for interpretation of requested target term, but don't include these specifically. \
-Other attribute terms in this context are: {att_terms_str}. Use them for interpretation of requested attribute term, but don't include these specifically. "# Use examples as guide for the type of sentences to write."
-
- #print(f"Prompt: {prompt}")
- #print(f"Instruction: {instruction}")
-
- # https://github.com/openai/openai-cookbook/blob/main/examples/How_to_handle_rate_limits.ipynb
- @backoff.on_exception(backoff.expo, (openai.error.RateLimitError,
- openai.error.APIError,
- openai.error.ServiceUnavailableError,
- ConnectionResetError,
- json.decoder.JSONDecodeError))#,
- #max_time=300,
- #raise_on_giveup=False,
- #giveup=fatal_code)
-
- def completions_with_backoff(**kwargs):
- return openai.ChatCompletion.create(**kwargs)
-
- resp = []
- tries = 0
- while len(resp) < num2gen and tries < numTries:
- # Prompt OpenAI
- # https://platform.openai.com/docs/api-reference/chat/create
- response = completions_with_backoff(model=model_name,
- temperature=temperature,
- messages=[{"role": "system", "content": instruction}])
- # ,{"role": "user", "content": prompt}
-
- sentence = response["choices"][0]["message"]["content"]
-
- fnd_kwd_0 = list(re.finditer(f'{kwd_pair[0].lower()}[ .,!]+', sentence.lower()))
- fnd_kwd_1 = list(re.finditer(f'{kwd_pair[1].lower()}[ .,!]+', sentence.lower()))
- if len(fnd_kwd_0)>0 and len(fnd_kwd_1)>0:
- resp.append([kwd_pair[0], kwd_pair[1], sentence, grp_term_pair[0], grp_term_pair[1]])
-
- tries += 1
-
- return resp, instruction
-
-# Prompt ChatGPT to write a sentence alternative for the other social group term
-def promptChatGPTTemplate(model_name, term1, term2, sentence, temperature=0.0):
- instruction = f"Rewrite the sentence to replace {term1} with {term2}. Make only minimal changes to preserve grammar."
- prompt = f"Sentence: {sentence}, Rewrite: "
-
- # https://github.com/openai/openai-cookbook/blob/main/examples/How_to_handle_rate_limits.ipynb
- @backoff.on_exception(backoff.expo, (openai.error.RateLimitError,
- openai.error.APIError,
- openai.error.ServiceUnavailableError,
- ConnectionResetError,
- json.decoder.JSONDecodeError))
-
- def completions_with_backoff(**kwargs):
- return openai.ChatCompletion.create(**kwargs)
-
- # Prompt OpenAI
- # https://platform.openai.com/docs/api-reference/chat/create
- response = completions_with_backoff(model=model_name,
- temperature=temperature,
- messages=[{"role": "system", "content": instruction},
- {"role": "user", "content": prompt}])
-
- return response["choices"][0]["message"]["content"]
-
-# turn generated sentence into a test templates
-def chatgpt_sentence_alternative(row, model_name):
- sentence = row['Sentence']
- grp_term = row['org_grp_term']
- att_term = row['Attribute term']
- grp_term1 = row['Group term 1']
- grp_term2 = row['Group term 2']
-
- rewrite = promptChatGPTTemplate(model_name, grp_term1, grp_term2, sentence)
-
- #template, grp_refs = maskDifferences(sentence, rewrite, grp_term_pair, att_term)
- return rewrite
-
-def generateTestSentencesCustom(model_name, gr1_kwds, gr2_kwds, attribute_kwds, att_counts, bias_spec, progress):
- print(f"Running Custom Sentence Generator, Counts:\n {att_counts}")
- print(f"Groups: [{gr1_kwds}, {gr2_kwds}]\nAttributes: {attribute_kwds}")
-
- numGlobTries = 5
- numTries = 10
- all_gens = []
- show_instr = False
- num_steps = len(attribute_kwds)
- for ai, att_kwd in enumerate(attribute_kwds):
- print(f'Running att: {att_kwd}..')
- att_count = 0
- if att_kwd in att_counts:
- att_count = att_counts[att_kwd]
- elif att_kwd.replace(' ','-') in att_counts:
- att_count = att_counts[att_kwd.replace(' ','-')]
- else:
- print(f"Missing count for attribute: <{att_kwd}>")
-
- if att_count != 0:
- print(f"For {att_kwd} generate {att_count}")
-
- att_gens = []
- glob_tries = 0
- while len(att_gens) < att_count and glob_tries < att_count*numGlobTries:
- gr1_kwd = random.sample(gr1_kwds, 1)[0]
- gr2_kwd = random.sample(gr2_kwds, 1)[0]
-
- for kwd_pair in [[gr1_kwd.strip(), att_kwd.strip()], [gr2_kwd.strip(), att_kwd.strip()]]:
- progress((ai)/num_steps, desc=f"Generating {kwd_pair[0]}<>{att_kwd}...")
-
- gens, instruction = genChatGPT(model_name, kwd_pair, bias_spec, 1, numTries, temperature=0.8)
- att_gens.extend(gens)
-
- if show_instr == False:
- print(f"Instruction: {instruction}")
- show_instr = True
-
- glob_tries += 1
- print(".", end="", flush=True)
- print()
-
- if len(att_gens) > att_count:
- print(f"Downsampling from {len(att_gens)} to {att_count}...")
- att_gens = random.sample(att_gens, att_count)
-
- print(f"Num generated: {len(att_gens)}")
- all_gens.extend(att_gens)
-
- return all_gens
-
-
-# generate sentences
-def generateTestSentences(model_name, group_kwds, attribute_kwds, num2gen, bias_spec, progress):
- print(f"Groups: [{group_kwds}]\nAttributes: [{attribute_kwds}]")
-
- numTries = 5
- #num2gen = 2
- all_gens = []
- num_steps = len(group_kwds)*len(attribute_kwds)
- for gi, grp_kwd in enumerate(group_kwds):
- for ai, att_kwd in enumerate(attribute_kwds):
- progress((gi*len(attribute_kwds)+ai)/num_steps, desc=f"Generating {grp_kwd}<>{att_kwd}...")
-
- kwd_pair = [grp_kwd.strip(), att_kwd.strip()]
-
-            gens, _ = genChatGPT(model_name, kwd_pair, bias_spec, num2gen, numTries, temperature=0.8)
- #print(f"Gens for pair: <{kwd_pair}> -> {gens}")
- all_gens.extend(gens)
-
- return all_gens
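
Every OpenAI call in this module goes through the same two safeguards: an exponential-backoff retry around the legacy `openai<1.0` chat endpoint, and a regex check that both requested keywords actually appear in the generated sentence. A minimal, self-contained sketch of that retry wrapper (the model name, prompt, and API key below are placeholders, not values taken from this file):

```python
import json

import backoff
import openai

openai.api_key = "sk-..."  # placeholder; the app sets this via initOpenAI()

# retried with exponentially growing delays on transient API / network / parse errors
@backoff.on_exception(backoff.expo,
                      (openai.error.RateLimitError,
                       openai.error.APIError,
                       openai.error.ServiceUnavailableError,
                       ConnectionResetError,
                       json.decoder.JSONDecodeError))
def chat_with_backoff(**kwargs):
    return openai.ChatCompletion.create(**kwargs)

response = chat_with_backoff(
    model="gpt-3.5-turbo",
    temperature=0.8,
    messages=[{"role": "system",
               "content": 'Write a sentence including target term "nurse" and attribute term "caring".'}])
print(response["choices"][0]["message"]["content"])
```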
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/tool_add_control.py b/spaces/Anonymous-sub/Rerender/ControlNet/tool_add_control.py
deleted file mode 100644
index 8076b5143405e5516b063f4fd63096f65cffbed2..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/tool_add_control.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import sys
-import os
-
-assert len(sys.argv) == 3, 'Args are wrong.'
-
-input_path = sys.argv[1]
-output_path = sys.argv[2]
-
-assert os.path.exists(input_path), 'Input model does not exist.'
-assert not os.path.exists(output_path), 'Output filename already exists.'
-assert os.path.exists(os.path.dirname(output_path)), 'Output path is not valid.'
-
-import torch
-from share import *
-from cldm.model import create_model
-
-
-def get_node_name(name, parent_name):
- if len(name) <= len(parent_name):
- return False, ''
- p = name[:len(parent_name)]
- if p != parent_name:
- return False, ''
- return True, name[len(parent_name):]
-
-
-model = create_model(config_path='./models/cldm_v15.yaml')
-
-pretrained_weights = torch.load(input_path)
-if 'state_dict' in pretrained_weights:
- pretrained_weights = pretrained_weights['state_dict']
-
-scratch_dict = model.state_dict()
-
-target_dict = {}
-for k in scratch_dict.keys():
- is_control, name = get_node_name(k, 'control_')
- if is_control:
- copy_k = 'model.diffusion_' + name
- else:
- copy_k = k
- if copy_k in pretrained_weights:
- target_dict[k] = pretrained_weights[copy_k].clone()
- else:
- target_dict[k] = scratch_dict[k].clone()
- print(f'These weights are newly added: {k}')
-
-model.load_state_dict(target_dict, strict=True)
-torch.save(model.state_dict(), output_path)
-print('Done.')
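
The copy loop above is the whole trick of this script: every `control_*` parameter of the ControlNet model is initialised from the matching `model.diffusion_*` weight of the pretrained Stable Diffusion checkpoint, any parameter without a counterpart keeps its freshly initialised value, and all other modules are copied verbatim. A toy, runnable illustration of that key-mapping rule (the key names below are invented for the example):

```python
def get_node_name(name, parent_name):
    if len(name) <= len(parent_name) or name[:len(parent_name)] != parent_name:
        return False, ''
    return True, name[len(parent_name):]

scratch_keys = [
    'control_model.input_blocks.0.0.weight',    # ControlNet copy of a UNet block
    'control_model.input_hint_block.0.weight',  # genuinely new layer, no SD counterpart
    'first_stage_model.decoder.conv_in.weight', # unrelated module, copied as-is
]
pretrained_keys = {'model.diffusion_model.input_blocks.0.0.weight',
                   'first_stage_model.decoder.conv_in.weight'}

for k in scratch_keys:
    is_control, name = get_node_name(k, 'control_')
    copy_k = 'model.diffusion_' + name if is_control else k
    source = copy_k if copy_k in pretrained_keys else 'scratch (newly added)'
    print(f'{k}  <-  {source}')
```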
diff --git a/spaces/Apex-X/nono/roop/core.py b/spaces/Apex-X/nono/roop/core.py
deleted file mode 100644
index b70d8548194c74cce3e4d20c53c7a88c119c2028..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/nono/roop/core.py
+++ /dev/null
@@ -1,215 +0,0 @@
-#!/usr/bin/env python3
-
-import os
-import sys
-# single thread doubles cuda performance - needs to be set before torch import
-if any(arg.startswith('--execution-provider') for arg in sys.argv):
- os.environ['OMP_NUM_THREADS'] = '1'
-# reduce tensorflow log level
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
-import warnings
-from typing import List
-import platform
-import signal
-import shutil
-import argparse
-import torch
-import onnxruntime
-import tensorflow
-
-import roop.globals
-import roop.metadata
-import roop.ui as ui
-from roop.predicter import predict_image, predict_video
-from roop.processors.frame.core import get_frame_processors_modules
-from roop.utilities import has_image_extension, is_image, is_video, detect_fps, create_video, extract_frames, get_temp_frame_paths, restore_audio, create_temp, move_temp, clean_temp, normalize_output_path
-
-if 'ROCMExecutionProvider' in roop.globals.execution_providers:
- del torch
-
-warnings.filterwarnings('ignore', category=FutureWarning, module='insightface')
-warnings.filterwarnings('ignore', category=UserWarning, module='torchvision')
-
-
-def parse_args() -> None:
- signal.signal(signal.SIGINT, lambda signal_number, frame: destroy())
- program = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=100))
-    program.add_argument('-s', '--source', help='select a source image', dest='source_path')
-    program.add_argument('-t', '--target', help='select a target image or video', dest='target_path')
- program.add_argument('-o', '--output', help='select output file or directory', dest='output_path')
- program.add_argument('--frame-processor', help='frame processors (choices: face_swapper, face_enhancer, ...)', dest='frame_processor', default=['face_swapper'], nargs='+')
- program.add_argument('--keep-fps', help='keep original fps', dest='keep_fps', action='store_true', default=False)
- program.add_argument('--keep-audio', help='keep original audio', dest='keep_audio', action='store_true', default=True)
- program.add_argument('--keep-frames', help='keep temporary frames', dest='keep_frames', action='store_true', default=False)
- program.add_argument('--many-faces', help='process every face', dest='many_faces', action='store_true', default=False)
- program.add_argument('--video-encoder', help='adjust output video encoder', dest='video_encoder', default='libx264', choices=['libx264', 'libx265', 'libvpx-vp9'])
- program.add_argument('--video-quality', help='adjust output video quality', dest='video_quality', type=int, default=18, choices=range(52), metavar='[0-51]')
- program.add_argument('--max-memory', help='maximum amount of RAM in GB', dest='max_memory', type=int, default=suggest_max_memory())
- program.add_argument('--execution-provider', help='available execution provider (choices: cpu, ...)', dest='execution_provider', default=['cpu'], choices=suggest_execution_providers(), nargs='+')
- program.add_argument('--execution-threads', help='number of execution threads', dest='execution_threads', type=int, default=suggest_execution_threads())
- program.add_argument('-v', '--version', action='version', version=f'{roop.metadata.name} {roop.metadata.version}')
-
- args = program.parse_args()
-
- roop.globals.source_path = args.source_path
- roop.globals.target_path = args.target_path
- roop.globals.output_path = normalize_output_path(roop.globals.source_path, roop.globals.target_path, args.output_path)
- roop.globals.frame_processors = args.frame_processor
- roop.globals.headless = args.source_path or args.target_path or args.output_path
- roop.globals.keep_fps = args.keep_fps
- roop.globals.keep_audio = args.keep_audio
- roop.globals.keep_frames = args.keep_frames
- roop.globals.many_faces = args.many_faces
- roop.globals.video_encoder = args.video_encoder
- roop.globals.video_quality = args.video_quality
- roop.globals.max_memory = args.max_memory
- roop.globals.execution_providers = decode_execution_providers(args.execution_provider)
- roop.globals.execution_threads = args.execution_threads
-
-
-def encode_execution_providers(execution_providers: List[str]) -> List[str]:
- return [execution_provider.replace('ExecutionProvider', '').lower() for execution_provider in execution_providers]
-
-
-def decode_execution_providers(execution_providers: List[str]) -> List[str]:
- return [provider for provider, encoded_execution_provider in zip(onnxruntime.get_available_providers(), encode_execution_providers(onnxruntime.get_available_providers()))
- if any(execution_provider in encoded_execution_provider for execution_provider in execution_providers)]
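
`encode_execution_providers` and `decode_execution_providers` just translate between ONNX Runtime's provider identifiers and the short names exposed on the command line. A small round-trip sketch of that mapping (the exact output depends on which onnxruntime build is installed; 'cuda' only shows up with onnxruntime-gpu):

```python
import onnxruntime

def encode_execution_providers(providers):
    return [p.replace('ExecutionProvider', '').lower() for p in providers]

available = onnxruntime.get_available_providers()    # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']
print(encode_execution_providers(available))          # e.g. ['cuda', 'cpu']

# map the short user-facing names back to full provider identifiers, as decode_execution_providers does
requested = ['cuda']
print([p for p, short in zip(available, encode_execution_providers(available))
       if any(r in short for r in requested)])        # e.g. ['CUDAExecutionProvider']
```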
-
-
-def suggest_max_memory() -> int:
- if platform.system().lower() == 'darwin':
- return 4
- return 16
-
-
-def suggest_execution_providers() -> List[str]:
- return encode_execution_providers(onnxruntime.get_available_providers())
-
-
-def suggest_execution_threads() -> int:
- if 'DmlExecutionProvider' in roop.globals.execution_providers:
- return 1
- if 'ROCMExecutionProvider' in roop.globals.execution_providers:
- return 1
- return 8
-
-
-def limit_resources() -> None:
- # prevent tensorflow memory leak
- gpus = tensorflow.config.experimental.list_physical_devices('GPU')
- for gpu in gpus:
- tensorflow.config.experimental.set_virtual_device_configuration(gpu, [
- tensorflow.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)
- ])
- # limit memory usage
- if roop.globals.max_memory:
- memory = roop.globals.max_memory * 1024 ** 3
- if platform.system().lower() == 'darwin':
- memory = roop.globals.max_memory * 1024 ** 6
- if platform.system().lower() == 'windows':
- import ctypes
- kernel32 = ctypes.windll.kernel32
- kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(memory), ctypes.c_size_t(memory))
- else:
- import resource
- resource.setrlimit(resource.RLIMIT_DATA, (memory, memory))
-
-
-def release_resources() -> None:
- if 'CUDAExecutionProvider' in roop.globals.execution_providers:
- torch.cuda.empty_cache()
-
-
-def pre_check() -> bool:
- if sys.version_info < (3, 9):
- update_status('Python version is not supported - please upgrade to 3.9 or higher.')
- return False
- if not shutil.which('ffmpeg'):
- update_status('ffmpeg is not installed.')
- return False
- return True
-
-
-def update_status(message: str, scope: str = 'ROOP.CORE') -> None:
- print(f'[{scope}] {message}')
- if not roop.globals.headless:
- ui.update_status(message)
-
-
-def start() -> None:
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
- if not frame_processor.pre_start():
- return
- # process image to image
- if has_image_extension(roop.globals.target_path):
- if predict_image(roop.globals.target_path):
- destroy()
- shutil.copy2(roop.globals.target_path, roop.globals.output_path)
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
- update_status('Progressing...', frame_processor.NAME)
- frame_processor.process_image(roop.globals.source_path, roop.globals.output_path, roop.globals.output_path)
- frame_processor.post_process()
- release_resources()
- if is_image(roop.globals.target_path):
-            update_status('Processing to image succeeded!')
- else:
- update_status('Processing to image failed!')
- return
- # process image to videos
- if predict_video(roop.globals.target_path):
- destroy()
- update_status('Creating temp resources...')
- create_temp(roop.globals.target_path)
- update_status('Extracting frames...')
- extract_frames(roop.globals.target_path)
- temp_frame_paths = get_temp_frame_paths(roop.globals.target_path)
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
- update_status('Progressing...', frame_processor.NAME)
- frame_processor.process_video(roop.globals.source_path, temp_frame_paths)
- frame_processor.post_process()
- release_resources()
- # handles fps
- if roop.globals.keep_fps:
- update_status('Detecting fps...')
- fps = detect_fps(roop.globals.target_path)
- update_status(f'Creating video with {fps} fps...')
- create_video(roop.globals.target_path, fps)
- else:
- update_status('Creating video with 30.0 fps...')
- create_video(roop.globals.target_path)
- # handle audio
- if roop.globals.keep_audio:
- if roop.globals.keep_fps:
- update_status('Restoring audio...')
- else:
- update_status('Restoring audio might cause issues as fps are not kept...')
- restore_audio(roop.globals.target_path, roop.globals.output_path)
- else:
- move_temp(roop.globals.target_path, roop.globals.output_path)
- # clean and validate
- clean_temp(roop.globals.target_path)
- if is_video(roop.globals.target_path):
-        update_status('Processing to video succeeded!')
- else:
- update_status('Processing to video failed!')
-
-
-def destroy() -> None:
- if roop.globals.target_path:
- clean_temp(roop.globals.target_path)
- quit()
-
-
-def run() -> None:
- parse_args()
- if not pre_check():
- return
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
- if not frame_processor.pre_check():
- return
- limit_resources()
- if roop.globals.headless:
- start()
- else:
- window = ui.init(start, destroy)
- window.mainloop()
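-
-
-if __name__ == '__main__':
-    # Hedged sketch, not part of the original module: a minimal headless invocation.
-    # run() calls parse_args() itself, so in practice these values come from the CLI;
-    # the commented assignments only illustrate the globals that start() reads
-    # (all paths are placeholders).
-    # roop.globals.source_path = 'source_face.jpg'
-    # roop.globals.target_path = 'target_clip.mp4'
-    # roop.globals.output_path = 'swapped_clip.mp4'
-    # roop.globals.headless = True
-    run()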
diff --git a/spaces/Ariharasudhan/YoloV5/utils/loggers/__init__.py b/spaces/Ariharasudhan/YoloV5/utils/loggers/__init__.py
deleted file mode 100644
index bc8dd7621579f6372ce60e317c9e031e313e1c37..0000000000000000000000000000000000000000
--- a/spaces/Ariharasudhan/YoloV5/utils/loggers/__init__.py
+++ /dev/null
@@ -1,404 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Logging utils
-"""
-
-import os
-import warnings
-from pathlib import Path
-
-import pkg_resources as pkg
-import torch
-from torch.utils.tensorboard import SummaryWriter
-
-from utils.general import LOGGER, colorstr, cv2
-from utils.loggers.clearml.clearml_utils import ClearmlLogger
-from utils.loggers.wandb.wandb_utils import WandbLogger
-from utils.plots import plot_images, plot_labels, plot_results
-from utils.torch_utils import de_parallel
-
-LOGGERS = ('csv', 'tb', 'wandb', 'clearml', 'comet') # *.csv, TensorBoard, Weights & Biases, ClearML
-RANK = int(os.getenv('RANK', -1))
-
-try:
- import wandb
-
- assert hasattr(wandb, '__version__') # verify package import not local dir
- if pkg.parse_version(wandb.__version__) >= pkg.parse_version('0.12.2') and RANK in {0, -1}:
- try:
- wandb_login_success = wandb.login(timeout=30)
- except wandb.errors.UsageError: # known non-TTY terminal issue
- wandb_login_success = False
- if not wandb_login_success:
- wandb = None
-except (ImportError, AssertionError):
- wandb = None
-
-try:
- import clearml
-
- assert hasattr(clearml, '__version__') # verify package import not local dir
-except (ImportError, AssertionError):
- clearml = None
-
-try:
- if RANK not in [0, -1]:
- comet_ml = None
- else:
- import comet_ml
-
- assert hasattr(comet_ml, '__version__') # verify package import not local dir
- from utils.loggers.comet import CometLogger
-
-except (ModuleNotFoundError, ImportError, AssertionError):
- comet_ml = None
-
-
-class Loggers():
- # YOLOv5 Loggers class
- def __init__(self, save_dir=None, weights=None, opt=None, hyp=None, logger=None, include=LOGGERS):
- self.save_dir = save_dir
- self.weights = weights
- self.opt = opt
- self.hyp = hyp
- self.plots = not opt.noplots # plot results
- self.logger = logger # for printing results to console
- self.include = include
- self.keys = [
- 'train/box_loss',
- 'train/obj_loss',
- 'train/cls_loss', # train loss
- 'metrics/precision',
- 'metrics/recall',
- 'metrics/mAP_0.5',
- 'metrics/mAP_0.5:0.95', # metrics
- 'val/box_loss',
- 'val/obj_loss',
- 'val/cls_loss', # val loss
- 'x/lr0',
- 'x/lr1',
- 'x/lr2'] # params
- self.best_keys = ['best/epoch', 'best/precision', 'best/recall', 'best/mAP_0.5', 'best/mAP_0.5:0.95']
- for k in LOGGERS:
- setattr(self, k, None) # init empty logger dictionary
- self.csv = True # always log to csv
-
- # Messages
- # if not wandb:
- # prefix = colorstr('Weights & Biases: ')
- # s = f"{prefix}run 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs in Weights & Biases"
- # self.logger.info(s)
- if not clearml:
- prefix = colorstr('ClearML: ')
- s = f"{prefix}run 'pip install clearml' to automatically track, visualize and remotely train YOLOv5 🚀 in ClearML"
- self.logger.info(s)
- if not comet_ml:
- prefix = colorstr('Comet: ')
- s = f"{prefix}run 'pip install comet_ml' to automatically track and visualize YOLOv5 🚀 runs in Comet"
- self.logger.info(s)
- # TensorBoard
- s = self.save_dir
- if 'tb' in self.include and not self.opt.evolve:
- prefix = colorstr('TensorBoard: ')
- self.logger.info(f"{prefix}Start with 'tensorboard --logdir {s.parent}', view at http://localhost:6006/")
- self.tb = SummaryWriter(str(s))
-
- # W&B
- if wandb and 'wandb' in self.include:
- wandb_artifact_resume = isinstance(self.opt.resume, str) and self.opt.resume.startswith('wandb-artifact://')
- run_id = torch.load(self.weights).get('wandb_id') if self.opt.resume and not wandb_artifact_resume else None
- self.opt.hyp = self.hyp # add hyperparameters
- self.wandb = WandbLogger(self.opt, run_id)
- # temp warn. because nested artifacts not supported after 0.12.10
- # if pkg.parse_version(wandb.__version__) >= pkg.parse_version('0.12.11'):
- # s = "YOLOv5 temporarily requires wandb version 0.12.10 or below. Some features may not work as expected."
- # self.logger.warning(s)
- else:
- self.wandb = None
-
- # ClearML
- if clearml and 'clearml' in self.include:
- self.clearml = ClearmlLogger(self.opt, self.hyp)
- else:
- self.clearml = None
-
- # Comet
- if comet_ml and 'comet' in self.include:
- if isinstance(self.opt.resume, str) and self.opt.resume.startswith("comet://"):
- run_id = self.opt.resume.split("/")[-1]
- self.comet_logger = CometLogger(self.opt, self.hyp, run_id=run_id)
-
- else:
- self.comet_logger = CometLogger(self.opt, self.hyp)
-
- else:
- self.comet_logger = None
-
- @property
- def remote_dataset(self):
- # Get data_dict if custom dataset artifact link is provided
- data_dict = None
- if self.clearml:
- data_dict = self.clearml.data_dict
- if self.wandb:
- data_dict = self.wandb.data_dict
- if self.comet_logger:
- data_dict = self.comet_logger.data_dict
-
- return data_dict
-
- def on_train_start(self):
- if self.comet_logger:
- self.comet_logger.on_train_start()
-
- def on_pretrain_routine_start(self):
- if self.comet_logger:
- self.comet_logger.on_pretrain_routine_start()
-
- def on_pretrain_routine_end(self, labels, names):
- # Callback runs on pre-train routine end
- if self.plots:
- plot_labels(labels, names, self.save_dir)
- paths = self.save_dir.glob('*labels*.jpg') # training labels
- if self.wandb:
- self.wandb.log({"Labels": [wandb.Image(str(x), caption=x.name) for x in paths]})
- # if self.clearml:
- # pass # ClearML saves these images automatically using hooks
- if self.comet_logger:
- self.comet_logger.on_pretrain_routine_end(paths)
-
- def on_train_batch_end(self, model, ni, imgs, targets, paths, vals):
- log_dict = dict(zip(self.keys[0:3], vals))
- # Callback runs on train batch end
-        # ni: number of integrated batches (since train start)
- if self.plots:
- if ni < 3:
- f = self.save_dir / f'train_batch{ni}.jpg' # filename
- plot_images(imgs, targets, paths, f)
- if ni == 0 and self.tb and not self.opt.sync_bn:
- log_tensorboard_graph(self.tb, model, imgsz=(self.opt.imgsz, self.opt.imgsz))
- if ni == 10 and (self.wandb or self.clearml):
- files = sorted(self.save_dir.glob('train*.jpg'))
- if self.wandb:
- self.wandb.log({'Mosaics': [wandb.Image(str(f), caption=f.name) for f in files if f.exists()]})
- if self.clearml:
- self.clearml.log_debug_samples(files, title='Mosaics')
-
- if self.comet_logger:
- self.comet_logger.on_train_batch_end(log_dict, step=ni)
-
- def on_train_epoch_end(self, epoch):
- # Callback runs on train epoch end
- if self.wandb:
- self.wandb.current_epoch = epoch + 1
-
- if self.comet_logger:
- self.comet_logger.on_train_epoch_end(epoch)
-
- def on_val_start(self):
- if self.comet_logger:
- self.comet_logger.on_val_start()
-
- def on_val_image_end(self, pred, predn, path, names, im):
- # Callback runs on val image end
- if self.wandb:
- self.wandb.val_one_image(pred, predn, path, names, im)
- if self.clearml:
- self.clearml.log_image_with_boxes(path, pred, names, im)
-
- def on_val_batch_end(self, batch_i, im, targets, paths, shapes, out):
- if self.comet_logger:
- self.comet_logger.on_val_batch_end(batch_i, im, targets, paths, shapes, out)
-
- def on_val_end(self, nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix):
- # Callback runs on val end
- if self.wandb or self.clearml:
- files = sorted(self.save_dir.glob('val*.jpg'))
- if self.wandb:
- self.wandb.log({"Validation": [wandb.Image(str(f), caption=f.name) for f in files]})
- if self.clearml:
- self.clearml.log_debug_samples(files, title='Validation')
-
- if self.comet_logger:
- self.comet_logger.on_val_end(nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix)
-
- def on_fit_epoch_end(self, vals, epoch, best_fitness, fi):
- # Callback runs at the end of each fit (train+val) epoch
- x = dict(zip(self.keys, vals))
- if self.csv:
- file = self.save_dir / 'results.csv'
- n = len(x) + 1 # number of cols
- s = '' if file.exists() else (('%20s,' * n % tuple(['epoch'] + self.keys)).rstrip(',') + '\n') # add header
- with open(file, 'a') as f:
- f.write(s + ('%20.5g,' * n % tuple([epoch] + vals)).rstrip(',') + '\n')
-
- if self.tb:
- for k, v in x.items():
- self.tb.add_scalar(k, v, epoch)
- elif self.clearml: # log to ClearML if TensorBoard not used
- for k, v in x.items():
- title, series = k.split('/')
- self.clearml.task.get_logger().report_scalar(title, series, v, epoch)
-
- if self.wandb:
- if best_fitness == fi:
- best_results = [epoch] + vals[3:7]
- for i, name in enumerate(self.best_keys):
- self.wandb.wandb_run.summary[name] = best_results[i] # log best results in the summary
- self.wandb.log(x)
- self.wandb.end_epoch(best_result=best_fitness == fi)
-
- if self.clearml:
- self.clearml.current_epoch_logged_images = set() # reset epoch image limit
- self.clearml.current_epoch += 1
-
- if self.comet_logger:
- self.comet_logger.on_fit_epoch_end(x, epoch=epoch)
-
- def on_model_save(self, last, epoch, final_epoch, best_fitness, fi):
- # Callback runs on model save event
- if (epoch + 1) % self.opt.save_period == 0 and not final_epoch and self.opt.save_period != -1:
- if self.wandb:
- self.wandb.log_model(last.parent, self.opt, epoch, fi, best_model=best_fitness == fi)
- if self.clearml:
- self.clearml.task.update_output_model(model_path=str(last),
- model_name='Latest Model',
- auto_delete_file=False)
-
- if self.comet_logger:
- self.comet_logger.on_model_save(last, epoch, final_epoch, best_fitness, fi)
-
- def on_train_end(self, last, best, epoch, results):
- # Callback runs on training end, i.e. saving best model
- if self.plots:
- plot_results(file=self.save_dir / 'results.csv') # save results.png
- files = ['results.png', 'confusion_matrix.png', *(f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R'))]
- files = [(self.save_dir / f) for f in files if (self.save_dir / f).exists()] # filter
- self.logger.info(f"Results saved to {colorstr('bold', self.save_dir)}")
-
- if self.tb and not self.clearml: # These images are already captured by ClearML by now, we don't want doubles
- for f in files:
- self.tb.add_image(f.stem, cv2.imread(str(f))[..., ::-1], epoch, dataformats='HWC')
-
- if self.wandb:
- self.wandb.log(dict(zip(self.keys[3:10], results)))
- self.wandb.log({"Results": [wandb.Image(str(f), caption=f.name) for f in files]})
- # Calling wandb.log. TODO: Refactor this into WandbLogger.log_model
- if not self.opt.evolve:
- wandb.log_artifact(str(best if best.exists() else last),
- type='model',
- name=f'run_{self.wandb.wandb_run.id}_model',
- aliases=['latest', 'best', 'stripped'])
- self.wandb.finish_run()
-
- if self.clearml and not self.opt.evolve:
- self.clearml.task.update_output_model(model_path=str(best if best.exists() else last),
- name='Best Model',
- auto_delete_file=False)
-
- if self.comet_logger:
- final_results = dict(zip(self.keys[3:10], results))
- self.comet_logger.on_train_end(files, self.save_dir, last, best, epoch, final_results)
-
- def on_params_update(self, params: dict):
- # Update hyperparams or configs of the experiment
- if self.wandb:
- self.wandb.wandb_run.config.update(params, allow_val_change=True)
- if self.comet_logger:
- self.comet_logger.on_params_update(params)
-
-
-class GenericLogger:
- """
- YOLOv5 General purpose logger for non-task specific logging
- Usage: from utils.loggers import GenericLogger; logger = GenericLogger(...)
- Arguments
- opt: Run arguments
- console_logger: Console logger
- include: loggers to include
- """
-
- def __init__(self, opt, console_logger, include=('tb', 'wandb')):
- # init default loggers
- self.save_dir = Path(opt.save_dir)
- self.include = include
- self.console_logger = console_logger
- self.csv = self.save_dir / 'results.csv' # CSV logger
- if 'tb' in self.include:
- prefix = colorstr('TensorBoard: ')
- self.console_logger.info(
- f"{prefix}Start with 'tensorboard --logdir {self.save_dir.parent}', view at http://localhost:6006/")
- self.tb = SummaryWriter(str(self.save_dir))
-
- if wandb and 'wandb' in self.include:
- self.wandb = wandb.init(project=web_project_name(str(opt.project)),
- name=None if opt.name == "exp" else opt.name,
- config=opt)
- else:
- self.wandb = None
-
- def log_metrics(self, metrics, epoch):
- # Log metrics dictionary to all loggers
- if self.csv:
- keys, vals = list(metrics.keys()), list(metrics.values())
- n = len(metrics) + 1 # number of cols
- s = '' if self.csv.exists() else (('%23s,' * n % tuple(['epoch'] + keys)).rstrip(',') + '\n') # header
- with open(self.csv, 'a') as f:
- f.write(s + ('%23.5g,' * n % tuple([epoch] + vals)).rstrip(',') + '\n')
-
- if self.tb:
- for k, v in metrics.items():
- self.tb.add_scalar(k, v, epoch)
-
- if self.wandb:
- self.wandb.log(metrics, step=epoch)
-
- def log_images(self, files, name='Images', epoch=0):
- # Log images to all loggers
- files = [Path(f) for f in (files if isinstance(files, (tuple, list)) else [files])] # to Path
- files = [f for f in files if f.exists()] # filter by exists
-
- if self.tb:
- for f in files:
- self.tb.add_image(f.stem, cv2.imread(str(f))[..., ::-1], epoch, dataformats='HWC')
-
- if self.wandb:
- self.wandb.log({name: [wandb.Image(str(f), caption=f.name) for f in files]}, step=epoch)
-
- def log_graph(self, model, imgsz=(640, 640)):
- # Log model graph to all loggers
- if self.tb:
- log_tensorboard_graph(self.tb, model, imgsz)
-
- def log_model(self, model_path, epoch=0, metadata={}):
- # Log model to all loggers
- if self.wandb:
- art = wandb.Artifact(name=f"run_{wandb.run.id}_model", type="model", metadata=metadata)
- art.add_file(str(model_path))
- wandb.log_artifact(art)
-
- def update_params(self, params):
-        # Update the parameters logged
- if self.wandb:
- wandb.run.config.update(params, allow_val_change=True)
-
-
-def log_tensorboard_graph(tb, model, imgsz=(640, 640)):
- # Log model graph to TensorBoard
- try:
- p = next(model.parameters()) # for device, type
- imgsz = (imgsz, imgsz) if isinstance(imgsz, int) else imgsz # expand
- im = torch.zeros((1, 3, *imgsz)).to(p.device).type_as(p) # input image (WARNING: must be zeros, not empty)
- with warnings.catch_warnings():
- warnings.simplefilter('ignore') # suppress jit trace warning
- tb.add_graph(torch.jit.trace(de_parallel(model), im, strict=False), [])
- except Exception as e:
- LOGGER.warning(f'WARNING ⚠️ TensorBoard graph visualization failure {e}')
-
-
-def web_project_name(project):
- # Convert local project name to web project name
- if not project.startswith('runs/train'):
- return project
- suffix = '-Classify' if project.endswith('-cls') else '-Segment' if project.endswith('-seg') else ''
- return f'YOLOv5{suffix}'
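-
-
-if __name__ == '__main__':
-    # Illustrative check of web_project_name() above (not part of the original file):
-    # local "runs/train*" project paths map to generic web project names, anything
-    # else is passed through unchanged.
-    print(web_project_name('runs/train'))         # -> 'YOLOv5'
-    print(web_project_name('runs/train-cls'))     # -> 'YOLOv5-Classify'
-    print(web_project_name('runs/train-seg'))     # -> 'YOLOv5-Segment'
-    print(web_project_name('my/custom/project'))  # -> 'my/custom/project'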
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/download.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/download.py
deleted file mode 100644
index 79b82a570e5be5ce4f8e4dcc4906da8c18f08ef6..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/download.py
+++ /dev/null
@@ -1,186 +0,0 @@
-"""Download files with progress indicators.
-"""
-import email.message
-import logging
-import mimetypes
-import os
-from typing import Iterable, Optional, Tuple
-
-from pip._vendor.requests.models import CONTENT_CHUNK_SIZE, Response
-
-from pip._internal.cli.progress_bars import get_download_progress_renderer
-from pip._internal.exceptions import NetworkConnectionError
-from pip._internal.models.index import PyPI
-from pip._internal.models.link import Link
-from pip._internal.network.cache import is_from_cache
-from pip._internal.network.session import PipSession
-from pip._internal.network.utils import HEADERS, raise_for_status, response_chunks
-from pip._internal.utils.misc import format_size, redact_auth_from_url, splitext
-
-logger = logging.getLogger(__name__)
-
-
-def _get_http_response_size(resp: Response) -> Optional[int]:
- try:
- return int(resp.headers["content-length"])
- except (ValueError, KeyError, TypeError):
- return None
-
-
-def _prepare_download(
- resp: Response,
- link: Link,
- progress_bar: str,
-) -> Iterable[bytes]:
- total_length = _get_http_response_size(resp)
-
- if link.netloc == PyPI.file_storage_domain:
- url = link.show_url
- else:
- url = link.url_without_fragment
-
- logged_url = redact_auth_from_url(url)
-
- if total_length:
- logged_url = "{} ({})".format(logged_url, format_size(total_length))
-
- if is_from_cache(resp):
- logger.info("Using cached %s", logged_url)
- else:
- logger.info("Downloading %s", logged_url)
-
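-    # Descriptive note: a progress bar is only rendered for non-cached responses
-    # when logging is at INFO level or lower, and the size is either unknown or
-    # larger than roughly 40 kB; small known-size downloads skip the bar.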
- if logger.getEffectiveLevel() > logging.INFO:
- show_progress = False
- elif is_from_cache(resp):
- show_progress = False
- elif not total_length:
- show_progress = True
- elif total_length > (40 * 1000):
- show_progress = True
- else:
- show_progress = False
-
- chunks = response_chunks(resp, CONTENT_CHUNK_SIZE)
-
- if not show_progress:
- return chunks
-
- renderer = get_download_progress_renderer(bar_type=progress_bar, size=total_length)
- return renderer(chunks)
-
-
-def sanitize_content_filename(filename: str) -> str:
- """
- Sanitize the "filename" value from a Content-Disposition header.
- """
- return os.path.basename(filename)
-
-
-def parse_content_disposition(content_disposition: str, default_filename: str) -> str:
- """
- Parse the "filename" value from a Content-Disposition header, and
- return the default filename if the result is empty.
- """
- m = email.message.Message()
- m["content-type"] = content_disposition
- filename = m.get_param("filename")
- if filename:
- # We need to sanitize the filename to prevent directory traversal
- # in case the filename contains ".." path parts.
- filename = sanitize_content_filename(str(filename))
- return filename or default_filename
-
-
-def _get_http_response_filename(resp: Response, link: Link) -> str:
- """Get an ideal filename from the given HTTP response, falling back to
- the link filename if not provided.
- """
- filename = link.filename # fallback
- # Have a look at the Content-Disposition header for a better guess
- content_disposition = resp.headers.get("content-disposition")
- if content_disposition:
- filename = parse_content_disposition(content_disposition, filename)
- ext: Optional[str] = splitext(filename)[1]
- if not ext:
- ext = mimetypes.guess_extension(resp.headers.get("content-type", ""))
- if ext:
- filename += ext
- if not ext and link.url != resp.url:
- ext = os.path.splitext(resp.url)[1]
- if ext:
- filename += ext
- return filename
-
-
-def _http_get_download(session: PipSession, link: Link) -> Response:
- target_url = link.url.split("#", 1)[0]
- resp = session.get(target_url, headers=HEADERS, stream=True)
- raise_for_status(resp)
- return resp
-
-
-class Downloader:
- def __init__(
- self,
- session: PipSession,
- progress_bar: str,
- ) -> None:
- self._session = session
- self._progress_bar = progress_bar
-
- def __call__(self, link: Link, location: str) -> Tuple[str, str]:
- """Download the file given by link into location."""
- try:
- resp = _http_get_download(self._session, link)
- except NetworkConnectionError as e:
- assert e.response is not None
- logger.critical(
- "HTTP error %s while getting %s", e.response.status_code, link
- )
- raise
-
- filename = _get_http_response_filename(resp, link)
- filepath = os.path.join(location, filename)
-
- chunks = _prepare_download(resp, link, self._progress_bar)
- with open(filepath, "wb") as content_file:
- for chunk in chunks:
- content_file.write(chunk)
- content_type = resp.headers.get("Content-Type", "")
- return filepath, content_type
-
-
-class BatchDownloader:
- def __init__(
- self,
- session: PipSession,
- progress_bar: str,
- ) -> None:
- self._session = session
- self._progress_bar = progress_bar
-
- def __call__(
- self, links: Iterable[Link], location: str
- ) -> Iterable[Tuple[Link, Tuple[str, str]]]:
- """Download the files given by links into location."""
- for link in links:
- try:
- resp = _http_get_download(self._session, link)
- except NetworkConnectionError as e:
- assert e.response is not None
- logger.critical(
- "HTTP error %s while getting %s",
- e.response.status_code,
- link,
- )
- raise
-
- filename = _get_http_response_filename(resp, link)
- filepath = os.path.join(location, filename)
-
- chunks = _prepare_download(resp, link, self._progress_bar)
- with open(filepath, "wb") as content_file:
- for chunk in chunks:
- content_file.write(chunk)
- content_type = resp.headers.get("Content-Type", "")
- yield link, (filepath, content_type)
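-
-
-if __name__ == "__main__":
-    # Illustrative-only sketch (not part of pip): how the filename helpers above
-    # behave. The header values are invented; note that ".." path parts are
-    # stripped to prevent directory traversal.
-    print(sanitize_content_filename("../../evil.whl"))  # -> 'evil.whl'
-    print(parse_content_disposition('attachment; filename="demo-1.0.tar.gz"', "fallback.tar.gz"))  # -> 'demo-1.0.tar.gz'
-    print(parse_content_disposition("attachment", "fallback.tar.gz"))  # -> 'fallback.tar.gz'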
diff --git a/spaces/AutoLLM/AutoAgents/setup.py b/spaces/AutoLLM/AutoAgents/setup.py
deleted file mode 100644
index 3eb5fbb4a074897c0ecec51025ec55236eadf2a8..0000000000000000000000000000000000000000
--- a/spaces/AutoLLM/AutoAgents/setup.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from setuptools import setup, find_packages
-
-setup(
- name='autoagents',
- version='0.1.0',
- packages=find_packages(include=['autoagents', 'autoagents.*'])
-)
diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/english.py b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/english.py
deleted file mode 100644
index 781d0a56cef71f66fc67db51d76538be90d3ddd2..0000000000000000000000000000000000000000
--- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/english.py
+++ /dev/null
@@ -1,138 +0,0 @@
-import pickle
-import os
-import re
-from g2p_en import G2p
-from string import punctuation
-
-from text import symbols
-
-current_file_path = os.path.dirname(__file__)
-CMU_DICT_PATH = os.path.join(current_file_path, 'cmudict.rep')
-CACHE_PATH = os.path.join(current_file_path, 'cmudict_cache.pickle')
-_g2p = G2p()
-
-arpa = {'AH0', 'S', 'AH1', 'EY2', 'AE2', 'EH0', 'OW2', 'UH0', 'NG', 'B', 'G', 'AY0', 'M', 'AA0', 'F', 'AO0', 'ER2', 'UH1', 'IY1', 'AH2', 'DH', 'IY0', 'EY1', 'IH0', 'K', 'N', 'W', 'IY2', 'T', 'AA1', 'ER1', 'EH2', 'OY0', 'UH2', 'UW1', 'Z', 'AW2', 'AW1', 'V', 'UW2', 'AA2', 'ER', 'AW0', 'UW0', 'R', 'OW1', 'EH1', 'ZH', 'AE0', 'IH2', 'IH', 'Y', 'JH', 'P', 'AY1', 'EY0', 'OY2', 'TH', 'HH', 'D', 'ER0', 'CH', 'AO1', 'AE1', 'AO2', 'OY1', 'AY2', 'IH1', 'OW0', 'L', 'SH'}
-
-
-def post_replace_ph(ph):
- rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- 'v': "V"
- }
- if ph in rep_map.keys():
- ph = rep_map[ph]
- if ph in symbols:
- return ph
- if ph not in symbols:
- ph = 'UNK'
- return ph
-
-def read_dict():
- g2p_dict = {}
- start_line = 49
- with open(CMU_DICT_PATH) as f:
- line = f.readline()
- line_index = 1
- while line:
- if line_index >= start_line:
- line = line.strip()
- word_split = line.split(' ')
- word = word_split[0]
-
- syllable_split = word_split[1].split(' - ')
- g2p_dict[word] = []
- for syllable in syllable_split:
- phone_split = syllable.split(' ')
- g2p_dict[word].append(phone_split)
-
- line_index = line_index + 1
- line = f.readline()
-
- return g2p_dict
-
-
-def cache_dict(g2p_dict, file_path):
- with open(file_path, 'wb') as pickle_file:
- pickle.dump(g2p_dict, pickle_file)
-
-
-def get_dict():
- if os.path.exists(CACHE_PATH):
- with open(CACHE_PATH, 'rb') as pickle_file:
- g2p_dict = pickle.load(pickle_file)
- else:
- g2p_dict = read_dict()
- cache_dict(g2p_dict, CACHE_PATH)
-
- return g2p_dict
-
-eng_dict = get_dict()
-
-def refine_ph(phn):
- tone = 0
- if re.search(r'\d$', phn):
- tone = int(phn[-1]) + 1
- phn = phn[:-1]
- return phn.lower(), tone
-
-def refine_syllables(syllables):
- tones = []
- phonemes = []
- for phn_list in syllables:
- for i in range(len(phn_list)):
- phn = phn_list[i]
- phn, tone = refine_ph(phn)
- phonemes.append(phn)
- tones.append(tone)
- return phonemes, tones
-
-
-def text_normalize(text):
- # todo: eng text normalize
- return text
-
-def g2p(text):
-
- phones = []
- tones = []
- words = re.split(r"([,;.\-\?\!\s+])", text)
- for w in words:
- if w.upper() in eng_dict:
- phns, tns = refine_syllables(eng_dict[w.upper()])
- phones += phns
- tones += tns
- else:
- phone_list = list(filter(lambda p: p != " ", _g2p(w)))
- for ph in phone_list:
- if ph in arpa:
- ph, tn = refine_ph(ph)
- phones.append(ph)
- tones.append(tn)
- else:
- phones.append(ph)
- tones.append(0)
- # todo: implement word2ph
- word2ph = [1 for i in phones]
-
- phones = [post_replace_ph(i) for i in phones]
- return phones, tones, word2ph
-
-if __name__ == "__main__":
- # print(get_dict())
- # print(eng_word_to_phoneme("hello"))
- print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder."))
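-    # Quick illustration of the tone handling in refine_ph() above: the trailing
-    # ARPAbet stress digit is shifted up by one, and 0 means "no tone info".
-    print(refine_ph('AH0'))  # -> ('ah', 1)
-    print(refine_ph('HH'))   # -> ('hh', 0)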
- # all_phones = set()
- # for k, syllables in eng_dict.items():
- # for group in syllables:
- # for ph in group:
- # all_phones.add(ph)
- # print(all_phones)
\ No newline at end of file
diff --git a/spaces/BAAI/vid2vid-zero/gradio_demo/runner.py b/spaces/BAAI/vid2vid-zero/gradio_demo/runner.py
deleted file mode 100644
index 75af3356bea78ad5356b37af2a6eff5e46aa3a8f..0000000000000000000000000000000000000000
--- a/spaces/BAAI/vid2vid-zero/gradio_demo/runner.py
+++ /dev/null
@@ -1,137 +0,0 @@
-from __future__ import annotations
-
-import datetime
-import os
-import pathlib
-import shlex
-import shutil
-import subprocess
-import sys
-
-import gradio as gr
-import slugify
-import torch
-import huggingface_hub
-from huggingface_hub import HfApi
-from omegaconf import OmegaConf
-
-
-ORIGINAL_SPACE_ID = 'BAAI/vid2vid-zero'
-SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID)
-
-
-class Runner:
- def __init__(self, hf_token: str | None = None):
- self.hf_token = hf_token
-
- self.checkpoint_dir = pathlib.Path('checkpoints')
- self.checkpoint_dir.mkdir(exist_ok=True)
-
- def download_base_model(self, base_model_id: str, token=None) -> str:
- model_dir = self.checkpoint_dir / base_model_id
- org_name = base_model_id.split('/')[0]
- org_dir = self.checkpoint_dir / org_name
- if not model_dir.exists():
- org_dir.mkdir(exist_ok=True)
- print(f'https://huggingface.co/{base_model_id}')
-            if token is None:
-                subprocess.run(shlex.split('git lfs install'), cwd=org_dir)
- subprocess.run(shlex.split(
- f'git lfs clone https://huggingface.co/{base_model_id}'),
- cwd=org_dir)
- return model_dir.as_posix()
- else:
- temp_path = huggingface_hub.snapshot_download(base_model_id, use_auth_token=token)
- print(temp_path, org_dir)
- # subprocess.run(shlex.split(f'mv {temp_path} {model_dir.as_posix()}'))
- # return model_dir.as_posix()
- return temp_path
-
- def join_model_library_org(self, token: str) -> None:
- subprocess.run(
- shlex.split(
- f'curl -X POST -H "Authorization: Bearer {token}" -H "Content-Type: application/json" {URL_TO_JOIN_MODEL_LIBRARY_ORG}'
- ))
-
- def run_vid2vid_zero(
- self,
- model_path: str,
- input_video: str,
- prompt: str,
- n_sample_frames: int,
- sample_start_idx: int,
- sample_frame_rate: int,
- validation_prompt: str,
- guidance_scale: float,
- resolution: str,
- seed: int,
- remove_gpu_after_running: bool,
-        input_token: str | None = None,
- ) -> str:
-
- if not torch.cuda.is_available():
- raise gr.Error('CUDA is not available.')
- if input_video is None:
- raise gr.Error('You need to upload a video.')
- if not prompt:
- raise gr.Error('The input prompt is missing.')
- if not validation_prompt:
- raise gr.Error('The validation prompt is missing.')
-
- resolution = int(resolution)
- n_sample_frames = int(n_sample_frames)
- sample_start_idx = int(sample_start_idx)
- sample_frame_rate = int(sample_frame_rate)
-
- repo_dir = pathlib.Path(__file__).parent
- prompt_path = prompt.replace(' ', '_')
- output_dir = repo_dir / 'outputs' / prompt_path
- output_dir.mkdir(parents=True, exist_ok=True)
-
- config = OmegaConf.load('configs/black-swan.yaml')
- config.pretrained_model_path = self.download_base_model(model_path, token=input_token)
-
- # we remove null-inversion & use fp16 for fast inference on web demo
- config.mixed_precision = "fp16"
- config.validation_data.use_null_inv = False
-
- config.output_dir = output_dir.as_posix()
- config.input_data.video_path = input_video.name # type: ignore
- config.input_data.prompt = prompt
- config.input_data.n_sample_frames = n_sample_frames
- config.input_data.width = resolution
- config.input_data.height = resolution
- config.input_data.sample_start_idx = sample_start_idx
- config.input_data.sample_frame_rate = sample_frame_rate
-
- config.validation_data.prompts = [validation_prompt]
- config.validation_data.video_length = 8
- config.validation_data.width = resolution
- config.validation_data.height = resolution
- config.validation_data.num_inference_steps = 50
- config.validation_data.guidance_scale = guidance_scale
-
- config.input_batch_size = 1
- config.seed = seed
-
- config_path = output_dir / 'config.yaml'
- with open(config_path, 'w') as f:
- OmegaConf.save(config, f)
-
- command = f'accelerate launch test_vid2vid_zero.py --config {config_path}'
- subprocess.run(shlex.split(command))
-
- output_video_path = os.path.join(output_dir, "sample-all.mp4")
- print(f"video path for gradio: {output_video_path}")
- message = 'Running completed!'
- print(message)
-
- if remove_gpu_after_running:
- space_id = os.getenv('SPACE_ID')
- if space_id:
- api = HfApi(
- token=self.hf_token if self.hf_token else input_token)
- api.request_space_hardware(repo_id=space_id,
- hardware='cpu-basic')
-
- return output_video_path
diff --git a/spaces/BasToTheMax/22h-vintedois-diffusion-v0-1/app.py b/spaces/BasToTheMax/22h-vintedois-diffusion-v0-1/app.py
deleted file mode 100644
index c1dd484084e36ddbdfd38baef27a08040b2d7893..0000000000000000000000000000000000000000
--- a/spaces/BasToTheMax/22h-vintedois-diffusion-v0-1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/22h/vintedois-diffusion-v0-1").launch()
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/build/build_tracker.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/build/build_tracker.py
deleted file mode 100644
index 6621549b8449130d2d01ebac0a3649d8b70c4f91..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/build/build_tracker.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import contextlib
-import hashlib
-import logging
-import os
-from types import TracebackType
-from typing import Dict, Generator, Optional, Set, Type, Union
-
-from pip._internal.models.link import Link
-from pip._internal.req.req_install import InstallRequirement
-from pip._internal.utils.temp_dir import TempDirectory
-
-logger = logging.getLogger(__name__)
-
-
-@contextlib.contextmanager
-def update_env_context_manager(**changes: str) -> Generator[None, None, None]:
- target = os.environ
-
- # Save values from the target and change them.
- non_existent_marker = object()
- saved_values: Dict[str, Union[object, str]] = {}
- for name, new_value in changes.items():
- try:
- saved_values[name] = target[name]
- except KeyError:
- saved_values[name] = non_existent_marker
- target[name] = new_value
-
- try:
- yield
- finally:
- # Restore original values in the target.
- for name, original_value in saved_values.items():
- if original_value is non_existent_marker:
- del target[name]
- else:
- assert isinstance(original_value, str) # for mypy
- target[name] = original_value
-
-
-@contextlib.contextmanager
-def get_build_tracker() -> Generator["BuildTracker", None, None]:
- root = os.environ.get("PIP_BUILD_TRACKER")
- with contextlib.ExitStack() as ctx:
- if root is None:
- root = ctx.enter_context(TempDirectory(kind="build-tracker")).path
- ctx.enter_context(update_env_context_manager(PIP_BUILD_TRACKER=root))
- logger.debug("Initialized build tracking at %s", root)
-
- with BuildTracker(root) as tracker:
- yield tracker
-
-
-class BuildTracker:
- def __init__(self, root: str) -> None:
- self._root = root
- self._entries: Set[InstallRequirement] = set()
- logger.debug("Created build tracker: %s", self._root)
-
- def __enter__(self) -> "BuildTracker":
- logger.debug("Entered build tracker: %s", self._root)
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- self.cleanup()
-
- def _entry_path(self, link: Link) -> str:
- hashed = hashlib.sha224(link.url_without_fragment.encode()).hexdigest()
- return os.path.join(self._root, hashed)
-
- def add(self, req: InstallRequirement) -> None:
- """Add an InstallRequirement to build tracking."""
-
- assert req.link
- # Get the file to write information about this requirement.
- entry_path = self._entry_path(req.link)
-
- # Try reading from the file. If it exists and can be read from, a build
- # is already in progress, so a LookupError is raised.
- try:
- with open(entry_path) as fp:
- contents = fp.read()
- except FileNotFoundError:
- pass
- else:
- message = "{} is already being built: {}".format(req.link, contents)
- raise LookupError(message)
-
- # If we're here, req should really not be building already.
- assert req not in self._entries
-
- # Start tracking this requirement.
- with open(entry_path, "w", encoding="utf-8") as fp:
- fp.write(str(req))
- self._entries.add(req)
-
- logger.debug("Added %s to build tracker %r", req, self._root)
-
- def remove(self, req: InstallRequirement) -> None:
- """Remove an InstallRequirement from build tracking."""
-
- assert req.link
- # Delete the created file and the corresponding entries.
- os.unlink(self._entry_path(req.link))
- self._entries.remove(req)
-
- logger.debug("Removed %s from build tracker %r", req, self._root)
-
- def cleanup(self) -> None:
- for req in set(self._entries):
- self.remove(req)
-
- logger.debug("Removed build tracker: %r", self._root)
-
- @contextlib.contextmanager
- def track(self, req: InstallRequirement) -> Generator[None, None, None]:
- self.add(req)
- yield
- self.remove(req)
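-
-
-if __name__ == "__main__":
-    # Hedged sketch (not part of pip): update_env_context_manager() temporarily
-    # sets environment variables and restores the previous state afterwards,
-    # including removing variables that did not exist before.
-    os.environ.pop("PIP_DEMO_FLAG", None)
-    with update_env_context_manager(PIP_DEMO_FLAG="1"):
-        assert os.environ["PIP_DEMO_FLAG"] == "1"
-    assert "PIP_DEMO_FLAG" not in os.environ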
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/inject_securetransport.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/inject_securetransport.py
deleted file mode 100644
index 276aa79bb81356cdca73af0a5851b448707784a4..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/inject_securetransport.py
+++ /dev/null
@@ -1,35 +0,0 @@
-"""A helper module that injects SecureTransport, on import.
-
-The import should be done as early as possible, to ensure all requests and
-sessions (or whatever) are created after injecting SecureTransport.
-
-Note that we only do the injection on macOS, when the linked OpenSSL is too
-old to handle TLSv1.2.
-"""
-
-import sys
-
-
-def inject_securetransport() -> None:
- # Only relevant on macOS
- if sys.platform != "darwin":
- return
-
- try:
- import ssl
- except ImportError:
- return
-
- # Checks for OpenSSL 1.0.1
- if ssl.OPENSSL_VERSION_NUMBER >= 0x1000100F:
- return
-
- try:
- from pip._vendor.urllib3.contrib import securetransport
- except (ImportError, OSError):
- return
-
- securetransport.inject_into_urllib3()
-
-
-inject_securetransport()
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/sdist.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/sdist.py
deleted file mode 100644
index 4a8cde7e160df63093afed9b3b030ddfb76ddd05..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/sdist.py
+++ /dev/null
@@ -1,210 +0,0 @@
-from distutils import log
-import distutils.command.sdist as orig
-import os
-import sys
-import io
-import contextlib
-from itertools import chain
-
-from .py36compat import sdist_add_defaults
-
-from .._importlib import metadata
-from .build import _ORIGINAL_SUBCOMMANDS
-
-_default_revctrl = list
-
-
-def walk_revctrl(dirname=''):
- """Find all files under revision control"""
- for ep in metadata.entry_points(group='setuptools.file_finders'):
- for item in ep.load()(dirname):
- yield item
-
-
-class sdist(sdist_add_defaults, orig.sdist):
- """Smart sdist that finds anything supported by revision control"""
-
- user_options = [
- ('formats=', None,
- "formats for source distribution (comma-separated list)"),
- ('keep-temp', 'k',
- "keep the distribution tree around after creating " +
- "archive file(s)"),
- ('dist-dir=', 'd',
- "directory to put the source distribution archive(s) in "
- "[default: dist]"),
- ('owner=', 'u',
- "Owner name used when creating a tar file [default: current user]"),
- ('group=', 'g',
- "Group name used when creating a tar file [default: current group]"),
- ]
-
- negative_opt = {}
-
- README_EXTENSIONS = ['', '.rst', '.txt', '.md']
- READMES = tuple('README{0}'.format(ext) for ext in README_EXTENSIONS)
-
- def run(self):
- self.run_command('egg_info')
- ei_cmd = self.get_finalized_command('egg_info')
- self.filelist = ei_cmd.filelist
- self.filelist.append(os.path.join(ei_cmd.egg_info, 'SOURCES.txt'))
- self.check_readme()
-
- # Run sub commands
- for cmd_name in self.get_sub_commands():
- self.run_command(cmd_name)
-
- self.make_distribution()
-
- dist_files = getattr(self.distribution, 'dist_files', [])
- for file in self.archive_files:
- data = ('sdist', '', file)
- if data not in dist_files:
- dist_files.append(data)
-
- def initialize_options(self):
- orig.sdist.initialize_options(self)
-
- self._default_to_gztar()
-
- def _default_to_gztar(self):
- # only needed on Python prior to 3.6.
- if sys.version_info >= (3, 6, 0, 'beta', 1):
- return
- self.formats = ['gztar']
-
- def make_distribution(self):
- """
- Workaround for #516
- """
- with self._remove_os_link():
- orig.sdist.make_distribution(self)
-
- @staticmethod
- @contextlib.contextmanager
- def _remove_os_link():
- """
- In a context, remove and restore os.link if it exists
- """
-
- class NoValue:
- pass
-
- orig_val = getattr(os, 'link', NoValue)
- try:
- del os.link
- except Exception:
- pass
- try:
- yield
- finally:
- if orig_val is not NoValue:
- setattr(os, 'link', orig_val)
-
- def add_defaults(self):
- super().add_defaults()
- self._add_defaults_build_sub_commands()
-
- def _add_defaults_optional(self):
- super()._add_defaults_optional()
- if os.path.isfile('pyproject.toml'):
- self.filelist.append('pyproject.toml')
-
- def _add_defaults_python(self):
- """getting python files"""
- if self.distribution.has_pure_modules():
- build_py = self.get_finalized_command('build_py')
- self.filelist.extend(build_py.get_source_files())
- self._add_data_files(self._safe_data_files(build_py))
-
- def _add_defaults_build_sub_commands(self):
- build = self.get_finalized_command("build")
- missing_cmds = set(build.get_sub_commands()) - _ORIGINAL_SUBCOMMANDS
- # ^-- the original built-in sub-commands are already handled by default.
- cmds = (self.get_finalized_command(c) for c in missing_cmds)
- files = (c.get_source_files() for c in cmds if hasattr(c, "get_source_files"))
- self.filelist.extend(chain.from_iterable(files))
-
- def _safe_data_files(self, build_py):
- """
- Since the ``sdist`` class is also used to compute the MANIFEST
- (via :obj:`setuptools.command.egg_info.manifest_maker`),
- there might be recursion problems when trying to obtain the list of
- data_files and ``include_package_data=True`` (which in turn depends on
- the files included in the MANIFEST).
-
- To avoid that, ``manifest_maker`` should be able to overwrite this
- method and avoid recursive attempts to build/analyze the MANIFEST.
- """
- return build_py.data_files
-
- def _add_data_files(self, data_files):
- """
- Add data files as found in build_py.data_files.
- """
- self.filelist.extend(
- os.path.join(src_dir, name)
- for _, src_dir, _, filenames in data_files
- for name in filenames
- )
-
- def _add_defaults_data_files(self):
- try:
- super()._add_defaults_data_files()
- except TypeError:
- log.warn("data_files contains unexpected objects")
-
- def check_readme(self):
- for f in self.READMES:
- if os.path.exists(f):
- return
- else:
- self.warn(
- "standard file not found: should have one of " +
- ', '.join(self.READMES)
- )
-
- def make_release_tree(self, base_dir, files):
- orig.sdist.make_release_tree(self, base_dir, files)
-
- # Save any egg_info command line options used to create this sdist
- dest = os.path.join(base_dir, 'setup.cfg')
- if hasattr(os, 'link') and os.path.exists(dest):
- # unlink and re-copy, since it might be hard-linked, and
- # we don't want to change the source version
- os.unlink(dest)
- self.copy_file('setup.cfg', dest)
-
- self.get_finalized_command('egg_info').save_version_info(dest)
-
- def _manifest_is_not_generated(self):
- # check for special comment used in 2.7.1 and higher
- if not os.path.isfile(self.manifest):
- return False
-
- with io.open(self.manifest, 'rb') as fp:
- first_line = fp.readline()
- return (first_line !=
- '# file GENERATED by distutils, do NOT edit\n'.encode())
-
- def read_manifest(self):
- """Read the manifest file (named by 'self.manifest') and use it to
- fill in 'self.filelist', the list of files to include in the source
- distribution.
- """
- log.info("reading manifest file '%s'", self.manifest)
- manifest = open(self.manifest, 'rb')
- for line in manifest:
- # The manifest must contain UTF-8. See #303.
- try:
- line = line.decode('UTF-8')
- except UnicodeDecodeError:
- log.warn("%r not UTF-8 decodable -- skipping" % line)
- continue
- # ignore comments and blank lines
- line = line.strip()
- if line.startswith('#') or not line:
- continue
- self.filelist.append(line)
- manifest.close()
diff --git a/spaces/BongoCaat/ArtGenerator/README.md b/spaces/BongoCaat/ArtGenerator/README.md
deleted file mode 100644
index f4e1ae012844a860ccefcbd3b61859e8f2afa10a..0000000000000000000000000000000000000000
--- a/spaces/BongoCaat/ArtGenerator/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ArtGenerator
-emoji: 🏃
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/rpn.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/rpn.py
deleted file mode 100644
index 8999b82c0894cb357b652f175ade5a78f3b5c2db..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/rpn.py
+++ /dev/null
@@ -1,185 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from typing import Dict, List
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from detectron2.layers import ShapeSpec
-from detectron2.utils.registry import Registry
-
-from ..anchor_generator import build_anchor_generator
-from ..box_regression import Box2BoxTransform
-from ..matcher import Matcher
-from .build import PROPOSAL_GENERATOR_REGISTRY
-from .rpn_outputs import RPNOutputs, find_top_rpn_proposals
-
-RPN_HEAD_REGISTRY = Registry("RPN_HEAD")
-RPN_HEAD_REGISTRY.__doc__ = """
-Registry for RPN heads, which take feature maps and perform
-objectness classification and bounding box regression for anchors.
-
-The registered object will be called with `obj(cfg, input_shape)`.
-The call should return a `nn.Module` object.
-"""
-
-
-def build_rpn_head(cfg, input_shape):
- """
- Build an RPN head defined by `cfg.MODEL.RPN.HEAD_NAME`.
- """
- name = cfg.MODEL.RPN.HEAD_NAME
- return RPN_HEAD_REGISTRY.get(name)(cfg, input_shape)
-
-
-@RPN_HEAD_REGISTRY.register()
-class StandardRPNHead(nn.Module):
- """
- RPN classification and regression heads. Uses a 3x3 conv to produce a shared
- hidden state from which one 1x1 conv predicts objectness logits for each anchor
- and a second 1x1 conv predicts bounding-box deltas specifying how to deform
- each anchor into an object proposal.
- """
-
- def __init__(self, cfg, input_shape: List[ShapeSpec]):
- super().__init__()
-
- # Standard RPN is shared across levels:
- in_channels = [s.channels for s in input_shape]
- assert len(set(in_channels)) == 1, "Each level must have the same channel!"
- in_channels = in_channels[0]
-
- # RPNHead should take the same input as anchor generator
- # NOTE: it assumes that creating an anchor generator does not have unwanted side effect.
- anchor_generator = build_anchor_generator(cfg, input_shape)
- num_cell_anchors = anchor_generator.num_cell_anchors
- box_dim = anchor_generator.box_dim
- assert (
- len(set(num_cell_anchors)) == 1
- ), "Each level must have the same number of cell anchors"
- num_cell_anchors = num_cell_anchors[0]
-
- # 3x3 conv for the hidden representation
- self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1)
- # 1x1 conv for predicting objectness logits
- self.objectness_logits = nn.Conv2d(in_channels, num_cell_anchors, kernel_size=1, stride=1)
- # 1x1 conv for predicting box2box transform deltas
- self.anchor_deltas = nn.Conv2d(
- in_channels, num_cell_anchors * box_dim, kernel_size=1, stride=1
- )
-
- for l in [self.conv, self.objectness_logits, self.anchor_deltas]:
- nn.init.normal_(l.weight, std=0.01)
- nn.init.constant_(l.bias, 0)
-
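-    # Shape note (illustrative): with A cell anchors per spatial location and
-    # box_dim == 4, each input feature map of shape (N, C, Hi, Wi) yields
-    # objectness logits of shape (N, A, Hi, Wi) and anchor deltas of shape
-    # (N, A * 4, Hi, Wi) from forward() below.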
- def forward(self, features):
- """
- Args:
- features (list[Tensor]): list of feature maps
- """
- pred_objectness_logits = []
- pred_anchor_deltas = []
- for x in features:
- t = F.relu(self.conv(x))
- pred_objectness_logits.append(self.objectness_logits(t))
- pred_anchor_deltas.append(self.anchor_deltas(t))
- return pred_objectness_logits, pred_anchor_deltas
-
-
-@PROPOSAL_GENERATOR_REGISTRY.register()
-class RPN(nn.Module):
- """
- Region Proposal Network, introduced by the Faster R-CNN paper.
- """
-
- def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]):
- super().__init__()
-
- # fmt: off
- self.min_box_side_len = cfg.MODEL.PROPOSAL_GENERATOR.MIN_SIZE
- self.in_features = cfg.MODEL.RPN.IN_FEATURES
- self.nms_thresh = cfg.MODEL.RPN.NMS_THRESH
- self.batch_size_per_image = cfg.MODEL.RPN.BATCH_SIZE_PER_IMAGE
- self.positive_fraction = cfg.MODEL.RPN.POSITIVE_FRACTION
- self.smooth_l1_beta = cfg.MODEL.RPN.SMOOTH_L1_BETA
- self.loss_weight = cfg.MODEL.RPN.LOSS_WEIGHT
- # fmt: on
-
- # Map from self.training state to train/test settings
- self.pre_nms_topk = {
- True: cfg.MODEL.RPN.PRE_NMS_TOPK_TRAIN,
- False: cfg.MODEL.RPN.PRE_NMS_TOPK_TEST,
- }
- self.post_nms_topk = {
- True: cfg.MODEL.RPN.POST_NMS_TOPK_TRAIN,
- False: cfg.MODEL.RPN.POST_NMS_TOPK_TEST,
- }
- self.boundary_threshold = cfg.MODEL.RPN.BOUNDARY_THRESH
-
- self.anchor_generator = build_anchor_generator(
- cfg, [input_shape[f] for f in self.in_features]
- )
- self.box2box_transform = Box2BoxTransform(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS)
- self.anchor_matcher = Matcher(
- cfg.MODEL.RPN.IOU_THRESHOLDS, cfg.MODEL.RPN.IOU_LABELS, allow_low_quality_matches=True
- )
- self.rpn_head = build_rpn_head(cfg, [input_shape[f] for f in self.in_features])
-
- def forward(self, images, features, gt_instances=None):
- """
- Args:
- images (ImageList): input images of length `N`
- features (dict[str: Tensor]): input data as a mapping from feature
- map name to tensor. Axis 0 represents the number of images `N` in
- the input data; axes 1-3 are channels, height, and width, which may
- vary between feature maps (e.g., if a feature pyramid is used).
- gt_instances (list[Instances], optional): a length `N` list of `Instances`s.
- Each `Instances` stores ground-truth instances for the corresponding image.
-
- Returns:
- proposals: list[Instances]: contains fields "proposal_boxes", "objectness_logits"
- loss: dict[Tensor] or None
- """
- gt_boxes = [x.gt_boxes for x in gt_instances] if gt_instances is not None else None
- del gt_instances
- features = [features[f] for f in self.in_features]
- pred_objectness_logits, pred_anchor_deltas = self.rpn_head(features)
- anchors = self.anchor_generator(features)
- # TODO: The anchors only depend on the feature map shape; there's probably
- # an opportunity for some optimizations (e.g., caching anchors).
- outputs = RPNOutputs(
- self.box2box_transform,
- self.anchor_matcher,
- self.batch_size_per_image,
- self.positive_fraction,
- images,
- pred_objectness_logits,
- pred_anchor_deltas,
- anchors,
- self.boundary_threshold,
- gt_boxes,
- self.smooth_l1_beta,
- )
-
- if self.training:
- losses = {k: v * self.loss_weight for k, v in outputs.losses().items()}
- else:
- losses = {}
-
- with torch.no_grad():
- # Find the top proposals by applying NMS and removing boxes that
- # are too small. The proposals are treated as fixed for approximate
- # joint training with roi heads. This approach ignores the derivative
- # w.r.t. the proposal boxes’ coordinates that are also network
- # responses, so is approximate.
- proposals = find_top_rpn_proposals(
- outputs.predict_proposals(),
- outputs.predict_objectness_logits(),
- images,
- self.nms_thresh,
- self.pre_nms_topk[self.training],
- self.post_nms_topk[self.training],
- self.min_box_side_len,
- self.training,
- )
-
- return proposals, losses
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/__init__.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/__init__.py
deleted file mode 100644
index f9d3562e922561ada495d4944e09e384de54b81c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from . import dataset # just to register data
-from .config import add_densepose_config
-from .dataset_mapper import DatasetMapper
-from .densepose_head import ROI_DENSEPOSE_HEAD_REGISTRY
-from .evaluator import DensePoseCOCOEvaluator
-from .roi_head import DensePoseROIHeads
-from .structures import DensePoseDataRelative, DensePoseList, DensePoseTransformData
-from .modeling.test_time_augmentation import DensePoseGeneralizedRCNNWithTTA
-from .utils.transform import load_from_cfg
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_setup.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_setup.py
deleted file mode 100644
index 8596cf233e25158a9cacfda7ee33d15ca3acfb0e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_setup.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
-import unittest
-
-from .common import get_config_files, get_quick_schedules_config_files, setup
-
-
-class TestSetup(unittest.TestCase):
- def _test_setup(self, config_file):
- setup(config_file)
-
- def test_setup_configs(self):
- config_files = get_config_files()
- for config_file in config_files:
- self._test_setup(config_file)
-
- def test_setup_quick_schedules_configs(self):
- config_files = get_quick_schedules_config_files()
- for config_file in config_files:
- self._test_setup(config_file)
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_stl_binders.py b/spaces/CVPR/LIVE/pybind11/tests/test_stl_binders.py
deleted file mode 100644
index f9b8ea4af2a3156ecb09dd9d61d3464bb85ceefb..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_stl_binders.py
+++ /dev/null
@@ -1,285 +0,0 @@
-# -*- coding: utf-8 -*-
-import pytest
-
-import env # noqa: F401
-
-from pybind11_tests import stl_binders as m
-
-
-def test_vector_int():
- v_int = m.VectorInt([0, 0])
- assert len(v_int) == 2
- assert bool(v_int) is True
-
- # test construction from a generator
- v_int1 = m.VectorInt(x for x in range(5))
- assert v_int1 == m.VectorInt([0, 1, 2, 3, 4])
-
- v_int2 = m.VectorInt([0, 0])
- assert v_int == v_int2
- v_int2[1] = 1
- assert v_int != v_int2
-
- v_int2.append(2)
- v_int2.insert(0, 1)
- v_int2.insert(0, 2)
- v_int2.insert(0, 3)
- v_int2.insert(6, 3)
- assert str(v_int2) == "VectorInt[3, 2, 1, 0, 1, 2, 3]"
- with pytest.raises(IndexError):
- v_int2.insert(8, 4)
-
- v_int.append(99)
- v_int2[2:-2] = v_int
- assert v_int2 == m.VectorInt([3, 2, 0, 0, 99, 2, 3])
- del v_int2[1:3]
- assert v_int2 == m.VectorInt([3, 0, 99, 2, 3])
- del v_int2[0]
- assert v_int2 == m.VectorInt([0, 99, 2, 3])
-
- v_int2.extend(m.VectorInt([4, 5]))
- assert v_int2 == m.VectorInt([0, 99, 2, 3, 4, 5])
-
- v_int2.extend([6, 7])
- assert v_int2 == m.VectorInt([0, 99, 2, 3, 4, 5, 6, 7])
-
- # test error handling, and that the vector is unchanged
- with pytest.raises(RuntimeError):
- v_int2.extend([8, 'a'])
-
- assert v_int2 == m.VectorInt([0, 99, 2, 3, 4, 5, 6, 7])
-
- # test extending from a generator
- v_int2.extend(x for x in range(5))
- assert v_int2 == m.VectorInt([0, 99, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4])
-
- # test negative indexing
- assert v_int2[-1] == 4
-
- # insert with negative index
- v_int2.insert(-1, 88)
- assert v_int2 == m.VectorInt([0, 99, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 88, 4])
-
- # delete negative index
- del v_int2[-1]
- assert v_int2 == m.VectorInt([0, 99, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 88])
-
- v_int2.clear()
- assert len(v_int2) == 0
-
-
-# Older PyPy's failed here, related to the PyPy's buffer protocol.
-def test_vector_buffer():
- b = bytearray([1, 2, 3, 4])
- v = m.VectorUChar(b)
- assert v[1] == 2
- v[2] = 5
- mv = memoryview(v) # We expose the buffer interface
- if not env.PY2:
- assert mv[2] == 5
- mv[2] = 6
- else:
- assert mv[2] == '\x05'
- mv[2] = '\x06'
- assert v[2] == 6
-
- if not env.PY2:
- mv = memoryview(b)
- v = m.VectorUChar(mv[::2])
- assert v[1] == 3
-
- with pytest.raises(RuntimeError) as excinfo:
- m.create_undeclstruct() # Undeclared struct contents, no buffer interface
- assert "NumPy type info missing for " in str(excinfo.value)
-
-
-def test_vector_buffer_numpy():
- np = pytest.importorskip("numpy")
- a = np.array([1, 2, 3, 4], dtype=np.int32)
- with pytest.raises(TypeError):
- m.VectorInt(a)
-
- a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]], dtype=np.uintc)
- v = m.VectorInt(a[0, :])
- assert len(v) == 4
- assert v[2] == 3
- ma = np.asarray(v)
- ma[2] = 5
- assert v[2] == 5
-
- v = m.VectorInt(a[:, 1])
- assert len(v) == 3
- assert v[2] == 10
-
- v = m.get_vectorstruct()
- assert v[0].x == 5
- ma = np.asarray(v)
- ma[1]['x'] = 99
- assert v[1].x == 99
-
- v = m.VectorStruct(np.zeros(3, dtype=np.dtype([('w', 'bool'), ('x', 'I'),
- ('y', 'float64'), ('z', 'bool')], align=True)))
- assert len(v) == 3
-
- b = np.array([1, 2, 3, 4], dtype=np.uint8)
- v = m.VectorUChar(b[::2])
- assert v[1] == 3
-
-
-def test_vector_bool():
- import pybind11_cross_module_tests as cm
-
- vv_c = cm.VectorBool()
- for i in range(10):
- vv_c.append(i % 2 == 0)
- for i in range(10):
- assert vv_c[i] == (i % 2 == 0)
- assert str(vv_c) == "VectorBool[1, 0, 1, 0, 1, 0, 1, 0, 1, 0]"
-
-
-def test_vector_custom():
- v_a = m.VectorEl()
- v_a.append(m.El(1))
- v_a.append(m.El(2))
- assert str(v_a) == "VectorEl[El{1}, El{2}]"
-
- vv_a = m.VectorVectorEl()
- vv_a.append(v_a)
- vv_b = vv_a[0]
- assert str(vv_b) == "VectorEl[El{1}, El{2}]"
-
-
-def test_map_string_double():
- mm = m.MapStringDouble()
- mm['a'] = 1
- mm['b'] = 2.5
-
- assert list(mm) == ['a', 'b']
- assert list(mm.items()) == [('a', 1), ('b', 2.5)]
- assert str(mm) == "MapStringDouble{a: 1, b: 2.5}"
-
- um = m.UnorderedMapStringDouble()
- um['ua'] = 1.1
- um['ub'] = 2.6
-
- assert sorted(list(um)) == ['ua', 'ub']
- assert sorted(list(um.items())) == [('ua', 1.1), ('ub', 2.6)]
- assert "UnorderedMapStringDouble" in str(um)
-
-
-def test_map_string_double_const():
- mc = m.MapStringDoubleConst()
- mc['a'] = 10
- mc['b'] = 20.5
- assert str(mc) == "MapStringDoubleConst{a: 10, b: 20.5}"
-
- umc = m.UnorderedMapStringDoubleConst()
- umc['a'] = 11
- umc['b'] = 21.5
-
- str(umc)
-
-
-def test_noncopyable_containers():
- # std::vector
- vnc = m.get_vnc(5)
- for i in range(0, 5):
- assert vnc[i].value == i + 1
-
- for i, j in enumerate(vnc, start=1):
- assert j.value == i
-
- # std::deque
- dnc = m.get_dnc(5)
- for i in range(0, 5):
- assert dnc[i].value == i + 1
-
- i = 1
- for j in dnc:
-        assert j.value == i
- i += 1
-
- # std::map
- mnc = m.get_mnc(5)
- for i in range(1, 6):
- assert mnc[i].value == 10 * i
-
- vsum = 0
- for k, v in mnc.items():
- assert v.value == 10 * k
- vsum += v.value
-
- assert vsum == 150
-
- # std::unordered_map
- mnc = m.get_umnc(5)
- for i in range(1, 6):
- assert mnc[i].value == 10 * i
-
- vsum = 0
- for k, v in mnc.items():
- assert v.value == 10 * k
- vsum += v.value
-
- assert vsum == 150
-
- # nested std::map
- nvnc = m.get_nvnc(5)
- for i in range(1, 6):
- for j in range(0, 5):
- assert nvnc[i][j].value == j + 1
-
- # Note: maps do not have .values()
- for _, v in nvnc.items():
- for i, j in enumerate(v, start=1):
- assert j.value == i
-
- # nested std::map
- nmnc = m.get_nmnc(5)
- for i in range(1, 6):
- for j in range(10, 60, 10):
- assert nmnc[i][j].value == 10 * j
-
- vsum = 0
- for _, v_o in nmnc.items():
- for k_i, v_i in v_o.items():
- assert v_i.value == 10 * k_i
- vsum += v_i.value
-
- assert vsum == 7500
-
- # nested std::unordered_map
- numnc = m.get_numnc(5)
- for i in range(1, 6):
- for j in range(10, 60, 10):
- assert numnc[i][j].value == 10 * j
-
- vsum = 0
- for _, v_o in numnc.items():
- for k_i, v_i in v_o.items():
- assert v_i.value == 10 * k_i
- vsum += v_i.value
-
- assert vsum == 7500
-
-
-def test_map_delitem():
- mm = m.MapStringDouble()
- mm['a'] = 1
- mm['b'] = 2.5
-
- assert list(mm) == ['a', 'b']
- assert list(mm.items()) == [('a', 1), ('b', 2.5)]
- del mm['a']
- assert list(mm) == ['b']
- assert list(mm.items()) == [('b', 2.5)]
-
- um = m.UnorderedMapStringDouble()
- um['ua'] = 1.1
- um['ub'] = 2.6
-
- assert sorted(list(um)) == ['ua', 'ub']
- assert sorted(list(um.items())) == [('ua', 1.1), ('ub', 2.6)]
- del um['ua']
- assert sorted(list(um)) == ['ub']
- assert sorted(list(um.items())) == [('ub', 2.6)]
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/logical.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/logical.h
deleted file mode 100644
index 4199063183dbc38b79c7707bb8301e5ca8aa6ad5..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/logical.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits logical
-#include <thrust/system/cpp/detail/logical.h>
-
diff --git a/spaces/CVPR/transfiner/configs/common/models/keypoint_rcnn_fpn.py b/spaces/CVPR/transfiner/configs/common/models/keypoint_rcnn_fpn.py
deleted file mode 100644
index 56b3994df249884d4816fc9a5c7f553a9ab6f400..0000000000000000000000000000000000000000
--- a/spaces/CVPR/transfiner/configs/common/models/keypoint_rcnn_fpn.py
+++ /dev/null
@@ -1,33 +0,0 @@
-from detectron2.config import LazyCall as L
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.poolers import ROIPooler
-from detectron2.modeling.roi_heads import KRCNNConvDeconvUpsampleHead
-
-from .mask_rcnn_fpn import model
-
-[model.roi_heads.pop(x) for x in ["mask_in_features", "mask_pooler", "mask_head"]]
-
-model.roi_heads.update(
- num_classes=1,
- keypoint_in_features=["p2", "p3", "p4", "p5"],
- keypoint_pooler=L(ROIPooler)(
- output_size=14,
- scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32),
- sampling_ratio=0,
- pooler_type="ROIAlignV2",
- ),
- keypoint_head=L(KRCNNConvDeconvUpsampleHead)(
- input_shape=ShapeSpec(channels=256, width=14, height=14),
- num_keypoints=17,
- conv_dims=[512] * 8,
- loss_normalizer="visible",
- ),
-)
-
-# Detectron1 uses 2000 proposals per-batch, but this option is per-image in detectron2.
-# 1000 proposals per-image is found to hurt box AP.
-# Therefore we increase it to 1500 per-image.
-model.proposal_generator.post_nms_topk = (1500, 1000)
-
-# Keypoint AP degrades (though box AP improves) when using plain L1 loss
-model.roi_heads.box_predictor.smooth_l1_beta = 0.5
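For context, a LazyCall config like the one above is normally materialized with detectron2's LazyConfig/instantiate utilities rather than imported directly. The sketch below is illustrative only: the config path is an assumption about where this file is kept, and it requires detectron2 (and its model dependencies) to be installed.

```python
# Hedged sketch: turning a LazyCall-based model config into an actual module.
# The path below is illustrative; point it at wherever this config file lives.
from detectron2.config import LazyConfig, instantiate

cfg = LazyConfig.load("configs/common/models/keypoint_rcnn_fpn.py")
model = instantiate(cfg.model)  # builds the R-CNN with the keypoint head configured above
print(type(model).__name__)
```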
diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/json_utils/json_fix_general.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/json_utils/json_fix_general.py
deleted file mode 100644
index 7010fa3b9c1909de0e5a7f6ec13ca8aa418fe6c7..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/json_utils/json_fix_general.py
+++ /dev/null
@@ -1,124 +0,0 @@
-"""This module contains functions to fix JSON strings using general programmatic approaches, suitable for addressing
-common JSON formatting issues."""
-from __future__ import annotations
-
-import contextlib
-import json
-import re
-from typing import Optional
-
-from autogpt.config import Config
-from autogpt.json_utils.utilities import extract_char_position
-
-CFG = Config()
-
-
-def fix_invalid_escape(json_to_load: str, error_message: str) -> str:
- """Fix invalid escape sequences in JSON strings.
-
- Args:
- json_to_load (str): The JSON string.
- error_message (str): The error message from the JSONDecodeError
- exception.
-
- Returns:
- str: The JSON string with invalid escape sequences fixed.
- """
- while error_message.startswith("Invalid \\escape"):
- bad_escape_location = extract_char_position(error_message)
- json_to_load = (
- json_to_load[:bad_escape_location] + json_to_load[bad_escape_location + 1 :]
- )
- try:
- json.loads(json_to_load)
- return json_to_load
- except json.JSONDecodeError as e:
- if CFG.debug_mode:
- print("json loads error - fix invalid escape", e)
- error_message = str(e)
- return json_to_load
-
-
-def balance_braces(json_string: str) -> Optional[str]:
- """
- Balance the braces in a JSON string.
-
- Args:
- json_string (str): The JSON string.
-
- Returns:
- str: The JSON string with braces balanced.
- """
-
- open_braces_count = json_string.count("{")
- close_braces_count = json_string.count("}")
-
- while open_braces_count > close_braces_count:
- json_string += "}"
- close_braces_count += 1
-
- while close_braces_count > open_braces_count:
- json_string = json_string.rstrip("}")
- close_braces_count -= 1
-
- with contextlib.suppress(json.JSONDecodeError):
- json.loads(json_string)
- return json_string
-
-
-def add_quotes_to_property_names(json_string: str) -> str:
- """
- Add quotes to property names in a JSON string.
-
- Args:
- json_string (str): The JSON string.
-
- Returns:
- str: The JSON string with quotes added to property names.
- """
-
- def replace_func(match: re.Match) -> str:
- return f'"{match[1]}":'
-
- property_name_pattern = re.compile(r"(\w+):")
- corrected_json_string = property_name_pattern.sub(replace_func, json_string)
-
- try:
- json.loads(corrected_json_string)
- return corrected_json_string
- except json.JSONDecodeError as e:
- raise e
-
-
-def correct_json(json_to_load: str) -> str:
- """
- Correct common JSON errors.
- Args:
- json_to_load (str): The JSON string.
- """
-
- try:
- if CFG.debug_mode:
- print("json", json_to_load)
- json.loads(json_to_load)
- return json_to_load
- except json.JSONDecodeError as e:
- if CFG.debug_mode:
- print("json loads error", e)
- error_message = str(e)
- if error_message.startswith("Invalid \\escape"):
- json_to_load = fix_invalid_escape(json_to_load, error_message)
- if error_message.startswith(
- "Expecting property name enclosed in double quotes"
- ):
- json_to_load = add_quotes_to_property_names(json_to_load)
- try:
- json.loads(json_to_load)
- return json_to_load
- except json.JSONDecodeError as e:
- if CFG.debug_mode:
- print("json loads error - add quotes", e)
- error_message = str(e)
- if balanced_str := balance_braces(json_to_load):
- return balanced_str
- return json_to_load
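To make the repair flow above concrete, here is a minimal standalone sketch of the same two steps — quoting bare property names, then balancing braces — applied to a truncated JSON string. It deliberately avoids importing the autogpt package, and the sample string is invented for illustration.

```python
import json
import re

# A truncated response with unquoted keys and missing closing braces.
broken = '{command: {name: "browse", args: {query: "hello"'

# Step 1: quote bare property names, as add_quotes_to_property_names does.
quoted = re.sub(r"(\w+):", lambda m: f'"{m[1]}":', broken)

# Step 2: append the missing closing braces, as balance_braces does.
balanced = quoted + "}" * (quoted.count("{") - quoted.count("}"))

print(json.loads(balanced))
# {'command': {'name': 'browse', 'args': {'query': 'hello'}}}
```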
diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/infer_tools/trans_key.py b/spaces/ChrisPreston/diff-svc_minato_aqua/infer_tools/trans_key.py
deleted file mode 100644
index dc4f30aa054ee20b228c193fa115f767cbbf7055..0000000000000000000000000000000000000000
--- a/spaces/ChrisPreston/diff-svc_minato_aqua/infer_tools/trans_key.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import os
-head_list = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
-
-
-def trans_f0_seq(feature_pit, transform):
- feature_pit = feature_pit * 2 ** (transform / 12)
- return round(feature_pit, 1)
-
-
-def move_key(raw_data, mv_key):
- head = raw_data[:-1]
- body = int(raw_data[-1])
- new_head_index = head_list.index(head) + mv_key
- while new_head_index < 0:
- body -= 1
- new_head_index += 12
- while new_head_index > 11:
- body += 1
- new_head_index -= 12
- result_data = head_list[new_head_index] + str(body)
- return result_data
-
-
-def trans_key(raw_data, key):
- for i in raw_data:
- note_seq_list = i["note_seq"].split(" ")
- new_note_seq_list = []
- for note_seq in note_seq_list:
- if note_seq != "rest":
- new_note_seq = move_key(note_seq, key)
- new_note_seq_list.append(new_note_seq)
- else:
- new_note_seq_list.append(note_seq)
- i["note_seq"] = " ".join(new_note_seq_list)
-
- f0_seq_list = i["f0_seq"].split(" ")
- f0_seq_list = [float(x) for x in f0_seq_list]
- new_f0_seq_list = []
- for f0_seq in f0_seq_list:
- new_f0_seq = trans_f0_seq(f0_seq, key)
- new_f0_seq_list.append(str(new_f0_seq))
- i["f0_seq"] = " ".join(new_f0_seq_list)
- return raw_data
-
-
-def trans_opencpop(raw_txt, res_txt, key):
- if os.path.exists(raw_txt):
- f_w = open(res_txt, "w", encoding='utf-8')
- with open(raw_txt, "r", encoding='utf-8') as f:
- raw_data = f.readlines()
- for raw in raw_data:
- raw_list = raw.split("|")
- new_note_seq_list = []
- for note_seq in raw_list[3].split(" "):
- if note_seq != "rest":
- note_seq = note_seq.split("/")[0] if "/" in note_seq else note_seq
- new_note_seq = move_key(note_seq, key)
- new_note_seq_list.append(new_note_seq)
- else:
- new_note_seq_list.append(note_seq)
- raw_list[3] = " ".join(new_note_seq_list)
- f_w.write("|".join(raw_list))
- f_w.close()
-        print("opencpop annotation file conversion finished")
-    else:
-        print("opencpop annotation file not found, please check the path")
-
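As a quick sanity check on the transposition logic above: shifting by `key` semitones multiplies a fundamental frequency by 2 ** (key / 12), and note names wrap around the 12-entry `head_list` while carrying the octave digit. A small self-contained illustration (values chosen arbitrarily):

```python
# Semitone transposition: f0 scales by 2 ** (key / 12).
A4 = 440.0
print(round(A4 * 2 ** (2 / 12), 1))    # 493.9 -> two semitones up (B4)
print(round(A4 * 2 ** (-12 / 12), 1))  # 220.0 -> one octave down (A3)

# Note-name transposition wraps within the 12-tone list, carrying the octave.
head_list = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
idx, octave = head_list.index("A") + 3, 4
print(head_list[idx % 12] + str(octave + idx // 12))  # C5, i.e. A4 moved up 3 keys
```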
diff --git a/spaces/Codecooker/rvcapi/src/webui.py b/spaces/Codecooker/rvcapi/src/webui.py
deleted file mode 100644
index 97a6f84f25cfd2e5cc418d1963f2f34780f31825..0000000000000000000000000000000000000000
--- a/spaces/Codecooker/rvcapi/src/webui.py
+++ /dev/null
@@ -1,309 +0,0 @@
-import json
-import os
-os.system("pip install torchcrepe")
-os.system("pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu")
-import shutil
-import urllib.request
-import zipfile
-from argparse import ArgumentParser
-
-import gradio as gr
-
-from main import song_cover_pipeline
-
-BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-mdxnet_models_dir = os.path.join(BASE_DIR, 'mdxnet_models')
-rvc_models_dir = os.path.join(BASE_DIR, 'rvc_models')
-output_dir = os.path.join(BASE_DIR, 'song_output')
-
-
-def get_current_models(models_dir):
- models_list = os.listdir(models_dir)
- items_to_remove = ['hubert_base.pt', 'MODELS.txt', 'public_models.json', 'rmvpe.pt']
- return [item for item in models_list if item not in items_to_remove]
-
-
-def update_models_list():
- models_l = get_current_models(rvc_models_dir)
- return gr.Dropdown.update(choices=models_l)
-
-
-def load_public_models():
- models_table = []
- for model in public_models['voice_models']:
- if not model['name'] in voice_models:
- model = [model['name'], model['description'], model['credit'], model['url'], ', '.join(model['tags'])]
- models_table.append(model)
-
- tags = list(public_models['tags'].keys())
- return gr.DataFrame.update(value=models_table), gr.CheckboxGroup.update(choices=tags)
-
-
-def extract_zip(extraction_folder, zip_name):
- os.makedirs(extraction_folder)
- with zipfile.ZipFile(zip_name, 'r') as zip_ref:
- zip_ref.extractall(extraction_folder)
- os.remove(zip_name)
-
- index_filepath, model_filepath = None, None
- for root, dirs, files in os.walk(extraction_folder):
- for name in files:
- if name.endswith('.index'):
- index_filepath = os.path.join(root, name)
-
- if name.endswith('.pth'):
- model_filepath = os.path.join(root, name)
-
- if not model_filepath:
- raise gr.Error(f'No .pth model file was found in the extracted zip. Please check {extraction_folder}.')
-
- # move model and index file to extraction folder
- os.rename(model_filepath, os.path.join(extraction_folder, os.path.basename(model_filepath)))
- if index_filepath:
- os.rename(index_filepath, os.path.join(extraction_folder, os.path.basename(index_filepath)))
-
- # remove any unnecessary nested folders
- for filepath in os.listdir(extraction_folder):
- if os.path.isdir(os.path.join(extraction_folder, filepath)):
- shutil.rmtree(os.path.join(extraction_folder, filepath))
-
-
-def download_online_model(url, dir_name, progress=gr.Progress()):
- try:
- progress(0, desc=f'[~] Downloading voice model with name {dir_name}...')
- zip_name = url.split('/')[-1]
- extraction_folder = os.path.join(rvc_models_dir, dir_name)
- if os.path.exists(extraction_folder):
- raise gr.Error(f'Voice model directory {dir_name} already exists! Choose a different name for your voice model.')
-
- if 'pixeldrain.com' in url:
- url = f'https://pixeldrain.com/api/file/{zip_name}'
-
- urllib.request.urlretrieve(url, zip_name)
-
- progress(0.5, desc='[~] Extracting zip...')
- extract_zip(extraction_folder, zip_name)
- return f'[+] {dir_name} Model successfully downloaded!'
-
- except Exception as e:
- raise gr.Error(str(e))
-
-
-def upload_local_model(zip_path, dir_name, progress=gr.Progress()):
- try:
- extraction_folder = os.path.join(rvc_models_dir, dir_name)
- if os.path.exists(extraction_folder):
- raise gr.Error(f'Voice model directory {dir_name} already exists! Choose a different name for your voice model.')
-
- zip_name = zip_path.name
- progress(0.5, desc='[~] Extracting zip...')
- extract_zip(extraction_folder, zip_name)
- return f'[+] {dir_name} Model successfully uploaded!'
-
- except Exception as e:
- raise gr.Error(str(e))
-
-
-def filter_models(tags, query):
- models_table = []
-
- # no filter
- if len(tags) == 0 and len(query) == 0:
- for model in public_models['voice_models']:
- models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']])
-
- # filter based on tags and query
- elif len(tags) > 0 and len(query) > 0:
- for model in public_models['voice_models']:
- if all(tag in model['tags'] for tag in tags):
- model_attributes = f"{model['name']} {model['description']} {model['credit']} {' '.join(model['tags'])}".lower()
- if query.lower() in model_attributes:
- models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']])
-
- # filter based on only tags
- elif len(tags) > 0:
- for model in public_models['voice_models']:
- if all(tag in model['tags'] for tag in tags):
- models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']])
-
- # filter based on only query
- else:
- for model in public_models['voice_models']:
- model_attributes = f"{model['name']} {model['description']} {model['credit']} {' '.join(model['tags'])}".lower()
- if query.lower() in model_attributes:
- models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']])
-
- return gr.DataFrame.update(value=models_table)
-
-
-def pub_dl_autofill(pub_models, event: gr.SelectData):
- return gr.Text.update(value=pub_models.loc[event.index[0], 'URL']), gr.Text.update(value=pub_models.loc[event.index[0], 'Model Name'])
-
-
-def swap_visibility():
- return gr.update(visible=True), gr.update(visible=False), gr.update(value=''), gr.update(value=None)
-
-
-def process_file_upload(file):
- return file.name, gr.update(value=file.name)
-
-
-if __name__ == '__main__':
- os.system("pip install torchcrepe")
- os.system("pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu")
-    parser = ArgumentParser(description='Generate an AI cover song in the song_output/id directory.', add_help=True)
- parser.add_argument("--share", action="store_true", dest="share_enabled", default=False, help="Enable sharing")
- parser.add_argument("--listen", action="store_true", default=False, help="Make the WebUI reachable from your local network.")
- parser.add_argument('--listen-host', type=str, help='The hostname that the server will use.')
- parser.add_argument('--listen-port', type=int, help='The listening port that the server will use.')
- args = parser.parse_args()
-
- voice_models = get_current_models(rvc_models_dir)
- with open(os.path.join(rvc_models_dir, 'public_models.json'), encoding='utf8') as infile:
- public_models = json.load(infile)
-
- with gr.Blocks(title='AICoverGenWebUI') as app:
-
- gr.Label('AICoverGen WebUI created with ❤️', show_label=False)
-
- # main tab
- with gr.Tab("Generate"):
-
- with gr.Accordion('Main Options'):
- with gr.Row():
- with gr.Column():
- rvc_model = gr.Dropdown(voice_models, label='Voice Models', info='Models folder "AICoverGen --> rvc_models". After new models are added into this folder, click the refresh button')
- ref_btn = gr.Button('Refresh Models 🔁', variant='primary')
-
- with gr.Column() as yt_link_col:
- song_input = gr.Text(label='Song input', info='Link to a song on YouTube or full path to a local file. For file upload, click the button below.')
- show_file_upload_button = gr.Button('Upload file instead')
-
- with gr.Column(visible=False) as file_upload_col:
- local_file = gr.File(label='Audio file')
- song_input_file = gr.UploadButton('Upload 📂', file_types=['audio'], variant='primary')
- show_yt_link_button = gr.Button('Paste YouTube link/Path to local file instead')
- song_input_file.upload(process_file_upload, inputs=[song_input_file], outputs=[local_file, song_input])
-
- pitch = gr.Slider(-24, 24, value=0, step=1, label='Pitch Change', info='Pitch Change should be set to either -12, 0, or 12 (multiples of 12) to ensure the vocals are not out of tune')
- show_file_upload_button.click(swap_visibility, outputs=[file_upload_col, yt_link_col, song_input, local_file])
- show_yt_link_button.click(swap_visibility, outputs=[yt_link_col, file_upload_col, song_input, local_file])
-
- with gr.Accordion('Voice conversion options', open=False):
- with gr.Row():
- index_rate = gr.Slider(0, 1, value=0.5, label='Index Rate', info="Controls how much of the AI voice's accent to keep in the vocals")
-                    filter_radius = gr.Slider(0, 7, value=3, step=1, label='Filter radius', info='If >=3: apply median filtering to the harvested pitch results. Can reduce breathiness')
- rms_mix_rate = gr.Slider(0, 1, value=0.25, label='RMS mix rate', info="Control how much to use the original vocal's loudness (0) or a fixed loudness (1)")
- protect = gr.Slider(0, 0.5, value=0.33, label='Protect rate', info='Protect voiceless consonants and breath sounds. Set to 0.5 to disable.')
- keep_files = gr.Checkbox(label='Keep intermediate files',
- info='Keep all audio files generated in the song_output/id directory, e.g. Isolated Vocals/Instrumentals. Leave unchecked to save space')
-
- with gr.Accordion('Audio mixing options', open=False):
- gr.Markdown('### Volume Change (decibels)')
- with gr.Row():
- main_gain = gr.Slider(-20, 20, value=0, step=1, label='Main Vocals')
- backup_gain = gr.Slider(-20, 20, value=0, step=1, label='Backup Vocals')
- inst_gain = gr.Slider(-20, 20, value=0, step=1, label='Music')
-
- gr.Markdown('### Reverb Control on AI Vocals')
- with gr.Row():
- reverb_rm_size = gr.Slider(0, 1, value=0.15, label='Room size', info='The larger the room, the longer the reverb time')
- reverb_wet = gr.Slider(0, 1, value=0.2, label='Wetness level', info='Level of AI vocals with reverb')
- reverb_dry = gr.Slider(0, 1, value=0.8, label='Dryness level', info='Level of AI vocals without reverb')
- reverb_damping = gr.Slider(0, 1, value=0.7, label='Damping level', info='Absorption of high frequencies in the reverb')
-
- with gr.Row():
- clear_btn = gr.ClearButton(value='Clear', components=[song_input, rvc_model, keep_files, local_file])
- generate_btn = gr.Button("Generate", variant='primary')
- ai_cover = gr.Audio(label='AI Cover', show_share_button=False)
-
- ref_btn.click(update_models_list, None, outputs=rvc_model)
- is_webui = gr.Number(value=1, visible=False)
- generate_btn.click(song_cover_pipeline,
- inputs=[song_input, rvc_model, pitch, keep_files, is_webui, main_gain, backup_gain,
- inst_gain, index_rate, filter_radius, rms_mix_rate, protect, reverb_rm_size,
- reverb_wet, reverb_dry, reverb_damping],
- outputs=[ai_cover])
- clear_btn.click(lambda: [0, 0, 0, 0, 0.5, 3, 0.25, 0.33, 0.15, 0.2, 0.8, 0.7, None],
- outputs=[pitch, main_gain, backup_gain, inst_gain, index_rate, filter_radius, rms_mix_rate,
- protect, reverb_rm_size, reverb_wet, reverb_dry, reverb_damping, ai_cover])
-
- # Download tab
- with gr.Tab('Download model'):
-
- with gr.Tab('From HuggingFace/Pixeldrain URL'):
- with gr.Row():
- model_zip_link = gr.Text(label='Download link to model', info='Should be a zip file containing a .pth model file and an optional .index file.')
- model_name = gr.Text(label='Name your model', info='Give your new model a unique name from your other voice models.')
-
- with gr.Row():
- download_btn = gr.Button('Download 🌐', variant='primary', scale=19)
- dl_output_message = gr.Text(label='Output Message', interactive=False, scale=20)
-
- download_btn.click(download_online_model, inputs=[model_zip_link, model_name], outputs=dl_output_message)
-
- gr.Markdown('## Input Examples')
- gr.Examples(
- [
- ['https://huggingface.co/phant0m4r/LiSA/resolve/main/LiSA.zip', 'Lisa'],
- ['https://pixeldrain.com/u/3tJmABXA', 'Gura'],
- ['https://huggingface.co/Kit-Lemonfoot/kitlemonfoot_rvc_models/resolve/main/AZKi%20(Hybrid).zip', 'Azki']
- ],
- [model_zip_link, model_name],
- [],
- download_online_model,
- )
-
- with gr.Tab('From Public Index'):
-
- gr.Markdown('## How to use')
- gr.Markdown('- Click Initialize public models table')
- gr.Markdown('- Filter models using tags or search bar')
- gr.Markdown('- Select a row to autofill the download link and model name')
- gr.Markdown('- Click Download')
-
- with gr.Row():
- pub_zip_link = gr.Text(label='Download link to model')
- pub_model_name = gr.Text(label='Model name')
-
- with gr.Row():
- download_pub_btn = gr.Button('Download 🌐', variant='primary', scale=19)
- pub_dl_output_message = gr.Text(label='Output Message', interactive=False, scale=20)
-
- filter_tags = gr.CheckboxGroup(value=[], label='Show voice models with tags', choices=[])
- search_query = gr.Text(label='Search')
- load_public_models_button = gr.Button(value='Initialize public models table', variant='primary')
-
- public_models_table = gr.DataFrame(value=[], headers=['Model Name', 'Description', 'Credit', 'URL', 'Tags'], label='Available Public Models', interactive=False)
- public_models_table.select(pub_dl_autofill, inputs=[public_models_table], outputs=[pub_zip_link, pub_model_name])
- load_public_models_button.click(load_public_models, outputs=[public_models_table, filter_tags])
- search_query.change(filter_models, inputs=[filter_tags, search_query], outputs=public_models_table)
- filter_tags.change(filter_models, inputs=[filter_tags, search_query], outputs=public_models_table)
- download_pub_btn.click(download_online_model, inputs=[pub_zip_link, pub_model_name], outputs=pub_dl_output_message)
-
- # Upload tab
- with gr.Tab('Upload model'):
- gr.Markdown('## Upload locally trained RVC v2 model and index file')
- gr.Markdown('- Find model file (weights folder) and optional index file (logs/[name] folder)')
- gr.Markdown('- Compress files into zip file')
- gr.Markdown('- Upload zip file and give unique name for voice')
- gr.Markdown('- Click Upload model')
-
- with gr.Row():
- with gr.Column():
- zip_file = gr.File(label='Zip file')
-
- local_model_name = gr.Text(label='Model name')
-
- with gr.Row():
- model_upload_button = gr.Button('Upload model', variant='primary', scale=19)
- local_upload_output_message = gr.Text(label='Output Message', interactive=False, scale=20)
- model_upload_button.click(upload_local_model, inputs=[zip_file, local_model_name], outputs=local_upload_output_message)
-
- app.launch(
- share=args.share_enabled,
- enable_queue=True,
- server_name=None if not args.listen else (args.listen_host or '0.0.0.0'),
- server_port=args.listen_port,
- )
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/cu2qu/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/cu2qu/__init__.py
deleted file mode 100644
index 4ae6356e44e1fed074b6283bcb4365bf2b770529..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/cu2qu/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# Copyright 2016 Google Inc. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .cu2qu import *
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/feaLib/builder.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/feaLib/builder.py
deleted file mode 100644
index 42d1f8f24a720a8cbbaf3f7b7344eb4773ca0f4d..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/feaLib/builder.py
+++ /dev/null
@@ -1,1706 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.textTools import Tag, tostr, binary2num, safeEval
-from fontTools.feaLib.error import FeatureLibError
-from fontTools.feaLib.lookupDebugInfo import (
- LookupDebugInfo,
- LOOKUP_DEBUG_INFO_KEY,
- LOOKUP_DEBUG_ENV_VAR,
-)
-from fontTools.feaLib.parser import Parser
-from fontTools.feaLib.ast import FeatureFile
-from fontTools.feaLib.variableScalar import VariableScalar
-from fontTools.otlLib import builder as otl
-from fontTools.otlLib.maxContextCalc import maxCtxFont
-from fontTools.ttLib import newTable, getTableModule
-from fontTools.ttLib.tables import otBase, otTables
-from fontTools.otlLib.builder import (
- AlternateSubstBuilder,
- ChainContextPosBuilder,
- ChainContextSubstBuilder,
- LigatureSubstBuilder,
- MultipleSubstBuilder,
- CursivePosBuilder,
- MarkBasePosBuilder,
- MarkLigPosBuilder,
- MarkMarkPosBuilder,
- ReverseChainSingleSubstBuilder,
- SingleSubstBuilder,
- ClassPairPosSubtableBuilder,
- PairPosBuilder,
- SinglePosBuilder,
- ChainContextualRule,
-)
-from fontTools.otlLib.error import OpenTypeLibError
-from fontTools.varLib.varStore import OnlineVarStoreBuilder
-from fontTools.varLib.builder import buildVarDevTable
-from fontTools.varLib.featureVars import addFeatureVariationsRaw
-from fontTools.varLib.models import normalizeValue, piecewiseLinearMap
-from collections import defaultdict
-import itertools
-from io import StringIO
-import logging
-import warnings
-import os
-
-
-log = logging.getLogger(__name__)
-
-
-def addOpenTypeFeatures(font, featurefile, tables=None, debug=False):
- """Add features from a file to a font. Note that this replaces any features
- currently present.
-
- Args:
- font (feaLib.ttLib.TTFont): The font object.
- featurefile: Either a path or file object (in which case we
- parse it into an AST), or a pre-parsed AST instance.
- tables: If passed, restrict the set of affected tables to those in the
- list.
- debug: Whether to add source debugging information to the font in the
- ``Debg`` table
-
- """
- builder = Builder(font, featurefile)
- builder.build(tables=tables, debug=debug)
-
-
-def addOpenTypeFeaturesFromString(
- font, features, filename=None, tables=None, debug=False
-):
- """Add features from a string to a font. Note that this replaces any
- features currently present.
-
- Args:
- font (feaLib.ttLib.TTFont): The font object.
- features: A string containing feature code.
- filename: The directory containing ``filename`` is used as the root of
- relative ``include()`` paths; if ``None`` is provided, the current
- directory is assumed.
- tables: If passed, restrict the set of affected tables to those in the
- list.
- debug: Whether to add source debugging information to the font in the
- ``Debg`` table
-
- """
-
- featurefile = StringIO(tostr(features))
- if filename:
- featurefile.name = filename
- addOpenTypeFeatures(font, featurefile, tables=tables, debug=debug)
-
-
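For readers unfamiliar with these entry points, the usual workflow is to load a TTFont and feed it feature code via addOpenTypeFeaturesFromString (or a .fea path via addOpenTypeFeatures). A minimal sketch, assuming a hypothetical font file MyFont.ttf that contains the glyphs f, i, and f_i:

```python
from fontTools.ttLib import TTFont
from fontTools.feaLib.builder import addOpenTypeFeaturesFromString

font = TTFont("MyFont.ttf")  # hypothetical input font
features = """
    feature liga {
        sub f i by f_i;
    } liga;
"""
addOpenTypeFeaturesFromString(font, features)  # compiles the feature code into the font in place
font.save("MyFont-liga.ttf")
```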
-class Builder(object):
- supportedTables = frozenset(
- Tag(tag)
- for tag in [
- "BASE",
- "GDEF",
- "GPOS",
- "GSUB",
- "OS/2",
- "head",
- "hhea",
- "name",
- "vhea",
- "STAT",
- ]
- )
-
- def __init__(self, font, featurefile):
- self.font = font
- # 'featurefile' can be either a path or file object (in which case we
- # parse it into an AST), or a pre-parsed AST instance
- if isinstance(featurefile, FeatureFile):
- self.parseTree, self.file = featurefile, None
- else:
- self.parseTree, self.file = None, featurefile
- self.glyphMap = font.getReverseGlyphMap()
- self.varstorebuilder = None
- if "fvar" in font:
- self.axes = font["fvar"].axes
- self.varstorebuilder = OnlineVarStoreBuilder(
- [ax.axisTag for ax in self.axes]
- )
- self.default_language_systems_ = set()
- self.script_ = None
- self.lookupflag_ = 0
- self.lookupflag_markFilterSet_ = None
- self.language_systems = set()
- self.seen_non_DFLT_script_ = False
- self.named_lookups_ = {}
- self.cur_lookup_ = None
- self.cur_lookup_name_ = None
- self.cur_feature_name_ = None
- self.lookups_ = []
- self.lookup_locations = {"GSUB": {}, "GPOS": {}}
- self.features_ = {} # ('latn', 'DEU ', 'smcp') --> [LookupBuilder*]
- self.required_features_ = {} # ('latn', 'DEU ') --> 'scmp'
- self.feature_variations_ = {}
- # for feature 'aalt'
- self.aalt_features_ = [] # [(location, featureName)*], for 'aalt'
- self.aalt_location_ = None
- self.aalt_alternates_ = {}
- # for 'featureNames'
- self.featureNames_ = set()
- self.featureNames_ids_ = {}
- # for 'cvParameters'
- self.cv_parameters_ = set()
- self.cv_parameters_ids_ = {}
- self.cv_num_named_params_ = {}
- self.cv_characters_ = defaultdict(list)
- # for feature 'size'
- self.size_parameters_ = None
- # for table 'head'
- self.fontRevision_ = None # 2.71
- # for table 'name'
- self.names_ = []
- # for table 'BASE'
- self.base_horiz_axis_ = None
- self.base_vert_axis_ = None
- # for table 'GDEF'
- self.attachPoints_ = {} # "a" --> {3, 7}
- self.ligCaretCoords_ = {} # "f_f_i" --> {300, 600}
- self.ligCaretPoints_ = {} # "f_f_i" --> {3, 7}
- self.glyphClassDefs_ = {} # "fi" --> (2, (file, line, column))
- self.markAttach_ = {} # "acute" --> (4, (file, line, column))
- self.markAttachClassID_ = {} # frozenset({"acute", "grave"}) --> 4
- self.markFilterSets_ = {} # frozenset({"acute", "grave"}) --> 4
- # for table 'OS/2'
- self.os2_ = {}
- # for table 'hhea'
- self.hhea_ = {}
- # for table 'vhea'
- self.vhea_ = {}
- # for table 'STAT'
- self.stat_ = {}
- # for conditionsets
- self.conditionsets_ = {}
- # We will often use exactly the same locations (i.e. the font's masters)
- # for a large number of variable scalars. Instead of creating a model
- # for each, let's share the models.
- self.model_cache = {}
-
- def build(self, tables=None, debug=False):
- if self.parseTree is None:
- self.parseTree = Parser(self.file, self.glyphMap).parse()
- self.parseTree.build(self)
- # by default, build all the supported tables
- if tables is None:
- tables = self.supportedTables
- else:
- tables = frozenset(tables)
- unsupported = tables - self.supportedTables
- if unsupported:
- unsupported_string = ", ".join(sorted(unsupported))
- raise NotImplementedError(
- "The following tables were requested but are unsupported: "
- f"{unsupported_string}."
- )
- if "GSUB" in tables:
- self.build_feature_aalt_()
- if "head" in tables:
- self.build_head()
- if "hhea" in tables:
- self.build_hhea()
- if "vhea" in tables:
- self.build_vhea()
- if "name" in tables:
- self.build_name()
- if "OS/2" in tables:
- self.build_OS_2()
- if "STAT" in tables:
- self.build_STAT()
- for tag in ("GPOS", "GSUB"):
- if tag not in tables:
- continue
- table = self.makeTable(tag)
- if self.feature_variations_:
- self.makeFeatureVariations(table, tag)
- if (
- table.ScriptList.ScriptCount > 0
- or table.FeatureList.FeatureCount > 0
- or table.LookupList.LookupCount > 0
- ):
- fontTable = self.font[tag] = newTable(tag)
- fontTable.table = table
- elif tag in self.font:
- del self.font[tag]
- if any(tag in self.font for tag in ("GPOS", "GSUB")) and "OS/2" in self.font:
- self.font["OS/2"].usMaxContext = maxCtxFont(self.font)
- if "GDEF" in tables:
- gdef = self.buildGDEF()
- if gdef:
- self.font["GDEF"] = gdef
- elif "GDEF" in self.font:
- del self.font["GDEF"]
- if "BASE" in tables:
- base = self.buildBASE()
- if base:
- self.font["BASE"] = base
- elif "BASE" in self.font:
- del self.font["BASE"]
- if debug or os.environ.get(LOOKUP_DEBUG_ENV_VAR):
- self.buildDebg()
-
- def get_chained_lookup_(self, location, builder_class):
- result = builder_class(self.font, location)
- result.lookupflag = self.lookupflag_
- result.markFilterSet = self.lookupflag_markFilterSet_
- self.lookups_.append(result)
- return result
-
- def add_lookup_to_feature_(self, lookup, feature_name):
- for script, lang in self.language_systems:
- key = (script, lang, feature_name)
- self.features_.setdefault(key, []).append(lookup)
-
- def get_lookup_(self, location, builder_class):
- if (
- self.cur_lookup_
- and type(self.cur_lookup_) == builder_class
- and self.cur_lookup_.lookupflag == self.lookupflag_
- and self.cur_lookup_.markFilterSet == self.lookupflag_markFilterSet_
- ):
- return self.cur_lookup_
- if self.cur_lookup_name_ and self.cur_lookup_:
- raise FeatureLibError(
- "Within a named lookup block, all rules must be of "
- "the same lookup type and flag",
- location,
- )
- self.cur_lookup_ = builder_class(self.font, location)
- self.cur_lookup_.lookupflag = self.lookupflag_
- self.cur_lookup_.markFilterSet = self.lookupflag_markFilterSet_
- self.lookups_.append(self.cur_lookup_)
- if self.cur_lookup_name_:
- # We are starting a lookup rule inside a named lookup block.
- self.named_lookups_[self.cur_lookup_name_] = self.cur_lookup_
- if self.cur_feature_name_:
- # We are starting a lookup rule inside a feature. This includes
- # lookup rules inside named lookups inside features.
- self.add_lookup_to_feature_(self.cur_lookup_, self.cur_feature_name_)
- return self.cur_lookup_
-
- def build_feature_aalt_(self):
- if not self.aalt_features_ and not self.aalt_alternates_:
- return
- alternates = {g: set(a) for g, a in self.aalt_alternates_.items()}
- for location, name in self.aalt_features_ + [(None, "aalt")]:
- feature = [
- (script, lang, feature, lookups)
- for (script, lang, feature), lookups in self.features_.items()
- if feature == name
- ]
- # "aalt" does not have to specify its own lookups, but it might.
- if not feature and name != "aalt":
- warnings.warn("%s: Feature %s has not been defined" % (location, name))
- continue
- for script, lang, feature, lookups in feature:
- for lookuplist in lookups:
- if not isinstance(lookuplist, list):
- lookuplist = [lookuplist]
- for lookup in lookuplist:
- for glyph, alts in lookup.getAlternateGlyphs().items():
- alternates.setdefault(glyph, set()).update(alts)
- single = {
- glyph: list(repl)[0] for glyph, repl in alternates.items() if len(repl) == 1
- }
- # TODO: Figure out the glyph alternate ordering used by makeotf.
- # https://github.com/fonttools/fonttools/issues/836
- multi = {
- glyph: sorted(repl, key=self.font.getGlyphID)
- for glyph, repl in alternates.items()
- if len(repl) > 1
- }
- if not single and not multi:
- return
- self.features_ = {
- (script, lang, feature): lookups
- for (script, lang, feature), lookups in self.features_.items()
- if feature != "aalt"
- }
- old_lookups = self.lookups_
- self.lookups_ = []
- self.start_feature(self.aalt_location_, "aalt")
- if single:
- single_lookup = self.get_lookup_(location, SingleSubstBuilder)
- single_lookup.mapping = single
- if multi:
- multi_lookup = self.get_lookup_(location, AlternateSubstBuilder)
- multi_lookup.alternates = multi
- self.end_feature()
- self.lookups_.extend(old_lookups)
-
- def build_head(self):
- if not self.fontRevision_:
- return
- table = self.font.get("head")
- if not table: # this only happens for unit tests
- table = self.font["head"] = newTable("head")
- table.decompile(b"\0" * 54, self.font)
- table.tableVersion = 1.0
- table.created = table.modified = 3406620153 # 2011-12-13 11:22:33
- table.fontRevision = self.fontRevision_
-
- def build_hhea(self):
- if not self.hhea_:
- return
- table = self.font.get("hhea")
- if not table: # this only happens for unit tests
- table = self.font["hhea"] = newTable("hhea")
- table.decompile(b"\0" * 36, self.font)
- table.tableVersion = 0x00010000
- if "caretoffset" in self.hhea_:
- table.caretOffset = self.hhea_["caretoffset"]
- if "ascender" in self.hhea_:
- table.ascent = self.hhea_["ascender"]
- if "descender" in self.hhea_:
- table.descent = self.hhea_["descender"]
- if "linegap" in self.hhea_:
- table.lineGap = self.hhea_["linegap"]
-
- def build_vhea(self):
- if not self.vhea_:
- return
- table = self.font.get("vhea")
- if not table: # this only happens for unit tests
- table = self.font["vhea"] = newTable("vhea")
- table.decompile(b"\0" * 36, self.font)
- table.tableVersion = 0x00011000
- if "verttypoascender" in self.vhea_:
- table.ascent = self.vhea_["verttypoascender"]
- if "verttypodescender" in self.vhea_:
- table.descent = self.vhea_["verttypodescender"]
- if "verttypolinegap" in self.vhea_:
- table.lineGap = self.vhea_["verttypolinegap"]
-
- def get_user_name_id(self, table):
- # Try to find first unused font-specific name id
- nameIDs = [name.nameID for name in table.names]
- for user_name_id in range(256, 32767):
- if user_name_id not in nameIDs:
- return user_name_id
-
- def buildFeatureParams(self, tag):
- params = None
- if tag == "size":
- params = otTables.FeatureParamsSize()
- (
- params.DesignSize,
- params.SubfamilyID,
- params.RangeStart,
- params.RangeEnd,
- ) = self.size_parameters_
- if tag in self.featureNames_ids_:
- params.SubfamilyNameID = self.featureNames_ids_[tag]
- else:
- params.SubfamilyNameID = 0
- elif tag in self.featureNames_:
- if not self.featureNames_ids_:
- # name table wasn't selected among the tables to build; skip
- pass
- else:
- assert tag in self.featureNames_ids_
- params = otTables.FeatureParamsStylisticSet()
- params.Version = 0
- params.UINameID = self.featureNames_ids_[tag]
- elif tag in self.cv_parameters_:
- params = otTables.FeatureParamsCharacterVariants()
- params.Format = 0
- params.FeatUILabelNameID = self.cv_parameters_ids_.get(
- (tag, "FeatUILabelNameID"), 0
- )
- params.FeatUITooltipTextNameID = self.cv_parameters_ids_.get(
- (tag, "FeatUITooltipTextNameID"), 0
- )
- params.SampleTextNameID = self.cv_parameters_ids_.get(
- (tag, "SampleTextNameID"), 0
- )
- params.NumNamedParameters = self.cv_num_named_params_.get(tag, 0)
- params.FirstParamUILabelNameID = self.cv_parameters_ids_.get(
- (tag, "ParamUILabelNameID_0"), 0
- )
- params.CharCount = len(self.cv_characters_[tag])
- params.Character = self.cv_characters_[tag]
- return params
-
- def build_name(self):
- if not self.names_:
- return
- table = self.font.get("name")
- if not table: # this only happens for unit tests
- table = self.font["name"] = newTable("name")
- table.names = []
- for name in self.names_:
- nameID, platformID, platEncID, langID, string = name
- # For featureNames block, nameID is 'feature tag'
- # For cvParameters blocks, nameID is ('feature tag', 'block name')
- if not isinstance(nameID, int):
- tag = nameID
- if tag in self.featureNames_:
- if tag not in self.featureNames_ids_:
- self.featureNames_ids_[tag] = self.get_user_name_id(table)
- assert self.featureNames_ids_[tag] is not None
- nameID = self.featureNames_ids_[tag]
- elif tag[0] in self.cv_parameters_:
- if tag not in self.cv_parameters_ids_:
- self.cv_parameters_ids_[tag] = self.get_user_name_id(table)
- assert self.cv_parameters_ids_[tag] is not None
- nameID = self.cv_parameters_ids_[tag]
- table.setName(string, nameID, platformID, platEncID, langID)
- table.names.sort()
-
- def build_OS_2(self):
- if not self.os2_:
- return
- table = self.font.get("OS/2")
- if not table: # this only happens for unit tests
- table = self.font["OS/2"] = newTable("OS/2")
- data = b"\0" * sstruct.calcsize(getTableModule("OS/2").OS2_format_0)
- table.decompile(data, self.font)
- version = 0
- if "fstype" in self.os2_:
- table.fsType = self.os2_["fstype"]
- if "panose" in self.os2_:
- panose = getTableModule("OS/2").Panose()
- (
- panose.bFamilyType,
- panose.bSerifStyle,
- panose.bWeight,
- panose.bProportion,
- panose.bContrast,
- panose.bStrokeVariation,
- panose.bArmStyle,
- panose.bLetterForm,
- panose.bMidline,
- panose.bXHeight,
- ) = self.os2_["panose"]
- table.panose = panose
- if "typoascender" in self.os2_:
- table.sTypoAscender = self.os2_["typoascender"]
- if "typodescender" in self.os2_:
- table.sTypoDescender = self.os2_["typodescender"]
- if "typolinegap" in self.os2_:
- table.sTypoLineGap = self.os2_["typolinegap"]
- if "winascent" in self.os2_:
- table.usWinAscent = self.os2_["winascent"]
- if "windescent" in self.os2_:
- table.usWinDescent = self.os2_["windescent"]
- if "vendor" in self.os2_:
- table.achVendID = safeEval("'''" + self.os2_["vendor"] + "'''")
- if "weightclass" in self.os2_:
- table.usWeightClass = self.os2_["weightclass"]
- if "widthclass" in self.os2_:
- table.usWidthClass = self.os2_["widthclass"]
- if "unicoderange" in self.os2_:
- table.setUnicodeRanges(self.os2_["unicoderange"])
- if "codepagerange" in self.os2_:
- pages = self.build_codepages_(self.os2_["codepagerange"])
- table.ulCodePageRange1, table.ulCodePageRange2 = pages
- version = 1
- if "xheight" in self.os2_:
- table.sxHeight = self.os2_["xheight"]
- version = 2
- if "capheight" in self.os2_:
- table.sCapHeight = self.os2_["capheight"]
- version = 2
- if "loweropsize" in self.os2_:
- table.usLowerOpticalPointSize = self.os2_["loweropsize"]
- version = 5
- if "upperopsize" in self.os2_:
- table.usUpperOpticalPointSize = self.os2_["upperopsize"]
- version = 5
-
- def checkattr(table, attrs):
- for attr in attrs:
- if not hasattr(table, attr):
- setattr(table, attr, 0)
-
- table.version = max(version, table.version)
- # this only happens for unit tests
- if version >= 1:
- checkattr(table, ("ulCodePageRange1", "ulCodePageRange2"))
- if version >= 2:
- checkattr(
- table,
- (
- "sxHeight",
- "sCapHeight",
- "usDefaultChar",
- "usBreakChar",
- "usMaxContext",
- ),
- )
- if version >= 5:
- checkattr(table, ("usLowerOpticalPointSize", "usUpperOpticalPointSize"))
-
- def setElidedFallbackName(self, value, location):
- # ElidedFallbackName is a convenience method for setting
- # ElidedFallbackNameID so only one can be allowed
- for token in ("ElidedFallbackName", "ElidedFallbackNameID"):
- if token in self.stat_:
- raise FeatureLibError(
- f"{token} is already set.",
- location,
- )
- if isinstance(value, int):
- self.stat_["ElidedFallbackNameID"] = value
- elif isinstance(value, list):
- self.stat_["ElidedFallbackName"] = value
- else:
- raise AssertionError(value)
-
- def addDesignAxis(self, designAxis, location):
- if "DesignAxes" not in self.stat_:
- self.stat_["DesignAxes"] = []
- if designAxis.tag in (r.tag for r in self.stat_["DesignAxes"]):
- raise FeatureLibError(
- f'DesignAxis already defined for tag "{designAxis.tag}".',
- location,
- )
- if designAxis.axisOrder in (r.axisOrder for r in self.stat_["DesignAxes"]):
- raise FeatureLibError(
- f"DesignAxis already defined for axis number {designAxis.axisOrder}.",
- location,
- )
- self.stat_["DesignAxes"].append(designAxis)
-
- def addAxisValueRecord(self, axisValueRecord, location):
- if "AxisValueRecords" not in self.stat_:
- self.stat_["AxisValueRecords"] = []
- # Check for duplicate AxisValueRecords
- for record_ in self.stat_["AxisValueRecords"]:
- if (
- {n.asFea() for n in record_.names}
- == {n.asFea() for n in axisValueRecord.names}
- and {n.asFea() for n in record_.locations}
- == {n.asFea() for n in axisValueRecord.locations}
- and record_.flags == axisValueRecord.flags
- ):
- raise FeatureLibError(
- "An AxisValueRecord with these values is already defined.",
- location,
- )
- self.stat_["AxisValueRecords"].append(axisValueRecord)
-
- def build_STAT(self):
- if not self.stat_:
- return
-
- axes = self.stat_.get("DesignAxes")
- if not axes:
- raise FeatureLibError("DesignAxes not defined", None)
- axisValueRecords = self.stat_.get("AxisValueRecords")
- axisValues = {}
- format4_locations = []
- for tag in axes:
- axisValues[tag.tag] = []
- if axisValueRecords is not None:
- for avr in axisValueRecords:
- valuesDict = {}
- if avr.flags > 0:
- valuesDict["flags"] = avr.flags
- if len(avr.locations) == 1:
- location = avr.locations[0]
- values = location.values
- if len(values) == 1: # format1
- valuesDict.update({"value": values[0], "name": avr.names})
- if len(values) == 2: # format3
- valuesDict.update(
- {
- "value": values[0],
- "linkedValue": values[1],
- "name": avr.names,
- }
- )
- if len(values) == 3: # format2
- nominal, minVal, maxVal = values
- valuesDict.update(
- {
- "nominalValue": nominal,
- "rangeMinValue": minVal,
- "rangeMaxValue": maxVal,
- "name": avr.names,
- }
- )
- axisValues[location.tag].append(valuesDict)
- else:
- valuesDict.update(
- {
- "location": {i.tag: i.values[0] for i in avr.locations},
- "name": avr.names,
- }
- )
- format4_locations.append(valuesDict)
-
- designAxes = [
- {
- "ordering": a.axisOrder,
- "tag": a.tag,
- "name": a.names,
- "values": axisValues[a.tag],
- }
- for a in axes
- ]
-
- nameTable = self.font.get("name")
- if not nameTable: # this only happens for unit tests
- nameTable = self.font["name"] = newTable("name")
- nameTable.names = []
-
- if "ElidedFallbackNameID" in self.stat_:
- nameID = self.stat_["ElidedFallbackNameID"]
- name = nameTable.getDebugName(nameID)
- if not name:
- raise FeatureLibError(
- f"ElidedFallbackNameID {nameID} points "
- "to a nameID that does not exist in the "
- '"name" table',
- None,
- )
- elif "ElidedFallbackName" in self.stat_:
- nameID = self.stat_["ElidedFallbackName"]
-
- otl.buildStatTable(
- self.font,
- designAxes,
- locations=format4_locations,
- elidedFallbackName=nameID,
- )
-
- def build_codepages_(self, pages):
- pages2bits = {
- 1252: 0,
- 1250: 1,
- 1251: 2,
- 1253: 3,
- 1254: 4,
- 1255: 5,
- 1256: 6,
- 1257: 7,
- 1258: 8,
- 874: 16,
- 932: 17,
- 936: 18,
- 949: 19,
- 950: 20,
- 1361: 21,
- 869: 48,
- 866: 49,
- 865: 50,
- 864: 51,
- 863: 52,
- 862: 53,
- 861: 54,
- 860: 55,
- 857: 56,
- 855: 57,
- 852: 58,
- 775: 59,
- 737: 60,
- 708: 61,
- 850: 62,
- 437: 63,
- }
- bits = [pages2bits[p] for p in pages if p in pages2bits]
- pages = []
- for i in range(2):
- pages.append("")
- for j in range(i * 32, (i + 1) * 32):
- if j in bits:
- pages[i] += "1"
- else:
- pages[i] += "0"
- return [binary2num(p[::-1]) for p in pages]
-
- def buildBASE(self):
- if not self.base_horiz_axis_ and not self.base_vert_axis_:
- return None
- base = otTables.BASE()
- base.Version = 0x00010000
- base.HorizAxis = self.buildBASEAxis(self.base_horiz_axis_)
- base.VertAxis = self.buildBASEAxis(self.base_vert_axis_)
-
- result = newTable("BASE")
- result.table = base
- return result
-
- def buildBASEAxis(self, axis):
- if not axis:
- return
- bases, scripts = axis
- axis = otTables.Axis()
- axis.BaseTagList = otTables.BaseTagList()
- axis.BaseTagList.BaselineTag = bases
- axis.BaseTagList.BaseTagCount = len(bases)
- axis.BaseScriptList = otTables.BaseScriptList()
- axis.BaseScriptList.BaseScriptRecord = []
- axis.BaseScriptList.BaseScriptCount = len(scripts)
- for script in sorted(scripts):
- record = otTables.BaseScriptRecord()
- record.BaseScriptTag = script[0]
- record.BaseScript = otTables.BaseScript()
- record.BaseScript.BaseLangSysCount = 0
- record.BaseScript.BaseValues = otTables.BaseValues()
- record.BaseScript.BaseValues.DefaultIndex = bases.index(script[1])
- record.BaseScript.BaseValues.BaseCoord = []
- record.BaseScript.BaseValues.BaseCoordCount = len(script[2])
- for c in script[2]:
- coord = otTables.BaseCoord()
- coord.Format = 1
- coord.Coordinate = c
- record.BaseScript.BaseValues.BaseCoord.append(coord)
- axis.BaseScriptList.BaseScriptRecord.append(record)
- return axis
-
- def buildGDEF(self):
- gdef = otTables.GDEF()
- gdef.GlyphClassDef = self.buildGDEFGlyphClassDef_()
- gdef.AttachList = otl.buildAttachList(self.attachPoints_, self.glyphMap)
- gdef.LigCaretList = otl.buildLigCaretList(
- self.ligCaretCoords_, self.ligCaretPoints_, self.glyphMap
- )
- gdef.MarkAttachClassDef = self.buildGDEFMarkAttachClassDef_()
- gdef.MarkGlyphSetsDef = self.buildGDEFMarkGlyphSetsDef_()
- gdef.Version = 0x00010002 if gdef.MarkGlyphSetsDef else 0x00010000
- if self.varstorebuilder:
- store = self.varstorebuilder.finish()
- if store:
- gdef.Version = 0x00010003
- gdef.VarStore = store
- varidx_map = store.optimize()
-
- gdef.remap_device_varidxes(varidx_map)
- if "GPOS" in self.font:
- self.font["GPOS"].table.remap_device_varidxes(varidx_map)
- self.model_cache.clear()
- if any(
- (
- gdef.GlyphClassDef,
- gdef.AttachList,
- gdef.LigCaretList,
- gdef.MarkAttachClassDef,
- gdef.MarkGlyphSetsDef,
- )
- ) or hasattr(gdef, "VarStore"):
- result = newTable("GDEF")
- result.table = gdef
- return result
- else:
- return None
-
- def buildGDEFGlyphClassDef_(self):
- if self.glyphClassDefs_:
- classes = {g: c for (g, (c, _)) in self.glyphClassDefs_.items()}
- else:
- classes = {}
- for lookup in self.lookups_:
- classes.update(lookup.inferGlyphClasses())
- for markClass in self.parseTree.markClasses.values():
- for markClassDef in markClass.definitions:
- for glyph in markClassDef.glyphSet():
- classes[glyph] = 3
- if classes:
- result = otTables.GlyphClassDef()
- result.classDefs = classes
- return result
- else:
- return None
-
- def buildGDEFMarkAttachClassDef_(self):
- classDefs = {g: c for g, (c, _) in self.markAttach_.items()}
- if not classDefs:
- return None
- result = otTables.MarkAttachClassDef()
- result.classDefs = classDefs
- return result
-
- def buildGDEFMarkGlyphSetsDef_(self):
- sets = []
- for glyphs, id_ in sorted(
- self.markFilterSets_.items(), key=lambda item: item[1]
- ):
- sets.append(glyphs)
- return otl.buildMarkGlyphSetsDef(sets, self.glyphMap)
-
- def buildDebg(self):
- if "Debg" not in self.font:
- self.font["Debg"] = newTable("Debg")
- self.font["Debg"].data = {}
- self.font["Debg"].data[LOOKUP_DEBUG_INFO_KEY] = self.lookup_locations
-
- def buildLookups_(self, tag):
- assert tag in ("GPOS", "GSUB"), tag
- for lookup in self.lookups_:
- lookup.lookup_index = None
- lookups = []
- for lookup in self.lookups_:
- if lookup.table != tag:
- continue
- lookup.lookup_index = len(lookups)
- self.lookup_locations[tag][str(lookup.lookup_index)] = LookupDebugInfo(
- location=str(lookup.location),
- name=self.get_lookup_name_(lookup),
- feature=None,
- )
- lookups.append(lookup)
- try:
- otLookups = [l.build() for l in lookups]
- except OpenTypeLibError as e:
- raise FeatureLibError(str(e), e.location) from e
- return otLookups
-
- def makeTable(self, tag):
- table = getattr(otTables, tag, None)()
- table.Version = 0x00010000
- table.ScriptList = otTables.ScriptList()
- table.ScriptList.ScriptRecord = []
- table.FeatureList = otTables.FeatureList()
- table.FeatureList.FeatureRecord = []
- table.LookupList = otTables.LookupList()
- table.LookupList.Lookup = self.buildLookups_(tag)
-
- # Build a table for mapping (tag, lookup_indices) to feature_index.
- # For example, ('liga', (2,3,7)) --> 23.
- feature_indices = {}
- required_feature_indices = {} # ('latn', 'DEU') --> 23
- scripts = {} # 'latn' --> {'DEU': [23, 24]} for feature #23,24
- # Sort the feature table by feature tag:
- # https://github.com/fonttools/fonttools/issues/568
- sortFeatureTag = lambda f: (f[0][2], f[0][1], f[0][0], f[1])
- for key, lookups in sorted(self.features_.items(), key=sortFeatureTag):
- script, lang, feature_tag = key
- # l.lookup_index will be None when a lookup is not needed
- # for the table under construction. For example, substitution
- # rules will have no lookup_index while building GPOS tables.
- lookup_indices = tuple(
- [l.lookup_index for l in lookups if l.lookup_index is not None]
- )
-
- size_feature = tag == "GPOS" and feature_tag == "size"
- force_feature = self.any_feature_variations(feature_tag, tag)
- if len(lookup_indices) == 0 and not size_feature and not force_feature:
- continue
-
- for ix in lookup_indices:
- try:
- self.lookup_locations[tag][str(ix)] = self.lookup_locations[tag][
- str(ix)
- ]._replace(feature=key)
- except KeyError:
- warnings.warn(
- "feaLib.Builder subclass needs upgrading to "
- "stash debug information. See fonttools#2065."
- )
-
- feature_key = (feature_tag, lookup_indices)
- feature_index = feature_indices.get(feature_key)
- if feature_index is None:
- feature_index = len(table.FeatureList.FeatureRecord)
- frec = otTables.FeatureRecord()
- frec.FeatureTag = feature_tag
- frec.Feature = otTables.Feature()
- frec.Feature.FeatureParams = self.buildFeatureParams(feature_tag)
- frec.Feature.LookupListIndex = list(lookup_indices)
- frec.Feature.LookupCount = len(lookup_indices)
- table.FeatureList.FeatureRecord.append(frec)
- feature_indices[feature_key] = feature_index
- scripts.setdefault(script, {}).setdefault(lang, []).append(feature_index)
- if self.required_features_.get((script, lang)) == feature_tag:
- required_feature_indices[(script, lang)] = feature_index
-
- # Build ScriptList.
- for script, lang_features in sorted(scripts.items()):
- srec = otTables.ScriptRecord()
- srec.ScriptTag = script
- srec.Script = otTables.Script()
- srec.Script.DefaultLangSys = None
- srec.Script.LangSysRecord = []
- for lang, feature_indices in sorted(lang_features.items()):
- langrec = otTables.LangSysRecord()
- langrec.LangSys = otTables.LangSys()
- langrec.LangSys.LookupOrder = None
-
- req_feature_index = required_feature_indices.get((script, lang))
- if req_feature_index is None:
- langrec.LangSys.ReqFeatureIndex = 0xFFFF
- else:
- langrec.LangSys.ReqFeatureIndex = req_feature_index
-
- langrec.LangSys.FeatureIndex = [
- i for i in feature_indices if i != req_feature_index
- ]
- langrec.LangSys.FeatureCount = len(langrec.LangSys.FeatureIndex)
-
- if lang == "dflt":
- srec.Script.DefaultLangSys = langrec.LangSys
- else:
- langrec.LangSysTag = lang
- srec.Script.LangSysRecord.append(langrec)
- srec.Script.LangSysCount = len(srec.Script.LangSysRecord)
- table.ScriptList.ScriptRecord.append(srec)
-
- table.ScriptList.ScriptCount = len(table.ScriptList.ScriptRecord)
- table.FeatureList.FeatureCount = len(table.FeatureList.FeatureRecord)
- table.LookupList.LookupCount = len(table.LookupList.Lookup)
- return table
-
- def makeFeatureVariations(self, table, table_tag):
- feature_vars = {}
- has_any_variations = False
- # Sort out which lookups to build, gather their indices
- for (_, _, feature_tag), variations in self.feature_variations_.items():
- feature_vars[feature_tag] = []
- for conditionset, builders in variations.items():
- raw_conditionset = self.conditionsets_[conditionset]
- indices = []
- for b in builders:
- if b.table != table_tag:
- continue
- assert b.lookup_index is not None
- indices.append(b.lookup_index)
- has_any_variations = True
- feature_vars[feature_tag].append((raw_conditionset, indices))
-
- if has_any_variations:
- for feature_tag, conditions_and_lookups in feature_vars.items():
- addFeatureVariationsRaw(
- self.font, table, conditions_and_lookups, feature_tag
- )
-
- def any_feature_variations(self, feature_tag, table_tag):
- for (_, _, feature), variations in self.feature_variations_.items():
- if feature != feature_tag:
- continue
- for conditionset, builders in variations.items():
- if any(b.table == table_tag for b in builders):
- return True
- return False
-
- def get_lookup_name_(self, lookup):
- rev = {v: k for k, v in self.named_lookups_.items()}
- if lookup in rev:
- return rev[lookup]
- return None
-
- def add_language_system(self, location, script, language):
- # OpenType Feature File Specification, section 4.b.i
- if script == "DFLT" and language == "dflt" and self.default_language_systems_:
- raise FeatureLibError(
- 'If "languagesystem DFLT dflt" is present, it must be '
- "the first of the languagesystem statements",
- location,
- )
- if script == "DFLT":
- if self.seen_non_DFLT_script_:
- raise FeatureLibError(
- 'languagesystems using the "DFLT" script tag must '
- "precede all other languagesystems",
- location,
- )
- else:
- self.seen_non_DFLT_script_ = True
- if (script, language) in self.default_language_systems_:
- raise FeatureLibError(
- '"languagesystem %s %s" has already been specified'
- % (script.strip(), language.strip()),
- location,
- )
- self.default_language_systems_.add((script, language))
-
- def get_default_language_systems_(self):
- # OpenType Feature File specification, 4.b.i. languagesystem:
- # If no "languagesystem" statement is present, then the
- # implementation must behave exactly as though the following
- # statement were present at the beginning of the feature file:
- # languagesystem DFLT dflt;
- if self.default_language_systems_:
- return frozenset(self.default_language_systems_)
- else:
- return frozenset({("DFLT", "dflt")})
-
- def start_feature(self, location, name):
- self.language_systems = self.get_default_language_systems_()
- self.script_ = "DFLT"
- self.cur_lookup_ = None
- self.cur_feature_name_ = name
- self.lookupflag_ = 0
- self.lookupflag_markFilterSet_ = None
- if name == "aalt":
- self.aalt_location_ = location
-
- def end_feature(self):
- assert self.cur_feature_name_ is not None
- self.cur_feature_name_ = None
- self.language_systems = None
- self.cur_lookup_ = None
- self.lookupflag_ = 0
- self.lookupflag_markFilterSet_ = None
-
- def start_lookup_block(self, location, name):
- if name in self.named_lookups_:
- raise FeatureLibError(
- 'Lookup "%s" has already been defined' % name, location
- )
- if self.cur_feature_name_ == "aalt":
- raise FeatureLibError(
- "Lookup blocks cannot be placed inside 'aalt' features; "
- "move it out, and then refer to it with a lookup statement",
- location,
- )
- self.cur_lookup_name_ = name
- self.named_lookups_[name] = None
- self.cur_lookup_ = None
- if self.cur_feature_name_ is None:
- self.lookupflag_ = 0
- self.lookupflag_markFilterSet_ = None
-
- def end_lookup_block(self):
- assert self.cur_lookup_name_ is not None
- self.cur_lookup_name_ = None
- self.cur_lookup_ = None
- if self.cur_feature_name_ is None:
- self.lookupflag_ = 0
- self.lookupflag_markFilterSet_ = None
-
- def add_lookup_call(self, lookup_name):
- assert lookup_name in self.named_lookups_, lookup_name
- self.cur_lookup_ = None
- lookup = self.named_lookups_[lookup_name]
- if lookup is not None: # skip empty named lookup
- self.add_lookup_to_feature_(lookup, self.cur_feature_name_)
-
- def set_font_revision(self, location, revision):
- self.fontRevision_ = revision
-
- def set_language(self, location, language, include_default, required):
- assert len(language) == 4
- if self.cur_feature_name_ in ("aalt", "size"):
- raise FeatureLibError(
- "Language statements are not allowed "
- 'within "feature %s"' % self.cur_feature_name_,
- location,
- )
- if self.cur_feature_name_ is None:
- raise FeatureLibError(
- "Language statements are not allowed "
- "within standalone lookup blocks",
- location,
- )
- self.cur_lookup_ = None
-
- key = (self.script_, language, self.cur_feature_name_)
- lookups = self.features_.get((key[0], "dflt", key[2]))
- if (language == "dflt" or include_default) and lookups:
- self.features_[key] = lookups[:]
- else:
- self.features_[key] = []
- self.language_systems = frozenset([(self.script_, language)])
-
- if required:
- key = (self.script_, language)
- if key in self.required_features_:
- raise FeatureLibError(
- "Language %s (script %s) has already "
- "specified feature %s as its required feature"
- % (
- language.strip(),
- self.script_.strip(),
- self.required_features_[key].strip(),
- ),
- location,
- )
- self.required_features_[key] = self.cur_feature_name_
-
- def getMarkAttachClass_(self, location, glyphs):
- glyphs = frozenset(glyphs)
- id_ = self.markAttachClassID_.get(glyphs)
- if id_ is not None:
- return id_
- id_ = len(self.markAttachClassID_) + 1
- self.markAttachClassID_[glyphs] = id_
- for glyph in glyphs:
- if glyph in self.markAttach_:
- _, loc = self.markAttach_[glyph]
- raise FeatureLibError(
- "Glyph %s already has been assigned "
- "a MarkAttachmentType at %s" % (glyph, loc),
- location,
- )
- self.markAttach_[glyph] = (id_, location)
- return id_
-
- def getMarkFilterSet_(self, location, glyphs):
- glyphs = frozenset(glyphs)
- id_ = self.markFilterSets_.get(glyphs)
- if id_ is not None:
- return id_
- id_ = len(self.markFilterSets_)
- self.markFilterSets_[glyphs] = id_
- return id_
-
- def set_lookup_flag(self, location, value, markAttach, markFilter):
- value = value & 0xFF
- if markAttach:
- markAttachClass = self.getMarkAttachClass_(location, markAttach)
- value = value | (markAttachClass << 8)
- if markFilter:
- markFilterSet = self.getMarkFilterSet_(location, markFilter)
- value = value | 0x10
- self.lookupflag_markFilterSet_ = markFilterSet
- else:
- self.lookupflag_markFilterSet_ = None
- self.lookupflag_ = value
-
- def set_script(self, location, script):
- if self.cur_feature_name_ in ("aalt", "size"):
- raise FeatureLibError(
- "Script statements are not allowed "
- 'within "feature %s"' % self.cur_feature_name_,
- location,
- )
- if self.cur_feature_name_ is None:
- raise FeatureLibError(
- "Script statements are not allowed " "within standalone lookup blocks",
- location,
- )
- if self.language_systems == {(script, "dflt")}:
- # Nothing to do.
- return
- self.cur_lookup_ = None
- self.script_ = script
- self.lookupflag_ = 0
- self.lookupflag_markFilterSet_ = None
- self.set_language(location, "dflt", include_default=True, required=False)
-
- def find_lookup_builders_(self, lookups):
- """Helper for building chain contextual substitutions
-
- Given a list of lookup names, finds the LookupBuilder for each name.
- If an input name is None, it gets mapped to a None LookupBuilder.
- """
- lookup_builders = []
- for lookuplist in lookups:
- if lookuplist is not None:
- lookup_builders.append(
- [self.named_lookups_.get(l.name) for l in lookuplist]
- )
- else:
- lookup_builders.append(None)
- return lookup_builders
-
- def add_attach_points(self, location, glyphs, contourPoints):
- for glyph in glyphs:
- self.attachPoints_.setdefault(glyph, set()).update(contourPoints)
-
- def add_feature_reference(self, location, featureName):
- if self.cur_feature_name_ != "aalt":
- raise FeatureLibError(
- 'Feature references are only allowed inside "feature aalt"', location
- )
- self.aalt_features_.append((location, featureName))
-
- def add_featureName(self, tag):
- self.featureNames_.add(tag)
-
- def add_cv_parameter(self, tag):
- self.cv_parameters_.add(tag)
-
- def add_to_cv_num_named_params(self, tag):
- """Adds new items to ``self.cv_num_named_params_``
- or increments the count of existing items."""
- if tag in self.cv_num_named_params_:
- self.cv_num_named_params_[tag] += 1
- else:
- self.cv_num_named_params_[tag] = 1
-
- def add_cv_character(self, character, tag):
- self.cv_characters_[tag].append(character)
-
- def set_base_axis(self, bases, scripts, vertical):
- if vertical:
- self.base_vert_axis_ = (bases, scripts)
- else:
- self.base_horiz_axis_ = (bases, scripts)
-
- def set_size_parameters(
- self, location, DesignSize, SubfamilyID, RangeStart, RangeEnd
- ):
- if self.cur_feature_name_ != "size":
- raise FeatureLibError(
- "Parameters statements are not allowed "
- 'within "feature %s"' % self.cur_feature_name_,
- location,
- )
- self.size_parameters_ = [DesignSize, SubfamilyID, RangeStart, RangeEnd]
- for script, lang in self.language_systems:
- key = (script, lang, self.cur_feature_name_)
- self.features_.setdefault(key, [])
-
- # GSUB rules
-
- # GSUB 1
- def add_single_subst(self, location, prefix, suffix, mapping, forceChain):
- if self.cur_feature_name_ == "aalt":
- for from_glyph, to_glyph in mapping.items():
- alts = self.aalt_alternates_.setdefault(from_glyph, set())
- alts.add(to_glyph)
- return
- if prefix or suffix or forceChain:
- self.add_single_subst_chained_(location, prefix, suffix, mapping)
- return
- lookup = self.get_lookup_(location, SingleSubstBuilder)
- for from_glyph, to_glyph in mapping.items():
- if from_glyph in lookup.mapping:
- if to_glyph == lookup.mapping[from_glyph]:
- log.info(
- "Removing duplicate single substitution from glyph"
- ' "%s" to "%s" at %s',
- from_glyph,
- to_glyph,
- location,
- )
- else:
- raise FeatureLibError(
- 'Already defined rule for replacing glyph "%s" by "%s"'
- % (from_glyph, lookup.mapping[from_glyph]),
- location,
- )
- lookup.mapping[from_glyph] = to_glyph
-
- # GSUB 2
- def add_multiple_subst(
- self, location, prefix, glyph, suffix, replacements, forceChain=False
- ):
- if prefix or suffix or forceChain:
- chain = self.get_lookup_(location, ChainContextSubstBuilder)
- sub = self.get_chained_lookup_(location, MultipleSubstBuilder)
- sub.mapping[glyph] = replacements
- chain.rules.append(ChainContextualRule(prefix, [{glyph}], suffix, [sub]))
- return
- lookup = self.get_lookup_(location, MultipleSubstBuilder)
- if glyph in lookup.mapping:
- if replacements == lookup.mapping[glyph]:
- log.info(
- "Removing duplicate multiple substitution from glyph"
- ' "%s" to %s%s',
- glyph,
- replacements,
- f" at {location}" if location else "",
- )
- else:
- raise FeatureLibError(
- 'Already defined substitution for glyph "%s"' % glyph, location
- )
- lookup.mapping[glyph] = replacements
-
- # GSUB 3
- def add_alternate_subst(self, location, prefix, glyph, suffix, replacement):
- if self.cur_feature_name_ == "aalt":
- alts = self.aalt_alternates_.setdefault(glyph, set())
- alts.update(replacement)
- return
- if prefix or suffix:
- chain = self.get_lookup_(location, ChainContextSubstBuilder)
- lookup = self.get_chained_lookup_(location, AlternateSubstBuilder)
- chain.rules.append(ChainContextualRule(prefix, [{glyph}], suffix, [lookup]))
- else:
- lookup = self.get_lookup_(location, AlternateSubstBuilder)
- if glyph in lookup.alternates:
- raise FeatureLibError(
- 'Already defined alternates for glyph "%s"' % glyph, location
- )
- # We allow empty replacement glyphs here.
- lookup.alternates[glyph] = replacement
-
- # GSUB 4
- def add_ligature_subst(
- self, location, prefix, glyphs, suffix, replacement, forceChain
- ):
- if prefix or suffix or forceChain:
- chain = self.get_lookup_(location, ChainContextSubstBuilder)
- lookup = self.get_chained_lookup_(location, LigatureSubstBuilder)
- chain.rules.append(ChainContextualRule(prefix, glyphs, suffix, [lookup]))
- else:
- lookup = self.get_lookup_(location, LigatureSubstBuilder)
-
- if not all(glyphs):
- raise FeatureLibError("Empty glyph class in substitution", location)
-
- # OpenType feature file syntax, section 5.d, "Ligature substitution":
- # "Since the OpenType specification does not allow ligature
- # substitutions to be specified on target sequences that contain
- # glyph classes, the implementation software will enumerate
- # all specific glyph sequences if glyph classes are detected"
- for g in sorted(itertools.product(*glyphs)):
- lookup.ligatures[g] = replacement
-
- # GSUB 5/6
- def add_chain_context_subst(self, location, prefix, glyphs, suffix, lookups):
- if not all(glyphs) or not all(prefix) or not all(suffix):
- raise FeatureLibError(
- "Empty glyph class in contextual substitution", location
- )
- lookup = self.get_lookup_(location, ChainContextSubstBuilder)
- lookup.rules.append(
- ChainContextualRule(
- prefix, glyphs, suffix, self.find_lookup_builders_(lookups)
- )
- )
-
- def add_single_subst_chained_(self, location, prefix, suffix, mapping):
- if not mapping or not all(prefix) or not all(suffix):
- raise FeatureLibError(
- "Empty glyph class in contextual substitution", location
- )
- # https://github.com/fonttools/fonttools/issues/512
- # https://github.com/fonttools/fonttools/issues/2150
- chain = self.get_lookup_(location, ChainContextSubstBuilder)
- sub = chain.find_chainable_single_subst(mapping)
- if sub is None:
- sub = self.get_chained_lookup_(location, SingleSubstBuilder)
- sub.mapping.update(mapping)
- chain.rules.append(
- ChainContextualRule(prefix, [list(mapping.keys())], suffix, [sub])
- )
-
- # GSUB 8
- def add_reverse_chain_single_subst(self, location, old_prefix, old_suffix, mapping):
- if not mapping:
- raise FeatureLibError("Empty glyph class in substitution", location)
- lookup = self.get_lookup_(location, ReverseChainSingleSubstBuilder)
- lookup.rules.append((old_prefix, old_suffix, mapping))
-
- # GPOS rules
-
- # GPOS 1
- def add_single_pos(self, location, prefix, suffix, pos, forceChain):
- if prefix or suffix or forceChain:
- self.add_single_pos_chained_(location, prefix, suffix, pos)
- else:
- lookup = self.get_lookup_(location, SinglePosBuilder)
- for glyphs, value in pos:
- if not glyphs:
- raise FeatureLibError(
- "Empty glyph class in positioning rule", location
- )
- otValueRecord = self.makeOpenTypeValueRecord(
- location, value, pairPosContext=False
- )
- for glyph in glyphs:
- try:
- lookup.add_pos(location, glyph, otValueRecord)
- except OpenTypeLibError as e:
- raise FeatureLibError(str(e), e.location) from e
-
- # GPOS 2
- def add_class_pair_pos(self, location, glyphclass1, value1, glyphclass2, value2):
- if not glyphclass1 or not glyphclass2:
- raise FeatureLibError("Empty glyph class in positioning rule", location)
- lookup = self.get_lookup_(location, PairPosBuilder)
- v1 = self.makeOpenTypeValueRecord(location, value1, pairPosContext=True)
- v2 = self.makeOpenTypeValueRecord(location, value2, pairPosContext=True)
- lookup.addClassPair(location, glyphclass1, v1, glyphclass2, v2)
-
- def add_specific_pair_pos(self, location, glyph1, value1, glyph2, value2):
- if not glyph1 or not glyph2:
- raise FeatureLibError("Empty glyph class in positioning rule", location)
- lookup = self.get_lookup_(location, PairPosBuilder)
- v1 = self.makeOpenTypeValueRecord(location, value1, pairPosContext=True)
- v2 = self.makeOpenTypeValueRecord(location, value2, pairPosContext=True)
- lookup.addGlyphPair(location, glyph1, v1, glyph2, v2)
-
- # GPOS 3
- def add_cursive_pos(self, location, glyphclass, entryAnchor, exitAnchor):
- if not glyphclass:
- raise FeatureLibError("Empty glyph class in positioning rule", location)
- lookup = self.get_lookup_(location, CursivePosBuilder)
- lookup.add_attachment(
- location,
- glyphclass,
- self.makeOpenTypeAnchor(location, entryAnchor),
- self.makeOpenTypeAnchor(location, exitAnchor),
- )
-
- # GPOS 4
- def add_mark_base_pos(self, location, bases, marks):
- builder = self.get_lookup_(location, MarkBasePosBuilder)
- self.add_marks_(location, builder, marks)
- if not bases:
- raise FeatureLibError("Empty glyph class in positioning rule", location)
- for baseAnchor, markClass in marks:
- otBaseAnchor = self.makeOpenTypeAnchor(location, baseAnchor)
- for base in bases:
- builder.bases.setdefault(base, {})[markClass.name] = otBaseAnchor
-
- # GPOS 5
- def add_mark_lig_pos(self, location, ligatures, components):
- builder = self.get_lookup_(location, MarkLigPosBuilder)
- componentAnchors = []
- if not ligatures:
- raise FeatureLibError("Empty glyph class in positioning rule", location)
- for marks in components:
- anchors = {}
- self.add_marks_(location, builder, marks)
- for ligAnchor, markClass in marks:
- anchors[markClass.name] = self.makeOpenTypeAnchor(location, ligAnchor)
- componentAnchors.append(anchors)
- for glyph in ligatures:
- builder.ligatures[glyph] = componentAnchors
-
- # GPOS 6
- def add_mark_mark_pos(self, location, baseMarks, marks):
- builder = self.get_lookup_(location, MarkMarkPosBuilder)
- self.add_marks_(location, builder, marks)
- if not baseMarks:
- raise FeatureLibError("Empty glyph class in positioning rule", location)
- for baseAnchor, markClass in marks:
- otBaseAnchor = self.makeOpenTypeAnchor(location, baseAnchor)
- for baseMark in baseMarks:
- builder.baseMarks.setdefault(baseMark, {})[
- markClass.name
- ] = otBaseAnchor
-
- # GPOS 7/8
- def add_chain_context_pos(self, location, prefix, glyphs, suffix, lookups):
- if not all(glyphs) or not all(prefix) or not all(suffix):
- raise FeatureLibError(
- "Empty glyph class in contextual positioning rule", location
- )
- lookup = self.get_lookup_(location, ChainContextPosBuilder)
- lookup.rules.append(
- ChainContextualRule(
- prefix, glyphs, suffix, self.find_lookup_builders_(lookups)
- )
- )
-
- def add_single_pos_chained_(self, location, prefix, suffix, pos):
- if not pos or not all(prefix) or not all(suffix):
- raise FeatureLibError(
- "Empty glyph class in contextual positioning rule", location
- )
- # https://github.com/fonttools/fonttools/issues/514
- chain = self.get_lookup_(location, ChainContextPosBuilder)
- targets = []
- for _, _, _, lookups in chain.rules:
- targets.extend(lookups)
- subs = []
- for glyphs, value in pos:
- if value is None:
- subs.append(None)
- continue
- otValue = self.makeOpenTypeValueRecord(
- location, value, pairPosContext=False
- )
- sub = chain.find_chainable_single_pos(targets, glyphs, otValue)
- if sub is None:
- sub = self.get_chained_lookup_(location, SinglePosBuilder)
- targets.append(sub)
- for glyph in glyphs:
- sub.add_pos(location, glyph, otValue)
- subs.append(sub)
- assert len(pos) == len(subs), (pos, subs)
- chain.rules.append(
- ChainContextualRule(prefix, [g for g, v in pos], suffix, subs)
- )
-
- def add_marks_(self, location, lookupBuilder, marks):
- """Helper for add_mark_{base,liga,mark}_pos."""
- for _, markClass in marks:
- for markClassDef in markClass.definitions:
- for mark in markClassDef.glyphs.glyphSet():
- if mark not in lookupBuilder.marks:
- otMarkAnchor = self.makeOpenTypeAnchor(
- location, markClassDef.anchor
- )
- lookupBuilder.marks[mark] = (markClass.name, otMarkAnchor)
- else:
- existingMarkClass = lookupBuilder.marks[mark][0]
- if markClass.name != existingMarkClass:
- raise FeatureLibError(
- "Glyph %s cannot be in both @%s and @%s"
- % (mark, existingMarkClass, markClass.name),
- location,
- )
-
- def add_subtable_break(self, location):
- self.cur_lookup_.add_subtable_break(location)
-
- def setGlyphClass_(self, location, glyph, glyphClass):
- oldClass, oldLocation = self.glyphClassDefs_.get(glyph, (None, None))
- if oldClass and oldClass != glyphClass:
- raise FeatureLibError(
- "Glyph %s was assigned to a different class at %s"
- % (glyph, oldLocation),
- location,
- )
- self.glyphClassDefs_[glyph] = (glyphClass, location)
-
- def add_glyphClassDef(
- self, location, baseGlyphs, ligatureGlyphs, markGlyphs, componentGlyphs
- ):
- for glyph in baseGlyphs:
- self.setGlyphClass_(location, glyph, 1)
- for glyph in ligatureGlyphs:
- self.setGlyphClass_(location, glyph, 2)
- for glyph in markGlyphs:
- self.setGlyphClass_(location, glyph, 3)
- for glyph in componentGlyphs:
- self.setGlyphClass_(location, glyph, 4)
-
- def add_ligatureCaretByIndex_(self, location, glyphs, carets):
- for glyph in glyphs:
- if glyph not in self.ligCaretPoints_:
- self.ligCaretPoints_[glyph] = carets
-
- def makeLigCaret(self, location, caret):
- if not isinstance(caret, VariableScalar):
- return caret
- default, device = self.makeVariablePos(location, caret)
- if device is not None:
- return (default, device)
- return default
-
- def add_ligatureCaretByPos_(self, location, glyphs, carets):
- carets = [self.makeLigCaret(location, caret) for caret in carets]
- for glyph in glyphs:
- if glyph not in self.ligCaretCoords_:
- self.ligCaretCoords_[glyph] = carets
-
- def add_name_record(self, location, nameID, platformID, platEncID, langID, string):
- self.names_.append([nameID, platformID, platEncID, langID, string])
-
- def add_os2_field(self, key, value):
- self.os2_[key] = value
-
- def add_hhea_field(self, key, value):
- self.hhea_[key] = value
-
- def add_vhea_field(self, key, value):
- self.vhea_[key] = value
-
- def add_conditionset(self, location, key, value):
- if "fvar" not in self.font:
- raise FeatureLibError(
- "Cannot add feature variations to a font without an 'fvar' table",
- location,
- )
-
- # Normalize
- axisMap = {
- axis.axisTag: (axis.minValue, axis.defaultValue, axis.maxValue)
- for axis in self.axes
- }
-
- value = {
- tag: (
- normalizeValue(bottom, axisMap[tag]),
- normalizeValue(top, axisMap[tag]),
- )
- for tag, (bottom, top) in value.items()
- }
-
- # NOTE: This might result in rounding errors (off-by-ones) compared to
- # rules in Designspace files, since we're working with what's in the
- # `avar` table rather than the original values.
- if "avar" in self.font:
- mapping = self.font["avar"].segments
- value = {
- axis: tuple(
- piecewiseLinearMap(v, mapping[axis]) if axis in mapping else v
- for v in condition_range
- )
- for axis, condition_range in value.items()
- }
-
- self.conditionsets_[key] = value
-
- def makeVariablePos(self, location, varscalar):
- if not self.varstorebuilder:
- raise FeatureLibError(
- "Can't define a variable scalar in a non-variable font", location
- )
-
- varscalar.axes = self.axes
- if not varscalar.does_vary:
- return varscalar.default, None
-
- default, index = varscalar.add_to_variation_store(
- self.varstorebuilder, self.model_cache, self.font.get("avar")
- )
-
- device = None
- if index is not None and index != 0xFFFFFFFF:
- device = buildVarDevTable(index)
-
- return default, device
-
- def makeOpenTypeAnchor(self, location, anchor):
- """ast.Anchor --> otTables.Anchor"""
- if anchor is None:
- return None
- variable = False
- deviceX, deviceY = None, None
- if anchor.xDeviceTable is not None:
- deviceX = otl.buildDevice(dict(anchor.xDeviceTable))
- if anchor.yDeviceTable is not None:
- deviceY = otl.buildDevice(dict(anchor.yDeviceTable))
- for dim in ("x", "y"):
- varscalar = getattr(anchor, dim)
- if not isinstance(varscalar, VariableScalar):
- continue
- if getattr(anchor, dim + "DeviceTable") is not None:
- raise FeatureLibError(
- "Can't define a device coordinate and variable scalar", location
- )
- default, device = self.makeVariablePos(location, varscalar)
- setattr(anchor, dim, default)
- if device is not None:
- if dim == "x":
- deviceX = device
- else:
- deviceY = device
- variable = True
-
- otlanchor = otl.buildAnchor(
- anchor.x, anchor.y, anchor.contourpoint, deviceX, deviceY
- )
- if variable:
- otlanchor.Format = 3
- return otlanchor
-
- _VALUEREC_ATTRS = {
- name[0].lower() + name[1:]: (name, isDevice)
- for _, name, isDevice, _ in otBase.valueRecordFormat
- if not name.startswith("Reserved")
- }
-
- def makeOpenTypeValueRecord(self, location, v, pairPosContext):
- """ast.ValueRecord --> otBase.ValueRecord"""
- if not v:
- return None
-
- vr = {}
- for astName, (otName, isDevice) in self._VALUEREC_ATTRS.items():
- val = getattr(v, astName, None)
- if not val:
- continue
- if isDevice:
- vr[otName] = otl.buildDevice(dict(val))
- elif isinstance(val, VariableScalar):
- otDeviceName = otName[0:4] + "Device"
- feaDeviceName = otDeviceName[0].lower() + otDeviceName[1:]
- if getattr(v, feaDeviceName):
- raise FeatureLibError(
- "Can't define a device coordinate and variable scalar", location
- )
- vr[otName], device = self.makeVariablePos(location, val)
- if device is not None:
- vr[otDeviceName] = device
- else:
- vr[otName] = val
-
- if pairPosContext and not vr:
- vr = {"YAdvance": 0} if v.vertical else {"XAdvance": 0}
- valRec = otl.buildValue(vr)
- return valRec
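
The Builder above is normally driven through feaLib's public entry points rather than instantiated directly. A minimal, hedged sketch of that flow follows; the font path, glyph names, and feature code are illustrative assumptions (the glyphs `f`, `i`, and `f_i` would need to exist in the font):

```python
from fontTools.ttLib import TTFont
from fontTools.feaLib.builder import addOpenTypeFeaturesFromString

font = TTFont("MyFont.ttf")  # hypothetical input font
fea_code = """
languagesystem DFLT dflt;
feature liga {
    sub f i by f_i;
} liga;
"""
# Parses the feature code and runs the Builder to compile the lookups and
# script/feature lists into the font object.
addOpenTypeFeaturesFromString(font, fea_code)
font.save("MyFont-liga.ttf")
```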
diff --git a/spaces/DaleChen/AutoGPT/autogpt/commands/file_operations.py b/spaces/DaleChen/AutoGPT/autogpt/commands/file_operations.py
deleted file mode 100644
index ad145ec956dd9dafd39e09c2244d001cf5febd2f..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/commands/file_operations.py
+++ /dev/null
@@ -1,267 +0,0 @@
-"""File operations for AutoGPT"""
-from __future__ import annotations
-
-import os
-import os.path
-from typing import Generator
-
-import requests
-from colorama import Back, Fore
-from requests.adapters import HTTPAdapter, Retry
-
-from autogpt.spinner import Spinner
-from autogpt.utils import readable_file_size
-from autogpt.workspace import WORKSPACE_PATH, path_in_workspace
-
-LOG_FILE = "file_logger.txt"
-LOG_FILE_PATH = WORKSPACE_PATH / LOG_FILE
-
-
-def check_duplicate_operation(operation: str, filename: str) -> bool:
- """Check if the operation has already been performed on the given file
-
- Args:
- operation (str): The operation to check for
- filename (str): The name of the file to check for
-
- Returns:
- bool: True if the operation has already been performed on the file
- """
- log_content = read_file(LOG_FILE)
- log_entry = f"{operation}: {filename}\n"
- return log_entry in log_content
-
-
-def log_operation(operation: str, filename: str) -> None:
- """Log the file operation to the file_logger.txt
-
- Args:
- operation (str): The operation to log
- filename (str): The name of the file the operation was performed on
- """
- log_entry = f"{operation}: {filename}\n"
-
- # Create the log file if it doesn't exist
- if not os.path.exists(LOG_FILE_PATH):
- with open(LOG_FILE_PATH, "w", encoding="utf-8") as f:
- f.write("File Operation Logger ")
-
- append_to_file(LOG_FILE, log_entry, shouldLog=False)
-
-
-def split_file(
- content: str, max_length: int = 4000, overlap: int = 0
-) -> Generator[str, None, None]:
- """
- Split text into chunks of a specified maximum length with a specified overlap
- between chunks.
-
- :param content: The input text to be split into chunks
- :param max_length: The maximum length of each chunk,
- default is 4000 (about 1k token)
- :param overlap: The number of overlapping characters between chunks,
- default is no overlap
- :return: A generator yielding chunks of text
- """
- start = 0
- content_length = len(content)
-
- while start < content_length:
- end = start + max_length
- if end + overlap < content_length:
- chunk = content[start : end + overlap - 1]
- else:
- chunk = content[start:content_length]
-
- # Account for the case where the last chunk is shorter than the overlap, so it has already been consumed
- if len(chunk) <= overlap:
- break
-
- yield chunk
- start += max_length - overlap
-
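A small, hedged usage sketch of `split_file`, with values chosen only to make the overlap visible:

```python
# Consecutive chunks share roughly `overlap` characters at their boundaries,
# so later chunks keep a little trailing context from the previous one.
text = "abcdefghijklmnopqrstuvwxyz" * 4
for chunk in split_file(text, max_length=40, overlap=5):
    print(len(chunk), repr(chunk[:10]), "...", repr(chunk[-10:]))
```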
-
-def read_file(filename: str) -> str:
- """Read a file and return the contents
-
- Args:
- filename (str): The name of the file to read
-
- Returns:
- str: The contents of the file
- """
- try:
- filepath = path_in_workspace(filename)
- with open(filepath, "r", encoding="utf-8") as f:
- content = f.read()
- return content
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def ingest_file(
- filename: str, memory, max_length: int = 4000, overlap: int = 200
-) -> None:
- """
- Ingest a file by reading its content, splitting it into chunks with a specified
- maximum length and overlap, and adding the chunks to the memory storage.
-
- :param filename: The name of the file to ingest
- :param memory: An object with an add() method to store the chunks in memory
- :param max_length: The maximum length of each chunk, default is 4000
- :param overlap: The number of overlapping characters between chunks, default is 200
- """
- try:
- print(f"Working with file {filename}")
- content = read_file(filename)
- content_length = len(content)
- print(f"File length: {content_length} characters")
-
- chunks = list(split_file(content, max_length=max_length, overlap=overlap))
-
- num_chunks = len(chunks)
- for i, chunk in enumerate(chunks):
- print(f"Ingesting chunk {i + 1} / {num_chunks} into memory")
- memory_to_add = (
- f"Filename: {filename}\n" f"Content part#{i + 1}/{num_chunks}: {chunk}"
- )
-
- memory.add(memory_to_add)
-
- print(f"Done ingesting {num_chunks} chunks from {filename}.")
- except Exception as e:
- print(f"Error while ingesting file '{filename}': {str(e)}")
-
-
-def write_to_file(filename: str, text: str) -> str:
- """Write text to a file
-
- Args:
- filename (str): The name of the file to write to
- text (str): The text to write to the file
-
- Returns:
- str: A message indicating success or failure
- """
- if check_duplicate_operation("write", filename):
- return "Error: File has already been updated."
- try:
- filepath = path_in_workspace(filename)
- directory = os.path.dirname(filepath)
- if not os.path.exists(directory):
- os.makedirs(directory)
- with open(filepath, "w", encoding="utf-8") as f:
- f.write(text)
- log_operation("write", filename)
- return "File written to successfully."
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def append_to_file(filename: str, text: str, shouldLog: bool = True) -> str:
- """Append text to a file
-
- Args:
- filename (str): The name of the file to append to
- text (str): The text to append to the file
- shouldLog (bool): Whether to record the operation in the file log
-
- Returns:
- str: A message indicating success or failure
- """
- try:
- filepath = path_in_workspace(filename)
- with open(filepath, "a") as f:
- f.write(text)
-
- if shouldLog:
- log_operation("append", filename)
-
- return "Text appended successfully."
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def delete_file(filename: str) -> str:
- """Delete a file
-
- Args:
- filename (str): The name of the file to delete
-
- Returns:
- str: A message indicating success or failure
- """
- if check_duplicate_operation("delete", filename):
- return "Error: File has already been deleted."
- try:
- filepath = path_in_workspace(filename)
- os.remove(filepath)
- log_operation("delete", filename)
- return "File deleted successfully."
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def search_files(directory: str) -> list[str]:
- """Search for files in a directory
-
- Args:
- directory (str): The directory to search in
-
- Returns:
- list[str]: A list of files found in the directory
- """
- found_files = []
-
- if directory in {"", "/"}:
- search_directory = WORKSPACE_PATH
- else:
- search_directory = path_in_workspace(directory)
-
- for root, _, files in os.walk(search_directory):
- for file in files:
- if file.startswith("."):
- continue
- relative_path = os.path.relpath(os.path.join(root, file), WORKSPACE_PATH)
- found_files.append(relative_path)
-
- return found_files
-
-
-def download_file(url, filename):
- """Downloads a file
- Args:
- url (str): URL of the file to download
- filename (str): Filename to save the file as
- """
- safe_filename = path_in_workspace(filename)
- try:
- message = f"{Fore.YELLOW}Downloading file from {Back.LIGHTBLUE_EX}{url}{Back.RESET}{Fore.RESET}"
- with Spinner(message) as spinner:
- session = requests.Session()
- retry = Retry(total=3, backoff_factor=1, status_forcelist=[502, 503, 504])
- adapter = HTTPAdapter(max_retries=retry)
- session.mount("http://", adapter)
- session.mount("https://", adapter)
-
- total_size = 0
- downloaded_size = 0
-
- with session.get(url, allow_redirects=True, stream=True) as r:
- r.raise_for_status()
- total_size = int(r.headers.get("Content-Length", 0))
- downloaded_size = 0
-
- with open(safe_filename, "wb") as f:
- for chunk in r.iter_content(chunk_size=8192):
- f.write(chunk)
- downloaded_size += len(chunk)
-
- # Update the progress message
- progress = f"{readable_file_size(downloaded_size)} / {readable_file_size(total_size)}"
- spinner.update_message(f"{message} {progress}")
-
- return f'Successfully downloaded and locally stored file: "{filename}"! (Size: {readable_file_size(total_size)})'
- except requests.HTTPError as e:
- return f"Got an HTTP Error whilst trying to download file: {e}"
- except Exception as e:
- return "Error: " + str(e)
diff --git a/spaces/Datasculptor/DescriptionGPT/tools/merge_lvis_coco.py b/spaces/Datasculptor/DescriptionGPT/tools/merge_lvis_coco.py
deleted file mode 100644
index abc2b673a30541fd71679a549acd9a53f7693183..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/DescriptionGPT/tools/merge_lvis_coco.py
+++ /dev/null
@@ -1,202 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from collections import defaultdict
-import torch
-import sys
-import json
-import numpy as np
-
-from detectron2.structures import Boxes, pairwise_iou
-COCO_PATH = 'datasets/coco/annotations/instances_train2017.json'
-IMG_PATH = 'datasets/coco/train2017/'
-LVIS_PATH = 'datasets/lvis/lvis_v1_train.json'
-NO_SEG = False
-if NO_SEG:
- SAVE_PATH = 'datasets/lvis/lvis_v1_train+coco_box.json'
-else:
- SAVE_PATH = 'datasets/lvis/lvis_v1_train+coco_mask.json'
-THRESH = 0.7
-DEBUG = False
-
-# This mapping is extracted from the official LVIS mapping:
-# https://github.com/lvis-dataset/lvis-api/blob/master/data/coco_to_synset.json
-COCO_SYNSET_CATEGORIES = [
- {"synset": "person.n.01", "coco_cat_id": 1},
- {"synset": "bicycle.n.01", "coco_cat_id": 2},
- {"synset": "car.n.01", "coco_cat_id": 3},
- {"synset": "motorcycle.n.01", "coco_cat_id": 4},
- {"synset": "airplane.n.01", "coco_cat_id": 5},
- {"synset": "bus.n.01", "coco_cat_id": 6},
- {"synset": "train.n.01", "coco_cat_id": 7},
- {"synset": "truck.n.01", "coco_cat_id": 8},
- {"synset": "boat.n.01", "coco_cat_id": 9},
- {"synset": "traffic_light.n.01", "coco_cat_id": 10},
- {"synset": "fireplug.n.01", "coco_cat_id": 11},
- {"synset": "stop_sign.n.01", "coco_cat_id": 13},
- {"synset": "parking_meter.n.01", "coco_cat_id": 14},
- {"synset": "bench.n.01", "coco_cat_id": 15},
- {"synset": "bird.n.01", "coco_cat_id": 16},
- {"synset": "cat.n.01", "coco_cat_id": 17},
- {"synset": "dog.n.01", "coco_cat_id": 18},
- {"synset": "horse.n.01", "coco_cat_id": 19},
- {"synset": "sheep.n.01", "coco_cat_id": 20},
- {"synset": "beef.n.01", "coco_cat_id": 21},
- {"synset": "elephant.n.01", "coco_cat_id": 22},
- {"synset": "bear.n.01", "coco_cat_id": 23},
- {"synset": "zebra.n.01", "coco_cat_id": 24},
- {"synset": "giraffe.n.01", "coco_cat_id": 25},
- {"synset": "backpack.n.01", "coco_cat_id": 27},
- {"synset": "umbrella.n.01", "coco_cat_id": 28},
- {"synset": "bag.n.04", "coco_cat_id": 31},
- {"synset": "necktie.n.01", "coco_cat_id": 32},
- {"synset": "bag.n.06", "coco_cat_id": 33},
- {"synset": "frisbee.n.01", "coco_cat_id": 34},
- {"synset": "ski.n.01", "coco_cat_id": 35},
- {"synset": "snowboard.n.01", "coco_cat_id": 36},
- {"synset": "ball.n.06", "coco_cat_id": 37},
- {"synset": "kite.n.03", "coco_cat_id": 38},
- {"synset": "baseball_bat.n.01", "coco_cat_id": 39},
- {"synset": "baseball_glove.n.01", "coco_cat_id": 40},
- {"synset": "skateboard.n.01", "coco_cat_id": 41},
- {"synset": "surfboard.n.01", "coco_cat_id": 42},
- {"synset": "tennis_racket.n.01", "coco_cat_id": 43},
- {"synset": "bottle.n.01", "coco_cat_id": 44},
- {"synset": "wineglass.n.01", "coco_cat_id": 46},
- {"synset": "cup.n.01", "coco_cat_id": 47},
- {"synset": "fork.n.01", "coco_cat_id": 48},
- {"synset": "knife.n.01", "coco_cat_id": 49},
- {"synset": "spoon.n.01", "coco_cat_id": 50},
- {"synset": "bowl.n.03", "coco_cat_id": 51},
- {"synset": "banana.n.02", "coco_cat_id": 52},
- {"synset": "apple.n.01", "coco_cat_id": 53},
- {"synset": "sandwich.n.01", "coco_cat_id": 54},
- {"synset": "orange.n.01", "coco_cat_id": 55},
- {"synset": "broccoli.n.01", "coco_cat_id": 56},
- {"synset": "carrot.n.01", "coco_cat_id": 57},
- # {"synset": "frank.n.02", "coco_cat_id": 58},
- {"synset": "sausage.n.01", "coco_cat_id": 58},
- {"synset": "pizza.n.01", "coco_cat_id": 59},
- {"synset": "doughnut.n.02", "coco_cat_id": 60},
- {"synset": "cake.n.03", "coco_cat_id": 61},
- {"synset": "chair.n.01", "coco_cat_id": 62},
- {"synset": "sofa.n.01", "coco_cat_id": 63},
- {"synset": "pot.n.04", "coco_cat_id": 64},
- {"synset": "bed.n.01", "coco_cat_id": 65},
- {"synset": "dining_table.n.01", "coco_cat_id": 67},
- {"synset": "toilet.n.02", "coco_cat_id": 70},
- {"synset": "television_receiver.n.01", "coco_cat_id": 72},
- {"synset": "laptop.n.01", "coco_cat_id": 73},
- {"synset": "mouse.n.04", "coco_cat_id": 74},
- {"synset": "remote_control.n.01", "coco_cat_id": 75},
- {"synset": "computer_keyboard.n.01", "coco_cat_id": 76},
- {"synset": "cellular_telephone.n.01", "coco_cat_id": 77},
- {"synset": "microwave.n.02", "coco_cat_id": 78},
- {"synset": "oven.n.01", "coco_cat_id": 79},
- {"synset": "toaster.n.02", "coco_cat_id": 80},
- {"synset": "sink.n.01", "coco_cat_id": 81},
- {"synset": "electric_refrigerator.n.01", "coco_cat_id": 82},
- {"synset": "book.n.01", "coco_cat_id": 84},
- {"synset": "clock.n.01", "coco_cat_id": 85},
- {"synset": "vase.n.01", "coco_cat_id": 86},
- {"synset": "scissors.n.01", "coco_cat_id": 87},
- {"synset": "teddy.n.01", "coco_cat_id": 88},
- {"synset": "hand_blower.n.01", "coco_cat_id": 89},
- {"synset": "toothbrush.n.01", "coco_cat_id": 90},
-]
-
-
-def get_bbox(ann):
- bbox = ann['bbox']
- return [bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1] + bbox[3]]
-
-
-if __name__ == '__main__':
- file_name_key = 'file_name' if 'v0.5' in LVIS_PATH else 'coco_url'
- coco_data = json.load(open(COCO_PATH, 'r'))
- lvis_data = json.load(open(LVIS_PATH, 'r'))
-
- coco_cats = coco_data['categories']
- lvis_cats = lvis_data['categories']
-
- num_find = 0
- num_not_find = 0
- num_twice = 0
- coco2lviscats = {}
- synset2lvisid = {x['synset']: x['id'] for x in lvis_cats}
- # cocoid2synset = {x['coco_cat_id']: x['synset'] for x in COCO_SYNSET_CATEGORIES}
- coco2lviscats = {x['coco_cat_id']: synset2lvisid[x['synset']] \
- for x in COCO_SYNSET_CATEGORIES if x['synset'] in synset2lvisid}
- print(len(coco2lviscats))
-
- lvis_file2id = {x[file_name_key][-16:]: x['id'] for x in lvis_data['images']}
- lvis_id2img = {x['id']: x for x in lvis_data['images']}
- lvis_catid2name = {x['id']: x['name'] for x in lvis_data['categories']}
-
- coco_file2anns = {}
- coco_id2img = {x['id']: x for x in coco_data['images']}
- coco_img2anns = defaultdict(list)
- for ann in coco_data['annotations']:
- coco_img = coco_id2img[ann['image_id']]
- file_name = coco_img['file_name'][-16:]
- if ann['category_id'] in coco2lviscats and \
- file_name in lvis_file2id:
- lvis_image_id = lvis_file2id[file_name]
- lvis_image = lvis_id2img[lvis_image_id]
- lvis_cat_id = coco2lviscats[ann['category_id']]
- if lvis_cat_id in lvis_image['neg_category_ids']:
- continue
- if DEBUG:
- import cv2
- img_path = IMG_PATH + file_name
- img = cv2.imread(img_path)
- print(lvis_catid2name[lvis_cat_id])
- print('neg', [lvis_catid2name[x] for x in lvis_image['neg_category_ids']])
- cv2.imshow('img', img)
- cv2.waitKey()
- ann['category_id'] = lvis_cat_id
- ann['image_id'] = lvis_image_id
- coco_img2anns[file_name].append(ann)
-
- lvis_img2anns = defaultdict(list)
- for ann in lvis_data['annotations']:
- lvis_img = lvis_id2img[ann['image_id']]
- file_name = lvis_img[file_name_key][-16:]
- lvis_img2anns[file_name].append(ann)
-
- ann_id_count = 0
- anns = []
- for file_name in lvis_img2anns:
- coco_anns = coco_img2anns[file_name]
- lvis_anns = lvis_img2anns[file_name]
- ious = pairwise_iou(
- Boxes(torch.tensor([get_bbox(x) for x in coco_anns])),
- Boxes(torch.tensor([get_bbox(x) for x in lvis_anns]))
- )
-
- for ann in lvis_anns:
- ann_id_count = ann_id_count + 1
- ann['id'] = ann_id_count
- anns.append(ann)
-
- for i, ann in enumerate(coco_anns):
- if len(ious[i]) == 0 or ious[i].max() < THRESH:
- ann_id_count = ann_id_count + 1
- ann['id'] = ann_id_count
- anns.append(ann)
- else:
- duplicated = False
- for j in range(len(ious[i])):
- if ious[i, j] >= THRESH and \
- coco_anns[i]['category_id'] == lvis_anns[j]['category_id']:
- duplicated = True
- if not duplicated:
- ann_id_count = ann_id_count + 1
- ann['id'] = ann_id_count
- anns.append(ann)
- if NO_SEG:
- for ann in anns:
- del ann['segmentation']
- lvis_data['annotations'] = anns
-
- print('# Images', len(lvis_data['images']))
- print('# Anns', len(lvis_data['annotations']))
- json.dump(lvis_data, open(SAVE_PATH, 'w'))
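
A tiny, hedged illustration of the de-duplication rule above: a COCO annotation is dropped only when it overlaps a same-category LVIS annotation with IoU at or above THRESH (0.7); the box coordinates below are made up:

```python
import torch
from detectron2.structures import Boxes, pairwise_iou

coco_box = Boxes(torch.tensor([[10.0, 10.0, 110.0, 110.0]]))  # xyxy, as produced by get_bbox
lvis_box = Boxes(torch.tensor([[15.0, 15.0, 115.0, 115.0]]))
iou = pairwise_iou(coco_box, lvis_box)  # 1x1 tensor, ~0.82 for these boxes
print(iou.item(), iou.item() >= 0.7)    # True -> would count as a duplicate
```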
diff --git a/spaces/Detomo/ai-comic-generation/src/lib/replaceNonWhiteWithTransparent.ts b/spaces/Detomo/ai-comic-generation/src/lib/replaceNonWhiteWithTransparent.ts
deleted file mode 100644
index 6ffe6df050134290d39ee114e427741b26cfb419..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/lib/replaceNonWhiteWithTransparent.ts
+++ /dev/null
@@ -1,46 +0,0 @@
-export function replaceNonWhiteWithTransparent(imageBase64: string): Promise<string> {
- return new Promise((resolve, reject) => {
- const img = new Image();
- img.onload = () => {
- const canvas = document.createElement('canvas');
- const ctx = canvas.getContext('2d');
- if (!ctx) {
- reject('Unable to get canvas context');
- return;
- }
-
- const ratio = window.devicePixelRatio || 1;
- canvas.width = img.width * ratio;
- canvas.height = img.height * ratio;
- ctx.scale(ratio, ratio);
-
- ctx.drawImage(img, 0, 0);
-
- const imageData = ctx.getImageData(0, 0, img.width, img.height);
- const data = imageData.data;
- console.log("ok")
-
- for (let i = 0; i < data.length; i += 4) {
- if (data[i] === 255 && data[i + 1] === 255 && data[i + 2] === 255) {
- // Change pure white pixels to black
- data[i] = 0;
- data[i + 1] = 0;
- data[i + 2] = 0;
- } else {
- // Change all other pixels to transparent
- data[i + 3] = 0;
- }
- }
-
- ctx.putImageData(imageData, 0, 0);
-
- resolve(canvas.toDataURL());
- };
-
- img.onerror = (err) => {
- reject(err);
- };
-
- img.src = imageBase64;
- });
-}
\ No newline at end of file
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/training/coaches/localitly_regulizer.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/training/coaches/localitly_regulizer.py
deleted file mode 100644
index c4fe05cd2a77113b569587c1b4a5ec358646f7a4..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/training/coaches/localitly_regulizer.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import torch
-import numpy as np
-import wandb
-from pti.pti_configs import hyperparameters, global_config
-l2_criterion = torch.nn.MSELoss(reduction='mean')
-
-
-def l2_loss(real_images, generated_images):
- loss = l2_criterion(real_images, generated_images)
- return loss
-
-
-class Space_Regulizer:
- def __init__(self, original_G, lpips_net):
- self.original_G = original_G
- self.morphing_regulizer_alpha = hyperparameters.regulizer_alpha
- self.lpips_loss = lpips_net
-
- def get_morphed_w_code(self, new_w_code, fixed_w):
- interpolation_direction = new_w_code - fixed_w
- interpolation_direction_norm = torch.norm(interpolation_direction, p=2)
- direction_to_move = hyperparameters.regulizer_alpha * \
- interpolation_direction / interpolation_direction_norm
- result_w = fixed_w + direction_to_move
-
- return result_w
-
- def get_image_from_ws(self, w_codes, G):
- return torch.cat([G.synthesis(w_code, noise_mode='none', force_fp32=True) for w_code in w_codes])
-
- def ball_holder_loss_lazy(self, new_G, num_of_sampled_latents, w_batch, use_wandb=False):
- loss = 0.0
-
- z_samples = np.random.randn(
- num_of_sampled_latents, self.original_G.z_dim)
- w_samples = self.original_G.mapping(torch.from_numpy(z_samples).to(global_config.device), None,
- truncation_psi=0.5)
- territory_indicator_ws = [self.get_morphed_w_code(
- w_code.unsqueeze(0), w_batch) for w_code in w_samples]
-
- for w_code in territory_indicator_ws:
- new_img = new_G.synthesis(
- w_code, noise_mode='none', force_fp32=True)
- with torch.no_grad():
- old_img = self.original_G.synthesis(
- w_code, noise_mode='none', force_fp32=True)
-
- if hyperparameters.regulizer_l2_lambda > 0:
- l2_loss_val = l2_loss(old_img, new_img)
- if use_wandb:
- wandb.log({f'space_regulizer_l2_loss_val': l2_loss_val.detach().cpu()},
- step=global_config.training_step)
- loss += l2_loss_val * hyperparameters.regulizer_l2_lambda
-
- if hyperparameters.regulizer_lpips_lambda > 0:
- loss_lpips = self.lpips_loss(old_img, new_img)
- loss_lpips = torch.mean(torch.squeeze(loss_lpips))
- if use_wandb:
- wandb.log({f'space_regulizer_lpips_loss_val': loss_lpips.detach().cpu()},
- step=global_config.training_step)
- loss += loss_lpips * hyperparameters.regulizer_lpips_lambda
-
- return loss / len(territory_indicator_ws)
-
- def space_regulizer_loss(self, new_G, w_batch, use_wandb):
- ret_val = self.ball_holder_loss_lazy(
- new_G, hyperparameters.latent_ball_num_of_samples, w_batch, use_wandb)
- return ret_val
diff --git a/spaces/ECCV2022/bytetrack/exps/default/yolox_m.py b/spaces/ECCV2022/bytetrack/exps/default/yolox_m.py
deleted file mode 100644
index 9666a31177b9cc1c94978f9867aaceac8ddebce2..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/exps/default/yolox_m.py
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-import os
-
-from yolox.exp import Exp as MyExp
-
-
-class Exp(MyExp):
- def __init__(self):
- super(Exp, self).__init__()
- self.depth = 0.67
- self.width = 0.75
- self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
diff --git a/spaces/Enterprisium/Easy_GUI/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/Enterprisium/Easy_GUI/lib/infer_pack/modules/F0Predictor/F0Predictor.py
deleted file mode 100644
index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000
--- a/spaces/Enterprisium/Easy_GUI/lib/infer_pack/modules/F0Predictor/F0Predictor.py
+++ /dev/null
@@ -1,16 +0,0 @@
-class F0Predictor(object):
- def compute_f0(self, wav, p_len):
- """
- input: wav:[signal_length]
- p_len:int
- output: f0:[signal_length//hop_length]
- """
- pass
-
- def compute_f0_uv(self, wav, p_len):
- """
- input: wav:[signal_length]
- p_len:int
- output: f0:[signal_length//hop_length],uv:[signal_length//hop_length]
- """
- pass
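
A hedged sketch of one possible concrete implementation of this interface, built on librosa's pYIN; the sampling rate, hop length, and the naive cropping to `p_len` are assumptions and not part of the original project:

```python
import librosa
import numpy as np


class PyinF0Predictor(F0Predictor):
    def __init__(self, sampling_rate=44100, hop_length=512):
        self.sampling_rate = sampling_rate
        self.hop_length = hop_length

    def compute_f0(self, wav, p_len):
        # pYIN returns NaN for unvoiced frames; map those to 0 Hz.
        f0, voiced_flag, _ = librosa.pyin(
            wav.astype(np.float32),
            fmin=librosa.note_to_hz("C2"),
            fmax=librosa.note_to_hz("C7"),
            sr=self.sampling_rate,
            hop_length=self.hop_length,
        )
        f0 = np.nan_to_num(f0)
        return f0[:p_len]  # simplistic length alignment

    def compute_f0_uv(self, wav, p_len):
        f0 = self.compute_f0(wav, p_len)
        return f0, (f0 > 0).astype(np.float32)
```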
diff --git a/spaces/EsoCode/text-generation-webui/extensions/elevenlabs_tts/script.py b/spaces/EsoCode/text-generation-webui/extensions/elevenlabs_tts/script.py
deleted file mode 100644
index 1a24db1f31eacf4da6b68a99dce03ddc0213ca95..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/extensions/elevenlabs_tts/script.py
+++ /dev/null
@@ -1,180 +0,0 @@
-import re
-from pathlib import Path
-
-import elevenlabs
-import gradio as gr
-from modules import chat, shared
-
-params = {
- 'activate': True,
- 'api_key': None,
- 'selected_voice': 'None',
- 'autoplay': False,
- 'show_text': True,
-}
-
-voices = None
-wav_idx = 0
-
-
-def update_api_key(key):
- params['api_key'] = key
- if key is not None:
- elevenlabs.set_api_key(key)
-
-
-def refresh_voices():
- global params
- your_voices = elevenlabs.voices()
- voice_names = [voice.name for voice in your_voices]
- return voice_names
-
-
-def refresh_voices_dd():
- all_voices = refresh_voices()
- return gr.Dropdown.update(value=all_voices[0], choices=all_voices)
-
-
-def remove_tts_from_history():
- for i, entry in enumerate(shared.history['internal']):
- shared.history['visible'][i] = [shared.history['visible'][i][0], entry[1]]
-
-
-def toggle_text_in_history():
- for i, entry in enumerate(shared.history['visible']):
- visible_reply = entry[1]
- if visible_reply.startswith('<audio'):
- if params['show_text']:
- reply = shared.history['internal'][i][1]
- shared.history['visible'][i] = [
- shared.history['visible'][i][0], f"{visible_reply.split('</audio>')[0]}</audio>\n\n{reply}"
- ]
- else:
- shared.history['visible'][i] = [
- shared.history['visible'][i][0], f"{visible_reply.split('</audio>')[0]}</audio>"
- ]
-
-
-def remove_surrounded_chars(string):
- # this expression matches to 'as few symbols as possible (0 upwards) between any asterisks' OR
- # 'as few symbols as possible (0 upwards) between an asterisk and the end of the string'
- return re.sub(r'\*[^\*]*?(\*|$)', '', string)
-
-
-def state_modifier(state):
- if not params['activate']:
- return state
-
- state['stream'] = False
- return state
-
-
-def input_modifier(string):
- if not params['activate']:
- return string
-
- shared.processing_message = "*Is recording a voice message...*"
- return string
-
-
-def history_modifier(history):
- # Remove autoplay from the last reply
- if len(history['internal']) > 0:
- history['visible'][-1] = [
- history['visible'][-1][0],
- history['visible'][-1][1].replace('controls autoplay>', 'controls>')
- ]
-
- return history
-
-
-def output_modifier(string):
- global params, wav_idx
-
- if not params['activate']:
- return string
-
- original_string = string
- string = remove_surrounded_chars(string)
- string = string.replace('"', '')
- string = string.replace('“', '')
- string = string.replace('\n', ' ')
- string = string.strip()
- if string == '':
- string = 'empty reply, try regenerating'
-
- output_file = Path(f'extensions/elevenlabs_tts/outputs/{wav_idx:06d}.mp3')
- print(f'Outputting audio to {str(output_file)}')
- try:
- audio = elevenlabs.generate(text=string, voice=params['selected_voice'], model="eleven_monolingual_v1")
- elevenlabs.save(audio, str(output_file))
-
- autoplay = 'autoplay' if params['autoplay'] else ''
- string = f'<audio src="file/{output_file.as_posix()}" controls {autoplay}></audio>'
- wav_idx += 1
- except elevenlabs.api.error.UnauthenticatedRateLimitError:
- string = "🤖 ElevenLabs Unauthenticated Rate Limit Reached - Please create an API key to continue\n\n"
- except elevenlabs.api.error.RateLimitError:
- string = "🤖 ElevenLabs API Tier Limit Reached\n\n"
- except elevenlabs.api.error.APIError as err:
- string = f"🤖 ElevenLabs Error: {err}\n\n"
-
- if params['show_text']:
- string += f'\n\n{original_string}'
-
- shared.processing_message = "*Is typing...*"
- return string
-
-
-def ui():
- global voices
- if not voices:
- voices = refresh_voices()
- params['selected_voice'] = voices[0]
-
- # Gradio elements
- with gr.Row():
- activate = gr.Checkbox(value=params['activate'], label='Activate TTS')
- autoplay = gr.Checkbox(value=params['autoplay'], label='Play TTS automatically')
- show_text = gr.Checkbox(value=params['show_text'], label='Show message text under audio player')
-
- with gr.Row():
- voice = gr.Dropdown(value=params['selected_voice'], choices=voices, label='TTS Voice')
- refresh = gr.Button(value='Refresh')
-
- with gr.Row():
- api_key = gr.Textbox(placeholder="Enter your API key.", label='API Key')
-
- with gr.Row():
- convert = gr.Button('Permanently replace audios with the message texts')
- convert_cancel = gr.Button('Cancel', visible=False)
- convert_confirm = gr.Button('Confirm (cannot be undone)', variant="stop", visible=False)
-
- # Convert history with confirmation
- convert_arr = [convert_confirm, convert, convert_cancel]
- convert.click(lambda: [gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, convert_arr)
- convert_confirm.click(
- lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr).then(
- remove_tts_from_history, None, None).then(
- chat.save_history, shared.gradio['mode'], None, show_progress=False).then(
- chat.redraw_html, shared.reload_inputs, shared.gradio['display'])
-
- convert_cancel.click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr)
-
- # Toggle message text in history
- show_text.change(
- lambda x: params.update({"show_text": x}), show_text, None).then(
- toggle_text_in_history, None, None).then(
- chat.save_history, shared.gradio['mode'], None, show_progress=False).then(
- chat.redraw_html, shared.reload_inputs, shared.gradio['display'])
-
- convert_cancel.click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr)
-
- # Event functions to update the parameters in the backend
- activate.change(lambda x: params.update({'activate': x}), activate, None)
- voice.change(lambda x: params.update({'selected_voice': x}), voice, None)
- api_key.change(update_api_key, api_key, None)
- # connect.click(check_valid_api, [], connection_status)
- refresh.click(refresh_voices_dd, [], voice)
- # Event functions to update the parameters in the backend
- autoplay.change(lambda x: params.update({"autoplay": x}), autoplay, None)
diff --git a/spaces/Flux9665/Blizzard2023IMS/README.md b/spaces/Flux9665/Blizzard2023IMS/README.md
deleted file mode 100644
index e5120ba640df47de0a9c430adb3b726c69691be3..0000000000000000000000000000000000000000
--- a/spaces/Flux9665/Blizzard2023IMS/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Blizzard2023IMS
-emoji: 🥐🦜
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.30.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/vdecoder/hifigan/nvSTFT.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/vdecoder/hifigan/nvSTFT.py
deleted file mode 100644
index 88597d62a505715091f9ba62d38bf0a85a31b95a..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-pcr/vdecoder/hifigan/nvSTFT.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import math
-import os
-os.environ["LRU_CACHE_CAPACITY"] = "3"
-import random
-import torch
-import torch.utils.data
-import numpy as np
-import librosa
-from librosa.util import normalize
-from librosa.filters import mel as librosa_mel_fn
-from scipy.io.wavfile import read
-import soundfile as sf
-
-def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False):
- sampling_rate = None
- try:
- data, sampling_rate = sf.read(full_path, always_2d=True)  # load audio with soundfile
- except Exception as ex:
- print(f"'{full_path}' failed to load.\nException:")
- print(ex)
- if return_empty_on_exception:
- return [], sampling_rate or target_sr or 32000
- else:
- raise Exception(ex)
-
- if len(data.shape) > 1:
- data = data[:, 0]
- assert len(data) > 2# check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension)
-
- if np.issubdtype(data.dtype, np.integer): # if audio data is type int
- max_mag = -np.iinfo(data.dtype).min # maximum magnitude = min possible value of intXX
- else: # if audio data is type fp32
- max_mag = max(np.amax(data), -np.amin(data))
- max_mag = (2**31)+1 if max_mag > (2**15) else ((2**15)+1 if max_mag > 1.01 else 1.0) # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32
-
- data = torch.FloatTensor(data.astype(np.float32))/max_mag
-
- if (torch.isinf(data) | torch.isnan(data)).any() and return_empty_on_exception:# resample will crash with inf/NaN inputs. return_empty_on_exception will return empty arr instead of except
- return [], sampling_rate or target_sr or 32000
- if target_sr is not None and sampling_rate != target_sr:
- data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr))
- sampling_rate = target_sr
-
- return data, sampling_rate
-
-def dynamic_range_compression(x, C=1, clip_val=1e-5):
- return np.log(np.clip(x, a_min=clip_val, a_max=None) * C)
-
-def dynamic_range_decompression(x, C=1):
- return np.exp(x) / C
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-def dynamic_range_decompression_torch(x, C=1):
- return torch.exp(x) / C
-
-class STFT():
- def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025, clip_val=1e-5):
- self.target_sr = sr
-
- self.n_mels = n_mels
- self.n_fft = n_fft
- self.win_size = win_size
- self.hop_length = hop_length
- self.fmin = fmin
- self.fmax = fmax
- self.clip_val = clip_val
- self.mel_basis = {}
- self.hann_window = {}
-
- def get_mel(self, y, center=False):
- sampling_rate = self.target_sr
- n_mels = self.n_mels
- n_fft = self.n_fft
- win_size = self.win_size
- hop_length = self.hop_length
- fmin = self.fmin
- fmax = self.fmax
- clip_val = self.clip_val
-
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- if fmax not in self.mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax)
- self.mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device)
- self.hann_window[str(y.device)] = torch.hann_window(self.win_size).to(y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_length)/2), int((n_fft-hop_length)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_length, win_length=win_size, window=self.hann_window[str(y.device)],
- center=center, pad_mode='reflect', normalized=False, onesided=True)
- # print(111,spec)
- spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9))
- # print(222,spec)
- spec = torch.matmul(self.mel_basis[str(fmax)+'_'+str(y.device)], spec)
- # print(333,spec)
- spec = dynamic_range_compression_torch(spec, clip_val=clip_val)
- # print(444,spec)
- return spec
-
- def __call__(self, audiopath):
- audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr)
- spect = self.get_mel(audio.unsqueeze(0)).squeeze(0)
- return spect
-
-stft = STFT()
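
A short, hedged usage sketch of the class above; the audio path and the non-default parameters are placeholders:

```python
# Module-level helper with the defaults (22050 Hz, 80 mel bands).
mel = stft("example.wav")          # torch.FloatTensor of shape (n_mels, frames)

# Custom configuration, e.g. for 44.1 kHz material.
stft_44k = STFT(sr=44100, n_mels=128, n_fft=2048, win_size=2048,
                hop_length=512, fmin=40, fmax=16000)
mel_44k = stft_44k("example.wav")
print(mel.shape, mel_44k.shape)
```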
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py
deleted file mode 100644
index 6e18f71b31b9fb85a6ca7a6b05ff4d2313951750..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# model settings
-norm_cfg = dict(type='BN', requires_grad=False)
-model = dict(
- type='FasterRCNN',
- pretrained='open-mmlab://detectron2/resnet50_caffe',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=3,
- strides=(1, 2, 2),
- dilations=(1, 1, 1),
- out_indices=(2, ),
- frozen_stages=1,
- norm_cfg=norm_cfg,
- norm_eval=True,
- style='caffe'),
- rpn_head=dict(
- type='RPNHead',
- in_channels=1024,
- feat_channels=1024,
- anchor_generator=dict(
- type='AnchorGenerator',
- scales=[2, 4, 8, 16, 32],
- ratios=[0.5, 1.0, 2.0],
- strides=[16]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
- roi_head=dict(
- type='StandardRoIHead',
- shared_head=dict(
- type='ResLayer',
- depth=50,
- stage=3,
- stride=2,
- dilation=1,
- style='caffe',
- norm_cfg=norm_cfg,
- norm_eval=True),
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
- out_channels=1024,
- featmap_strides=[16]),
- bbox_head=dict(
- type='BBoxHead',
- with_avg_pool=True,
- roi_feat_size=7,
- in_channels=2048,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False,
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
- # model training and testing settings
- train_cfg=dict(
- rpn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.7,
- neg_iou_thr=0.3,
- min_pos_iou=0.3,
- match_low_quality=True,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- allowed_border=0,
- pos_weight=-1,
- debug=False),
- rpn_proposal=dict(
- nms_pre=12000,
- max_per_img=2000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.5,
- match_low_quality=False,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True),
- pos_weight=-1,
- debug=False)),
- test_cfg=dict(
- rpn=dict(
- nms_pre=6000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.5),
- max_per_img=100)))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py
deleted file mode 100644
index 2fa2a807190427c857ddbea8ed7efd9434e5ef0f..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py
+++ /dev/null
@@ -1,23 +0,0 @@
-_base_ = './sparse_rcnn_r50_fpn_1x_coco.py'
-
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-min_values = (480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(
- type='Resize',
- img_scale=[(1333, value) for value in min_values],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
-]
-
-data = dict(train=dict(pipeline=train_pipeline))
-lr_config = dict(policy='step', step=[27, 33])
-runner = dict(type='EpochBasedRunner', max_epochs=36)
diff --git a/spaces/GroveStreet/GTA_SOVITS/diffusion/__init__.py b/spaces/GroveStreet/GTA_SOVITS/diffusion/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/pretrain_mt5_small_continue.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/pretrain_mt5_small_continue.sh
deleted file mode 100644
index 0a539a7e6a7fb4b750b441df98dd49f166c3c49b..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/pretrain_t5/pretrain_mt5_small_continue.sh
+++ /dev/null
@@ -1,120 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=t5_cn_small_pretrain_v2
-#SBATCH --nodes=1
-#SBATCH --ntasks-per-node=8
-#SBATCH --gres=gpu:8 # number of gpus
-#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH -o %x-%j.log
-#SBATCH -e %x-%j.err
-#SBATCH -x dgx050
-
-set -x -e
-source activate base
-
-echo "START TIME: $(date)"
-MICRO_BATCH_SIZE=32
-ROOT_DIR=/cognitive_comp/ganruyi/experiments/t5_cn_small_pretrain_v2/
-
-ZERO_STAGE=1
-
-config_json="$ROOT_DIR/ds_config.t5_cn_small_pretrain_v2.$SLURM_JOBID.json"
-export MASTER_PORT=$[RANDOM%10000+30000]
-# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
-
-cat <<EOT > $config_json
-{
- "zero_optimization": {
- "stage": 1
- },
- "fp16": {
- "enabled": true,
- "loss_scale": 0,
- "loss_scale_window": 1000,
- "initial_scale_power": 16,
- "hysteresis": 2,
- "min_loss_scale": 1
- },
- "optimizer": {
- "params": {
- "betas": [
- 0.9,
- 0.95
- ],
- "eps": 1e-08,
- "lr": 1e-04,
- "weight_decay": 0.01
- },
- "type": "AdamW"
- },
- "scheduler": {
- "type": "WarmupLR",
- "params":{
- "warmup_min_lr": 0,
- "warmup_max_lr": 1e-4,
- "warmup_num_steps": 10000
- }
- },
- "steps_per_print": 100,
- "gradient_clipping": 1,
- "train_micro_batch_size_per_gpu": $MICRO_BATCH_SIZE,
- "zero_allow_untested_optimizer": false
-}
-EOT
-
-export PL_DEEPSPEED_CONFIG_PATH=$config_json
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-# strategy=ddp
-strategy=deepspeed_stage_1
-
-TRAINER_ARGS="
- --max_epochs 1 \
- --gpus 8 \
- --num_nodes 1 \
- --strategy ${strategy} \
- --default_root_dir $ROOT_DIR \
- --dirpath $ROOT_DIR/ckpt \
- --save_top_k 3 \
- --every_n_train_steps 0 \
- --monitor train_loss \
- --mode min \
- --save_last \
- --val_check_interval 0.01 \
- --preprocessing_num_workers 20 \
-"
-# --accumulate_grad_batches 8 \
-DATA_DIR=wudao_180g_mt5_tokenized
-
-DATA_ARGS="
- --train_batchsize $MICRO_BATCH_SIZE \
- --valid_batchsize $MICRO_BATCH_SIZE \
- --train_data ${DATA_DIR} \
- --train_split_size 0.999 \
- --max_seq_length 1024 \
-"
-
-MODEL_ARGS="
- --pretrained_model_path /cognitive_comp/ganruyi/experiments/t5_cn_small_pretrain/Randeng-T5-77M \
- --learning_rate 1e-4 \
- --weight_decay 0.1 \
- --keep_tokens_path /cognitive_comp/ganruyi/hf_models/t5_cn_small/sentencepiece_cn_keep_tokens.json \
-"
-# --resume_from_checkpoint /cognitive_comp/ganruyi/fengshen/t5_cn_small_pretrain/ckpt/last.ckpt \
-
-SCRIPTS_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/pretrain_t5/pretrain_t5.py
-
-export CMD=" \
- $SCRIPTS_PATH \
- $TRAINER_ARGS \
- $MODEL_ARGS \
- $DATA_ARGS \
- "
-
-echo $CMD
-
-SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif
-
-# to debug - add echo (it exits and prints what it would have launched)
-#run_cmd="$PY_LAUNCHER $CMD"
-# salloc --nodes=1 --gres=gpu:2 --cpus-per-gpu=20 -t 24:00:00
-clear; srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH bash -c '/home/ganruyi/anaconda3/bin/python $CMD'
-# clear; srun --job-name=t5_cn_small_pretrain_v2 --jobid=153124 --nodes=1 --ntasks-per-node=8 --gres=gpu:8 --cpus-per-task=30 -o %x-%j.log -e %x-%j.err singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH bash -c '/home/ganruyi/anaconda3/bin/python $CMD'
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/data/ofa_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/data/ofa_dataset.py
deleted file mode 100644
index 028e92c6edc0834403559a85c55bb259622a8462..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/data/ofa_dataset.py
+++ /dev/null
@@ -1,79 +0,0 @@
-# Copyright 2022 The OFA-Sys Team.
-# All rights reserved.
-# This source code is licensed under the Apache 2.0 license
-# found in the LICENSE file in the root directory.
-
-import logging
-import re
-import torch.utils.data
-from fairseq.data import FairseqDataset
-
-logger = logging.getLogger(__name__)
-
-
-class OFADataset(FairseqDataset):
- def __init__(self, split, dataset, bpe, src_dict, tgt_dict):
- self.split = split
- self.dataset = dataset
- self.bpe = bpe
- self.src_dict = src_dict
- self.tgt_dict = tgt_dict
-
- self.bos = src_dict.bos()
- self.eos = src_dict.eos()
- self.pad = src_dict.pad()
- self.bos_item = torch.LongTensor([self.bos])
- self.eos_item = torch.LongTensor([self.eos])
-
- def __len__(self):
- return len(self.dataset)
-
- def encode_text(self, text, length=None, append_bos=False, append_eos=False, use_bpe=True):
- s = self.tgt_dict.encode_line(
- line=self.bpe.encode(text) if use_bpe else text,
- add_if_not_exist=False,
- append_eos=False
- ).long()
- if length is not None:
- s = s[:length]
- if append_bos:
- s = torch.cat([self.bos_item, s])
- if append_eos:
- s = torch.cat([s, self.eos_item])
- return s
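-
- # Illustrative sketch (assumed call, not from the surrounding code): with append_bos and
- # append_eos set, encode_text(' a photo of a dog') returns a LongTensor of BPE ids wrapped
- # in BOS/EOS; any `length` limit is applied before the markers are added.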
-
- def pre_question(self, question, max_ques_words):
- question = question.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ')
-
- question = re.sub(
- r"\s{2,}",
- ' ',
- question,
- )
- question = question.rstrip('\n')
- question = question.strip(' ')
-
- # truncate question
- question_words = question.split(' ')
- if len(question_words) > max_ques_words:
- question = ' '.join(question_words[:max_ques_words])
-
- return question
-
- def pre_caption(self, caption, max_words):
- caption = caption.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ').replace('<person>', 'person')
-
- caption = re.sub(
- r"\s{2,}",
- ' ',
- caption,
- )
- caption = caption.rstrip('\n')
- caption = caption.strip(' ')
-
- # truncate caption
- caption_words = caption.split(' ')
- if len(caption_words) > max_words:
- caption = ' '.join(caption_words[:max_words])
-
- return caption
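-
- # Illustrative behaviour (a sketch): pre_caption('A man -- riding / a horse', 3) lowercases
- # the text, maps '-' and '/' to spaces, collapses repeated whitespace and keeps at most the
- # first three words, giving 'a man riding'.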
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_sequence_scorer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_sequence_scorer.py
deleted file mode 100644
index 42f9447b599bcd7a9913aec37d94ea5078ff43a3..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_sequence_scorer.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import unittest
-
-import tests.utils as test_utils
-import torch
-from fairseq.sequence_scorer import SequenceScorer
-
-
-class TestSequenceScorer(unittest.TestCase):
- def test_sequence_scorer(self):
- # construct dummy dictionary
- d = test_utils.dummy_dictionary(vocab_size=2)
- self.assertEqual(d.pad(), 1)
- self.assertEqual(d.eos(), 2)
- self.assertEqual(d.unk(), 3)
- eos = d.eos()
- w1 = 4
- w2 = 5
-
- # construct dataloader
- data = [
- {
- "source": torch.LongTensor([w1, w2, eos]),
- "target": torch.LongTensor([w1, w2, w1, eos]),
- },
- {
- "source": torch.LongTensor([w2, eos]),
- "target": torch.LongTensor([w2, w1, eos]),
- },
- {
- "source": torch.LongTensor([w2, eos]),
- "target": torch.LongTensor([w2, eos]),
- },
- ]
- data_itr = test_utils.dummy_dataloader(data)
-
- # specify expected output probabilities
- args = argparse.Namespace()
- unk = 0.0
- args.beam_probs = [
- # step 0:
- torch.FloatTensor(
- [
- # eos w1 w2
- [0.0, unk, 0.6, 0.4], # sentence 1
- [0.0, unk, 0.4, 0.6], # sentence 2
- [0.0, unk, 0.7, 0.3], # sentence 3
- ]
- ),
- # step 1:
- torch.FloatTensor(
- [
- # eos w1 w2
- [0.0, unk, 0.2, 0.7], # sentence 1
- [0.0, unk, 0.8, 0.2], # sentence 2
- [0.7, unk, 0.1, 0.2], # sentence 3
- ]
- ),
- # step 2:
- torch.FloatTensor(
- [
- # eos w1 w2
- [0.10, unk, 0.50, 0.4], # sentence 1
- [0.15, unk, 0.15, 0.7], # sentence 2
- [0.00, unk, 0.00, 0.0], # sentence 3
- ]
- ),
- # step 3:
- torch.FloatTensor(
- [
- # eos w1 w2
- [0.9, unk, 0.05, 0.05], # sentence 1
- [0.0, unk, 0.00, 0.0], # sentence 2
- [0.0, unk, 0.00, 0.0], # sentence 3
- ]
- ),
- ]
- expected_scores = [
- [0.6, 0.7, 0.5, 0.9], # sentence 1
- [0.6, 0.8, 0.15], # sentence 2
- [0.3, 0.7], # sentence 3
- ]
-
- task = test_utils.TestTranslationTask.setup_task(args, d, d)
- model = task.build_model(args)
- scorer = SequenceScorer(task.target_dictionary)
- for sample in data_itr:
- hypos = task.inference_step(scorer, [model], sample)
- for id, hypos_id in zip(sample["id"].tolist(), hypos):
- self.assertHypoTokens(hypos_id[0], data[id]["target"])
- self.assertHypoScore(hypos_id[0], expected_scores[id])
-
- def assertHypoTokens(self, hypo, tokens):
- self.assertTensorEqual(hypo["tokens"], torch.LongTensor(tokens))
-
- def assertHypoScore(self, hypo, pos_probs, normalized=True, lenpen=1.0):
- pos_scores = torch.FloatTensor(pos_probs).log()
- self.assertAlmostEqual(hypo["positional_scores"], pos_scores)
- self.assertEqual(pos_scores.numel(), hypo["tokens"].numel())
- score = pos_scores.sum()
- if normalized:
- score /= pos_scores.numel() ** lenpen
- self.assertLess(abs(score - hypo["score"]), 1e-6)
-
- def assertAlmostEqual(self, t1, t2):
- self.assertEqual(t1.size(), t2.size(), "size mismatch")
- self.assertLess((t1 - t2).abs().max(), 1e-4)
-
- def assertTensorEqual(self, t1, t2):
- self.assertEqual(t1.size(), t2.size(), "size mismatch")
- self.assertEqual(t1.ne(t2).long().sum(), 0)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/HugoDzz/spaceship_drift/src/app.css b/spaces/HugoDzz/spaceship_drift/src/app.css
deleted file mode 100644
index 6155d5df62ef089fd9fced5266de10df57e1c670..0000000000000000000000000000000000000000
--- a/spaces/HugoDzz/spaceship_drift/src/app.css
+++ /dev/null
@@ -1,14 +0,0 @@
-@tailwind base;
-@tailwind components;
-@tailwind utilities;
-
-@layer base {
-
- @font-face {
- font-family: "Hellovetica";
- font-weight: 300;
- src : local("Hellovetica"), url("/fonts/hellovetica.ttf");
- font-display: swap;
- }
-
- }
\ No newline at end of file
diff --git a/spaces/HugoDzz/spaceship_drift/tailwind.config.js b/spaces/HugoDzz/spaceship_drift/tailwind.config.js
deleted file mode 100644
index 186f731d89a9ad8fca203cd5150f41eed9aca2e1..0000000000000000000000000000000000000000
--- a/spaces/HugoDzz/spaceship_drift/tailwind.config.js
+++ /dev/null
@@ -1,12 +0,0 @@
-/** @type {import('tailwindcss').Config} */
-export default {
- content: ["./src/**/*.{html,js,svelte,ts}"],
- theme: {
- extend: {
- fontFamily: {
- Hellovetica: ["Hellovetica"]
- },
- },
- },
- plugins: [],
-};
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/__init__.py
deleted file mode 100644
index 6264236915a7269a4d920ee8213004374dd86a9a..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/IanNathaniel/Zero-DCE/app.py b/spaces/IanNathaniel/Zero-DCE/app.py
deleted file mode 100644
index 0a346dc8fba32a07b014aeaaa8c18f112271d038..0000000000000000000000000000000000000000
--- a/spaces/IanNathaniel/Zero-DCE/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import gradio as gr
-import torch
-import torch.nn as nn
-import torchvision
-import torch.backends.cudnn as cudnn
-import torch.optim
-import os
-import sys
-import argparse
-import dataloader
-import model
-import numpy as np
-from torchvision import transforms
-from PIL import Image
-import glob
-
-
-def lowlight(image):
- os.environ['CUDA_VISIBLE_DEVICES']=''
- data_lowlight = Image.open(image)
-
- data_lowlight = (np.asarray(data_lowlight)/255.0)
- data_lowlight = torch.from_numpy(data_lowlight).float()
- data_lowlight = data_lowlight.permute(2,0,1)
- data_lowlight = data_lowlight.cpu().unsqueeze(0)
-
- DCE_net = model.enhance_net_nopool().cpu()
- DCE_net.load_state_dict(torch.load('Epoch99.pth', map_location=torch.device('cpu')))
-
- _,enhanced_image,_ = DCE_net(data_lowlight)
-
- torchvision.utils.save_image(enhanced_image, f'01.png')
-
- return '01.png'
-
-
-title = "Low-Light Image Enhancement using Zero-DCE"
-description = "Gradio Demo for Low-Light Enhancement using Zero-DCE. The model improves the quality of images that have poor contrast, low brightness, and suboptimal exposure. To use it, simply upload your image, or click one of the examples to load them. Check out the original paper and the GitHub repo at the links below. "
-article = "
- ) : null
-}
diff --git a/spaces/LucasCodeBreak/MusicGen/audiocraft/__init__.py b/spaces/LucasCodeBreak/MusicGen/audiocraft/__init__.py
deleted file mode 100644
index 6b8594f470200ff5c000542ef115375ed69b749c..0000000000000000000000000000000000000000
--- a/spaces/LucasCodeBreak/MusicGen/audiocraft/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from . import data, modules, models
-
-__version__ = '0.0.2a2'
diff --git a/spaces/MEKHANE/3D_Ken_Burns/README.md b/spaces/MEKHANE/3D_Ken_Burns/README.md
deleted file mode 100644
index 5ebf6e1bd079a31bd7a04c1c19a14fd4daa94b8f..0000000000000000000000000000000000000000
--- a/spaces/MEKHANE/3D_Ken_Burns/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 3D Ken Burns
-emoji: 📉
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MGLDZM/chgpt/Dockerfile b/spaces/MGLDZM/chgpt/Dockerfile
deleted file mode 100644
index 1afcab8b618328ab31269e95983b502b6b4baecb..0000000000000000000000000000000000000000
--- a/spaces/MGLDZM/chgpt/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM python:3.9
-
-WORKDIR /code
-COPY ./requirements.txt /code/requirements.txt
-
-
-RUN useradd -m -u 1000 user
-
-USER user
-
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-WORKDIR $HOME/app
-
-
-COPY --chown=user . $HOME/app
-RUN pip install --user --no-cache-dir --upgrade -r /code/requirements.txt
-#CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
-CMD ["hypercorn", "main:app", "--workers", "6", "--bind", "0.0.0.0:7860"]
-
diff --git a/spaces/MRiwu/Collection/text/shanghainese.py b/spaces/MRiwu/Collection/text/shanghainese.py
deleted file mode 100644
index 1c28c17d0dc0d920fd222c909a53d703c95e043b..0000000000000000000000000000000000000000
--- a/spaces/MRiwu/Collection/text/shanghainese.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import re
-import cn2an
-import opencc
-
-
-converter = opencc.OpenCC('chinese_dialect_lexicons/zaonhe')
-
-# List of (Latin alphabet, ipa) pairs:
-_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('A', 'ᴇ'),
- ('B', 'bi'),
- ('C', 'si'),
- ('D', 'di'),
- ('E', 'i'),
- ('F', 'ᴇf'),
- ('G', 'dʑi'),
- ('H', 'ᴇtɕʰ'),
- ('I', 'ᴀi'),
- ('J', 'dʑᴇ'),
- ('K', 'kʰᴇ'),
- ('L', 'ᴇl'),
- ('M', 'ᴇm'),
- ('N', 'ᴇn'),
- ('O', 'o'),
- ('P', 'pʰi'),
- ('Q', 'kʰiu'),
- ('R', 'ᴀl'),
- ('S', 'ᴇs'),
- ('T', 'tʰi'),
- ('U', 'ɦiu'),
- ('V', 'vi'),
- ('W', 'dᴀbɤliu'),
- ('X', 'ᴇks'),
- ('Y', 'uᴀi'),
- ('Z', 'zᴇ')
-]]
-
-
-def _number_to_shanghainese(num):
- num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两')
- return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num)
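-
-# Illustrative behaviour (a sketch, assuming cn2an.an2cn('12') yields '一十二'):
-#   _number_to_shanghainese('12') -> '十二', _number_to_shanghainese('2') -> '两';
-#   the trailing re.sub restores '二' after '十'/'廿' while '两' is kept elsewhere.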
-
-
-def number_to_shanghainese(text):
- return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text)
-
-
-def latin_to_ipa(text):
- for regex, replacement in _latin_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def shanghainese_to_ipa(text):
- text = number_to_shanghainese(text.upper())
- text = converter.convert(text).replace('-','').replace('$',' ')
- text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text)
- text = re.sub(r'[、;:]', ',', text)
- text = re.sub(r'\s*,\s*', ', ', text)
- text = re.sub(r'\s*。\s*', '. ', text)
- text = re.sub(r'\s*?\s*', '? ', text)
- text = re.sub(r'\s*!\s*', '! ', text)
- text = re.sub(r'\s*$', '', text)
- return text
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/transforms.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/transforms.py
deleted file mode 100644
index 3ad346661f84b0647026e130a552c4b38b83e2ac..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/transforms.py
+++ /dev/null
@@ -1,102 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-from torch.nn import functional as F
-from torchvision.transforms.functional import resize, to_pil_image # type: ignore
-
-from copy import deepcopy
-from typing import Tuple
-
-
-class ResizeLongestSide:
- """
- Resizes images to longest side 'target_length', as well as provides
- methods for resizing coordinates and boxes. Provides methods for
- transforming both numpy array and batched torch tensors.
- """
-
- def __init__(self, target_length: int) -> None:
- self.target_length = target_length
-
- def apply_image(self, image: np.ndarray) -> np.ndarray:
- """
- Expects a numpy array with shape HxWxC in uint8 format.
- """
- target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length)
- return np.array(resize(to_pil_image(image), target_size))
-
- def apply_coords(self, coords: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray:
- """
- Expects a numpy array of length 2 in the final dimension. Requires the
- original image size in (H, W) format.
- """
- old_h, old_w = original_size
- new_h, new_w = self.get_preprocess_shape(
- original_size[0], original_size[1], self.target_length
- )
- coords = deepcopy(coords).astype(float)
- coords[..., 0] = coords[..., 0] * (new_w / old_w)
- coords[..., 1] = coords[..., 1] * (new_h / old_h)
- return coords
-
- def apply_boxes(self, boxes: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray:
- """
- Expects a numpy array shape Bx4. Requires the original image size
- in (H, W) format.
- """
- boxes = self.apply_coords(boxes.reshape(-1, 2, 2), original_size)
- return boxes.reshape(-1, 4)
-
- def apply_image_torch(self, image: torch.Tensor) -> torch.Tensor:
- """
- Expects batched images with shape BxCxHxW and float format. This
- transformation may not exactly match apply_image. apply_image is
- the transformation expected by the model.
- """
- # Expects an image in BCHW format. May not exactly match apply_image.
- target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length)
- return F.interpolate(
- image, target_size, mode="bilinear", align_corners=False, antialias=True
- )
-
- def apply_coords_torch(
- self, coords: torch.Tensor, original_size: Tuple[int, ...]
- ) -> torch.Tensor:
- """
- Expects a torch tensor with length 2 in the last dimension. Requires the
- original image size in (H, W) format.
- """
- old_h, old_w = original_size
- new_h, new_w = self.get_preprocess_shape(
- original_size[0], original_size[1], self.target_length
- )
- coords = deepcopy(coords).to(torch.float)
- coords[..., 0] = coords[..., 0] * (new_w / old_w)
- coords[..., 1] = coords[..., 1] * (new_h / old_h)
- return coords
-
- def apply_boxes_torch(
- self, boxes: torch.Tensor, original_size: Tuple[int, ...]
- ) -> torch.Tensor:
- """
- Expects a torch tensor with shape Bx4. Requires the original image
- size in (H, W) format.
- """
- boxes = self.apply_coords_torch(boxes.reshape(-1, 2, 2), original_size)
- return boxes.reshape(-1, 4)
-
- @staticmethod
- def get_preprocess_shape(oldh: int, oldw: int, long_side_length: int) -> Tuple[int, int]:
- """
- Compute the output size given input size and target long side length.
- """
- scale = long_side_length * 1.0 / max(oldh, oldw)
- newh, neww = oldh * scale, oldw * scale
- neww = int(neww + 0.5)
- newh = int(newh + 0.5)
- return (newh, neww)
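-
-# Worked example (illustrative): get_preprocess_shape(600, 800, 1024) returns (768, 1024),
-# since scale = 1024 / 800 = 1.28, so the 600 side becomes 768 and the 800 side becomes 1024.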
diff --git a/spaces/Makiing/coolb-in-gtest/src/pages/api/sydney.ts b/spaces/Makiing/coolb-in-gtest/src/pages/api/sydney.ts
deleted file mode 100644
index 0e7bbf23d77c2e1a6635185a060eeee58b8c8e66..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/pages/api/sydney.ts
+++ /dev/null
@@ -1,62 +0,0 @@
-import { NextApiRequest, NextApiResponse } from 'next'
-import { WebSocket, debug } from '@/lib/isomorphic'
-import { BingWebBot } from '@/lib/bots/bing'
-import { websocketUtils } from '@/lib/bots/bing/utils'
-import { WatchDog, createHeaders } from '@/lib/utils'
-
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- const conversationContext = req.body
- const headers = createHeaders(req.cookies)
- debug(headers)
- res.setHeader('Content-Type', 'text/stream; charset=UTF-8')
-
- const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', {
- headers: {
- ...headers,
- 'accept-language': 'zh-CN,zh;q=0.9',
- 'cache-control': 'no-cache',
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- pragma: 'no-cache',
- }
- })
-
- const closeDog = new WatchDog()
- const timeoutDog = new WatchDog()
- ws.onmessage = (event) => {
- timeoutDog.watch(() => {
- ws.send(websocketUtils.packMessage({ type: 6 }))
- }, 1500)
- closeDog.watch(() => {
- ws.close()
- }, 10000)
- res.write(event.data)
- if (/\{"type":([367])\}/.test(String(event.data))) {
- const type = parseInt(RegExp.$1, 10)
- debug('connection type', type)
- if (type === 3) {
- ws.close()
- } else {
- ws.send(websocketUtils.packMessage({ type }))
- }
- }
- }
-
- ws.onclose = () => {
- timeoutDog.reset()
- closeDog.reset()
- debug('connection close')
- res.end()
- }
-
- await new Promise((resolve) => ws.onopen = resolve)
- ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 }))
- ws.send(websocketUtils.packMessage({ type: 6 }))
- ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!)))
- req.socket.once('close', () => {
- ws.close()
- if (!res.closed) {
- res.end()
- }
- })
-}
diff --git a/spaces/MathysL/AutoGPT4/autogpt/chat.py b/spaces/MathysL/AutoGPT4/autogpt/chat.py
deleted file mode 100644
index 1f6bca96eb216c667656b50f131006b83c681065..0000000000000000000000000000000000000000
--- a/spaces/MathysL/AutoGPT4/autogpt/chat.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import time
-
-from openai.error import RateLimitError
-
-from autogpt import token_counter
-from autogpt.config import Config
-from autogpt.llm_utils import create_chat_completion
-from autogpt.logs import logger
-
-cfg = Config()
-
-
-def create_chat_message(role, content):
- """
- Create a chat message with the given role and content.
-
- Args:
- role (str): The role of the message sender, e.g., "system", "user", or "assistant".
- content (str): The content of the message.
-
- Returns:
- dict: A dictionary containing the role and content of the message.
- """
- return {"role": role, "content": content}
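-
-# Illustrative example (a sketch, not part of the original module):
-#   create_chat_message("user", "Hello") == {"role": "user", "content": "Hello"}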
-
-
-def generate_context(prompt, relevant_memory, full_message_history, model):
- current_context = [
- create_chat_message("system", prompt),
- create_chat_message(
- "system", f"The current time and date is {time.strftime('%c')}"
- ),
- create_chat_message(
- "system",
- f"This reminds you of these events from your past:\n{relevant_memory}\n\n",
- ),
- ]
-
- # Add messages from the full message history until we reach the token limit
- next_message_to_add_index = len(full_message_history) - 1
- insertion_index = len(current_context)
- # Count the currently used tokens
- current_tokens_used = token_counter.count_message_tokens(current_context, model)
- return (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- )
-
-
-# TODO: Change debug from hardcode to argument
-def chat_with_ai(
- prompt, user_input, full_message_history, permanent_memory, token_limit
-):
- """Interact with the OpenAI API, sending the prompt, user input, message history,
- and permanent memory."""
- while True:
- try:
- """
- Interact with the OpenAI API, sending the prompt, user input,
- message history, and permanent memory.
-
- Args:
- prompt (str): The prompt explaining the rules to the AI.
- user_input (str): The input from the user.
- full_message_history (list): The list of all messages sent between the
- user and the AI.
- permanent_memory (Obj): The memory object containing the permanent
- memory.
- token_limit (int): The maximum number of tokens allowed in the API call.
-
- Returns:
- str: The AI's response.
- """
- model = cfg.fast_llm_model # TODO: Change model from hardcode to argument
- # Reserve 1000 tokens for the response
-
- logger.debug(f"Token limit: {token_limit}")
- send_token_limit = token_limit - 1000
-
- relevant_memory = (
- ""
- if len(full_message_history) == 0
- else permanent_memory.get_relevant(str(full_message_history[-9:]), 10)
- )
-
- logger.debug(f"Memory Stats: {permanent_memory.get_stats()}")
-
- (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- ) = generate_context(prompt, relevant_memory, full_message_history, model)
-
- while current_tokens_used > 2500:
- # remove memories until we are under 2500 tokens
- relevant_memory = relevant_memory[:-1]
- (
- next_message_to_add_index,
- current_tokens_used,
- insertion_index,
- current_context,
- ) = generate_context(
- prompt, relevant_memory, full_message_history, model
- )
-
- current_tokens_used += token_counter.count_message_tokens(
- [create_chat_message("user", user_input)], model
- ) # Account for user input (appended later)
-
- while next_message_to_add_index >= 0:
- # print (f"CURRENT TOKENS USED: {current_tokens_used}")
- message_to_add = full_message_history[next_message_to_add_index]
-
- tokens_to_add = token_counter.count_message_tokens(
- [message_to_add], model
- )
- if current_tokens_used + tokens_to_add > send_token_limit:
- break
-
- # Add the most recent message to the start of the current context,
- # after the two system prompts.
- current_context.insert(
- insertion_index, full_message_history[next_message_to_add_index]
- )
-
- # Count the currently used tokens
- current_tokens_used += tokens_to_add
-
- # Move to the next most recent message in the full message history
- next_message_to_add_index -= 1
-
- # Append user input, the length of this is accounted for above
- current_context.extend([create_chat_message("user", user_input)])
-
- # Calculate remaining tokens
- tokens_remaining = token_limit - current_tokens_used
- # assert tokens_remaining >= 0, "Tokens remaining is negative.
- # This should never happen, please submit a bug report at
- # https://www.github.com/Torantulino/Auto-GPT"
-
- # Debug print the current context
- logger.debug(f"Token limit: {token_limit}")
- logger.debug(f"Send Token Count: {current_tokens_used}")
- logger.debug(f"Tokens remaining for response: {tokens_remaining}")
- logger.debug("------------ CONTEXT SENT TO AI ---------------")
- for message in current_context:
- # Skip printing the prompt
- if message["role"] == "system" and message["content"] == prompt:
- continue
- logger.debug(f"{message['role'].capitalize()}: {message['content']}")
- logger.debug("")
- logger.debug("----------- END OF CONTEXT ----------------")
-
- # TODO: use a model defined elsewhere, so that model can contain
- # temperature and other settings we care about
- assistant_reply = create_chat_completion(
- model=model,
- messages=current_context,
- max_tokens=tokens_remaining,
- )
-
- # Update full message history
- full_message_history.append(create_chat_message("user", user_input))
- full_message_history.append(
- create_chat_message("assistant", assistant_reply)
- )
-
- return assistant_reply
- except RateLimitError:
- # TODO: When we switch to langchain, this is built in
- print("Error: ", "API Rate Limit Reached. Waiting 10 seconds...")
- time.sleep(10)
diff --git a/spaces/MetaWabbit/Auto-GPT/tests.py b/spaces/MetaWabbit/Auto-GPT/tests.py
deleted file mode 100644
index 62f76da8ac4925ef6cdfcce0484612cf70959862..0000000000000000000000000000000000000000
--- a/spaces/MetaWabbit/Auto-GPT/tests.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import unittest
-
-import coverage
-
-if __name__ == "__main__":
- # Start coverage collection
- cov = coverage.Coverage()
- cov.start()
-
- # Load all tests from the 'autogpt/tests' package
- suite = unittest.defaultTestLoader.discover("./tests")
-
- # Run the tests
- unittest.TextTestRunner().run(suite)
-
- # Stop coverage collection
- cov.stop()
- cov.save()
-
- # Report the coverage
- cov.report(show_missing=True)
diff --git a/spaces/Mileena/CLIP/Dockerfile b/spaces/Mileena/CLIP/Dockerfile
deleted file mode 100644
index f082970621660a3a398d4266140ceb3a4baa4895..0000000000000000000000000000000000000000
--- a/spaces/Mileena/CLIP/Dockerfile
+++ /dev/null
@@ -1,6 +0,0 @@
-FROM argilla/argilla-quickstart:latest
-
-# Define datasets to preload: full=all datasets, single=one dataset, and none=no datasets.
-ENV LOAD_DATASETS=single
-
-CMD whoami && /start_quickstart_argilla.sh
\ No newline at end of file
diff --git a/spaces/Mileena/CLIP/app.py b/spaces/Mileena/CLIP/app.py
deleted file mode 100644
index 218bf797ab1ac09e10b9f4e722d56662b5fcc540..0000000000000000000000000000000000000000
--- a/spaces/Mileena/CLIP/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/microsoft/SportsBERT").launch()
\ No newline at end of file
diff --git a/spaces/Mk-ai/README/README.md b/spaces/Mk-ai/README/README.md
deleted file mode 100644
index 8cfe544c3ca1b8fa51f253f2c40f40f43f736fae..0000000000000000000000000000000000000000
--- a/spaces/Mk-ai/README/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: README
-emoji: ⚡
-colorFrom: gray
-colorTo: yellow
-sdk: static
-pinned: false
----
-
-Edit this `README.md` markdown file to author your organization card 🔥
diff --git a/spaces/MrVicente/RA-BART/custom_bart/bart_attention.py b/spaces/MrVicente/RA-BART/custom_bart/bart_attention.py
deleted file mode 100644
index 5c3fbb2107513cf645f7d13dc56e86b9f4b8fe15..0000000000000000000000000000000000000000
--- a/spaces/MrVicente/RA-BART/custom_bart/bart_attention.py
+++ /dev/null
@@ -1,313 +0,0 @@
-#############################
-# Imports
-#############################
-
-# Python modules
-from typing import Optional, Tuple
-# Remote modules
-import torch
-from torch import nn
-
-# Local modules
-from .attention_utils import (
- create_layer_with_commonsense_on_specific_head,
- find_head_to_mask,
- convert_relations_to_binary_mask,
- update_weights_regarding_relations_on_specific_head
-)
-
-
-class BartCustomAttention(nn.Module):
- """Multi-headed attention from 'Attention Is All You Need' paper"""
-
- def __init__(
- self,
- embed_dim: int,
- num_heads: int,
- dropout: float = 0.0,
- is_decoder: bool = False,
- bias: bool = True,
- num_relation_kinds: int = 0,
- use_same_relation_kv_emb: bool = True,
- heads_mask: Optional[torch.Tensor] = None,
- ):
- super().__init__()
- self.embed_dim = embed_dim
- self.num_heads = num_heads
- self.dropout = dropout
- self.head_dim = embed_dim // num_heads
-
- if (self.head_dim * num_heads) != self.embed_dim:
- raise ValueError(
- f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}"
- f" and `num_heads`: {num_heads})."
- )
- if heads_mask.size() != (self.num_heads,):
- raise ValueError(
- f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {heads_mask.size()}"
- )
- self.heads_mask = heads_mask
-
- self.scaling = self.head_dim**-0.5
- self.is_decoder = is_decoder
-
- self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
- self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
- self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
- self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
-
- self.num_relation_kinds = num_relation_kinds
- self.relation_k_emb = nn.Embedding(num_relation_kinds + 1, self.head_dim, padding_idx=0)
- if use_same_relation_kv_emb:
- self.relation_v_emb = self.relation_k_emb
- else:
- self.relation_v_emb = nn.Embedding(num_relation_kinds + 1, self.head_dim, padding_idx=0)
-
- self.k_rel_scale = 0.0
- self.v_rel_scale = 1.0
-
-
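- # _shape reshapes a (bsz, seq_len, embed_dim) projection into the per-head layout
- # (bsz, num_heads, seq_len, head_dim) consumed by the attention computation below.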
- def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
- return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- key_value_states: Optional[torch.Tensor] = None,
- past_key_value: Optional[Tuple[torch.Tensor]] = None,
- attention_mask: Optional[torch.Tensor] = None,
- layer_head_mask: Optional[torch.Tensor] = None,
- output_attentions: bool = False,
- relation_inputs: Optional[torch.Tensor] = None,
- ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
- """Input shape: Batch x Time x Channel"""
-
- #print('device:', hidden_states.device)
- # if key_value_states are provided this layer is used as a cross-attention layer
- # for the decoder
- is_cross_attention = key_value_states is not None
-
- bsz, tgt_len, embed_dim = hidden_states.size()
-
- #print(relation_inputs.shape, 'VS ', (bsz, tgt_len, tgt_len))
- if relation_inputs is None:
- # TODO
- print('oh no')
- relation_inputs = torch.zeros((bsz, tgt_len, tgt_len)).to('cuda').long()
- #print(relation_inputs.shape, ' | ', (bsz, tgt_len, tgt_len))
- assert relation_inputs.shape == (bsz, tgt_len, tgt_len)
-
- # (batch_size, seq_length, seq_length, self.num_relation_kinds, self.inner_dim // num_relation_kinds)
- relation_k_embeds = self.relation_k_emb(relation_inputs)
- relation_v_embeds = self.relation_v_emb(relation_inputs)
-
- # get query proj
- query_states = self.q_proj(hidden_states) * self.scaling
- # get key, value proj
- if is_cross_attention and past_key_value is not None:
- # reuse k,v, cross_attentions
- key_states = past_key_value[0]
- value_states = past_key_value[1]
- elif is_cross_attention:
- # cross_attentions
- key_states = self._shape(self.k_proj(key_value_states), -1, bsz)
- value_states = self._shape(self.v_proj(key_value_states), -1, bsz)
- elif past_key_value is not None:
- # reuse k, v, self_attention
- key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
- value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
- key_states = torch.cat([past_key_value[0], key_states], dim=2)
- value_states = torch.cat([past_key_value[1], value_states], dim=2)
- else:
- # self_attention
- key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
- value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
-
- if self.is_decoder:
- # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states.
- # Further calls to cross_attention layer can then reuse all cross-attention
- # key/value_states (first "if" case)
- # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of
- # all previous decoder key/value_states. Further calls to uni-directional self-attention
- # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
- # if encoder bi-directional self-attention `past_key_value` is always `None`
- past_key_value = (key_states, value_states)
-
- proj_shape = (bsz * self.num_heads, -1, self.head_dim)
- query_states = self._shape(query_states, tgt_len, bsz)
- src_len = key_states.size(2)
-
- # compute scores
- attn_weights = torch.matmul(
- query_states, key_states.transpose(3, 2)
- ) # equivalent of torch.einsum("bnqd,bnkd->bnqk", query_states, key_states), compatible with onnx op>9
-
- # q_t is [batch, seq_length, n_heads, dim_per_head]
- q_t = query_states.permute(0, 2, 1, 3)
- #print('qt.shape: ', q_t.shape)
- # r_t is [batch, seq_length, dim_per_head, seq_length]
- r_t = relation_k_embeds.transpose(-2, -1)
- #print('rt.shape: ', r_t.shape)
-
- q_tr_t_matmul = torch.matmul(q_t, r_t) # [batch, seq_length, n_heads, seq_length]
- q_tr_tmatmul_t = q_tr_t_matmul.permute(0, 2, 1, 3) # [batch, n_heads, seq_length, seq_length]
-
- # Make sure the impact of relation-aware attention is only applied on specific heads (k-part)
-
- #print("==========")
- #print('first K: ', q_tr_tmatmul_t.sum())
- """
- q_tr_tmatmul_t = self.layer_heads_relation_attention_update(
- self.heads_mask,
- q_tr_tmatmul_t,
- )
- """
- #print('second K: ', q_tr_tmatmul_t.sum())
- #print("==========")
-
- # give weight to influence
- #q_tr_tmatmul_t = 100.0 * q_tr_tmatmul_t
-
- # Add to scores
- #print('attn_weights k [before]', attn_weights)
- #print('attn_weights sum k [before]', attn_weights.sum())
- attn_weights += self.k_rel_scale * q_tr_tmatmul_t
- #attn_weights += 100.0 * q_tr_tmatmul_t
- #print('attn_weights k [after]: ', attn_weights)
- #print('attn_weights sum k [after]', attn_weights.sum())
- attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
-
- if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
- raise ValueError(
- f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is {attn_weights.size()}"
- )
-
- if attention_mask is not None:
- if attention_mask.size() != (bsz, 1, tgt_len, src_len):
- raise ValueError(
- f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
- )
- attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask
- attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
-
- attn_weights = nn.functional.softmax(attn_weights, dim=-1)
-
- # Wrong place... gonna comment
- """
- attn_weights = self.layer_heads_relation_attention_update(layer_head_mask,
- relation_inputs,
- attn_weights,
- bsz,
- tgt_len,
- src_len)
- """
- if layer_head_mask is not None:
- if layer_head_mask.size() != (self.num_heads,):
- raise ValueError(
- f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}"
- )
- attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
- attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
-
-
- if output_attentions:
- # this operation is a bit awkward, but it's required to
- # make sure that attn_weights keeps its gradient.
- # In order to do so, attn_weights have to be reshaped
- # twice and have to be reused in the following
- attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
- attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len)
- else:
- attn_weights_reshaped = None
-
- attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
-
- attn_output = torch.bmm(attn_probs, value_states.view(*proj_shape))
-
- #print('attn_probs.shape', attn_probs.shape)
- # w_t is [batch, seq_length, n_heads, seq_length]
- w_t = attn_probs.view(bsz, self.num_heads, tgt_len, src_len).permute(0, 2, 1, 3)
- #print('w_t.shape 1:', w_t.shape)
- #print('relation_v_embeds.shape', relation_v_embeds.shape)
- # [batch, seq_length, n_heads, seq_length]
- w_tr_matmul = torch.matmul(w_t, relation_v_embeds)
- #print('w_tr_matmul.shape 1:', w_tr_matmul.shape)
- #print('w_tr_matmul.shape 2:', w_tr_matmul.shape)
- # Make sure the impact of relation-aware attention is only applied on specific heads (v-part)
-
- #print("==========")
- #print('first V sum: ', w_tr_matmul.sum())
- #print('first V: ', w_tr_matmul[0])
- """
- w_tr_matmul = self.layer_heads_relation_attention_v_update(
- self.heads_mask,
- w_tr_matmul,
- bsz,
- tgt_len,
- )
- """
- w_tr_matmul = self.v_rel_scale * w_tr_matmul
- #print('second V sum: ', w_tr_matmul.sum())
- #print('second V: ', w_tr_matmul[0])
- #print("==========")
-
- w_tr_matmul = w_tr_matmul.permute(0, 2, 1, 3)
- w_tr_matmul = w_tr_matmul.reshape(bsz * self.num_heads, tgt_len, self.head_dim)
-
- #print('attn_output v [before]', attn_output)
- #print('attn_output sum v [before]', attn_output.sum())
- attn_output += w_tr_matmul
- #attn_output += 100.0 * w_tr_matmul
- #print('attn_output v [after]', attn_output)
- #print('attn_output sum v [after]', attn_output.sum())
- #raise Exception()
-
-
- if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
- raise ValueError(
- f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is {attn_output.size()}"
- )
-
- attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)
- attn_output = attn_output.transpose(1, 2)
-
- # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be
- # partitioned across GPUs when using tensor-parallelism.
- attn_output = attn_output.reshape(bsz, tgt_len, embed_dim)
-
- attn_output = self.out_proj(attn_output)
-
- return attn_output, attn_weights_reshaped, past_key_value
-
- def layer_heads_relation_attention_update(self,
- layer_head_mask,
- data,
- ):
- if layer_head_mask is not None:
- if layer_head_mask.size() != (self.num_heads,):
- raise ValueError(
- f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}"
- )
- #print('layer_head_mask:', layer_head_mask)
- masked_weights = layer_head_mask.view(self.num_heads, 1, 1) * data
- return masked_weights
- return data
-
- def layer_heads_relation_attention_v_update(self,
- layer_head_mask,
- data,
- bsz,
- tgt_len,
- ):
- if layer_head_mask is not None:
- if layer_head_mask.size() != (self.num_heads,):
- raise ValueError(
- f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}"
- )
- #relation_binary_mask = convert_relations_to_binary_mask(relation_inputs)
- #one_dimension_mask = relation_binary_mask.sum(-1)
- #relation_binary_mask = convert_relations_to_binary_mask(one_dimension_mask)
- # [16, 128, 16, 64]
- masked_weights = layer_head_mask.view(self.num_heads, 1, 1) * data.view(bsz, self.num_heads, tgt_len, self.head_dim)
- return masked_weights.view(bsz, tgt_len, self.num_heads, self.head_dim)
- return data
\ No newline at end of file
diff --git a/spaces/NATSpeech/PortaSpeech/utils/commons/meters.py b/spaces/NATSpeech/PortaSpeech/utils/commons/meters.py
deleted file mode 100644
index e38790e9f292ec843a820dad73c9795eb2ab8daa..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/utils/commons/meters.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import time
-import torch
-
-
-class AvgrageMeter(object):
-
- def __init__(self):
- self.reset()
-
- def reset(self):
- self.avg = 0
- self.sum = 0
- self.cnt = 0
-
- def update(self, val, n=1):
- self.sum += val * n
- self.cnt += n
- self.avg = self.sum / self.cnt
-
-
-class Timer:
- timer_map = {}
-
- def __init__(self, name, enable=False):
- if name not in Timer.timer_map:
- Timer.timer_map[name] = 0
- self.name = name
- self.enable = enable
-
- def __enter__(self):
- if self.enable:
- if torch.cuda.is_available():
- torch.cuda.synchronize()
- self.t = time.time()
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- if self.enable:
- if torch.cuda.is_available():
- torch.cuda.synchronize()
- Timer.timer_map[self.name] += time.time() - self.t
- if self.enable:
- print(f'[Timer] {self.name}: {Timer.timer_map[self.name]}')
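-
-# Illustrative usage sketch ('forward' and model(batch) are placeholders, not from this file):
-#   with Timer('forward', enable=True):
-#       model(batch)  # elapsed time accumulates in Timer.timer_map['forward'] and is printed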
diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/configs/factory.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/configs/factory.py
deleted file mode 100644
index d60ea1e01133fdfffd76ad54daf4ee20ed1e46e0..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/configs/factory.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Factory to provide model configs."""
-
-from official.modeling.hyperparams import params_dict
-from official.vision.detection.configs import maskrcnn_config
-from official.vision.detection.configs import retinanet_config
-from official.vision.detection.configs import shapemask_config
-
-
-def config_generator(model):
- """Model function generator."""
- if model == 'retinanet':
- default_config = retinanet_config.RETINANET_CFG
- restrictions = retinanet_config.RETINANET_RESTRICTIONS
- elif model == 'mask_rcnn':
- default_config = maskrcnn_config.MASKRCNN_CFG
- restrictions = maskrcnn_config.MASKRCNN_RESTRICTIONS
- elif model == 'shapemask':
- default_config = shapemask_config.SHAPEMASK_CFG
- restrictions = shapemask_config.SHAPEMASK_RESTRICTIONS
- else:
- raise ValueError('Model %s is not supported.' % model)
-
- return params_dict.ParamsDict(default_config, restrictions)
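-
-# Illustrative usage (a sketch): config_generator('retinanet') returns a ParamsDict built
-# from retinanet_config.RETINANET_CFG with RETINANET_RESTRICTIONS applied.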
diff --git a/spaces/Nattiman/chatsummarizercapstoneproject/README.md b/spaces/Nattiman/chatsummarizercapstoneproject/README.md
deleted file mode 100644
index de82b482c1d758571841ee1b8708d80139c04bfb..0000000000000000000000000000000000000000
--- a/spaces/Nattiman/chatsummarizercapstoneproject/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Chatsummarizercapstoneproject
-emoji: 🏃
-colorFrom: yellow
-colorTo: red
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Not-Grim-Refer/huggingface-transformers-agents/readme.md b/spaces/Not-Grim-Refer/huggingface-transformers-agents/readme.md
deleted file mode 100644
index a6a2e6530d5a4864099858b01e5c54a5be9d202e..0000000000000000000000000000000000000000
--- a/spaces/Not-Grim-Refer/huggingface-transformers-agents/readme.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: huggingface-transformers-agents
-emoji: 🌍
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.30.3
-app_file: app.py
-pinned: true
-license: mit
-
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-refer
\ No newline at end of file
diff --git a/spaces/Nultx/VITS-TTS/text/korean.py b/spaces/Nultx/VITS-TTS/text/korean.py
deleted file mode 100644
index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000
--- a/spaces/Nultx/VITS-TTS/text/korean.py
+++ /dev/null
@@ -1,210 +0,0 @@
-import re
-from jamo import h2j, j2hcj
-import ko_pron
-
-
-# This is a list of Korean classifiers preceded by pure Korean numerals.
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통'
-
-# List of (hangul, hangul divided) pairs:
-_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄳ', 'ㄱㅅ'),
- ('ㄵ', 'ㄴㅈ'),
- ('ㄶ', 'ㄴㅎ'),
- ('ㄺ', 'ㄹㄱ'),
- ('ㄻ', 'ㄹㅁ'),
- ('ㄼ', 'ㄹㅂ'),
- ('ㄽ', 'ㄹㅅ'),
- ('ㄾ', 'ㄹㅌ'),
- ('ㄿ', 'ㄹㅍ'),
- ('ㅀ', 'ㄹㅎ'),
- ('ㅄ', 'ㅂㅅ'),
- ('ㅘ', 'ㅗㅏ'),
- ('ㅙ', 'ㅗㅐ'),
- ('ㅚ', 'ㅗㅣ'),
- ('ㅝ', 'ㅜㅓ'),
- ('ㅞ', 'ㅜㅔ'),
- ('ㅟ', 'ㅜㅣ'),
- ('ㅢ', 'ㅡㅣ'),
- ('ㅑ', 'ㅣㅏ'),
- ('ㅒ', 'ㅣㅐ'),
- ('ㅕ', 'ㅣㅓ'),
- ('ㅖ', 'ㅣㅔ'),
- ('ㅛ', 'ㅣㅗ'),
- ('ㅠ', 'ㅣㅜ')
-]]
-
-# List of (Latin alphabet, hangul) pairs:
-_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', '에이'),
- ('b', '비'),
- ('c', '시'),
- ('d', '디'),
- ('e', '이'),
- ('f', '에프'),
- ('g', '지'),
- ('h', '에이치'),
- ('i', '아이'),
- ('j', '제이'),
- ('k', '케이'),
- ('l', '엘'),
- ('m', '엠'),
- ('n', '엔'),
- ('o', '오'),
- ('p', '피'),
- ('q', '큐'),
- ('r', '아르'),
- ('s', '에스'),
- ('t', '티'),
- ('u', '유'),
- ('v', '브이'),
- ('w', '더블유'),
- ('x', '엑스'),
- ('y', '와이'),
- ('z', '제트')
-]]
-
-# List of (ipa, lazy ipa) pairs:
-_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('t͡ɕ','ʧ'),
- ('d͡ʑ','ʥ'),
- ('ɲ','n^'),
- ('ɕ','ʃ'),
- ('ʷ','w'),
- ('ɭ','l`'),
- ('ʎ','ɾ'),
- ('ɣ','ŋ'),
- ('ɰ','ɯ'),
- ('ʝ','j'),
- ('ʌ','ə'),
- ('ɡ','g'),
- ('\u031a','#'),
- ('\u0348','='),
- ('\u031e',''),
- ('\u0320',''),
- ('\u0339','')
-]]
-
-
-def latin_to_hangul(text):
- for regex, replacement in _latin_to_hangul:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def divide_hangul(text):
- text = j2hcj(h2j(text))
- for regex, replacement in _hangul_divided:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def hangul_number(num, sino=True):
- '''Reference https://github.com/Kyubyong/g2pK'''
- num = re.sub(',', '', num)
-
- if num == '0':
- return '영'
- if not sino and num == '20':
- return '스무'
-
- digits = '123456789'
- names = '일이삼사오육칠팔구'
- digit2name = {d: n for d, n in zip(digits, names)}
-
- modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉'
- decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔'
- digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())}
- digit2dec = {d: dec for d, dec in zip(digits, decimals.split())}
-
- spelledout = []
- for i, digit in enumerate(num):
- i = len(num) - i - 1
- if sino:
- if i == 0:
- name = digit2name.get(digit, '')
- elif i == 1:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- else:
- if i == 0:
- name = digit2mod.get(digit, '')
- elif i == 1:
- name = digit2dec.get(digit, '')
- if digit == '0':
- if i % 4 == 0:
- last_three = spelledout[-min(3, len(spelledout)):]
- if ''.join(last_three) == '':
- spelledout.append('')
- continue
- else:
- spelledout.append('')
- continue
- if i == 2:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 3:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 4:
- name = digit2name.get(digit, '') + '만'
- name = name.replace('일만', '만')
- elif i == 5:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- elif i == 6:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 7:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 8:
- name = digit2name.get(digit, '') + '억'
- elif i == 9:
- name = digit2name.get(digit, '') + '십'
- elif i == 10:
- name = digit2name.get(digit, '') + '백'
- elif i == 11:
- name = digit2name.get(digit, '') + '천'
- elif i == 12:
- name = digit2name.get(digit, '') + '조'
- elif i == 13:
- name = digit2name.get(digit, '') + '십'
- elif i == 14:
- name = digit2name.get(digit, '') + '백'
- elif i == 15:
- name = digit2name.get(digit, '') + '천'
- spelledout.append(name)
- return ''.join(elem for elem in spelledout)
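-
-# Illustrative examples (a sketch based on the digit tables above):
-#   hangul_number('2', sino=True) -> '이', hangul_number('2', sino=False) -> '두'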
-
-
-def number_to_hangul(text):
- '''Reference https://github.com/Kyubyong/g2pK'''
- tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text))
- for token in tokens:
- num, classifier = token
- if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers:
- spelledout = hangul_number(num, sino=False)
- else:
- spelledout = hangul_number(num, sino=True)
- text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}')
- # digit by digit for remaining digits
- digits = '0123456789'
- names = '영일이삼사오육칠팔구'
- for d, n in zip(digits, names):
- text = text.replace(d, n)
- return text
-
-
-def korean_to_lazy_ipa(text):
- text = latin_to_hangul(text)
- text = number_to_hangul(text)
- text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text)
- for regex, replacement in _ipa_to_lazy_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def korean_to_ipa(text):
- text = korean_to_lazy_ipa(text)
- return text.replace('ʧ','tʃ').replace('ʥ','dʑ')
diff --git a/spaces/OAOA/DifFace/models/swinir.py b/spaces/OAOA/DifFace/models/swinir.py
deleted file mode 100644
index 083c8ab342b5aba3c9fb6e3a610bad9041b380d4..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/models/swinir.py
+++ /dev/null
@@ -1,1122 +0,0 @@
-# -----------------------------------------------------------------------------------
-# SwinIR: Image Restoration Using Swin Transformer, https://arxiv.org/abs/2108.10257
-# Originally Written by Ze Liu, Modified by Jingyun Liang.
-# -----------------------------------------------------------------------------------
-
-import math
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
-
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
-
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
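
As a sanity check on the two shape contracts documented above, `window_reverse` undoes `window_partition` exactly when `H` and `W` are multiples of the window size. A minimal sketch, assuming both functions are imported from this file:

```python
import torch

x = torch.randn(2, 8, 8, 3)                    # (B, H, W, C)
windows = window_partition(x, window_size=4)   # (B * (8//4) * (8//4), 4, 4, 3) == (8, 4, 4, 3)
restored = window_reverse(windows, 4, 8, 8)    # back to (2, 8, 8, 3)
assert torch.equal(x, restored)                # pure reshapes/permutes, so the round trip is exact
```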
-
-class WindowAttention(nn.Module):
- r""" Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both of shifted and non-shifted window.
-
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
-
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
- def extra_repr(self) -> str:
- return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'
-
- def flops(self, N):
- # calculate flops for 1 window with token length of N
- flops = 0
- # qkv = self.qkv(x)
- flops += N * self.dim * 3 * self.dim
- # attn = (q @ k.transpose(-2, -1))
- flops += self.num_heads * N * (self.dim // self.num_heads) * N
- # x = (attn @ v)
- flops += self.num_heads * N * N * (self.dim // self.num_heads)
- # x = self.proj(x)
- flops += N * self.dim * self.dim
- return flops
-
-
-class SwinTransformerBlock(nn.Module):
- r""" Swin Transformer Block.
-
- Args:
- dim (int): Number of input channels.
- input_resolution (tuple[int]): Input resolution.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.input_resolution = input_resolution
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
- if min(self.input_resolution) <= self.window_size:
- # if window size is larger than input resolution, we don't partition windows
- self.shift_size = 0
- self.window_size = min(self.input_resolution)
- assert 0 <= self.shift_size < self.window_size, "shift_size must be in the range [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- if self.shift_size > 0:
- attn_mask = self.calculate_mask(self.input_resolution)
- else:
- attn_mask = None
-
- self.register_buffer("attn_mask", attn_mask)
-
- def calculate_mask(self, x_size):
- # calculate attention mask for SW-MSA
- H, W = x_size
- img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
-
- return attn_mask
-
- def forward(self, x, x_size):
- H, W = x_size
- B, L, C = x.shape
- # assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- else:
- shifted_x = x
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA (recompute the mask when the test image size differs from the configured input resolution)
- if self.input_resolution == x_size:
- attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C
- else:
- attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device))
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x
-
- def extra_repr(self) -> str:
- return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
- f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
-
- def flops(self):
- flops = 0
- H, W = self.input_resolution
- # norm1
- flops += self.dim * H * W
- # W-MSA/SW-MSA
- nW = H * W / self.window_size / self.window_size
- flops += nW * self.attn.flops(self.window_size * self.window_size)
- # mlp
- flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
- # norm2
- flops += self.dim * H * W
- return flops
-
-
-class PatchMerging(nn.Module):
- r""" Patch Merging Layer.
-
- Args:
- input_resolution (tuple[int]): Resolution of input feature.
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.input_resolution = input_resolution
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x):
- """
- x: B, H*W, C
- """
- H, W = self.input_resolution
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
- assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) is not even."
-
- x = x.view(B, H, W, C)
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
-
- def extra_repr(self) -> str:
- return f"input_resolution={self.input_resolution}, dim={self.dim}"
-
- def flops(self):
- H, W = self.input_resolution
- flops = H * W * self.dim
- flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim
- return flops
-
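
A small shape check for `PatchMerging`, illustrating the docstring: each 2x2 neighborhood of patches is concatenated (4C channels) and linearly reduced to 2C, so the token count drops by 4x while the channel count doubles. A sketch, assuming the class above is importable:

```python
import torch

merge = PatchMerging(input_resolution=(8, 8), dim=32)
tokens = torch.randn(2, 8 * 8, 32)   # (B, H*W, C)
out = merge(tokens)
print(out.shape)                     # torch.Size([2, 16, 64]) -> (B, H/2*W/2, 2*C)
```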
-
-class BasicLayer(nn.Module):
- """ A basic Swin Transformer layer for one stage.
-
- Args:
- dim (int): Number of input channels.
- input_resolution (tuple[int]): Input resolution.
- depth (int): Number of blocks.
- num_heads (int): Number of attention heads.
- window_size (int): Local window size.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self, dim, input_resolution, depth, num_heads, window_size,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False):
-
- super().__init__()
- self.dim = dim
- self.input_resolution = input_resolution
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList([
- SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
- num_heads=num_heads, window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop, attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer)
- for i in range(depth)])
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x, x_size):
- for blk in self.blocks:
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, x_size)
- else:
- x = blk(x, x_size)
- if self.downsample is not None:
- x = self.downsample(x)
- return x
-
- def extra_repr(self) -> str:
- return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"
-
- def flops(self):
- flops = 0
- for blk in self.blocks:
- flops += blk.flops()
- if self.downsample is not None:
- flops += self.downsample.flops()
- return flops
-
-
-class RSTB(nn.Module):
- """Residual Swin Transformer Block (RSTB).
-
- Args:
- dim (int): Number of input channels.
- input_resolution (tuple[int]): Input resolution.
- depth (int): Number of blocks.
- num_heads (int): Number of attention heads.
- window_size (int): Local window size.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- img_size: Input image size.
- patch_size: Patch size.
- resi_connection: The convolutional block before residual connection.
- """
-
- def __init__(self, dim, input_resolution, depth, num_heads, window_size,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False,
- img_size=224, patch_size=4, resi_connection='1conv'):
- super(RSTB, self).__init__()
-
- self.dim = dim
- self.input_resolution = input_resolution
-
- self.residual_group = BasicLayer(dim=dim,
- input_resolution=input_resolution,
- depth=depth,
- num_heads=num_heads,
- window_size=window_size,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop, attn_drop=attn_drop,
- drop_path=drop_path,
- norm_layer=norm_layer,
- downsample=downsample,
- use_checkpoint=use_checkpoint)
-
- if resi_connection == '1conv':
- self.conv = nn.Conv2d(dim, dim, 3, 1, 1)
- elif resi_connection == '3conv':
- # to save parameters and memory
- self.conv = nn.Sequential(nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(dim // 4, dim // 4, 1, 1, 0),
- nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(dim // 4, dim, 3, 1, 1))
-
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim,
- norm_layer=None)
-
- self.patch_unembed = PatchUnEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim,
- norm_layer=None)
-
- def forward(self, x, x_size):
- return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x
-
- def flops(self):
- flops = 0
- flops += self.residual_group.flops()
- H, W = self.input_resolution
- flops += H * W * self.dim * self.dim * 9
- flops += self.patch_embed.flops()
- flops += self.patch_unembed.flops()
-
- return flops
-
-
-class PatchEmbed(nn.Module):
- r""" Image to Patch Embedding
-
- Args:
- img_size (int): Image size. Default: 224.
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
- self.img_size = img_size
- self.patch_size = patch_size
- self.patches_resolution = patches_resolution
- self.num_patches = patches_resolution[0] * patches_resolution[1]
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- x = x.flatten(2).transpose(1, 2) # B Ph*Pw C
- if self.norm is not None:
- x = self.norm(x)
- return x
-
- def flops(self):
- flops = 0
- H, W = self.img_size
- if self.norm is not None:
- flops += H * W * self.embed_dim
- return flops
-
-
-class PatchUnEmbed(nn.Module):
- r""" Image to Patch Unembedding
-
- Args:
- img_size (int): Image size. Default: 224.
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
- self.img_size = img_size
- self.patch_size = patch_size
- self.patches_resolution = patches_resolution
- self.num_patches = patches_resolution[0] * patches_resolution[1]
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- def forward(self, x, x_size):
- B, HW, C = x.shape
- x = x.transpose(1, 2).view(B, self.embed_dim, x_size[0], x_size[1])  # B, embed_dim, Ph, Pw
- return x
-
- def flops(self):
- flops = 0
- return flops
-
-
-class Upsample(nn.Sequential):
- """Upsample module.
-
- Args:
- scale (int): Scale factor. Supported scales: 2^n and 3.
- num_feat (int): Channel number of intermediate features.
- """
-
- def __init__(self, scale, num_feat):
- m = []
- if (scale & (scale - 1)) == 0: # scale = 2^n
- for _ in range(int(math.log(scale, 2))):
- m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(2))
- elif scale == 3:
- m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(3))
- else:
- raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.')
- super(Upsample, self).__init__(*m)
-
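
For power-of-two scales, `Upsample` stacks log2(scale) rounds of a 3x3 conv that expands channels 4x followed by `PixelShuffle(2)`; scale 3 uses a single 9x expansion with `PixelShuffle(3)`. A small sketch of the 4x case, assuming the class above is available:

```python
import torch

up = Upsample(scale=4, num_feat=64)   # two (Conv2d 64->256, PixelShuffle(2)) stages
x = torch.randn(1, 64, 16, 16)
print(up(x).shape)                    # torch.Size([1, 64, 64, 64])
```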
-
-class UpsampleOneStep(nn.Sequential):
- """UpsampleOneStep module (the difference with Upsample is that it always only has 1conv + 1pixelshuffle)
- Used in lightweight SR to save parameters.
-
- Args:
- scale (int): Scale factor. Supported scales: 2^n and 3.
- num_feat (int): Channel number of intermediate features.
-
- """
-
- def __init__(self, scale, num_feat, num_out_ch, input_resolution=None):
- self.num_feat = num_feat
- self.input_resolution = input_resolution
- m = []
- m.append(nn.Conv2d(num_feat, (scale ** 2) * num_out_ch, 3, 1, 1))
- m.append(nn.PixelShuffle(scale))
- super(UpsampleOneStep, self).__init__(*m)
-
- def flops(self):
- H, W = self.input_resolution
- flops = H * W * self.num_feat * 3 * 9
- return flops
-
-
-class SwinIR(nn.Module):
- r""" SwinIR
- A PyTorch impl of : `SwinIR: Image Restoration Using Swin Transformer`, based on Swin Transformer.
-
- Args:
- img_size (int | tuple(int)): Input image size. Default 64
- patch_size (int | tuple(int)): Patch size. Default: 1
- in_chans (int): Number of input image channels. Default: 3
- embed_dim (int): Patch embedding dimension. Default: 96
- depths (tuple(int)): Depth of each Swin Transformer layer.
- num_heads (tuple(int)): Number of attention heads in different layers.
- window_size (int): Window size. Default: 7
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
- drop_rate (float): Dropout rate. Default: 0
- attn_drop_rate (float): Attention dropout rate. Default: 0
- drop_path_rate (float): Stochastic depth rate. Default: 0.1
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
- patch_norm (bool): If True, add normalization after patch embedding. Default: True
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
- sf: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compress artifact reduction
- img_range: Image range. 1. or 255.
- upsampler: The reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None
- resi_connection: The convolutional block before residual connection. '1conv'/'3conv'
- """
-
- def __init__(self, img_size=64, patch_size=1, in_chans=3,
- embed_dim=96, depths=[6, 6, 6, 6], num_heads=[6, 6, 6, 6],
- window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None,
- drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
- norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
- use_checkpoint=False, sf=4, img_range=1., upsampler='',
- resi_connection='1conv', unshuffle=False, unshuffle_scale=None,
- **kwargs):
- super(SwinIR, self).__init__()
- num_in_ch = in_chans * (unshuffle_scale**2) if unshuffle else in_chans
- num_out_ch = in_chans
- num_feat = 64
- self.img_range = img_range
- if in_chans == 3:
- rgb_mean = (0.4488, 0.4371, 0.4040)
- self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1)
- else:
- self.mean = torch.zeros(1, 1, 1, 1)
- self.upscale = sf
- self.upsampler = upsampler
- self.window_size = window_size
- self.unshuffle_scale = unshuffle_scale
- self.unshuffle = unshuffle
-
- #####################################################################################################
- ################################### 1, shallow feature extraction ###################################
- if unshuffle:
- assert unshuffle_scale is not None
- self.conv_first = nn.Sequential(
- nn.PixelUnshuffle(sf),
- nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1),
- )
- else:
- self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1)
-
- #####################################################################################################
- ################################### 2, deep feature extraction ######################################
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.num_features = embed_dim
- self.mlp_ratio = mlp_ratio
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
- num_patches = self.patch_embed.num_patches
- patches_resolution = self.patch_embed.patches_resolution
- self.patches_resolution = patches_resolution
-
- # merge non-overlapping patches into image
- self.patch_unembed = PatchUnEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
-
- # absolute position embedding
- if self.ape:
- self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
- trunc_normal_(self.absolute_pos_embed, std=.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
-
- # build Residual Swin Transformer blocks (RSTB)
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = RSTB(dim=embed_dim,
- input_resolution=(patches_resolution[0],
- patches_resolution[1]),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=self.mlp_ratio,
- qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results
- norm_layer=norm_layer,
- downsample=None,
- use_checkpoint=use_checkpoint,
- img_size=img_size,
- patch_size=patch_size,
- resi_connection=resi_connection
-
- )
- self.layers.append(layer)
- self.norm = norm_layer(self.num_features)
-
- # build the last conv layer in deep feature extraction
- if resi_connection == '1conv':
- self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1)
- elif resi_connection == '3conv':
- # to save parameters and memory
- self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1),
- nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0),
- nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1))
-
- #####################################################################################################
- ################################ 3, high quality image reconstruction ################################
- if self.upsampler == 'pixelshuffle':
- # for classical SR
- self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.upsample = Upsample(sf, num_feat)
- self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
- elif self.upsampler == 'pixelshuffledirect':
- # for lightweight SR (to save parameters)
- self.upsample = UpsampleOneStep(sf, embed_dim, num_out_ch,
- (patches_resolution[0], patches_resolution[1]))
- elif self.upsampler == 'nearest+conv':
- # for real-world SR (less artifacts)
- self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- if self.upscale == 4:
- self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- elif self.upscale == 8:
- self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_up3 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
- self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
- else:
- # for image denoising and JPEG compression artifact reduction
- self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1)
-
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'absolute_pos_embed'}
-
- @torch.jit.ignore
- def no_weight_decay_keywords(self):
- return {'relative_position_bias_table'}
-
- def check_image_size(self, x):
- _, _, h, w = x.size()
- mod_pad_h = (self.window_size - h % self.window_size) % self.window_size
- mod_pad_w = (self.window_size - w % self.window_size) % self.window_size
- x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect')
- return x
-
- def forward_features(self, x):
- x_size = (x.shape[2], x.shape[3])
- x = self.patch_embed(x)
- if self.ape:
- x = x + self.absolute_pos_embed
- x = self.pos_drop(x)
-
- for layer in self.layers:
- x = layer(x, x_size)
-
- x = self.norm(x) # B L C
- x = self.patch_unembed(x, x_size)
-
- return x
-
- def forward(self, x):
- '''
- Args:
- x: b x c x h x w, range [0,1].
- To keep consistency with the diffusion model, we require the input image to be in the range [-1, 1].
- '''
- H, W = x.shape[2:]
- x = self.check_image_size(x)
-
- self.mean = self.mean.type_as(x)
- x = (x - self.mean) * self.img_range
-
- if self.upsampler == 'pixelshuffle':
- # for classical SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.conv_before_upsample(x)
- x = self.conv_last(self.upsample(x))
- elif self.upsampler == 'pixelshuffledirect':
- # for lightweight SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.upsample(x)
- elif self.upsampler == 'nearest+conv':
- # for real-world SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.conv_before_upsample(x)
- x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- if self.upscale == 4:
- x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- elif self.upscale == 8:
- x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- x = self.lrelu(self.conv_up3(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- x = self.conv_last(self.lrelu(self.conv_hr(x)))
- else:
- # for image denoising and JPEG compression artifact reduction
- x_first = self.conv_first(x)
- res = self.conv_after_body(self.forward_features(x_first)) + x_first
- x = x + self.conv_last(res)
-
- x = x / self.img_range + self.mean
-
- return x[:, :, :H*self.upscale, :W*self.upscale]
-
- def flops(self):
- flops = 0
- H, W = self.patches_resolution
- flops += H * W * 3 * self.embed_dim * 9
- flops += self.patch_embed.flops()
- for i, layer in enumerate(self.layers):
- flops += layer.flops()
- flops += H * W * 3 * self.embed_dim * self.embed_dim
- flops += self.upsample.flops()
- return flops
-
-class SwinIRLatent(nn.Module):
- r""" SwinIR
- A PyTorch impl of : `SwinIR: Image Restoration Using Swin Transformer`, based on Swin Transformer.
-
- Args:
- img_size (int | tuple(int)): Input image size. Default 64
- patch_size (int | tuple(int)): Patch size. Default: 1
- in_chans (int): Number of input image channels. Default: 3
- embed_dim (int): Patch embedding dimension. Default: 96
- depths (tuple(int)): Depth of each Swin Transformer layer.
- num_heads (tuple(int)): Number of attention heads in different layers.
- window_size (int): Window size. Default: 7
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
- drop_rate (float): Dropout rate. Default: 0
- attn_drop_rate (float): Attention dropout rate. Default: 0
- drop_path_rate (float): Stochastic depth rate. Default: 0.1
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
- patch_norm (bool): If True, add normalization after patch embedding. Default: True
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
- sf: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compress artifact reduction
- img_range: Image range. 1. or 255.
- upsampler: The reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None
- resi_connection: The convolutional block before residual connection. '1conv'/'3conv'
- """
-
- def __init__(self, img_size=64, patch_size=1, in_chans=3,
- embed_dim=96, depths=[6, 6, 6, 6], num_heads=[6, 6, 6, 6],
- window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None,
- drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
- norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
- use_checkpoint=False, sf=4, upsampler='',
- resi_connection='1conv', unshuffle=True, unshuffle_scale=None,
- **kwargs):
- super().__init__()
- num_in_ch = in_chans * (unshuffle_scale**2) if unshuffle else in_chans
- num_out_ch = in_chans
- num_feat = 64
- self.upscale = sf
- self.upsampler = upsampler
- self.window_size = window_size
- self.unshuffle = unshuffle
- self.unshuffle_scale = unshuffle_scale
-
- #####################################################################################################
- ################################### 1, shallow feature extraction ###################################
- if unshuffle:
- assert unshuffle_scale is not None
- self.conv_first = nn.Sequential(
- nn.PixelUnshuffle(unshuffle_scale),
- nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1),
- )
- else:
- self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1)
-
- #####################################################################################################
- ################################### 2, deep feature extraction ######################################
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.num_features = embed_dim
- self.mlp_ratio = mlp_ratio
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
- num_patches = self.patch_embed.num_patches
- patches_resolution = self.patch_embed.patches_resolution
- self.patches_resolution = patches_resolution
-
- # merge non-overlapping patches into image
- self.patch_unembed = PatchUnEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
-
- # absolute position embedding
- if self.ape:
- self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
- trunc_normal_(self.absolute_pos_embed, std=.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
-
- # build Residual Swin Transformer blocks (RSTB)
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = RSTB(dim=embed_dim,
- input_resolution=(patches_resolution[0],
- patches_resolution[1]),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=self.mlp_ratio,
- qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results
- norm_layer=norm_layer,
- downsample=None,
- use_checkpoint=use_checkpoint,
- img_size=img_size,
- patch_size=patch_size,
- resi_connection=resi_connection
-
- )
- self.layers.append(layer)
- self.norm = norm_layer(self.num_features)
-
- # build the last conv layer in deep feature extraction
- if resi_connection == '1conv':
- self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1)
- elif resi_connection == '3conv':
- # to save parameters and memory
- self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1),
- nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0),
- nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1))
-
- #####################################################################################################
- ################################ 3, high quality image reconstruction ################################
- if self.upsampler == 'pixelshuffle':
- # for classical SR
- self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.upsample = Upsample(sf, num_feat)
- self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
- elif self.upsampler == 'pixelshuffledirect':
- # for lightweight SR (to save parameters)
- self.upsample = UpsampleOneStep(sf, embed_dim, num_out_ch,
- (patches_resolution[0], patches_resolution[1]))
- elif self.upsampler == 'nearest+conv':
- # for real-world SR (less artifacts)
- self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- if self.upscale == 4:
- self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- elif self.upscale == 8:
- self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_up3 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
- self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
- else:
- # for image denoising and JPEG compression artifact reduction
- self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1)
-
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'absolute_pos_embed'}
-
- @torch.jit.ignore
- def no_weight_decay_keywords(self):
- return {'relative_position_bias_table'}
-
- def check_image_size(self, x):
- _, _, h, w = x.size()
- if self.unshuffle:
- assert h % (self.unshuffle_scale * self.window_size) == 0
- assert w % (self.unshuffle_scale * self.window_size) == 0
- else:
- assert h % self.window_size == 0
- assert w % self.window_size == 0
-
- def forward_features(self, x):
- x_size = (x.shape[2], x.shape[3])
- x = self.patch_embed(x)
- if self.ape:
- x = x + self.absolute_pos_embed
- x = self.pos_drop(x)
-
- for layer in self.layers:
- x = layer(x, x_size)
-
- x = self.norm(x) # B L C
- x = self.patch_unembed(x, x_size)
-
- return x
-
- def forward(self, x):
- '''
- Args:
- x: b x c x h x w, range [-1,1].
- '''
- self.check_image_size(x)
-
- if self.upsampler == 'pixelshuffle':
- # for classical SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.conv_before_upsample(x)
- x = self.conv_last(self.upsample(x))
- elif self.upsampler == 'pixelshuffledirect':
- # for lightweight SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.upsample(x)
- elif self.upsampler == 'nearest+conv':
- # for real-world SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.conv_before_upsample(x)
- x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- if self.upscale == 4:
- x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- elif self.upscale == 8:
- x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- x = self.lrelu(self.conv_up3(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- x = self.conv_last(self.lrelu(self.conv_hr(x)))
- else:
- # for image denoising and JPEG compression artifact reduction
- x_first = self.conv_first(x)
- res = self.conv_after_body(self.forward_features(x_first)) + x_first
- x = self.conv_last(res)
-
- return x
-
-
-if __name__ == '__main__':
- upscale = 4
- window_size = 8
- height = (1024 // upscale // window_size + 1) * window_size
- width = (720 // upscale // window_size + 1) * window_size
- model = SwinIR(upscale=2, img_size=(height, width),
- window_size=window_size, img_range=1., depths=[6, 6, 6, 6],
- embed_dim=60, num_heads=[6, 6, 6, 6], mlp_ratio=2, upsampler='pixelshuffledirect')
- print(model)
- print(height, width, model.flops() / 1e9)
-
- x = torch.randn((1, 3, height, width))
- x = model(x)
- print(x.shape)
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/composite_loss.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/composite_loss.py
deleted file mode 100644
index 98e835fa6e4c0bcad062df9c519701bf795c98be..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/composite_loss.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq import utils
-from fairseq.criterions import LegacyFairseqCriterion, register_criterion
-from torch import nn
-
-
-@register_criterion("composite_loss")
-class CompositeLoss(LegacyFairseqCriterion):
- """This is a composite loss that, given a list of model outputs and a list of targets,
- computes an average of losses for each output-target pair"""
-
- def __init__(self, args, task):
- super().__init__(args, task)
- self.underlying_criterion = args.underlying_criterion
-
- @staticmethod
- def add_args(parser):
- """Add criterion-specific arguments to the parser."""
- # fmt: off
- parser.add_argument('--underlying-criterion', type=str, metavar='VAL', required=True,
- help='underlying criterion to use for the composite loss')
- # fmt: on
-
- @staticmethod
- def build_underlying_criterion(args, task):
- saved_criterion = args.criterion
- args.criterion = args.underlying_criterion
- assert saved_criterion != args.underlying_criterion
- underlying_criterion = task.build_criterion(args)
- args.criterion = saved_criterion
- return underlying_criterion
-
- @classmethod
- def build_criterion(cls, args, task):
- underlying_criterion = CompositeLoss.build_underlying_criterion(args, task)
-
- class FakeModel(nn.Module):
- def __init__(self, model, net_out, target):
- super().__init__()
- self.model = model
- self.net_out = net_out
- self.target = target
-
- def forward(self, **unused):
- return self.net_out
-
- def get_normalized_probs(self, net_output, log_probs, sample=None):
- return self.model.get_normalized_probs(
- net_output, log_probs, sample=sample
- )
-
- def get_targets(self, *unused):
- return self.target
-
- @property
- def decoder(self):
- return self.model.decoder
-
- class _CompositeLoss(LegacyFairseqCriterion):
- def __init__(self, args, task, underlying_criterion):
- super().__init__(args, task)
- self.underlying_criterion = underlying_criterion
-
- def forward(self, model, sample, reduce=True):
- net_outputs = model(**sample["net_input"])
- targets = sample["target"]
-
- bsz = targets[0].size(0)
- loss = net_outputs[0][0].new(1 if reduce else bsz).float().zero_()
-
- sample_size = 0
- logging_output = {}
- for o, t in zip(net_outputs[0], targets):
- m = FakeModel(model, (o, net_outputs[1]), t)
- sample["target"] = t
- l, ss, logging_output = self.underlying_criterion(m, sample, reduce)
- loss += l
- sample_size += ss
-
- loss.div_(len(targets))
- sample_size /= len(targets)
-
- logging_output["loss"] = utils.item(loss.data) if reduce else loss.data
- return loss, sample_size, logging_output
-
- @staticmethod
- def aggregate_logging_outputs(logging_outputs):
- return underlying_criterion.__class__.aggregate_logging_outputs(
- logging_outputs
- )
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- underlying_criterion.__class__.reduce_metrics(logging_outputs)
-
- return _CompositeLoss(args, task, underlying_criterion)
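
The inner `_CompositeLoss.forward` above accumulates the underlying criterion over each (output, target) pair and divides by the number of targets. A toy, framework-free sketch of that averaging (not the fairseq API; `criterion` here is any plain callable):

```python
import torch

def average_over_pairs(outputs, targets, criterion):
    # Mirrors the accumulation in _CompositeLoss.forward: sum per-pair losses, then average.
    losses = [criterion(o, t) for o, t in zip(outputs, targets)]
    return torch.stack(losses).mean()

crit = torch.nn.MSELoss()
outs = [torch.randn(4, 10) for _ in range(3)]
tgts = [torch.randn(4, 10) for _ in range(3)]
print(average_over_pairs(outs, tgts, crit))
```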
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/token_block_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/token_block_dataset.py
deleted file mode 100644
index d2c65fd7e058072911c3aa60bfc760288a0f83e5..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/token_block_dataset.py
+++ /dev/null
@@ -1,202 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-from fairseq.data import FairseqDataset, plasma_utils
-from fairseq.data.indexed_dataset import best_fitting_int_dtype
-from typing import Tuple
-
-
-class TokenBlockDataset(FairseqDataset):
- """Break a Dataset of tokens into blocks.
-
- Args:
- dataset (~torch.utils.data.Dataset): dataset to break into blocks
- sizes (List[int]): sentence lengths (required for 'complete' and 'eos')
- block_size (int): maximum block size (ignored in 'eos' break mode)
- break_mode (str, optional): Mode used for breaking tokens. Values can
- be one of:
- - 'none': break tokens into equally sized blocks (up to block_size)
- - 'complete': break tokens into blocks (up to block_size) such that
- blocks contains complete sentences, although block_size may be
- exceeded if some sentences exceed block_size
- - 'complete_doc': similar to 'complete' mode, but do not
- cross document boundaries
- - 'eos': each block contains one sentence (block_size is ignored)
- include_targets (bool, optional): return next tokens as targets
- (default: False).
- document_sep_len (int, optional): document separator size (required for
- 'complete_doc' break mode). Typically 1 if the sentences have eos
- and 0 otherwise.
- """
-
- def __init__(
- self,
- dataset,
- sizes,
- block_size,
- pad,
- eos,
- break_mode=None,
- include_targets=False,
- document_sep_len=1,
- use_plasma_view=False,
- split_path=None,
- plasma_path=None,
- ):
-
- super().__init__()
- self.dataset = dataset
- self.pad = pad
- self.eos = eos
- self.include_targets = include_targets
-
- assert len(dataset) > 0
-
- assert len(dataset) == len(sizes)
- _sizes, block_to_dataset_index, slice_indices = self._build_slice_indices(
- sizes, break_mode, document_sep_len, block_size
- )
- if use_plasma_view:
- plasma_id = (block_size, document_sep_len, str(break_mode), len(dataset))
- self._slice_indices = plasma_utils.PlasmaView(
- slice_indices, split_path, (plasma_id, 0), plasma_path=plasma_path
- )
- self._sizes = plasma_utils.PlasmaView(
- _sizes, split_path, (plasma_id, 1), plasma_path=plasma_path
- )
- self._block_to_dataset_index = plasma_utils.PlasmaView(
- block_to_dataset_index, split_path, (plasma_id, 2), plasma_path=plasma_path,
- )
- else:
- self._slice_indices = plasma_utils.PlasmaArray(slice_indices)
- self._sizes = plasma_utils.PlasmaArray(_sizes)
- self._block_to_dataset_index = plasma_utils.PlasmaArray(
- block_to_dataset_index
- )
-
- @staticmethod
- def _build_slice_indices(
- sizes, break_mode, document_sep_len, block_size
- ) -> Tuple[np.ndarray]:
- """Use token_block_utils_fast to build arrays for indexing into self.dataset"""
- try:
- from fairseq.data.token_block_utils_fast import (
- _get_slice_indices_fast,
- _get_block_to_dataset_index_fast,
- )
- except ImportError:
- raise ImportError(
- "Please build Cython components with: `pip install --editable .` "
- "or `python setup.py build_ext --inplace`"
- )
-
- if isinstance(sizes, list):
- sizes = np.array(sizes, dtype=np.int64)
- else:
- if torch.is_tensor(sizes):
- sizes = sizes.numpy()
- sizes = sizes.astype(np.int64)
-
- break_mode = break_mode if break_mode is not None else "none"
-
- # For "eos" break mode, block_size is not a required parameter.
- if break_mode == "eos" and block_size is None:
- block_size = 0
-
- slice_indices = _get_slice_indices_fast(
- sizes, str(break_mode), block_size, document_sep_len
- )
- _sizes = slice_indices[:, 1] - slice_indices[:, 0]
-
- # build index mapping block indices to the underlying dataset indices
- if break_mode == "eos":
- # much faster version for eos break mode
- block_to_dataset_index = np.stack(
- [
- np.arange(len(sizes)), # starting index in dataset
- np.zeros(
- len(sizes), dtype=np.compat.long
- ), # starting offset within starting index
- np.arange(len(sizes)), # ending index in dataset
- ],
- 1,
- )
- else:
- block_to_dataset_index = _get_block_to_dataset_index_fast(
- sizes, slice_indices,
- )
- size_dtype = np.uint16 if block_size < 65535 else np.uint32
- num_tokens = slice_indices[-1].max()
- slice_indices_dtype = best_fitting_int_dtype(num_tokens)
- slice_indices = slice_indices.astype(slice_indices_dtype)
- _sizes = _sizes.astype(size_dtype)
- block_to_dataset_index = block_to_dataset_index.astype(slice_indices_dtype)
- return _sizes, block_to_dataset_index, slice_indices
-
- @property
- def slice_indices(self):
- return self._slice_indices.array
-
- @property
- def sizes(self):
- return self._sizes.array
-
- @property
- def block_to_dataset_index(self):
- return self._block_to_dataset_index.array
-
- def attr(self, attr: str, index: int):
- start_ds_idx, _, _ = self.block_to_dataset_index[index]
- return self.dataset.attr(attr, start_ds_idx)
-
- def __getitem__(self, index):
- start_ds_idx, start_offset, end_ds_idx = self.block_to_dataset_index[index]
-
- buffer = torch.cat(
- [self.dataset[idx] for idx in range(start_ds_idx, end_ds_idx + 1)]
- )
- slice_s, slice_e = self.slice_indices[index]
- length = slice_e - slice_s
- s, e = start_offset, start_offset + length
- item = buffer[s:e]
-
- if self.include_targets:
- # *target* is the original sentence (=item)
- # *source* is shifted right by 1 (maybe left-padded with eos)
- # *past_target* is shifted right by 2 (left-padded as needed)
- if s == 0:
- source = torch.cat([item.new([self.eos]), buffer[0 : e - 1]])
- past_target = torch.cat(
- [item.new([self.pad, self.eos]), buffer[0 : e - 2]]
- )
- else:
- source = buffer[s - 1 : e - 1]
- if s == 1:
- past_target = torch.cat([item.new([self.eos]), buffer[0 : e - 2]])
- else:
- past_target = buffer[s - 2 : e - 2]
-
- return source, item, past_target
-
- return item
-
- def __len__(self):
- return len(self.slice_indices)
-
- @property
- def supports_prefetch(self):
- return getattr(self.dataset, "supports_prefetch", False)
-
- def prefetch(self, indices):
- self.dataset.prefetch(
- {
- ds_idx
- for index in indices
- for start_ds_idx, _, end_ds_idx in [self.block_to_dataset_index[index]]
- for ds_idx in range(start_ds_idx, end_ds_idx + 1)
- }
- )
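
To make the 'none' break mode from the docstring concrete: all sentences are treated as one flat token stream and cut into fixed-size blocks, with only the last block allowed to be shorter. A plain-NumPy sketch of the slice indices it produces (the real computation lives in the Cython helper `_get_slice_indices_fast`, so this is illustrative only):

```python
import numpy as np

def slice_indices_none(sizes, block_size):
    # Concatenate all sentences and chunk the flat token stream into block_size pieces.
    total = int(np.sum(sizes))
    starts = np.arange(0, total, block_size)
    ends = np.minimum(starts + block_size, total)
    return np.stack([starts, ends], axis=1)

print(slice_indices_none(np.array([5, 7, 4]), block_size=6))
# [[ 0  6]
#  [ 6 12]
#  [12 16]]
```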
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/encoders/gpt2_bpe_utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/encoders/gpt2_bpe_utils.py
deleted file mode 100644
index 688d4e36e358df2dcc432d37d3e57bd81e2f1ed1..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/encoders/gpt2_bpe_utils.py
+++ /dev/null
@@ -1,140 +0,0 @@
-"""
-Byte pair encoding utilities from GPT-2.
-
-Original source: https://github.com/openai/gpt-2/blob/master/src/encoder.py
-Original license: MIT
-"""
-
-import json
-from functools import lru_cache
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
- Returns a dict mapping utf-8 bytes to corresponding unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large number of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
- This is a significant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings,
- while avoiding mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = (
- list(range(ord("!"), ord("~") + 1))
- + list(range(ord("¡"), ord("¬") + 1))
- + list(range(ord("®"), ord("ÿ") + 1))
- )
- cs = bs[:]
- n = 0
- for b in range(2 ** 8):
- if b not in bs:
- bs.append(b)
- cs.append(2 ** 8 + n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
-
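
A quick check of the properties the docstring above describes: every one of the 256 byte values gets its own single printable unicode character, and bytes the BPE code would choke on (such as the space byte) are shifted into a range above 255. A sketch, assuming the function above is importable:

```python
mapping = bytes_to_unicode()
assert len(mapping) == 256                          # every byte value is covered
assert all(len(ch) == 1 for ch in mapping.values())
print(mapping[ord("!")])   # printable bytes map to themselves: '!'
print(mapping[ord(" ")])   # the space byte is remapped to a placeholder, 'Ġ' (chr(0x120))
```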
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
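
`get_pairs` just collects adjacent symbol pairs, which is what the BPE merge loop below ranks on each iteration. For example:

```python
print(get_pairs(("h", "e", "l", "l", "o")))
# {('h', 'e'), ('e', 'l'), ('l', 'l'), ('l', 'o')}  (set order may vary)
```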
-
-class Encoder:
- def __init__(self, encoder, bpe_merges, errors="replace"):
- self.encoder = encoder
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.errors = errors # how to handle errors in decoding
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
- self.cache = {}
-
- try:
- import regex as re
-
- self.re = re
- except ImportError:
- raise ImportError("Please install regex with: pip install regex")
-
- # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
- self.pat = self.re.compile(
- r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+"""
- )
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
- word = tuple(token)
- pairs = get_pairs(word)
-
- if not pairs:
- return token
-
- while True:
- bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except ValueError:  # `first` not found in the remainder of the word
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
- new_word.append(first + second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = " ".join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- bpe_tokens = []
- for token in self.re.findall(self.pat, text):
- token = "".join(self.byte_encoder[b] for b in token.encode("utf-8"))
- bpe_tokens.extend(
- self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ")
- )
- return bpe_tokens
-
- def decode(self, tokens):
- text = "".join([self.decoder.get(token, token) for token in tokens])
- text = bytearray([self.byte_decoder[c] for c in text]).decode(
- "utf-8", errors=self.errors
- )
- return text
-
-
-def get_encoder(encoder_json_path, vocab_bpe_path):
- with open(encoder_json_path, "r") as f:
- encoder = json.load(f)
- with open(vocab_bpe_path, "r", encoding="utf-8") as f:
- bpe_data = f.read()
- bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split("\n")[1:-1]]
- return Encoder(
- encoder=encoder,
- bpe_merges=bpe_merges,
- )
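
A hypothetical usage sketch for `get_encoder`; the two file paths are placeholders for a GPT-2 style `encoder.json` and `vocab.bpe` that you would supply yourself. Because the byte-level BPE is reversible, decoding the ids should give back the original string:

```python
# Placeholder paths; substitute real GPT-2 BPE files.
bpe = get_encoder("encoder.json", "vocab.bpe")
ids = bpe.encode("Hello world")      # list of integer token ids
assert bpe.decode(ids) == "Hello world"
```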
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/dataclass/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/dataclass/__init__.py
deleted file mode 100644
index 25408d28ec44cee56eb5fb3ab0c817dc04159e95..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/dataclass/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .configs import FairseqDataclass
-from .constants import ChoiceEnum
-
-
-__all__ = [
- "FairseqDataclass",
- "ChoiceEnum",
-]
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/hf_byte_bpe.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/hf_byte_bpe.py
deleted file mode 100644
index c508578d41bf6b7ce0a847e0797d71b19beb393d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/encoders/hf_byte_bpe.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-
-from fairseq.data.encoders import register_bpe
-from fairseq.dataclass import FairseqDataclass
-from fairseq import file_utils
-
-
-@dataclass
-class HuggingFaceByteLevelBPEConfig(FairseqDataclass):
- bpe_merges: str = field(default="???", metadata={"help": "path to merges.txt"})
- bpe_vocab: str = field(default="???", metadata={"help": "path to vocab.json"})
- bpe_add_prefix_space: bool = field(
- default=False, metadata={"help": "add prefix space before encoding"}
- )
-
-
-@register_bpe("hf_byte_bpe", dataclass=HuggingFaceByteLevelBPEConfig)
-class HuggingFaceByteLevelBPE(object):
- def __init__(self, cfg):
- try:
- from tokenizers import ByteLevelBPETokenizer
- except ImportError:
- raise ImportError(
- "Please install huggingface/tokenizers with: " "pip install tokenizers"
- )
-
- bpe_vocab = file_utils.cached_path(cfg.bpe_vocab)
- bpe_merges = file_utils.cached_path(cfg.bpe_merges)
-
- self.bpe = ByteLevelBPETokenizer(
- bpe_vocab,
- bpe_merges,
- add_prefix_space=cfg.bpe_add_prefix_space,
- )
-
- def encode(self, x: str) -> str:
- return " ".join(map(str, self.bpe.encode(x).ids))
-
- def decode(self, x: str) -> str:
- return self.bpe.decode(
-            [int(tok) if tok not in {"<unk>", "<mask>"} else tok for tok in x.split()]
- )
-
- def is_beginning_of_word(self, x: str) -> bool:
- return self.decode(x).startswith(" ")
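A rough usage sketch of the wrapper removed above, assuming the `tokenizers` package is installed and local vocab.json/merges.txt files exist; the SimpleNamespace below stands in for the dataclass config and is not fairseq's real config machinery.

from types import SimpleNamespace

cfg = SimpleNamespace(
    bpe_vocab="vocab.json",      # placeholder path
    bpe_merges="merges.txt",     # placeholder path
    bpe_add_prefix_space=False,
)
bpe = HuggingFaceByteLevelBPE(cfg)
ids_str = bpe.encode("Hello world")  # space-joined token ids as a string
print(bpe.decode(ids_str))           # "Hello world"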
diff --git a/spaces/ORI-Muchim/MinamiTTS/modules.py b/spaces/ORI-Muchim/MinamiTTS/modules.py
deleted file mode 100644
index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/MinamiTTS/modules.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
-        h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, c*(3*num_bins-1), t] -> [b, c, t, 3*num_bins-1]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
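A quick invertibility check for the flow layers above, written as a minimal sketch against `ElementwiseAffine` (it assumes this modules.py is importable; tensors follow the [batch, channels, time] layout used throughout the file):

import torch

layer = ElementwiseAffine(channels=4)
x = torch.randn(2, 4, 10)               # [batch, channels, time]
x_mask = torch.ones(2, 1, 10)           # keep every frame

y, logdet = layer(x, x_mask)            # forward: y = m + exp(logs) * x
x_rec = layer(y, x_mask, reverse=True)  # reverse: x = (y - m) * exp(-logs)
assert torch.allclose(x, x_rec, atol=1e-6)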
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/modules/depthwise_sep_conv.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/modules/depthwise_sep_conv.py
deleted file mode 100644
index 83dd15c3df1d9f40baf0091a373fa224532c9ddd..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/modules/depthwise_sep_conv.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import torch
-import torch.nn as nn
-
-class DepthWiseSeperableConv(nn.Module):
- def __init__(self, in_dim, out_dim, *args, **kwargs):
- super().__init__()
- if 'groups' in kwargs:
- # ignoring groups for Depthwise Sep Conv
- del kwargs['groups']
-
- self.depthwise = nn.Conv2d(in_dim, in_dim, *args, groups=in_dim, **kwargs)
- self.pointwise = nn.Conv2d(in_dim, out_dim, kernel_size=1)
-
- def forward(self, x):
- out = self.depthwise(x)
- out = self.pointwise(out)
- return out
\ No newline at end of file
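To make the motivation for the class above concrete, here is a small parameter-count comparison between a standard convolution and the depthwise-separable split; a sketch assuming DepthWiseSeperableConv is importable, with the numbers in the comments worked out for these specific shapes:

import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

full = nn.Conv2d(64, 128, kernel_size=3, padding=1)
sep = DepthWiseSeperableConv(64, 128, kernel_size=3, padding=1)

print(n_params(full))  # 64*128*3*3 + 128 = 73,856
print(n_params(sep))   # (64*3*3 + 64) + (64*128 + 128) = 8,960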
diff --git a/spaces/PSLD/PSLD/README.md b/spaces/PSLD/PSLD/README.md
deleted file mode 100644
index 3e0ea2b7782d5ebab887e063fb7d8d311f637fbe..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: PSLD
-emoji: 📈
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
-license: bigscience-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/PaddlePaddle/U2Net/README.md b/spaces/PaddlePaddle/U2Net/README.md
deleted file mode 100644
index 38460ee2903402211623fea43c0f288ae6a21d20..0000000000000000000000000000000000000000
--- a/spaces/PaddlePaddle/U2Net/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: U2Net
-emoji: 📚
-colorFrom: gray
-colorTo: blue
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/ParthRangarajan/Centauri_Pilot/app.py b/spaces/ParthRangarajan/Centauri_Pilot/app.py
deleted file mode 100644
index c353d3ade5b327e513feb2330065bf514c6af66b..0000000000000000000000000000000000000000
--- a/spaces/ParthRangarajan/Centauri_Pilot/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import gradio as gr
-from aitextgen import aitextgen
-# Download GPT-Neo from the Hugging Face model hub
-ai = aitextgen(model='EleutherAI/gpt-neo-125M', to_gpu=False)
-def ai_text(Input):
-    # return the generated text as a string for Gradio
-    generated_text = ai.generate_one(max_length=1000, prompt=Input, no_repeat_ngram_size=3)
- print(generated_text)
- return generated_text
-
-output_text=gr.outputs.Textbox()
-gr.Interface(ai_text, "textbox",
- output_text, title="Centauri Pilot",
- examples= [
- ['Chocolate gives bad skin'],
- ['India has the highest population'],
- ['The moon is not a planet'],
- ['Who is Alexander the Great?']],
- theme='dark-peach',
- description="V1 of Generating Blog Content using GPT-Neo by implementing aitextgen and Gradio").launch(inline=False)
\ No newline at end of file
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/layout-slur.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/layout-slur.go
deleted file mode 100644
index 5d75522aca64a0e4eaf874b3129cf83ab7f68eb2..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/layout-slur.go and /dev/null differ
diff --git a/spaces/PeepDaSlan9/ToyWorld/style.css b/spaces/PeepDaSlan9/ToyWorld/style.css
deleted file mode 100644
index 07f8d9fc7f44dc2b3e44d622ef522a614ac7ce03..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/ToyWorld/style.css
+++ /dev/null
@@ -1,3 +0,0 @@
-.gradio-container {
- background-image: linear-gradient(#660099, #000000) !important;
- }
\ No newline at end of file
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/sampler_seed.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/sampler_seed.py
deleted file mode 100644
index ee0dc6bdd8df5775857028aaed5444c0f59caf80..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/sampler_seed.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .hook import HOOKS, Hook
-
-
-@HOOKS.register_module()
-class DistSamplerSeedHook(Hook):
- """Data-loading sampler for distributed training.
-
-    In distributed training, it is only useful in conjunction with
- :obj:`EpochBasedRunner`, while :obj:`IterBasedRunner` achieves the same
- purpose with :obj:`IterLoader`.
- """
-
- def before_epoch(self, runner):
- if hasattr(runner.data_loader.sampler, 'set_epoch'):
-            # in case the data loader uses `SequentialSampler` in PyTorch
- runner.data_loader.sampler.set_epoch(runner.epoch)
- elif hasattr(runner.data_loader.batch_sampler.sampler, 'set_epoch'):
-            # the batch sampler in PyTorch wraps the sampler as one of its attributes.
- runner.data_loader.batch_sampler.sampler.set_epoch(runner.epoch)
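A hedged sketch of how this hook is typically attached when a runner is assembled by hand (the exact wiring varies across mmcv versions):

# Assumes `runner` is an already-constructed mmcv EpochBasedRunner.
runner.register_hook(DistSamplerSeedHook())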
diff --git a/spaces/RamAnanth1/FairDiffusion/app.py b/spaces/RamAnanth1/FairDiffusion/app.py
deleted file mode 100644
index 1f5dbbd1237d70c303a7b1db287c4f306008b5a8..0000000000000000000000000000000000000000
--- a/spaces/RamAnanth1/FairDiffusion/app.py
+++ /dev/null
@@ -1,280 +0,0 @@
-import gradio as gr
-import torch
-from semdiffusers import SemanticEditPipeline
-device='cuda'
-
-pipe = SemanticEditPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5",
-).to(device)
-
-def infer(prompt,seed):
-
- gen = torch.Generator(device=device)
-
- gen.manual_seed(seed)
- out = pipe(prompt=prompt, generator=gen, num_images_per_prompt=1, guidance_scale=7)
- images = out.images[0]
- out_edit = pipe(prompt=prompt, generator=gen, num_images_per_prompt=1, guidance_scale=7,
- editing_prompt=['male person', # Concepts to apply
- 'female person'],
- reverse_editing_direction=[True, False], # Direction of guidance i.e. decrease the first and increase the second concept
- edit_warmup_steps=[10, 10], # Warmup period for each concept
- edit_guidance_scale=[4, 4], # Guidance scale for each concept
- edit_threshold=[0.95, 0.95], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions
- edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance
- edit_mom_beta=0.6, # Momentum beta
- edit_weights=[1,1] # Weights of the individual concepts against each other
- )
- images_edited = out_edit.images[0]
-
- return [(images, 'Stable Diffusion'), (images_edited, 'Fair Diffusion')]
-
-
-css = """
- .gradio-container {
- font-family: 'IBM Plex Sans', sans-serif;
- }
- .gr-button {
- color: white;
- border-color: black;
- background: black;
- }
- input[type='range'] {
- accent-color: black;
- }
- .dark input[type='range'] {
- accent-color: #dfdfdf;
- }
- .container {
- max-width: 730px;
- margin: auto;
- padding-top: 1.5rem;
- }
- #gallery {
- min-height: 22rem;
- margin-bottom: 15px;
- margin-left: auto;
- margin-right: auto;
- border-bottom-right-radius: .5rem !important;
- border-bottom-left-radius: .5rem !important;
- }
- #gallery>div>.h-full {
- min-height: 20rem;
- }
- .details:hover {
- text-decoration: underline;
- }
- .gr-button {
- white-space: nowrap;
- }
- .gr-button:focus {
- border-color: rgb(147 197 253 / var(--tw-border-opacity));
- outline: none;
- box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
- --tw-border-opacity: 1;
- --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
- --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color);
- --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity));
- --tw-ring-opacity: .5;
- }
- #advanced-btn {
- font-size: .7rem !important;
- line-height: 19px;
- margin-top: 12px;
- margin-bottom: 12px;
- padding: 2px 8px;
- border-radius: 14px !important;
- }
- #advanced-options {
- display: none;
- margin-bottom: 20px;
- }
- .footer {
- margin-bottom: 45px;
- margin-top: 35px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
- }
- .footer>p {
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
- }
- .dark .footer {
- border-color: #303030;
- }
- .dark .footer>p {
- background: #0b0f19;
- }
- .acknowledgments h4{
- margin: 1.25em 0 .25em 0;
- font-weight: bold;
- font-size: 115%;
- }
- .animate-spin {
- animation: spin 1s linear infinite;
- }
- @keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
- }
- #share-btn-container {
- display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
- margin-top: 10px;
- margin-left: auto;
- }
- #share-btn {
- all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0;
- }
- #share-btn * {
- all: unset;
- }
- #share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
- }
- #share-btn-container .wrap {
- display: none !important;
- }
-
- .gr-form{
- flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0;
- }
- #prompt-container{
- gap: 0;
- }
- #prompt-text-input, #negative-prompt-text-input{padding: .45rem 0.625rem}
- #component-16{border-top-width: 1px!important;margin-top: 1em}
- .image_duplication{position: absolute; width: 100px; left: 50px}
-"""
-
-block = gr.Blocks(css=css)
-
-examples = [
- [
- 'A photo of the face of a firefighter',
- 21
- ]
-
-]
-
-
-with block:
- gr.HTML(
- """
-
-
-
-
- FairDiffusion Demo
-
-
-
- FairDiffusion is the latest strategy to introduce fairness after the deployment of generative text-to-image models
- This unofficial demo is based on the Github Implementation.
-
-
- """
- )
- with gr.Group():
- with gr.Box():
- with gr.Row(elem_id="prompt-container").style(mobile_collapse=False, equal_height=True):
- with gr.Column():
- text = gr.Textbox(
- label="Enter your prompt",
- show_label=False,
- max_lines=1,
- placeholder="Enter your prompt",
- elem_id="prompt-text-input",
- ).style(
- border=(True, False, True, True),
- rounded=(True, False, False, True),
- container=False,
- )
-
- btn = gr.Button("Generate image").style(
- margin=False,
- rounded=(False, True, True, False),
- full_width=False,
- )
-
- gallery = gr.Gallery(
- label="Generated images", show_label=False, elem_id="gallery"
- ).style(height="auto")
-
- with gr.Accordion("Advanced settings", open=False):
- # with gr.Group(elem_id="container-advanced-btns"):
- # #advanced_button = gr.Button("Advanced options", elem_id="advanced-btn")
- # with gr.Group(elem_id="share-btn-container"):
- # community_icon = gr.HTML(community_icon_html)
- # loading_icon = gr.HTML(loading_icon_html)
- # share_button = gr.Button("Share to community", elem_id="share-btn")
-
- seed = gr.Slider(
- label="Seed",
- minimum=0,
- maximum=2147483647,
- step=1,
- randomize=True,
- )
-
- ex = gr.Examples(examples=examples, fn=infer, inputs=[text, seed], outputs=[gallery], cache_examples=True)
- ex.dataset.headers = [""]
-
-
- text.submit(infer, inputs=[text,seed], outputs=[gallery])
- btn.click(infer, inputs=[text,seed], outputs=[gallery])
-
-
-
-block.queue().launch()
\ No newline at end of file
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/config/expand.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/config/expand.py
deleted file mode 100644
index c8db2c4b4993cb010fdad537055671fdd1880a87..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/config/expand.py
+++ /dev/null
@@ -1,462 +0,0 @@
-"""Utility functions to expand configuration directives or special values
-(such as glob patterns).
-
-We can split the process of interpreting configuration files into 2 steps:
-
-1. Parsing the file contents from strings to value objects
-   that can be understood by Python (for example a string with a comma
- separated list of keywords into an actual Python list of strings).
-
-2. The expansion (or post-processing) of these values according to the
-   semantics ``setuptools`` assigns to them (for example a configuration field
- with the ``file:`` directive should be expanded from a list of file paths to
- a single string with the contents of those files concatenated)
-
-This module focuses on the second step, and therefore allows sharing the expansion
-functions among several configuration file formats.
-
-**PRIVATE MODULE**: API reserved for setuptools internal usage only.
-"""
-import ast
-import importlib
-import io
-import os
-import pathlib
-import sys
-import warnings
-from glob import iglob
-from configparser import ConfigParser
-from importlib.machinery import ModuleSpec
-from itertools import chain
-from typing import (
- TYPE_CHECKING,
- Callable,
- Dict,
- Iterable,
- Iterator,
- List,
- Mapping,
- Optional,
- Tuple,
- TypeVar,
- Union,
- cast
-)
-from pathlib import Path
-from types import ModuleType
-
-from distutils.errors import DistutilsOptionError
-
-from .._path import same_path as _same_path
-
-if TYPE_CHECKING:
- from setuptools.dist import Distribution # noqa
- from setuptools.discovery import ConfigDiscovery # noqa
- from distutils.dist import DistributionMetadata # noqa
-
-chain_iter = chain.from_iterable
-_Path = Union[str, os.PathLike]
-_K = TypeVar("_K")
-_V = TypeVar("_V", covariant=True)
-
-
-class StaticModule:
- """Proxy to a module object that avoids executing arbitrary code."""
-
- def __init__(self, name: str, spec: ModuleSpec):
- module = ast.parse(pathlib.Path(spec.origin).read_bytes())
- vars(self).update(locals())
- del self.self
-
- def _find_assignments(self) -> Iterator[Tuple[ast.AST, ast.AST]]:
- for statement in self.module.body:
- if isinstance(statement, ast.Assign):
- yield from ((target, statement.value) for target in statement.targets)
- elif isinstance(statement, ast.AnnAssign) and statement.value:
- yield (statement.target, statement.value)
-
- def __getattr__(self, attr):
- """Attempt to load an attribute "statically", via :func:`ast.literal_eval`."""
- try:
- return next(
- ast.literal_eval(value)
- for target, value in self._find_assignments()
- if isinstance(target, ast.Name) and target.id == attr
- )
- except Exception as e:
- raise AttributeError(f"{self.name} has no attribute {attr}") from e
-
-
-def glob_relative(
- patterns: Iterable[str], root_dir: Optional[_Path] = None
-) -> List[str]:
- """Expand the list of glob patterns, but preserving relative paths.
-
- :param list[str] patterns: List of glob patterns
- :param str root_dir: Path to which globs should be relative
- (current directory by default)
- :rtype: list
- """
- glob_characters = {'*', '?', '[', ']', '{', '}'}
- expanded_values = []
- root_dir = root_dir or os.getcwd()
- for value in patterns:
-
- # Has globby characters?
- if any(char in value for char in glob_characters):
- # then expand the glob pattern while keeping paths *relative*:
- glob_path = os.path.abspath(os.path.join(root_dir, value))
- expanded_values.extend(sorted(
- os.path.relpath(path, root_dir).replace(os.sep, "/")
- for path in iglob(glob_path, recursive=True)))
-
- else:
- # take the value as-is
- path = os.path.relpath(value, root_dir).replace(os.sep, "/")
- expanded_values.append(path)
-
- return expanded_values
-
-
-def read_files(filepaths: Union[str, bytes, Iterable[_Path]], root_dir=None) -> str:
- """Return the content of the files concatenated using ``\n`` as str
-
- This function is sandboxed and won't reach anything outside ``root_dir``
-
- (By default ``root_dir`` is the current directory).
- """
- from setuptools.extern.more_itertools import always_iterable
-
- root_dir = os.path.abspath(root_dir or os.getcwd())
- _filepaths = (os.path.join(root_dir, path) for path in always_iterable(filepaths))
- return '\n'.join(
- _read_file(path)
- for path in _filter_existing_files(_filepaths)
- if _assert_local(path, root_dir)
- )
-
-
-def _filter_existing_files(filepaths: Iterable[_Path]) -> Iterator[_Path]:
- for path in filepaths:
- if os.path.isfile(path):
- yield path
- else:
- warnings.warn(f"File {path!r} cannot be found")
-
-
-def _read_file(filepath: Union[bytes, _Path]) -> str:
- with io.open(filepath, encoding='utf-8') as f:
- return f.read()
-
-
-def _assert_local(filepath: _Path, root_dir: str):
- if Path(os.path.abspath(root_dir)) not in Path(os.path.abspath(filepath)).parents:
- msg = f"Cannot access {filepath!r} (or anything outside {root_dir!r})"
- raise DistutilsOptionError(msg)
-
- return True
-
-
-def read_attr(
- attr_desc: str,
- package_dir: Optional[Mapping[str, str]] = None,
- root_dir: Optional[_Path] = None
-):
- """Reads the value of an attribute from a module.
-
-    This function will try to read the attribute statically first
- (via :func:`ast.literal_eval`), and only evaluate the module if it fails.
-
- Examples:
- read_attr("package.attr")
- read_attr("package.module.attr")
-
- :param str attr_desc: Dot-separated string describing how to reach the
- attribute (see examples above)
- :param dict[str, str] package_dir: Mapping of package names to their
- location in disk (represented by paths relative to ``root_dir``).
- :param str root_dir: Path to directory containing all the packages in
- ``package_dir`` (current directory by default).
- :rtype: str
- """
- root_dir = root_dir or os.getcwd()
- attrs_path = attr_desc.strip().split('.')
- attr_name = attrs_path.pop()
- module_name = '.'.join(attrs_path)
- module_name = module_name or '__init__'
- _parent_path, path, module_name = _find_module(module_name, package_dir, root_dir)
- spec = _find_spec(module_name, path)
-
- try:
- return getattr(StaticModule(module_name, spec), attr_name)
- except Exception:
- # fallback to evaluate module
- module = _load_spec(spec, module_name)
- return getattr(module, attr_name)
-
-
-def _find_spec(module_name: str, module_path: Optional[_Path]) -> ModuleSpec:
- spec = importlib.util.spec_from_file_location(module_name, module_path)
- spec = spec or importlib.util.find_spec(module_name)
-
- if spec is None:
- raise ModuleNotFoundError(module_name)
-
- return spec
-
-
-def _load_spec(spec: ModuleSpec, module_name: str) -> ModuleType:
- name = getattr(spec, "__name__", module_name)
- if name in sys.modules:
- return sys.modules[name]
- module = importlib.util.module_from_spec(spec)
- sys.modules[name] = module # cache (it also ensures `==` works on loaded items)
- spec.loader.exec_module(module) # type: ignore
- return module
-
-
-def _find_module(
- module_name: str, package_dir: Optional[Mapping[str, str]], root_dir: _Path
-) -> Tuple[_Path, Optional[str], str]:
- """Given a module (that could normally be imported by ``module_name``
- after the build is complete), find the path to the parent directory where
- it is contained and the canonical name that could be used to import it
- considering the ``package_dir`` in the build configuration and ``root_dir``
- """
- parent_path = root_dir
- module_parts = module_name.split('.')
- if package_dir:
- if module_parts[0] in package_dir:
- # A custom path was specified for the module we want to import
- custom_path = package_dir[module_parts[0]]
- parts = custom_path.rsplit('/', 1)
- if len(parts) > 1:
- parent_path = os.path.join(root_dir, parts[0])
- parent_module = parts[1]
- else:
- parent_module = custom_path
- module_name = ".".join([parent_module, *module_parts[1:]])
- elif '' in package_dir:
- # A custom parent directory was specified for all root modules
- parent_path = os.path.join(root_dir, package_dir[''])
-
- path_start = os.path.join(parent_path, *module_name.split("."))
- candidates = chain(
- (f"{path_start}.py", os.path.join(path_start, "__init__.py")),
- iglob(f"{path_start}.*")
- )
- module_path = next((x for x in candidates if os.path.isfile(x)), None)
- return parent_path, module_path, module_name
-
-
-def resolve_class(
- qualified_class_name: str,
- package_dir: Optional[Mapping[str, str]] = None,
- root_dir: Optional[_Path] = None
-) -> Callable:
- """Given a qualified class name, return the associated class object"""
- root_dir = root_dir or os.getcwd()
- idx = qualified_class_name.rfind('.')
- class_name = qualified_class_name[idx + 1 :]
- pkg_name = qualified_class_name[:idx]
-
- _parent_path, path, module_name = _find_module(pkg_name, package_dir, root_dir)
- module = _load_spec(_find_spec(module_name, path), module_name)
- return getattr(module, class_name)
-
-
-def cmdclass(
- values: Dict[str, str],
- package_dir: Optional[Mapping[str, str]] = None,
- root_dir: Optional[_Path] = None
-) -> Dict[str, Callable]:
- """Given a dictionary mapping command names to strings for qualified class
- names, apply :func:`resolve_class` to the dict values.
- """
- return {k: resolve_class(v, package_dir, root_dir) for k, v in values.items()}
-
-
-def find_packages(
- *,
- namespaces=True,
- fill_package_dir: Optional[Dict[str, str]] = None,
- root_dir: Optional[_Path] = None,
- **kwargs
-) -> List[str]:
- """Works similarly to :func:`setuptools.find_packages`, but with all
- arguments given as keyword arguments. Moreover, ``where`` can be given
- as a list (the results will be simply concatenated).
-
- When the additional keyword argument ``namespaces`` is ``True``, it will
-    behave like :func:`setuptools.find_namespace_packages` (i.e. include
- implicit namespaces as per :pep:`420`).
-
- The ``where`` argument will be considered relative to ``root_dir`` (or the current
- working directory when ``root_dir`` is not given).
-
- If the ``fill_package_dir`` argument is passed, this function will consider it as a
-    similar data structure to the ``package_dir`` configuration parameter and fill in
- any missing package location.
-
- :rtype: list
- """
- from setuptools.discovery import construct_package_dir
- from setuptools.extern.more_itertools import unique_everseen, always_iterable
-
- if namespaces:
- from setuptools.discovery import PEP420PackageFinder as PackageFinder
- else:
- from setuptools.discovery import PackageFinder # type: ignore
-
- root_dir = root_dir or os.curdir
- where = kwargs.pop('where', ['.'])
- packages: List[str] = []
- fill_package_dir = {} if fill_package_dir is None else fill_package_dir
- search = list(unique_everseen(always_iterable(where)))
-
- if len(search) == 1 and all(not _same_path(search[0], x) for x in (".", root_dir)):
- fill_package_dir.setdefault("", search[0])
-
- for path in search:
- package_path = _nest_path(root_dir, path)
- pkgs = PackageFinder.find(package_path, **kwargs)
- packages.extend(pkgs)
- if pkgs and not (
- fill_package_dir.get("") == path
- or os.path.samefile(package_path, root_dir)
- ):
- fill_package_dir.update(construct_package_dir(pkgs, path))
-
- return packages
-
-
-def _nest_path(parent: _Path, path: _Path) -> str:
- path = parent if path in {".", ""} else os.path.join(parent, path)
- return os.path.normpath(path)
-
-
-def version(value: Union[Callable, Iterable[Union[str, int]], str]) -> str:
- """When getting the version directly from an attribute,
- it should be normalised to string.
- """
- if callable(value):
- value = value()
-
- value = cast(Iterable[Union[str, int]], value)
-
- if not isinstance(value, str):
- if hasattr(value, '__iter__'):
- value = '.'.join(map(str, value))
- else:
- value = '%s' % value
-
- return value
-
-
-def canonic_package_data(package_data: dict) -> dict:
- if "*" in package_data:
- package_data[""] = package_data.pop("*")
- return package_data
-
-
-def canonic_data_files(
- data_files: Union[list, dict], root_dir: Optional[_Path] = None
-) -> List[Tuple[str, List[str]]]:
- """For compatibility with ``setup.py``, ``data_files`` should be a list
- of pairs instead of a dict.
-
- This function also expands glob patterns.
- """
- if isinstance(data_files, list):
- return data_files
-
- return [
- (dest, glob_relative(patterns, root_dir))
- for dest, patterns in data_files.items()
- ]
-
-
-def entry_points(text: str, text_source="entry-points") -> Dict[str, dict]:
- """Given the contents of entry-points file,
- process it into a 2-level dictionary (``dict[str, dict[str, str]]``).
- The first level keys are entry-point groups, the second level keys are
- entry-point names, and the second level values are references to objects
- (that correspond to the entry-point value).
- """
- parser = ConfigParser(default_section=None, delimiters=("=",)) # type: ignore
- parser.optionxform = str # case sensitive
- parser.read_string(text, text_source)
- groups = {k: dict(v.items()) for k, v in parser.items()}
- groups.pop(parser.default_section, None)
- return groups
-
-
-class EnsurePackagesDiscovered:
- """Some expand functions require all the packages to already be discovered before
- they run, e.g. :func:`read_attr`, :func:`resolve_class`, :func:`cmdclass`.
-
- Therefore in some cases we will need to run autodiscovery during the evaluation of
- the configuration. However, it is better to postpone calling package discovery as
- much as possible, because some parameters can influence it (e.g. ``package_dir``),
- and those might not have been processed yet.
- """
-
- def __init__(self, distribution: "Distribution"):
- self._dist = distribution
- self._called = False
-
- def __call__(self):
- """Trigger the automatic package discovery, if it is still necessary."""
- if not self._called:
- self._called = True
- self._dist.set_defaults(name=False) # Skip name, we can still be parsing
-
- def __enter__(self):
- return self
-
- def __exit__(self, _exc_type, _exc_value, _traceback):
- if self._called:
- self._dist.set_defaults.analyse_name() # Now we can set a default name
-
- def _get_package_dir(self) -> Mapping[str, str]:
- self()
- pkg_dir = self._dist.package_dir
- return {} if pkg_dir is None else pkg_dir
-
- @property
- def package_dir(self) -> Mapping[str, str]:
- """Proxy to ``package_dir`` that may trigger auto-discovery when used."""
- return LazyMappingProxy(self._get_package_dir)
-
-
-class LazyMappingProxy(Mapping[_K, _V]):
- """Mapping proxy that delays resolving the target object, until really needed.
-
- >>> def obtain_mapping():
- ... print("Running expensive function!")
- ... return {"key": "value", "other key": "other value"}
- >>> mapping = LazyMappingProxy(obtain_mapping)
- >>> mapping["key"]
- Running expensive function!
- 'value'
- >>> mapping["other key"]
- 'other value'
- """
-
- def __init__(self, obtain_mapping_value: Callable[[], Mapping[_K, _V]]):
- self._obtain = obtain_mapping_value
- self._value: Optional[Mapping[_K, _V]] = None
-
- def _target(self) -> Mapping[_K, _V]:
- if self._value is None:
- self._value = self._obtain()
- return self._value
-
- def __getitem__(self, key: _K) -> _V:
- return self._target()[key]
-
- def __len__(self) -> int:
- return len(self._target())
-
- def __iter__(self) -> Iterator[_K]:
- return iter(self._target())
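As a small usage sketch of the `entry_points` helper defined above, assuming this module is importable as setuptools.config.expand in the setuptools version being removed (`mypkg.cli:main` is a placeholder entry point):

from setuptools.config.expand import entry_points

text = """
[console_scripts]
hello = mypkg.cli:main
"""
print(entry_points(text))
# -> {'console_scripts': {'hello': 'mypkg.cli:main'}}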
diff --git a/spaces/Realcat/image-matching-webui/hloc/utils/database.py b/spaces/Realcat/image-matching-webui/hloc/utils/database.py
deleted file mode 100644
index 050f5ec414d132fb17ba4a51f1e6d0da649a6f2a..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/hloc/utils/database.py
+++ /dev/null
@@ -1,430 +0,0 @@
-# Copyright (c) 2018, ETH Zurich and UNC Chapel Hill.
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-#
-# * Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution.
-#
-# * Neither the name of ETH Zurich and UNC Chapel Hill nor the names of
-# its contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
-# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-#
-# Author: Johannes L. Schoenberger (jsch-at-demuc-dot-de)
-
-# This script is based on an original implementation by True Price.
-
-import sys
-import sqlite3
-import numpy as np
-
-
-IS_PYTHON3 = sys.version_info[0] >= 3
-
-MAX_IMAGE_ID = 2**31 - 1
-
-CREATE_CAMERAS_TABLE = """CREATE TABLE IF NOT EXISTS cameras (
- camera_id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
- model INTEGER NOT NULL,
- width INTEGER NOT NULL,
- height INTEGER NOT NULL,
- params BLOB,
- prior_focal_length INTEGER NOT NULL)"""
-
-CREATE_DESCRIPTORS_TABLE = """CREATE TABLE IF NOT EXISTS descriptors (
- image_id INTEGER PRIMARY KEY NOT NULL,
- rows INTEGER NOT NULL,
- cols INTEGER NOT NULL,
- data BLOB,
- FOREIGN KEY(image_id) REFERENCES images(image_id) ON DELETE CASCADE)"""
-
-CREATE_IMAGES_TABLE = """CREATE TABLE IF NOT EXISTS images (
- image_id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
- name TEXT NOT NULL UNIQUE,
- camera_id INTEGER NOT NULL,
- prior_qw REAL,
- prior_qx REAL,
- prior_qy REAL,
- prior_qz REAL,
- prior_tx REAL,
- prior_ty REAL,
- prior_tz REAL,
- CONSTRAINT image_id_check CHECK(image_id >= 0 and image_id < {}),
- FOREIGN KEY(camera_id) REFERENCES cameras(camera_id))
-""".format(
- MAX_IMAGE_ID
-)
-
-CREATE_TWO_VIEW_GEOMETRIES_TABLE = """
-CREATE TABLE IF NOT EXISTS two_view_geometries (
- pair_id INTEGER PRIMARY KEY NOT NULL,
- rows INTEGER NOT NULL,
- cols INTEGER NOT NULL,
- data BLOB,
- config INTEGER NOT NULL,
- F BLOB,
- E BLOB,
- H BLOB,
- qvec BLOB,
- tvec BLOB)
-"""
-
-CREATE_KEYPOINTS_TABLE = """CREATE TABLE IF NOT EXISTS keypoints (
- image_id INTEGER PRIMARY KEY NOT NULL,
- rows INTEGER NOT NULL,
- cols INTEGER NOT NULL,
- data BLOB,
- FOREIGN KEY(image_id) REFERENCES images(image_id) ON DELETE CASCADE)
-"""
-
-CREATE_MATCHES_TABLE = """CREATE TABLE IF NOT EXISTS matches (
- pair_id INTEGER PRIMARY KEY NOT NULL,
- rows INTEGER NOT NULL,
- cols INTEGER NOT NULL,
- data BLOB)"""
-
-CREATE_NAME_INDEX = (
- "CREATE UNIQUE INDEX IF NOT EXISTS index_name ON images(name)"
-)
-
-CREATE_ALL = "; ".join(
- [
- CREATE_CAMERAS_TABLE,
- CREATE_IMAGES_TABLE,
- CREATE_KEYPOINTS_TABLE,
- CREATE_DESCRIPTORS_TABLE,
- CREATE_MATCHES_TABLE,
- CREATE_TWO_VIEW_GEOMETRIES_TABLE,
- CREATE_NAME_INDEX,
- ]
-)
-
-
-def image_ids_to_pair_id(image_id1, image_id2):
- if image_id1 > image_id2:
- image_id1, image_id2 = image_id2, image_id1
- return image_id1 * MAX_IMAGE_ID + image_id2
-
-
-def pair_id_to_image_ids(pair_id):
- image_id2 = pair_id % MAX_IMAGE_ID
-    image_id1 = (pair_id - image_id2) // MAX_IMAGE_ID
- return image_id1, image_id2
-
-
-def array_to_blob(array):
- if IS_PYTHON3:
- return array.tobytes()
- else:
- return np.getbuffer(array)
-
-
-def blob_to_array(blob, dtype, shape=(-1,)):
- if IS_PYTHON3:
-        return np.frombuffer(blob, dtype=dtype).reshape(*shape)
- else:
- return np.frombuffer(blob, dtype=dtype).reshape(*shape)
-
-
-class COLMAPDatabase(sqlite3.Connection):
- @staticmethod
- def connect(database_path):
- return sqlite3.connect(str(database_path), factory=COLMAPDatabase)
-
- def __init__(self, *args, **kwargs):
- super(COLMAPDatabase, self).__init__(*args, **kwargs)
-
- self.create_tables = lambda: self.executescript(CREATE_ALL)
- self.create_cameras_table = lambda: self.executescript(
- CREATE_CAMERAS_TABLE
- )
- self.create_descriptors_table = lambda: self.executescript(
- CREATE_DESCRIPTORS_TABLE
- )
- self.create_images_table = lambda: self.executescript(
- CREATE_IMAGES_TABLE
- )
- self.create_two_view_geometries_table = lambda: self.executescript(
- CREATE_TWO_VIEW_GEOMETRIES_TABLE
- )
- self.create_keypoints_table = lambda: self.executescript(
- CREATE_KEYPOINTS_TABLE
- )
- self.create_matches_table = lambda: self.executescript(
- CREATE_MATCHES_TABLE
- )
- self.create_name_index = lambda: self.executescript(CREATE_NAME_INDEX)
-
- def add_camera(
- self,
- model,
- width,
- height,
- params,
- prior_focal_length=False,
- camera_id=None,
- ):
- params = np.asarray(params, np.float64)
- cursor = self.execute(
- "INSERT INTO cameras VALUES (?, ?, ?, ?, ?, ?)",
- (
- camera_id,
- model,
- width,
- height,
- array_to_blob(params),
- prior_focal_length,
- ),
- )
- return cursor.lastrowid
-
- def add_image(
- self,
- name,
- camera_id,
- prior_q=np.full(4, np.NaN),
- prior_t=np.full(3, np.NaN),
- image_id=None,
- ):
- cursor = self.execute(
- "INSERT INTO images VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
- (
- image_id,
- name,
- camera_id,
- prior_q[0],
- prior_q[1],
- prior_q[2],
- prior_q[3],
- prior_t[0],
- prior_t[1],
- prior_t[2],
- ),
- )
- return cursor.lastrowid
-
- def add_keypoints(self, image_id, keypoints):
- assert len(keypoints.shape) == 2
- assert keypoints.shape[1] in [2, 4, 6]
-
- keypoints = np.asarray(keypoints, np.float32)
- self.execute(
- "INSERT INTO keypoints VALUES (?, ?, ?, ?)",
- (image_id,) + keypoints.shape + (array_to_blob(keypoints),),
- )
-
- def add_descriptors(self, image_id, descriptors):
- descriptors = np.ascontiguousarray(descriptors, np.uint8)
- self.execute(
- "INSERT INTO descriptors VALUES (?, ?, ?, ?)",
- (image_id,) + descriptors.shape + (array_to_blob(descriptors),),
- )
-
- def add_matches(self, image_id1, image_id2, matches):
- assert len(matches.shape) == 2
- assert matches.shape[1] == 2
-
- if image_id1 > image_id2:
- matches = matches[:, ::-1]
-
- pair_id = image_ids_to_pair_id(image_id1, image_id2)
- matches = np.asarray(matches, np.uint32)
- self.execute(
- "INSERT INTO matches VALUES (?, ?, ?, ?)",
- (pair_id,) + matches.shape + (array_to_blob(matches),),
- )
-
- def add_two_view_geometry(
- self,
- image_id1,
- image_id2,
- matches,
- F=np.eye(3),
- E=np.eye(3),
- H=np.eye(3),
- qvec=np.array([1.0, 0.0, 0.0, 0.0]),
- tvec=np.zeros(3),
- config=2,
- ):
- assert len(matches.shape) == 2
- assert matches.shape[1] == 2
-
- if image_id1 > image_id2:
- matches = matches[:, ::-1]
-
- pair_id = image_ids_to_pair_id(image_id1, image_id2)
- matches = np.asarray(matches, np.uint32)
- F = np.asarray(F, dtype=np.float64)
- E = np.asarray(E, dtype=np.float64)
- H = np.asarray(H, dtype=np.float64)
- qvec = np.asarray(qvec, dtype=np.float64)
- tvec = np.asarray(tvec, dtype=np.float64)
- self.execute(
- "INSERT INTO two_view_geometries VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
- (pair_id,)
- + matches.shape
- + (
- array_to_blob(matches),
- config,
- array_to_blob(F),
- array_to_blob(E),
- array_to_blob(H),
- array_to_blob(qvec),
- array_to_blob(tvec),
- ),
- )
-
-
-def example_usage():
- import os
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--database_path", default="database.db")
- args = parser.parse_args()
-
- if os.path.exists(args.database_path):
- print("ERROR: database path already exists -- will not modify it.")
- return
-
- # Open the database.
-
- db = COLMAPDatabase.connect(args.database_path)
-
- # For convenience, try creating all the tables upfront.
-
- db.create_tables()
-
- # Create dummy cameras.
-
- model1, width1, height1, params1 = (
- 0,
- 1024,
- 768,
- np.array((1024.0, 512.0, 384.0)),
- )
- model2, width2, height2, params2 = (
- 2,
- 1024,
- 768,
- np.array((1024.0, 512.0, 384.0, 0.1)),
- )
-
- camera_id1 = db.add_camera(model1, width1, height1, params1)
- camera_id2 = db.add_camera(model2, width2, height2, params2)
-
- # Create dummy images.
-
- image_id1 = db.add_image("image1.png", camera_id1)
- image_id2 = db.add_image("image2.png", camera_id1)
- image_id3 = db.add_image("image3.png", camera_id2)
- image_id4 = db.add_image("image4.png", camera_id2)
-
- # Create dummy keypoints.
- #
- # Note that COLMAP supports:
- # - 2D keypoints: (x, y)
- # - 4D keypoints: (x, y, theta, scale)
- # - 6D affine keypoints: (x, y, a_11, a_12, a_21, a_22)
-
- num_keypoints = 1000
- keypoints1 = np.random.rand(num_keypoints, 2) * (width1, height1)
- keypoints2 = np.random.rand(num_keypoints, 2) * (width1, height1)
- keypoints3 = np.random.rand(num_keypoints, 2) * (width2, height2)
- keypoints4 = np.random.rand(num_keypoints, 2) * (width2, height2)
-
- db.add_keypoints(image_id1, keypoints1)
- db.add_keypoints(image_id2, keypoints2)
- db.add_keypoints(image_id3, keypoints3)
- db.add_keypoints(image_id4, keypoints4)
-
- # Create dummy matches.
-
- M = 50
- matches12 = np.random.randint(num_keypoints, size=(M, 2))
- matches23 = np.random.randint(num_keypoints, size=(M, 2))
- matches34 = np.random.randint(num_keypoints, size=(M, 2))
-
- db.add_matches(image_id1, image_id2, matches12)
- db.add_matches(image_id2, image_id3, matches23)
- db.add_matches(image_id3, image_id4, matches34)
-
- # Commit the data to the file.
-
- db.commit()
-
- # Read and check cameras.
-
- rows = db.execute("SELECT * FROM cameras")
-
- camera_id, model, width, height, params, prior = next(rows)
- params = blob_to_array(params, np.float64)
- assert camera_id == camera_id1
- assert model == model1 and width == width1 and height == height1
- assert np.allclose(params, params1)
-
- camera_id, model, width, height, params, prior = next(rows)
- params = blob_to_array(params, np.float64)
- assert camera_id == camera_id2
- assert model == model2 and width == width2 and height == height2
- assert np.allclose(params, params2)
-
- # Read and check keypoints.
-
- keypoints = dict(
- (image_id, blob_to_array(data, np.float32, (-1, 2)))
- for image_id, data in db.execute("SELECT image_id, data FROM keypoints")
- )
-
- assert np.allclose(keypoints[image_id1], keypoints1)
- assert np.allclose(keypoints[image_id2], keypoints2)
- assert np.allclose(keypoints[image_id3], keypoints3)
- assert np.allclose(keypoints[image_id4], keypoints4)
-
- # Read and check matches.
-
- pair_ids = [
- image_ids_to_pair_id(*pair)
- for pair in (
- (image_id1, image_id2),
- (image_id2, image_id3),
- (image_id3, image_id4),
- )
- ]
-
- matches = dict(
- (pair_id_to_image_ids(pair_id), blob_to_array(data, np.uint32, (-1, 2)))
- for pair_id, data in db.execute("SELECT pair_id, data FROM matches")
- )
-
- assert np.all(matches[(image_id1, image_id2)] == matches12)
- assert np.all(matches[(image_id2, image_id3)] == matches23)
- assert np.all(matches[(image_id3, image_id4)] == matches34)
-
- # Clean up.
-
- db.close()
-
- if os.path.exists(args.database_path):
- os.remove(args.database_path)
-
-
-if __name__ == "__main__":
- example_usage()
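The pair-id packing used by this database is worth spelling out: two image ids are folded into one integer key in an order-independent way and recovered by modulo and integer division. A minimal sketch, assuming the helpers above are importable:

pair_id = image_ids_to_pair_id(17, 5)           # ids are swapped so the smaller comes first
assert pair_id == 5 * MAX_IMAGE_ID + 17
assert pair_id_to_image_ids(pair_id) == (5, 17)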
diff --git a/spaces/RikyXDZ/NesiaChan/README.md b/spaces/RikyXDZ/NesiaChan/README.md
deleted file mode 100644
index 56f7e450969124d2d7ac839efc4a3a4654b54e70..0000000000000000000000000000000000000000
--- a/spaces/RikyXDZ/NesiaChan/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: NesiaChan
-emoji: 🔥
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: cc
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/ga_retina_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/ga_retina_head.py
deleted file mode 100644
index 8822d1ca78ee2fa2f304a0649e81274830383533..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/ga_retina_head.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init
-from mmcv.ops import MaskedConv2d
-
-from ..builder import HEADS
-from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead
-
-
-@HEADS.register_module()
-class GARetinaHead(GuidedAnchorHead):
- """Guided-Anchor-based RetinaNet head."""
-
- def __init__(self,
- num_classes,
- in_channels,
- stacked_convs=4,
- conv_cfg=None,
- norm_cfg=None,
- **kwargs):
- self.stacked_convs = stacked_convs
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- super(GARetinaHead, self).__init__(num_classes, in_channels, **kwargs)
-
- def _init_layers(self):
- """Initialize layers of the head."""
- self.relu = nn.ReLU(inplace=True)
- self.cls_convs = nn.ModuleList()
- self.reg_convs = nn.ModuleList()
- for i in range(self.stacked_convs):
- chn = self.in_channels if i == 0 else self.feat_channels
- self.cls_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- self.reg_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
-
- self.conv_loc = nn.Conv2d(self.feat_channels, 1, 1)
- self.conv_shape = nn.Conv2d(self.feat_channels, self.num_anchors * 2,
- 1)
- self.feature_adaption_cls = FeatureAdaption(
- self.feat_channels,
- self.feat_channels,
- kernel_size=3,
- deform_groups=self.deform_groups)
- self.feature_adaption_reg = FeatureAdaption(
- self.feat_channels,
- self.feat_channels,
- kernel_size=3,
- deform_groups=self.deform_groups)
- self.retina_cls = MaskedConv2d(
- self.feat_channels,
- self.num_anchors * self.cls_out_channels,
- 3,
- padding=1)
- self.retina_reg = MaskedConv2d(
- self.feat_channels, self.num_anchors * 4, 3, padding=1)
-
- def init_weights(self):
- """Initialize weights of the layer."""
- for m in self.cls_convs:
- normal_init(m.conv, std=0.01)
- for m in self.reg_convs:
- normal_init(m.conv, std=0.01)
-
- self.feature_adaption_cls.init_weights()
- self.feature_adaption_reg.init_weights()
-
- bias_cls = bias_init_with_prob(0.01)
- normal_init(self.conv_loc, std=0.01, bias=bias_cls)
- normal_init(self.conv_shape, std=0.01)
- normal_init(self.retina_cls, std=0.01, bias=bias_cls)
- normal_init(self.retina_reg, std=0.01)
-
- def forward_single(self, x):
- """Forward feature map of a single scale level."""
- cls_feat = x
- reg_feat = x
- for cls_conv in self.cls_convs:
- cls_feat = cls_conv(cls_feat)
- for reg_conv in self.reg_convs:
- reg_feat = reg_conv(reg_feat)
-
- loc_pred = self.conv_loc(cls_feat)
- shape_pred = self.conv_shape(reg_feat)
-
- cls_feat = self.feature_adaption_cls(cls_feat, shape_pred)
- reg_feat = self.feature_adaption_reg(reg_feat, shape_pred)
-
- if not self.training:
- mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr
- else:
- mask = None
- cls_score = self.retina_cls(cls_feat, mask)
- bbox_pred = self.retina_reg(reg_feat, mask)
- return cls_score, bbox_pred, shape_pred, loc_pred
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/transformer_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/transformer_head.py
deleted file mode 100644
index 820fd069fcca295f6102f0d27366158a8c640249..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/transformer_head.py
+++ /dev/null
@@ -1,654 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import Conv2d, Linear, build_activation_layer
-from mmcv.runner import force_fp32
-
-from mmdet.core import (bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh,
- build_assigner, build_sampler, multi_apply,
- reduce_mean)
-from mmdet.models.utils import (FFN, build_positional_encoding,
- build_transformer)
-from ..builder import HEADS, build_loss
-from .anchor_free_head import AnchorFreeHead
-
-
-@HEADS.register_module()
-class TransformerHead(AnchorFreeHead):
- """Implements the DETR transformer head.
-
-    See `paper: End-to-End Object Detection with Transformers
-    <https://arxiv.org/abs/2005.12872>`_ for details.
-
- Args:
- num_classes (int): Number of categories excluding the background.
- in_channels (int): Number of channels in the input feature map.
- num_fcs (int, optional): Number of fully-connected layers used in
- `FFN`, which is then used for the regression head. Default 2.
- transformer (dict, optional): Config for transformer.
- positional_encoding (dict, optional): Config for position encoding.
- loss_cls (dict, optional): Config of the classification loss.
- Default `CrossEntropyLoss`.
- loss_bbox (dict, optional): Config of the regression loss.
- Default `L1Loss`.
- loss_iou (dict, optional): Config of the regression iou loss.
- Default `GIoULoss`.
-        train_cfg (dict, optional): Training config of transformer head.
- test_cfg (dict, optional): Testing config of transformer head.
-
- Example:
- >>> import torch
- >>> self = TransformerHead(80, 2048)
- >>> x = torch.rand(1, 2048, 32, 32)
- >>> mask = torch.ones(1, 32, 32).to(x.dtype)
- >>> mask[:, :16, :15] = 0
- >>> all_cls_scores, all_bbox_preds = self(x, mask)
- """
-
- def __init__(self,
- num_classes,
- in_channels,
- num_fcs=2,
- transformer=dict(
- type='Transformer',
- embed_dims=256,
- num_heads=8,
- num_encoder_layers=6,
- num_decoder_layers=6,
- feedforward_channels=2048,
- dropout=0.1,
- act_cfg=dict(type='ReLU', inplace=True),
- norm_cfg=dict(type='LN'),
- num_fcs=2,
- pre_norm=False,
- return_intermediate_dec=True),
- positional_encoding=dict(
- type='SinePositionalEncoding',
- num_feats=128,
- normalize=True),
- loss_cls=dict(
- type='CrossEntropyLoss',
- bg_cls_weight=0.1,
- use_sigmoid=False,
- loss_weight=1.0,
- class_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=5.0),
- loss_iou=dict(type='GIoULoss', loss_weight=2.0),
- train_cfg=dict(
- assigner=dict(
- type='HungarianAssigner',
- cls_cost=dict(type='ClassificationCost', weight=1.),
- reg_cost=dict(type='BBoxL1Cost', weight=5.0),
- iou_cost=dict(
- type='IoUCost', iou_mode='giou', weight=2.0))),
- test_cfg=dict(max_per_img=100),
- **kwargs):
- # NOTE here use `AnchorFreeHead` instead of `TransformerHead`,
- # since it brings inconvenience when the initialization of
- # `AnchorFreeHead` is called.
- super(AnchorFreeHead, self).__init__()
- use_sigmoid_cls = loss_cls.get('use_sigmoid', False)
- assert not use_sigmoid_cls, 'setting use_sigmoid_cls as True is ' \
- 'not supported in DETR, since background is needed for the ' \
- 'matching process.'
- assert 'embed_dims' in transformer \
- and 'num_feats' in positional_encoding
- num_feats = positional_encoding['num_feats']
- embed_dims = transformer['embed_dims']
- assert num_feats * 2 == embed_dims, 'embed_dims should' \
- f' be exactly 2 times of num_feats. Found {embed_dims}' \
- f' and {num_feats}.'
- assert test_cfg is not None and 'max_per_img' in test_cfg
-
- class_weight = loss_cls.get('class_weight', None)
- if class_weight is not None:
- assert isinstance(class_weight, float), 'Expected ' \
- 'class_weight to have type float. Found ' \
- f'{type(class_weight)}.'
-            # NOTE following the official DETR repo, bg_cls_weight means
- # relative classification weight of the no-object class.
- bg_cls_weight = loss_cls.get('bg_cls_weight', class_weight)
- assert isinstance(bg_cls_weight, float), 'Expected ' \
- 'bg_cls_weight to have type float. Found ' \
- f'{type(bg_cls_weight)}.'
- class_weight = torch.ones(num_classes + 1) * class_weight
-            # set background class as the last index
- class_weight[num_classes] = bg_cls_weight
- loss_cls.update({'class_weight': class_weight})
- if 'bg_cls_weight' in loss_cls:
- loss_cls.pop('bg_cls_weight')
- self.bg_cls_weight = bg_cls_weight
-
- if train_cfg:
- assert 'assigner' in train_cfg, 'assigner should be provided '\
- 'when train_cfg is set.'
- assigner = train_cfg['assigner']
- assert loss_cls['loss_weight'] == assigner['cls_cost']['weight'], \
-                'The classification weight for loss and matcher should be ' \
- 'exactly the same.'
- assert loss_bbox['loss_weight'] == assigner['reg_cost'][
- 'weight'], 'The regression L1 weight for loss and matcher ' \
- 'should be exactly the same.'
- assert loss_iou['loss_weight'] == assigner['iou_cost']['weight'], \
-                'The regression iou weight for loss and matcher should be ' \
- 'exactly the same.'
- self.assigner = build_assigner(assigner)
- # DETR sampling=False, so use PseudoSampler
- sampler_cfg = dict(type='PseudoSampler')
- self.sampler = build_sampler(sampler_cfg, context=self)
- self.num_classes = num_classes
- self.cls_out_channels = num_classes + 1
- self.in_channels = in_channels
- self.num_fcs = num_fcs
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
- self.use_sigmoid_cls = use_sigmoid_cls
- self.embed_dims = embed_dims
- self.num_query = test_cfg['max_per_img']
- self.fp16_enabled = False
- self.loss_cls = build_loss(loss_cls)
- self.loss_bbox = build_loss(loss_bbox)
- self.loss_iou = build_loss(loss_iou)
- self.act_cfg = transformer.get('act_cfg',
- dict(type='ReLU', inplace=True))
- self.activate = build_activation_layer(self.act_cfg)
- self.positional_encoding = build_positional_encoding(
- positional_encoding)
- self.transformer = build_transformer(transformer)
- self._init_layers()
-
- def _init_layers(self):
- """Initialize layers of the transformer head."""
- self.input_proj = Conv2d(
- self.in_channels, self.embed_dims, kernel_size=1)
- self.fc_cls = Linear(self.embed_dims, self.cls_out_channels)
- self.reg_ffn = FFN(
- self.embed_dims,
- self.embed_dims,
- self.num_fcs,
- self.act_cfg,
- dropout=0.0,
- add_residual=False)
- self.fc_reg = Linear(self.embed_dims, 4)
- self.query_embedding = nn.Embedding(self.num_query, self.embed_dims)
-
- def init_weights(self, distribution='uniform'):
- """Initialize weights of the transformer head."""
- # The initialization for transformer is important
- self.transformer.init_weights()
-
- def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
- missing_keys, unexpected_keys, error_msgs):
- """load checkpoints."""
- # NOTE here use `AnchorFreeHead` instead of `TransformerHead`,
- # since `AnchorFreeHead._load_from_state_dict` should not be
- # called here. Invoking the default `Module._load_from_state_dict`
- # is enough.
- super(AnchorFreeHead,
- self)._load_from_state_dict(state_dict, prefix, local_metadata,
- strict, missing_keys,
- unexpected_keys, error_msgs)
-
- def forward(self, feats, img_metas):
- """Forward function.
-
- Args:
- feats (tuple[Tensor]): Features from the upstream network, each is
- a 4D-tensor.
- img_metas (list[dict]): List of image information.
-
- Returns:
- tuple[list[Tensor], list[Tensor]]: Outputs for all scale levels.
-
- - all_cls_scores_list (list[Tensor]): Classification scores \
- for each scale level. Each is a 4D-tensor with shape \
- [nb_dec, bs, num_query, cls_out_channels]. Note \
-                    `cls_out_channels` should include background.
- - all_bbox_preds_list (list[Tensor]): Sigmoid regression \
- outputs for each scale level. Each is a 4D-tensor with \
- normalized coordinate format (cx, cy, w, h) and shape \
- [nb_dec, bs, num_query, 4].
- """
- num_levels = len(feats)
- img_metas_list = [img_metas for _ in range(num_levels)]
- return multi_apply(self.forward_single, feats, img_metas_list)
-
-    def forward_single(self, x, img_metas):
-        """Forward function for a single feature level.
-
- Args:
- x (Tensor): Input feature from backbone's single stage, shape
- [bs, c, h, w].
- img_metas (list[dict]): List of image information.
-
- Returns:
- all_cls_scores (Tensor): Outputs from the classification head,
- shape [nb_dec, bs, num_query, cls_out_channels]. Note
-                cls_out_channels should include background.
- all_bbox_preds (Tensor): Sigmoid outputs from the regression
- head with normalized coordinate format (cx, cy, w, h).
- Shape [nb_dec, bs, num_query, 4].
- """
-        # construct binary masks which are used for the transformer.
-        # NOTE following the official DETR repo, non-zero values represent
-        # ignored positions, while zero values mean valid positions.
- batch_size = x.size(0)
- input_img_h, input_img_w = img_metas[0]['batch_input_shape']
- masks = x.new_ones((batch_size, input_img_h, input_img_w))
- for img_id in range(batch_size):
- img_h, img_w, _ = img_metas[img_id]['img_shape']
- masks[img_id, :img_h, :img_w] = 0
-
- x = self.input_proj(x)
- # interpolate masks to have the same spatial shape with x
- masks = F.interpolate(
- masks.unsqueeze(1), size=x.shape[-2:]).to(torch.bool).squeeze(1)
- # position encoding
- pos_embed = self.positional_encoding(masks) # [bs, embed_dim, h, w]
- # outs_dec: [nb_dec, bs, num_query, embed_dim]
- outs_dec, _ = self.transformer(x, masks, self.query_embedding.weight,
- pos_embed)
-
- all_cls_scores = self.fc_cls(outs_dec)
- all_bbox_preds = self.fc_reg(self.activate(
- self.reg_ffn(outs_dec))).sigmoid()
- return all_cls_scores, all_bbox_preds
-
- @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list'))
- def loss(self,
- all_cls_scores_list,
- all_bbox_preds_list,
- gt_bboxes_list,
- gt_labels_list,
- img_metas,
-             gt_bboxes_ignore=None):
-        """Loss function.
-
- Only outputs from the last feature level are used for computing
- losses by default.
-
- Args:
- all_cls_scores_list (list[Tensor]): Classification outputs
- for each feature level. Each is a 4D-tensor with shape
- [nb_dec, bs, num_query, cls_out_channels].
- all_bbox_preds_list (list[Tensor]): Sigmoid regression
- outputs for each feature level. Each is a 4D-tensor with
- normalized coordinate format (cx, cy, w, h) and shape
- [nb_dec, bs, num_query, 4].
- gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image
- with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels_list (list[Tensor]): Ground truth class indices for each
- image with shape (num_gts, ).
- img_metas (list[dict]): List of image meta information.
- gt_bboxes_ignore (list[Tensor], optional): Bounding boxes
- which can be ignored for each image. Default None.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
-        # NOTE by default only the outputs from the last feature scale are used.
- all_cls_scores = all_cls_scores_list[-1]
- all_bbox_preds = all_bbox_preds_list[-1]
- assert gt_bboxes_ignore is None, \
- 'Only supports for gt_bboxes_ignore setting to None.'
-
- num_dec_layers = len(all_cls_scores)
- all_gt_bboxes_list = [gt_bboxes_list for _ in range(num_dec_layers)]
- all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)]
- all_gt_bboxes_ignore_list = [
- gt_bboxes_ignore for _ in range(num_dec_layers)
- ]
- img_metas_list = [img_metas for _ in range(num_dec_layers)]
-
- losses_cls, losses_bbox, losses_iou = multi_apply(
- self.loss_single, all_cls_scores, all_bbox_preds,
- all_gt_bboxes_list, all_gt_labels_list, img_metas_list,
- all_gt_bboxes_ignore_list)
-
- loss_dict = dict()
- # loss from the last decoder layer
- loss_dict['loss_cls'] = losses_cls[-1]
- loss_dict['loss_bbox'] = losses_bbox[-1]
- loss_dict['loss_iou'] = losses_iou[-1]
- # loss from other decoder layers
- num_dec_layer = 0
- for loss_cls_i, loss_bbox_i, loss_iou_i in zip(losses_cls[:-1],
- losses_bbox[:-1],
- losses_iou[:-1]):
- loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i
- loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i
- loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i
- num_dec_layer += 1
- return loss_dict
-
- def loss_single(self,
- cls_scores,
- bbox_preds,
- gt_bboxes_list,
- gt_labels_list,
- img_metas,
-                    gt_bboxes_ignore_list=None):
-        """Loss function for outputs from a single decoder layer of a single
- feature level.
-
- Args:
- cls_scores (Tensor): Box score logits from a single decoder layer
- for all images. Shape [bs, num_query, cls_out_channels].
- bbox_preds (Tensor): Sigmoid outputs from a single decoder layer
- for all images, with normalized coordinate (cx, cy, w, h) and
- shape [bs, num_query, 4].
- gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image
- with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels_list (list[Tensor]): Ground truth class indices for each
- image with shape (num_gts, ).
- img_metas (list[dict]): List of image meta information.
- gt_bboxes_ignore_list (list[Tensor], optional): Bounding
- boxes which can be ignored for each image. Default None.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components for outputs from
- a single decoder layer.
- """
- num_imgs = cls_scores.size(0)
- cls_scores_list = [cls_scores[i] for i in range(num_imgs)]
- bbox_preds_list = [bbox_preds[i] for i in range(num_imgs)]
- cls_reg_targets = self.get_targets(cls_scores_list, bbox_preds_list,
- gt_bboxes_list, gt_labels_list,
- img_metas, gt_bboxes_ignore_list)
- (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list,
- num_total_pos, num_total_neg) = cls_reg_targets
- labels = torch.cat(labels_list, 0)
- label_weights = torch.cat(label_weights_list, 0)
- bbox_targets = torch.cat(bbox_targets_list, 0)
- bbox_weights = torch.cat(bbox_weights_list, 0)
-
- # classification loss
- cls_scores = cls_scores.reshape(-1, self.cls_out_channels)
- # construct weighted avg_factor to match with the official DETR repo
- cls_avg_factor = num_total_pos * 1.0 + \
- num_total_neg * self.bg_cls_weight
- loss_cls = self.loss_cls(
- cls_scores, labels, label_weights, avg_factor=cls_avg_factor)
-
-        # Compute the average number of gt boxes across all GPUs, for
-        # normalization purposes
- num_total_pos = loss_cls.new_tensor([num_total_pos])
- num_total_pos = torch.clamp(reduce_mean(num_total_pos), min=1).item()
-
-        # construct factors used to rescale bboxes
- factors = []
- for img_meta, bbox_pred in zip(img_metas, bbox_preds):
- img_h, img_w, _ = img_meta['img_shape']
- factor = bbox_pred.new_tensor([img_w, img_h, img_w,
- img_h]).unsqueeze(0).repeat(
- bbox_pred.size(0), 1)
- factors.append(factor)
- factors = torch.cat(factors, 0)
-
-        # DETR regresses the relative position of boxes (cxcywh) in the image,
-        # thus the learning target is normalized by the image size. So here
-        # we need to re-scale them for calculating the IoU loss.
- bbox_preds = bbox_preds.reshape(-1, 4)
- bboxes = bbox_cxcywh_to_xyxy(bbox_preds) * factors
- bboxes_gt = bbox_cxcywh_to_xyxy(bbox_targets) * factors
-
-        # regression IoU loss, GIoU loss by default
- loss_iou = self.loss_iou(
- bboxes, bboxes_gt, bbox_weights, avg_factor=num_total_pos)
-
- # regression L1 loss
- loss_bbox = self.loss_bbox(
- bbox_preds, bbox_targets, bbox_weights, avg_factor=num_total_pos)
- return loss_cls, loss_bbox, loss_iou
-
- def get_targets(self,
- cls_scores_list,
- bbox_preds_list,
- gt_bboxes_list,
- gt_labels_list,
- img_metas,
-                    gt_bboxes_ignore_list=None):
-        """Compute regression and classification targets for a batch of images.
-
- Outputs from a single decoder layer of a single feature level are used.
-
- Args:
- cls_scores_list (list[Tensor]): Box score logits from a single
- decoder layer for each image with shape [num_query,
- cls_out_channels].
- bbox_preds_list (list[Tensor]): Sigmoid outputs from a single
- decoder layer for each image, with normalized coordinate
- (cx, cy, w, h) and shape [num_query, 4].
- gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image
- with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels_list (list[Tensor]): Ground truth class indices for each
- image with shape (num_gts, ).
- img_metas (list[dict]): List of image meta information.
- gt_bboxes_ignore_list (list[Tensor], optional): Bounding
- boxes which can be ignored for each image. Default None.
-
- Returns:
- tuple: a tuple containing the following targets.
-
- - labels_list (list[Tensor]): Labels for all images.
- - label_weights_list (list[Tensor]): Label weights for all \
- images.
- - bbox_targets_list (list[Tensor]): BBox targets for all \
- images.
- - bbox_weights_list (list[Tensor]): BBox weights for all \
- images.
- - num_total_pos (int): Number of positive samples in all \
- images.
- - num_total_neg (int): Number of negative samples in all \
- images.
- """
- assert gt_bboxes_ignore_list is None, \
- 'Only supports for gt_bboxes_ignore setting to None.'
- num_imgs = len(cls_scores_list)
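-        # gt_bboxes_ignore_list is asserted to be None above; replicate it per image
-        # so it can be passed through multi_apply alongside the other per-image lists.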
- gt_bboxes_ignore_list = [
- gt_bboxes_ignore_list for _ in range(num_imgs)
- ]
-
- (labels_list, label_weights_list, bbox_targets_list,
- bbox_weights_list, pos_inds_list, neg_inds_list) = multi_apply(
- self._get_target_single, cls_scores_list, bbox_preds_list,
- gt_bboxes_list, gt_labels_list, img_metas, gt_bboxes_ignore_list)
- num_total_pos = sum((inds.numel() for inds in pos_inds_list))
- num_total_neg = sum((inds.numel() for inds in neg_inds_list))
- return (labels_list, label_weights_list, bbox_targets_list,
- bbox_weights_list, num_total_pos, num_total_neg)
-
- def _get_target_single(self,
- cls_score,
- bbox_pred,
- gt_bboxes,
- gt_labels,
- img_meta,
-                           gt_bboxes_ignore=None):
-        """Compute regression and classification targets for one image.
-
- Outputs from a single decoder layer of a single feature level are used.
-
- Args:
- cls_score (Tensor): Box score logits from a single decoder layer
- for one image. Shape [num_query, cls_out_channels].
- bbox_pred (Tensor): Sigmoid outputs from a single decoder layer
- for one image, with normalized coordinate (cx, cy, w, h) and
- shape [num_query, 4].
- gt_bboxes (Tensor): Ground truth bboxes for one image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (Tensor): Ground truth class indices for one image
- with shape (num_gts, ).
- img_meta (dict): Meta information for one image.
- gt_bboxes_ignore (Tensor, optional): Bounding boxes
- which can be ignored. Default None.
-
- Returns:
- tuple[Tensor]: a tuple containing the following for one image.
-
- - labels (Tensor): Labels of each image.
-                - label_weights (Tensor): Label weights of each image.
- - bbox_targets (Tensor): BBox targets of each image.
- - bbox_weights (Tensor): BBox weights of each image.
- - pos_inds (Tensor): Sampled positive indices for each image.
- - neg_inds (Tensor): Sampled negative indices for each image.
- """
-
- num_bboxes = bbox_pred.size(0)
- # assigner and sampler
- assign_result = self.assigner.assign(bbox_pred, cls_score, gt_bboxes,
- gt_labels, img_meta,
- gt_bboxes_ignore)
- sampling_result = self.sampler.sample(assign_result, bbox_pred,
- gt_bboxes)
- pos_inds = sampling_result.pos_inds
- neg_inds = sampling_result.neg_inds
-
- # label targets
- labels = gt_bboxes.new_full((num_bboxes, ),
- self.num_classes,
- dtype=torch.long)
- labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds]
- label_weights = gt_bboxes.new_ones(num_bboxes)
-
- # bbox targets
- bbox_targets = torch.zeros_like(bbox_pred)
- bbox_weights = torch.zeros_like(bbox_pred)
- bbox_weights[pos_inds] = 1.0
- img_h, img_w, _ = img_meta['img_shape']
-
-        # DETR regresses the relative position of boxes (cxcywh) in the image.
-        # Thus the learning target should be normalized by the image size, and
-        # the box format should be converted from the default x1y1x2y2 to cxcywh.
- factor = bbox_pred.new_tensor([img_w, img_h, img_w,
- img_h]).unsqueeze(0)
- pos_gt_bboxes_normalized = sampling_result.pos_gt_bboxes / factor
- pos_gt_bboxes_targets = bbox_xyxy_to_cxcywh(pos_gt_bboxes_normalized)
- bbox_targets[pos_inds] = pos_gt_bboxes_targets
- return (labels, label_weights, bbox_targets, bbox_weights, pos_inds,
- neg_inds)
-
- # over-write because img_metas are needed as inputs for bbox_head.
- def forward_train(self,
- x,
- img_metas,
- gt_bboxes,
- gt_labels=None,
- gt_bboxes_ignore=None,
- proposal_cfg=None,
- **kwargs):
- """Forward function for training mode.
-
- Args:
- x (list[Tensor]): Features from backbone.
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes (Tensor): Ground truth bboxes of the image,
- shape (num_gts, 4).
- gt_labels (Tensor): Ground truth labels of each box,
- shape (num_gts,).
- gt_bboxes_ignore (Tensor): Ground truth bboxes to be
- ignored, shape (num_ignored_gts, 4).
- proposal_cfg (mmcv.Config): Test / postprocessing configuration,
- if None, test_cfg would be used.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- assert proposal_cfg is None, '"proposal_cfg" must be None'
- outs = self(x, img_metas)
- if gt_labels is None:
- loss_inputs = outs + (gt_bboxes, img_metas)
- else:
- loss_inputs = outs + (gt_bboxes, gt_labels, img_metas)
- losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
- return losses
-
- @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list'))
- def get_bboxes(self,
- all_cls_scores_list,
- all_bbox_preds_list,
- img_metas,
- rescale=False):
- """Transform network outputs for a batch into bbox predictions.
-
- Args:
- all_cls_scores_list (list[Tensor]): Classification outputs
- for each feature level. Each is a 4D-tensor with shape
- [nb_dec, bs, num_query, cls_out_channels].
- all_bbox_preds_list (list[Tensor]): Sigmoid regression
- outputs for each feature level. Each is a 4D-tensor with
- normalized coordinate format (cx, cy, w, h) and shape
- [nb_dec, bs, num_query, 4].
- img_metas (list[dict]): Meta information of each image.
- rescale (bool, optional): If True, return boxes in original
- image space. Default False.
-
- Returns:
- list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. \
- The first item is an (n, 5) tensor, where the first 4 columns \
- are bounding box positions (tl_x, tl_y, br_x, br_y) and the \
- 5-th column is a score between 0 and 1. The second item is a \
- (n,) tensor where each item is the predicted class label of \
- the corresponding box.
- """
-        # NOTE by default only outputs from the last feature level are used,
-        # and only the outputs from the last decoder layer are used.
- cls_scores = all_cls_scores_list[-1][-1]
- bbox_preds = all_bbox_preds_list[-1][-1]
-
- result_list = []
- for img_id in range(len(img_metas)):
- cls_score = cls_scores[img_id]
- bbox_pred = bbox_preds[img_id]
- img_shape = img_metas[img_id]['img_shape']
- scale_factor = img_metas[img_id]['scale_factor']
- proposals = self._get_bboxes_single(cls_score, bbox_pred,
- img_shape, scale_factor,
- rescale)
- result_list.append(proposals)
- return result_list
-
- def _get_bboxes_single(self,
- cls_score,
- bbox_pred,
- img_shape,
- scale_factor,
- rescale=False):
- """Transform outputs from the last decoder layer into bbox predictions
- for each image.
-
- Args:
- cls_score (Tensor): Box score logits from the last decoder layer
- for each image. Shape [num_query, cls_out_channels].
- bbox_pred (Tensor): Sigmoid outputs from the last decoder layer
- for each image, with coordinate format (cx, cy, w, h) and
- shape [num_query, 4].
- img_shape (tuple[int]): Shape of input image, (height, width, 3).
-            scale_factor (ndarray, optional): Scale factor of the image arranged
- as (w_scale, h_scale, w_scale, h_scale).
- rescale (bool, optional): If True, return boxes in original image
- space. Default False.
-
- Returns:
- tuple[Tensor]: Results of detected bboxes and labels.
-
- - det_bboxes: Predicted bboxes with shape [num_query, 5], \
- where the first 4 columns are bounding box positions \
- (tl_x, tl_y, br_x, br_y) and the 5-th column are scores \
- between 0 and 1.
- - det_labels: Predicted labels of the corresponding box with \
- shape [num_query].
- """
- assert len(cls_score) == len(bbox_pred)
- # exclude background
- scores, det_labels = F.softmax(cls_score, dim=-1)[..., :-1].max(-1)
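-        # Convert normalized (cx, cy, w, h) predictions to absolute (x1, y1, x2, y2)
-        # pixel coordinates and clamp them to the image boundaries.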
- det_bboxes = bbox_cxcywh_to_xyxy(bbox_pred)
- det_bboxes[:, 0::2] = det_bboxes[:, 0::2] * img_shape[1]
- det_bboxes[:, 1::2] = det_bboxes[:, 1::2] * img_shape[0]
- det_bboxes[:, 0::2].clamp_(min=0, max=img_shape[1])
- det_bboxes[:, 1::2].clamp_(min=0, max=img_shape[0])
- if rescale:
- det_bboxes /= det_bboxes.new_tensor(scale_factor)
- det_bboxes = torch.cat((det_bboxes, scores.unsqueeze(1)), -1)
- return det_bboxes, det_labels
diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/training/__init__.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/training/__init__.py
deleted file mode 100644
index e1e1a5ba99e56a56ecaa14f7d4fa41777789c0cf..0000000000000000000000000000000000000000
--- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/training/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-# empty
diff --git a/spaces/Sakil/tweetlib6_app/README.md b/spaces/Sakil/tweetlib6_app/README.md
deleted file mode 100644
index 9f39442308c4b3f6e9848d88899d3655c7aafb98..0000000000000000000000000000000000000000
--- a/spaces/Sakil/tweetlib6_app/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: Tweetlib6_app
-emoji: 🏢
-colorFrom: blue
-colorTo: indigo
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`models`: _List[string]_
-HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`datasets`: _List[string]_
-HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/SameerR007/Movie_Recommendation_updated/app.py b/spaces/SameerR007/Movie_Recommendation_updated/app.py
deleted file mode 100644
index 7491ba067cd55dc37e1b37e61d3450c54c69bda1..0000000000000000000000000000000000000000
--- a/spaces/SameerR007/Movie_Recommendation_updated/app.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import streamlit as st
-import pandas as pd
-import pickle
-movies_data=pickle.load(open("movies_data.pkl","rb"))
-similarity=pickle.load(open("similarity.pkl","rb"))
-def recommend(movie):
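-    # Look up the index of the selected title, then return the five most similar
-    # titles from the precomputed similarity matrix (index 0 is the movie itself).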
- movie_index=movies_data[movies_data['title']==movie].index[0]
- distances=similarity[movie_index]
- movies_list_index=sorted(list(enumerate(distances)),reverse=True,key=lambda x:x[1])[1:6]
- recom_movies=[]
- for i in movies_list_index:
- recom_movies.append(movies_data.iloc[i[0]].title)
- return(recom_movies)
-
-def main():
-    st.title("Movie Recommender System")
- movies_list=movies_data['title']
- selected=st.selectbox('Which movie have you seen',movies_list)
-
- if st.button("Recommend"):
- recommendations=recommend(selected)
- for i in recommendations:
- st.write(i)
-
-if __name__=='__main__':
- main()
\ No newline at end of file
diff --git a/spaces/ServerX/PorcoDiaz/demucs/pretrained.py b/spaces/ServerX/PorcoDiaz/demucs/pretrained.py
deleted file mode 100644
index 6aac5db100cc7a9084af96d2cd083f0c8fac473c..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/demucs/pretrained.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# author: adefossez
-
-import logging
-
-from diffq import DiffQuantizer
-import torch.hub
-
-from .model import Demucs
-from .tasnet import ConvTasNet
-from .utils import set_state
-
-logger = logging.getLogger(__name__)
-ROOT = "https://dl.fbaipublicfiles.com/demucs/v3.0/"
-
-PRETRAINED_MODELS = {
- 'demucs': 'e07c671f',
- 'demucs48_hq': '28a1282c',
- 'demucs_extra': '3646af93',
- 'demucs_quantized': '07afea75',
- 'tasnet': 'beb46fac',
- 'tasnet_extra': 'df3777b2',
- 'demucs_unittest': '09ebc15f',
-}
-
-SOURCES = ["drums", "bass", "other", "vocals"]
-
-
-def get_url(name):
- sig = PRETRAINED_MODELS[name]
- return ROOT + name + "-" + sig[:8] + ".th"
-
-
-def is_pretrained(name):
- return name in PRETRAINED_MODELS
-
-
-def load_pretrained(name):
- if name == "demucs":
- return demucs(pretrained=True)
- elif name == "demucs48_hq":
- return demucs(pretrained=True, hq=True, channels=48)
- elif name == "demucs_extra":
- return demucs(pretrained=True, extra=True)
- elif name == "demucs_quantized":
- return demucs(pretrained=True, quantized=True)
- elif name == "demucs_unittest":
- return demucs_unittest(pretrained=True)
- elif name == "tasnet":
- return tasnet(pretrained=True)
- elif name == "tasnet_extra":
- return tasnet(pretrained=True, extra=True)
- else:
- raise ValueError(f"Invalid pretrained name {name}")
-
-
-def _load_state(name, model, quantizer=None):
- url = get_url(name)
- state = torch.hub.load_state_dict_from_url(url, map_location='cpu', check_hash=True)
- set_state(model, quantizer, state)
- if quantizer:
- quantizer.detach()
-
-
-def demucs_unittest(pretrained=True):
- model = Demucs(channels=4, sources=SOURCES)
- if pretrained:
- _load_state('demucs_unittest', model)
- return model
-
-
-def demucs(pretrained=True, extra=False, quantized=False, hq=False, channels=64):
- if not pretrained and (extra or quantized or hq):
- raise ValueError("if extra or quantized is True, pretrained must be True.")
- model = Demucs(sources=SOURCES, channels=channels)
- if pretrained:
- name = 'demucs'
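-        # The checkpoint name encodes the variant: an optional channel count plus one of
-        # the mutually exclusive suffixes '_quantized', '_extra' or '_hq'.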
- if channels != 64:
- name += str(channels)
- quantizer = None
- if sum([extra, quantized, hq]) > 1:
- raise ValueError("Only one of extra, quantized, hq, can be True.")
- if quantized:
- quantizer = DiffQuantizer(model, group_size=8, min_size=1)
- name += '_quantized'
- if extra:
- name += '_extra'
- if hq:
- name += '_hq'
- _load_state(name, model, quantizer)
- return model
-
-
-def tasnet(pretrained=True, extra=False):
- if not pretrained and extra:
- raise ValueError("if extra is True, pretrained must be True.")
- model = ConvTasNet(X=10, sources=SOURCES)
- if pretrained:
- name = 'tasnet'
- if extra:
- name = 'tasnet_extra'
- _load_state(name, model)
- return model
diff --git a/spaces/SpacesExamples/ComfyUI/README.md b/spaces/SpacesExamples/ComfyUI/README.md
deleted file mode 100644
index 9a18b220b506adbf1f72ca73223edc0fc1f6f754..0000000000000000000000000000000000000000
--- a/spaces/SpacesExamples/ComfyUI/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: ComfyUI
-emoji: 📈
-colorFrom: green
-colorTo: pink
-sdk: docker
-pinned: false
----
-
-model: https://huggingface.co/stabilityai/control-lora
\ No newline at end of file
diff --git a/spaces/Stearns/soar-d-rules-knowledge-inspector/Inspect_Logic.py b/spaces/Stearns/soar-d-rules-knowledge-inspector/Inspect_Logic.py
deleted file mode 100644
index 227ecd82513408e4346bc5d4344d0694fe85f667..0000000000000000000000000000000000000000
--- a/spaces/Stearns/soar-d-rules-knowledge-inspector/Inspect_Logic.py
+++ /dev/null
@@ -1,154 +0,0 @@
-from io import StringIO
-import streamlit as st
-
-from smem_token_parser import SMEM_Parser, read_tokens_from_lines
-from smem_obj import SMEM_Obj, ObjType
-
-MIN_COLS = 4
-
-# Set up page config before any other streamlit commands
-st.set_page_config(page_title="Soar Agent Memory Inspector", layout="wide")
-st.title("Logic Inspector")
-
-
-def get_smem_root_from_file(smem_file):
- # tokens = get_smem_tokens_from_local_file(smem_filename)
- tokens = read_tokens_from_lines(smem_file)
- if tokens == None:
- st.error("Error reading file: '"+str(smem_file)+"'")
- return None
- parser = SMEM_Parser()
- parser.parse_file(tokens)
- return parser.get_context_root()
-
-def reset_cols():
- st.session_state.col_obj_list = [None if i > 0 else x for i,x in enumerate(st.session_state.col_obj_list)]
-
-## DEFINE THE FILE UPLOADER ELEMENT
-
-if "col_obj_list" not in st.session_state:
- st.session_state["col_obj_list"] = None
-
-file_upload_expander = st.expander(label="Select a file to inspect", expanded=(st.session_state.col_obj_list == None))
-file = file_upload_expander.file_uploader(" ")
-
-if file is not None:
- if st.session_state.col_obj_list is None:
- root = get_smem_root_from_file(StringIO(file.getvalue().decode("utf-8")))
- if root:
- st.session_state["col_obj_list"] = [None]*MIN_COLS
- st.session_state.col_obj_list[0] = root
- st.experimental_rerun()
-else:
- st.session_state["col_obj_list"] = None
-
-if st.session_state.col_obj_list is None:
- st.stop()
-
-
-## DEFINE THE CONTENT FILTERS
-@st.cache(show_spinner=False, hash_funcs={SMEM_Obj: id})
-def get_filter_features_dict(root_obj):
- return root_obj.get_referenced_features()
-
-# Get the content to filter on
-features_dict = get_filter_features_dict(st.session_state.col_obj_list[0])
-filters_expander = st.expander(label="Filters")
-filters_cols = filters_expander.columns(min(len(features_dict), 5)) # Show 5 filters per row
-filters_dict = {}
-for i,key in enumerate(features_dict):
- col = filters_cols[i % len(filters_cols)]
- filters_dict[key] = col.multiselect(label=key, options=["(none)"]+sorted(features_dict[key]), on_change=reset_cols)
-
-
-## DEFINE THE KNOWLEDGE INSPECTOR COLUMNS
-
-def add_col(index, obj):
- st.session_state.col_obj_list = st.session_state.col_obj_list[:index+1]
- st.session_state.col_obj_list.append(obj)
- # st.session_state.current_tab_index = index+1
- st.experimental_rerun()
-
-def get_header_str(obj_type):
- if obj_type == ObjType.CONTEXT:
- return "START"
- elif obj_type == ObjType.COND_CONJ:
- return "CONDITION"
- elif obj_type == ObjType.OP:
- return "CHOICE"
- elif obj_type == ObjType.ACT_GROUP:
- return "RESULT"
- else:
- return " "
-
-def get_tested_wmes_from_obj_str(obj_str):
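-    # Split a condition string of the form "(attr is value) AND (attr2 is value2)"
-    # into parallel lists of attributes and values.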
- attr_list = []
- val_list = []
- clauses = str(obj_str).split(" AND ")
- for clause in clauses:
- attr,val = clause.replace("(","").replace(")","").split(" is ", maxsplit=1)
- attr_list += [attr]
- val_list += [val]
- return attr_list, val_list
-
-st.subheader("Click the buttons below to select conditions and to inspect resulting actions.")
-
-cols = st.columns(max(MIN_COLS,len(st.session_state.col_obj_list)))
-# Iteratively build the columns of navigable knowlege elements
-for i,col in enumerate(cols):
- try:
- if st.session_state.col_obj_list[i] == None:
- break
- except Exception as e:
- # print("ERROR checking column "+str(i+1)+" of "+str(len(st.session_state.col_obj_list))+": "+str(e))
- break
- # Build the available objects to navigate under this col's object
- obj = st.session_state.col_obj_list[i]
- obj_str,obj_label,obj_desc = obj.to_string()
- sub_objs = list(obj.get_child_objects(compact=True))
- sub_objs.sort(key=lambda x:x.to_string()[1])
- # print(obj.obj_type)
- if len(sub_objs) > 0:
- col.markdown("**"+get_header_str(sub_objs[0].obj_type)+"**",)
- col.markdown("---")
- # Show the object's main title and description
- col.text(obj_str)
- if obj_desc != None:
- col.markdown("*"+obj_desc+"*")
-
- # Print the child objects of this object as the items in this column
- for j,sub in enumerate(sub_objs):
- sub_str,sub_label,sub_desc = sub.to_string()
- if sub_desc != None:
- button_text = sub_desc
- else:
- button_text = sub_label
- # Check filters
- if sub.obj_type == ObjType.COND_PRIM or sub.obj_type == ObjType.COND_CONJ:
- # Get the feature and value for this sub obj
- keep = True
- attr_list, val_list = get_tested_wmes_from_obj_str(sub_desc)
- # Check each filter key for a match
- for attr in filters_dict:
- if attr not in attr_list:
- # Filter doesn't apply for this object
- continue
- if val_list[attr_list.index(attr)] not in filters_dict[attr] and len(filters_dict[attr]) > 0:
- # Value not present
- keep = False
- break
- if not keep:
- continue
-
- if sub.obj_type == ObjType.OP:
- subsub = list(sub.get_child_objects(compact=True))[0]
- _,group_string,_ = subsub.to_string()
- group_string = "\n * "+str(group_string).replace("AND", "\n * ")
- col.markdown(group_string)
- if col.button(button_text, key="button"+str(i)+"-"+str(j)):
- add_col(i,sub)
- elif sub.obj_type == ObjType.ACT_GROUP:
- col.markdown("*Output Details:* ")
- col.markdown("\n * "+str(sub_str).replace(") AND", ")\n * "))
- elif col.button(button_text, key="button"+str(i)+"-"+str(j)):
- add_col(i,sub)
diff --git a/spaces/SuYuanS/AudioCraft_Plus/scripts/static/style.css b/spaces/SuYuanS/AudioCraft_Plus/scripts/static/style.css
deleted file mode 100644
index a0df7c63a0d2dd9a79f33f5d869ca31c9da87e8d..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/scripts/static/style.css
+++ /dev/null
@@ -1,113 +0,0 @@
-body {
- background-color: #fbfbfb;
- margin: 0;
-}
-
-select, input {
- font-size: 1em;
- max-width: 100%;
-}
-
-.xp_name {
- font-family: monospace;
-}
-
-.simple_form {
- background-color: #dddddd;
- padding: 1em;
- margin: 0.5em;
-}
-
-textarea {
- margin-top: 0.5em;
- margin-bottom: 0.5em;
-}
-
-.rating {
- background-color: grey;
- padding-top: 5px;
- padding-bottom: 5px;
- padding-left: 8px;
- padding-right: 8px;
- margin-right: 2px;
- cursor:pointer;
-}
-
-.rating_selected {
- background-color: purple;
-}
-
-.content {
- font-family: sans-serif;
- background-color: #f6f6f6;
- padding: 40px;
- margin: 0 auto;
- max-width: 1000px;
-}
-
-.track label {
- padding-top: 10px;
- padding-bottom: 10px;
-}
-.track {
- padding: 15px;
- margin: 5px;
- background-color: #c8c8c8;
-}
-
-.submit-big {
- width:400px;
- height:30px;
- font-size: 20px;
-}
-
-.error {
- color: red;
-}
-
-.ratings {
- margin-left: 10px;
-}
-
-.important {
- font-weight: bold;
-}
-
-.survey {
- margin-bottom: 100px;
-}
-
-.success {
- color: #25901b;
- font-weight: bold;
-}
-.warning {
- color: #8a1f19;
- font-weight: bold;
-}
-.track>section {
- display: flex;
- align-items: center;
-}
-
-.prompt {
- display: flex;
- align-items: center;
-}
-
-.track>section>div {
- padding-left: 10px;
-}
-
-audio {
- max-width: 280px;
- max-height: 40px;
- margin-left: 10px;
- margin-right: 10px;
-}
-
-.special {
- font-weight: bold;
- color: #2c2c2c;
-}
-
diff --git a/spaces/Suniilkumaar/MusicGen-updated/tests/common_utils/__init__.py b/spaces/Suniilkumaar/MusicGen-updated/tests/common_utils/__init__.py
deleted file mode 100644
index 74ffcfef96fec35c99b2a1a053a61f44f7a8bbe9..0000000000000000000000000000000000000000
--- a/spaces/Suniilkumaar/MusicGen-updated/tests/common_utils/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from .temp_utils import TempDirMixin
-from .wav_utils import get_batch_white_noise, get_white_noise, save_wav
diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/backbone/vit.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/backbone/vit.py
deleted file mode 100644
index 07b5e2073ae80859be59d1142394929b504cf427..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/backbone/vit.py
+++ /dev/null
@@ -1,524 +0,0 @@
-import logging
-import math
-import fvcore.nn.weight_init as weight_init
-import torch
-import torch.nn as nn
-
-from annotator.oneformer.detectron2.layers import CNNBlockBase, Conv2d, get_norm
-from annotator.oneformer.detectron2.modeling.backbone.fpn import _assert_strides_are_log2_contiguous
-
-from .backbone import Backbone
-from .utils import (
- PatchEmbed,
- add_decomposed_rel_pos,
- get_abs_pos,
- window_partition,
- window_unpartition,
-)
-
-logger = logging.getLogger(__name__)
-
-
-__all__ = ["ViT", "SimpleFeaturePyramid", "get_vit_lr_decay_rate"]
-
-
-class Attention(nn.Module):
- """Multi-head Attention block with relative position embeddings."""
-
- def __init__(
- self,
- dim,
- num_heads=8,
- qkv_bias=True,
- use_rel_pos=False,
- rel_pos_zero_init=True,
- input_size=None,
- ):
- """
- Args:
- dim (int): Number of input channels.
- num_heads (int): Number of attention heads.
-            qkv_bias (bool): If True, add a learnable bias to query, key, value.
-            use_rel_pos (bool): If True, add relative positional embeddings to the attention map.
- rel_pos_zero_init (bool): If True, zero initialize relative positional parameters.
- input_size (int or None): Input resolution for calculating the relative positional
- parameter size.
- """
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = head_dim**-0.5
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.proj = nn.Linear(dim, dim)
-
- self.use_rel_pos = use_rel_pos
- if self.use_rel_pos:
- # initialize relative positional embeddings
- self.rel_pos_h = nn.Parameter(torch.zeros(2 * input_size[0] - 1, head_dim))
- self.rel_pos_w = nn.Parameter(torch.zeros(2 * input_size[1] - 1, head_dim))
-
- if not rel_pos_zero_init:
- nn.init.trunc_normal_(self.rel_pos_h, std=0.02)
- nn.init.trunc_normal_(self.rel_pos_w, std=0.02)
-
- def forward(self, x):
- B, H, W, _ = x.shape
- # qkv with shape (3, B, nHead, H * W, C)
- qkv = self.qkv(x).reshape(B, H * W, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
- # q, k, v with shape (B * nHead, H * W, C)
- q, k, v = qkv.reshape(3, B * self.num_heads, H * W, -1).unbind(0)
-
- attn = (q * self.scale) @ k.transpose(-2, -1)
-
- if self.use_rel_pos:
- attn = add_decomposed_rel_pos(attn, q, self.rel_pos_h, self.rel_pos_w, (H, W), (H, W))
-
- attn = attn.softmax(dim=-1)
- x = (attn @ v).view(B, self.num_heads, H, W, -1).permute(0, 2, 3, 1, 4).reshape(B, H, W, -1)
- x = self.proj(x)
-
- return x
-
-
-class ResBottleneckBlock(CNNBlockBase):
- """
- The standard bottleneck residual block without the last activation layer.
- It contains 3 conv layers with kernels 1x1, 3x3, 1x1.
- """
-
- def __init__(
- self,
- in_channels,
- out_channels,
- bottleneck_channels,
- norm="LN",
- act_layer=nn.GELU,
- ):
- """
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- bottleneck_channels (int): number of output channels for the 3x3
- "bottleneck" conv layers.
- norm (str or callable): normalization for all conv layers.
- See :func:`layers.get_norm` for supported format.
- act_layer (callable): activation for all conv layers.
- """
- super().__init__(in_channels, out_channels, 1)
-
- self.conv1 = Conv2d(in_channels, bottleneck_channels, 1, bias=False)
- self.norm1 = get_norm(norm, bottleneck_channels)
- self.act1 = act_layer()
-
- self.conv2 = Conv2d(
- bottleneck_channels,
- bottleneck_channels,
- 3,
- padding=1,
- bias=False,
- )
- self.norm2 = get_norm(norm, bottleneck_channels)
- self.act2 = act_layer()
-
- self.conv3 = Conv2d(bottleneck_channels, out_channels, 1, bias=False)
- self.norm3 = get_norm(norm, out_channels)
-
- for layer in [self.conv1, self.conv2, self.conv3]:
- weight_init.c2_msra_fill(layer)
- for layer in [self.norm1, self.norm2]:
- layer.weight.data.fill_(1.0)
- layer.bias.data.zero_()
- # zero init last norm layer.
- self.norm3.weight.data.zero_()
- self.norm3.bias.data.zero_()
-
- def forward(self, x):
- out = x
- for layer in self.children():
- out = layer(out)
-
- out = x + out
- return out
-
-
-class Block(nn.Module):
- """Transformer blocks with support of window attention and residual propagation blocks"""
-
- def __init__(
- self,
- dim,
- num_heads,
- mlp_ratio=4.0,
- qkv_bias=True,
- drop_path=0.0,
- norm_layer=nn.LayerNorm,
- act_layer=nn.GELU,
- use_rel_pos=False,
- rel_pos_zero_init=True,
- window_size=0,
- use_residual_block=False,
- input_size=None,
- ):
- """
- Args:
- dim (int): Number of input channels.
- num_heads (int): Number of attention heads in each ViT block.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool): If True, add a learnable bias to query, key, value.
- drop_path (float): Stochastic depth rate.
- norm_layer (nn.Module): Normalization layer.
- act_layer (nn.Module): Activation layer.
- use_rel_pos (bool): If True, add relative positional embeddings to the attention map.
- rel_pos_zero_init (bool): If True, zero initialize relative positional parameters.
- window_size (int): Window size for window attention blocks. If it equals 0, then not
- use window attention.
- use_residual_block (bool): If True, use a residual block after the MLP block.
- input_size (int or None): Input resolution for calculating the relative positional
- parameter size.
- """
- super().__init__()
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim,
- num_heads=num_heads,
- qkv_bias=qkv_bias,
- use_rel_pos=use_rel_pos,
- rel_pos_zero_init=rel_pos_zero_init,
- input_size=input_size if window_size == 0 else (window_size, window_size),
- )
-
- from timm.models.layers import DropPath, Mlp
-
- self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
- self.norm2 = norm_layer(dim)
- self.mlp = Mlp(in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer)
-
- self.window_size = window_size
-
- self.use_residual_block = use_residual_block
- if use_residual_block:
- # Use a residual block with bottleneck channel as dim // 2
- self.residual = ResBottleneckBlock(
- in_channels=dim,
- out_channels=dim,
- bottleneck_channels=dim // 2,
- norm="LN",
- act_layer=act_layer,
- )
-
- def forward(self, x):
- shortcut = x
- x = self.norm1(x)
- # Window partition
- if self.window_size > 0:
- H, W = x.shape[1], x.shape[2]
- x, pad_hw = window_partition(x, self.window_size)
-
- x = self.attn(x)
- # Reverse window partition
- if self.window_size > 0:
- x = window_unpartition(x, self.window_size, pad_hw, (H, W))
-
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- if self.use_residual_block:
- x = self.residual(x.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)
-
- return x
-
-
-class ViT(Backbone):
- """
- This module implements Vision Transformer (ViT) backbone in :paper:`vitdet`.
- "Exploring Plain Vision Transformer Backbones for Object Detection",
- https://arxiv.org/abs/2203.16527
- """
-
- def __init__(
- self,
- img_size=1024,
- patch_size=16,
- in_chans=3,
- embed_dim=768,
- depth=12,
- num_heads=12,
- mlp_ratio=4.0,
- qkv_bias=True,
- drop_path_rate=0.0,
- norm_layer=nn.LayerNorm,
- act_layer=nn.GELU,
- use_abs_pos=True,
- use_rel_pos=False,
- rel_pos_zero_init=True,
- window_size=0,
- window_block_indexes=(),
- residual_block_indexes=(),
- use_act_checkpoint=False,
- pretrain_img_size=224,
- pretrain_use_cls_token=True,
- out_feature="last_feat",
- ):
- """
- Args:
- img_size (int): Input image size.
- patch_size (int): Patch size.
- in_chans (int): Number of input image channels.
- embed_dim (int): Patch embedding dimension.
- depth (int): Depth of ViT.
- num_heads (int): Number of attention heads in each ViT block.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool): If True, add a learnable bias to query, key, value.
- drop_path_rate (float): Stochastic depth rate.
- norm_layer (nn.Module): Normalization layer.
- act_layer (nn.Module): Activation layer.
- use_abs_pos (bool): If True, use absolute positional embeddings.
- use_rel_pos (bool): If True, add relative positional embeddings to the attention map.
- rel_pos_zero_init (bool): If True, zero initialize relative positional parameters.
- window_size (int): Window size for window attention blocks.
- window_block_indexes (list): Indexes for blocks using window attention.
- residual_block_indexes (list): Indexes for blocks using conv propagation.
- use_act_checkpoint (bool): If True, use activation checkpointing.
- pretrain_img_size (int): input image size for pretraining models.
-            pretrain_use_cls_token (bool): If True, pretraining models use class token.
- out_feature (str): name of the feature from the last block.
- """
- super().__init__()
- self.pretrain_use_cls_token = pretrain_use_cls_token
-
- self.patch_embed = PatchEmbed(
- kernel_size=(patch_size, patch_size),
- stride=(patch_size, patch_size),
- in_chans=in_chans,
- embed_dim=embed_dim,
- )
-
- if use_abs_pos:
- # Initialize absolute positional embedding with pretrain image size.
- num_patches = (pretrain_img_size // patch_size) * (pretrain_img_size // patch_size)
- num_positions = (num_patches + 1) if pretrain_use_cls_token else num_patches
- self.pos_embed = nn.Parameter(torch.zeros(1, num_positions, embed_dim))
- else:
- self.pos_embed = None
-
- # stochastic depth decay rule
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)]
-
- self.blocks = nn.ModuleList()
- for i in range(depth):
- block = Block(
- dim=embed_dim,
- num_heads=num_heads,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- drop_path=dpr[i],
- norm_layer=norm_layer,
- act_layer=act_layer,
- use_rel_pos=use_rel_pos,
- rel_pos_zero_init=rel_pos_zero_init,
- window_size=window_size if i in window_block_indexes else 0,
- use_residual_block=i in residual_block_indexes,
- input_size=(img_size // patch_size, img_size // patch_size),
- )
- if use_act_checkpoint:
- # TODO: use torch.utils.checkpoint
- from fairscale.nn.checkpoint import checkpoint_wrapper
-
- block = checkpoint_wrapper(block)
- self.blocks.append(block)
-
- self._out_feature_channels = {out_feature: embed_dim}
- self._out_feature_strides = {out_feature: patch_size}
- self._out_features = [out_feature]
-
- if self.pos_embed is not None:
- nn.init.trunc_normal_(self.pos_embed, std=0.02)
-
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- nn.init.trunc_normal_(m.weight, std=0.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- def forward(self, x):
- x = self.patch_embed(x)
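-        # Add absolute position embeddings (resized to the current feature size) when enabled.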
- if self.pos_embed is not None:
- x = x + get_abs_pos(
- self.pos_embed, self.pretrain_use_cls_token, (x.shape[1], x.shape[2])
- )
-
- for blk in self.blocks:
- x = blk(x)
-
- outputs = {self._out_features[0]: x.permute(0, 3, 1, 2)}
- return outputs
-
-
-class SimpleFeaturePyramid(Backbone):
- """
- This module implements SimpleFeaturePyramid in :paper:`vitdet`.
- It creates pyramid features built on top of the input feature map.
- """
-
- def __init__(
- self,
- net,
- in_feature,
- out_channels,
- scale_factors,
- top_block=None,
- norm="LN",
- square_pad=0,
- ):
- """
- Args:
- net (Backbone): module representing the subnetwork backbone.
- Must be a subclass of :class:`Backbone`.
- in_feature (str): names of the input feature maps coming
- from the net.
- out_channels (int): number of channels in the output feature maps.
- scale_factors (list[float]): list of scaling factors to upsample or downsample
- the input features for creating pyramid features.
- top_block (nn.Module or None): if provided, an extra operation will
- be performed on the output of the last (smallest resolution)
- pyramid output, and the result will extend the result list. The top_block
- further downsamples the feature map. It must have an attribute
- "num_levels", meaning the number of extra pyramid levels added by
- this block, and "in_feature", which is a string representing
- its input feature (e.g., p5).
- norm (str): the normalization to use.
- square_pad (int): If > 0, require input images to be padded to specific square size.
- """
- super(SimpleFeaturePyramid, self).__init__()
- assert isinstance(net, Backbone)
-
- self.scale_factors = scale_factors
-
- input_shapes = net.output_shape()
- strides = [int(input_shapes[in_feature].stride / scale) for scale in scale_factors]
- _assert_strides_are_log2_contiguous(strides)
-
- dim = input_shapes[in_feature].channels
- self.stages = []
- use_bias = norm == ""
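-        # Build one small stage per scale factor: 4x and 2x upsample with transposed convs,
-        # 1x is an identity, 0.5x max-pools; each stage ends with 1x1 and 3x3 projection convs.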
- for idx, scale in enumerate(scale_factors):
- out_dim = dim
- if scale == 4.0:
- layers = [
- nn.ConvTranspose2d(dim, dim // 2, kernel_size=2, stride=2),
- get_norm(norm, dim // 2),
- nn.GELU(),
- nn.ConvTranspose2d(dim // 2, dim // 4, kernel_size=2, stride=2),
- ]
- out_dim = dim // 4
- elif scale == 2.0:
- layers = [nn.ConvTranspose2d(dim, dim // 2, kernel_size=2, stride=2)]
- out_dim = dim // 2
- elif scale == 1.0:
- layers = []
- elif scale == 0.5:
- layers = [nn.MaxPool2d(kernel_size=2, stride=2)]
- else:
- raise NotImplementedError(f"scale_factor={scale} is not supported yet.")
-
- layers.extend(
- [
- Conv2d(
- out_dim,
- out_channels,
- kernel_size=1,
- bias=use_bias,
- norm=get_norm(norm, out_channels),
- ),
- Conv2d(
- out_channels,
- out_channels,
- kernel_size=3,
- padding=1,
- bias=use_bias,
- norm=get_norm(norm, out_channels),
- ),
- ]
- )
- layers = nn.Sequential(*layers)
-
- stage = int(math.log2(strides[idx]))
- self.add_module(f"simfp_{stage}", layers)
- self.stages.append(layers)
-
- self.net = net
- self.in_feature = in_feature
- self.top_block = top_block
- # Return feature names are "p", like ["p2", "p3", ..., "p6"]
- self._out_feature_strides = {"p{}".format(int(math.log2(s))): s for s in strides}
- # top block output feature maps.
- if self.top_block is not None:
- for s in range(stage, stage + self.top_block.num_levels):
- self._out_feature_strides["p{}".format(s + 1)] = 2 ** (s + 1)
-
- self._out_features = list(self._out_feature_strides.keys())
- self._out_feature_channels = {k: out_channels for k in self._out_features}
- self._size_divisibility = strides[-1]
- self._square_pad = square_pad
-
- @property
- def padding_constraints(self):
- return {
- "size_divisiblity": self._size_divisibility,
- "square_size": self._square_pad,
- }
-
- def forward(self, x):
- """
- Args:
- x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``.
-
- Returns:
- dict[str->Tensor]:
- mapping from feature map name to pyramid feature map tensor
- in high to low resolution order. Returned feature names follow the FPN
- convention: "p", where stage has stride = 2 ** stage e.g.,
- ["p2", "p3", ..., "p6"].
- """
- bottom_up_features = self.net(x)
- features = bottom_up_features[self.in_feature]
- results = []
-
- for stage in self.stages:
- results.append(stage(features))
-
- if self.top_block is not None:
- if self.top_block.in_feature in bottom_up_features:
- top_block_in_feature = bottom_up_features[self.top_block.in_feature]
- else:
- top_block_in_feature = results[self._out_features.index(self.top_block.in_feature)]
- results.extend(self.top_block(top_block_in_feature))
- assert len(self._out_features) == len(results)
- return {f: res for f, res in zip(self._out_features, results)}
-
-
-def get_vit_lr_decay_rate(name, lr_decay_rate=1.0, num_layers=12):
- """
- Calculate lr decay rate for different ViT blocks.
- Args:
- name (string): parameter name.
- lr_decay_rate (float): base lr decay rate.
- num_layers (int): number of ViT blocks.
-
- Returns:
- lr decay rate for the given parameter.
- """
- layer_id = num_layers + 1
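-    # Parameters outside the cases below keep layer_id = num_layers + 1, i.e. no decay;
-    # the patch/position embeddings get the strongest decay and later blocks get less.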
- if name.startswith("backbone"):
- if ".pos_embed" in name or ".patch_embed" in name:
- layer_id = 0
- elif ".blocks." in name and ".residual." not in name:
- layer_id = int(name[name.find(".blocks.") :].split(".")[2]) + 1
-
- return lr_decay_rate ** (num_layers + 1 - layer_id)
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/fp16_utils.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/fp16_utils.py
deleted file mode 100644
index 1981011d6859192e3e663e29d13500d56ba47f6c..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/fp16_utils.py
+++ /dev/null
@@ -1,410 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import functools
-import warnings
-from collections import abc
-from inspect import getfullargspec
-
-import numpy as np
-import torch
-import torch.nn as nn
-
-from annotator.uniformer.mmcv.utils import TORCH_VERSION, digit_version
-from .dist_utils import allreduce_grads as _allreduce_grads
-
-try:
- # If PyTorch version >= 1.6.0, torch.cuda.amp.autocast would be imported
- # and used; otherwise, auto fp16 will adopt mmcv's implementation.
- # Note that when PyTorch >= 1.6.0, we still cast tensor types to fp16
- # manually, so the behavior may not be consistent with real amp.
- from torch.cuda.amp import autocast
-except ImportError:
- pass
-
-
-def cast_tensor_type(inputs, src_type, dst_type):
- """Recursively convert Tensor in inputs from src_type to dst_type.
-
- Args:
- inputs: Inputs to be cast.
- src_type (torch.dtype): Source type.
- dst_type (torch.dtype): Destination type.
-
- Returns:
- The same type as ``inputs``, but with all contained Tensors cast to ``dst_type``.
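-
- Example:
- >>> # illustrative: a dict of fp32 tensors is converted recursively
- >>> import torch
- >>> cast_tensor_type({'x': torch.zeros(2)}, torch.float, torch.half)['x'].dtype
- torch.float16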
- """
- if isinstance(inputs, nn.Module):
- return inputs
- elif isinstance(inputs, torch.Tensor):
- return inputs.to(dst_type)
- elif isinstance(inputs, str):
- return inputs
- elif isinstance(inputs, np.ndarray):
- return inputs
- elif isinstance(inputs, abc.Mapping):
- return type(inputs)({
- k: cast_tensor_type(v, src_type, dst_type)
- for k, v in inputs.items()
- })
- elif isinstance(inputs, abc.Iterable):
- return type(inputs)(
- cast_tensor_type(item, src_type, dst_type) for item in inputs)
- else:
- return inputs
-
-
-def auto_fp16(apply_to=None, out_fp32=False):
- """Decorator to enable fp16 training automatically.
-
- This decorator is useful when you write custom modules and want to support
- mixed precision training. If inputs arguments are fp32 tensors, they will
- be converted to fp16 automatically. Arguments other than fp32 tensors are
- ignored. If you are using PyTorch >= 1.6, torch.cuda.amp is used as the
- backend, otherwise, original mmcv implementation will be adopted.
-
- Args:
- apply_to (Iterable, optional): The argument names to be converted.
- `None` indicates all arguments.
- out_fp32 (bool): Whether to convert the output back to fp32.
-
- Example:
-
- >>> import torch.nn as nn
- >>> class MyModule1(nn.Module):
- >>>
- >>> # Convert x and y to fp16
- >>> @auto_fp16()
- >>> def forward(self, x, y):
- >>> pass
-
- >>> import torch.nn as nn
- >>> class MyModule2(nn.Module):
- >>>
- >>> # convert pred to fp16
- >>> @auto_fp16(apply_to=('pred', ))
- >>> def do_something(self, pred, others):
- >>> pass
- """
-
- def auto_fp16_wrapper(old_func):
-
- @functools.wraps(old_func)
- def new_func(*args, **kwargs):
- # check if the module has set the attribute `fp16_enabled`, if not,
- # just fallback to the original method.
- if not isinstance(args[0], torch.nn.Module):
- raise TypeError('@auto_fp16 can only be used to decorate the '
- 'method of nn.Module')
- if not (hasattr(args[0], 'fp16_enabled') and args[0].fp16_enabled):
- return old_func(*args, **kwargs)
-
- # get the arg spec of the decorated method
- args_info = getfullargspec(old_func)
- # get the argument names to be casted
- args_to_cast = args_info.args if apply_to is None else apply_to
- # convert the args that need to be processed
- new_args = []
- # NOTE: default args are not taken into consideration
- if args:
- arg_names = args_info.args[:len(args)]
- for i, arg_name in enumerate(arg_names):
- if arg_name in args_to_cast:
- new_args.append(
- cast_tensor_type(args[i], torch.float, torch.half))
- else:
- new_args.append(args[i])
- # convert the kwargs that need to be processed
- new_kwargs = {}
- if kwargs:
- for arg_name, arg_value in kwargs.items():
- if arg_name in args_to_cast:
- new_kwargs[arg_name] = cast_tensor_type(
- arg_value, torch.float, torch.half)
- else:
- new_kwargs[arg_name] = arg_value
- # apply converted arguments to the decorated method
- if (TORCH_VERSION != 'parrots' and
- digit_version(TORCH_VERSION) >= digit_version('1.6.0')):
- with autocast(enabled=True):
- output = old_func(*new_args, **new_kwargs)
- else:
- output = old_func(*new_args, **new_kwargs)
- # cast the results back to fp32 if necessary
- if out_fp32:
- output = cast_tensor_type(output, torch.half, torch.float)
- return output
-
- return new_func
-
- return auto_fp16_wrapper
-
-
-def force_fp32(apply_to=None, out_fp16=False):
- """Decorator to convert input arguments to fp32 in force.
-
- This decorator is useful when you write custom modules and want to support
- mixed precision training. If there are some inputs that must be processed
- in fp32 mode, then this decorator can handle it. If inputs arguments are
- fp16 tensors, they will be converted to fp32 automatically. Arguments other
- than fp16 tensors are ignored. If you are using PyTorch >= 1.6,
- torch.cuda.amp is used as the backend, otherwise, original mmcv
- implementation will be adopted.
-
- Args:
- apply_to (Iterable, optional): The argument names to be converted.
- `None` indicates all arguments.
- out_fp16 (bool): Whether to convert the output back to fp16.
-
- Example:
-
- >>> import torch.nn as nn
- >>> class MyModule1(nn.Module):
- >>>
- >>> # Convert x and y to fp32
- >>> @force_fp32()
- >>> def loss(self, x, y):
- >>> pass
-
- >>> import torch.nn as nn
- >>> class MyModule2(nn.Module):
- >>>
- >>> # convert pred to fp32
- >>> @force_fp32(apply_to=('pred', ))
- >>> def post_process(self, pred, others):
- >>> pass
- """
-
- def force_fp32_wrapper(old_func):
-
- @functools.wraps(old_func)
- def new_func(*args, **kwargs):
- # check if the module has set the attribute `fp16_enabled`, if not,
- # just fallback to the original method.
- if not isinstance(args[0], torch.nn.Module):
- raise TypeError('@force_fp32 can only be used to decorate the '
- 'method of nn.Module')
- if not (hasattr(args[0], 'fp16_enabled') and args[0].fp16_enabled):
- return old_func(*args, **kwargs)
- # get the arg spec of the decorated method
- args_info = getfullargspec(old_func)
- # get the argument names to be casted
- args_to_cast = args_info.args if apply_to is None else apply_to
- # convert the args that need to be processed
- new_args = []
- if args:
- arg_names = args_info.args[:len(args)]
- for i, arg_name in enumerate(arg_names):
- if arg_name in args_to_cast:
- new_args.append(
- cast_tensor_type(args[i], torch.half, torch.float))
- else:
- new_args.append(args[i])
- # convert the kwargs that need to be processed
- new_kwargs = dict()
- if kwargs:
- for arg_name, arg_value in kwargs.items():
- if arg_name in args_to_cast:
- new_kwargs[arg_name] = cast_tensor_type(
- arg_value, torch.half, torch.float)
- else:
- new_kwargs[arg_name] = arg_value
- # apply converted arguments to the decorated method
- if (TORCH_VERSION != 'parrots' and
- digit_version(TORCH_VERSION) >= digit_version('1.6.0')):
- with autocast(enabled=False):
- output = old_func(*new_args, **new_kwargs)
- else:
- output = old_func(*new_args, **new_kwargs)
- # cast the results back to fp16 if necessary
- if out_fp16:
- output = cast_tensor_type(output, torch.float, torch.half)
- return output
-
- return new_func
-
- return force_fp32_wrapper
-
-
-def allreduce_grads(params, coalesce=True, bucket_size_mb=-1):
- warnings.warn(
- '"mmcv.runner.fp16_utils.allreduce_grads" is deprecated, and will be '
- 'removed in v2.8. Please switch to "mmcv.runner.allreduce_grads"')
- _allreduce_grads(params, coalesce=coalesce, bucket_size_mb=bucket_size_mb)
-
-
-def wrap_fp16_model(model):
- """Wrap the FP32 model to FP16.
-
- If you are using PyTorch >= 1.6, torch.cuda.amp is used as the
- backend, otherwise, original mmcv implementation will be adopted.
-
- For PyTorch >= 1.6, this function will
- 1. Set fp16 flag inside the model to True.
-
- Otherwise:
- 1. Convert FP32 model to FP16.
- 2. Keep some necessary layers (e.g., normalization layers) in FP32.
- 3. Set `fp16_enabled` flag inside the model to True.
-
- Args:
- model (nn.Module): Model in FP32.
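-
- Example:
- >>> # illustrative usage on an arbitrary fp32 module
- >>> import torch.nn as nn
- >>> model = nn.Conv2d(3, 8, kernel_size=3)
- >>> wrap_fp16_model(model)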
- """
- if (TORCH_VERSION == 'parrots'
- or digit_version(TORCH_VERSION) < digit_version('1.6.0')):
- # convert model to fp16
- model.half()
- # patch the normalization layers to make it work in fp32 mode
- patch_norm_fp32(model)
- # set `fp16_enabled` flag
- for m in model.modules():
- if hasattr(m, 'fp16_enabled'):
- m.fp16_enabled = True
-
-
-def patch_norm_fp32(module):
- """Recursively convert normalization layers from FP16 to FP32.
-
- Args:
- module (nn.Module): The FP16 module whose normalization layers should be converted to FP32.
-
- Returns:
- nn.Module: The converted module, with its normalization layers
- converted to FP32.
- """
- if isinstance(module, (nn.modules.batchnorm._BatchNorm, nn.GroupNorm)):
- module.float()
- # use a numeric version compare; a plain string compare misorders e.g. '1.13' vs '1.3'
- if isinstance(module, nn.GroupNorm) or (TORCH_VERSION != 'parrots' and digit_version(TORCH_VERSION) < digit_version('1.3')):
- module.forward = patch_forward_method(module.forward, torch.half,
- torch.float)
- for child in module.children():
- patch_norm_fp32(child)
- return module
-
-
-def patch_forward_method(func, src_type, dst_type, convert_output=True):
- """Patch the forward method of a module.
-
- Args:
- func (callable): The original forward method.
- src_type (torch.dtype): Type of input arguments to be converted from.
- dst_type (torch.dtype): Type of input arguments to be converted to.
- convert_output (bool): Whether to convert the output back to src_type.
-
- Returns:
- callable: The patched forward method.
- """
-
- def new_forward(*args, **kwargs):
- output = func(*cast_tensor_type(args, src_type, dst_type),
- **cast_tensor_type(kwargs, src_type, dst_type))
- if convert_output:
- output = cast_tensor_type(output, dst_type, src_type)
- return output
-
- return new_forward
-
-
-class LossScaler:
- """Class that manages loss scaling in mixed precision training which
- supports both dynamic and static modes.
-
- The implementation refers to
- https://github.com/NVIDIA/apex/blob/master/apex/fp16_utils/loss_scaler.py.
- Dynamic loss scaling is enabled by supplying ``mode='dynamic'``.
- It's important to understand how :class:`LossScaler` operates.
- Loss scaling is designed to combat the problem of underflowing
- gradients encountered at long times when training fp16 networks.
- Dynamic loss scaling begins by attempting a very high loss
- scale. Ironically, this may result in OVERflowing gradients.
- If overflowing gradients are encountered, the optimizer wrapper then
- skips the update step for this particular iteration/minibatch,
- and :class:`LossScaler` adjusts the loss scale to a lower value.
- If a certain number of iterations pass without overflowing gradients
- being detected, :class:`LossScaler` increases the loss scale once more.
- In this way :class:`LossScaler` attempts to "ride the edge" of always
- using the highest loss scale possible without incurring overflow.
-
- Args:
- init_scale (float): Initial loss scale value, default: 2**32.
- scale_factor (float): Factor used when adjusting the loss scale.
- Default: 2.
- mode (str): Loss scaling mode. 'dynamic' or 'static'
- scale_window (int): Number of consecutive iterations without an
- overflow to wait before increasing the loss scale. Default: 1000.
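-
- Example:
- >>> # illustrative training-loop sketch; ``loss``, ``model`` and ``optimizer``
- >>> # are assumed to come from the surrounding training code
- >>> loss_scaler = LossScaler(mode='dynamic')
- >>> # (loss * loss_scaler.loss_scale).backward()
- >>> overflow = loss_scaler.has_overflow(model.parameters())
- >>> loss_scaler.update_scale(overflow)
- >>> # if ``overflow`` is True, skip optimizer.step() for this iteration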
- """
-
- def __init__(self,
- init_scale=2**32,
- mode='dynamic',
- scale_factor=2.,
- scale_window=1000):
- self.cur_scale = init_scale
- self.cur_iter = 0
- assert mode in ('dynamic',
- 'static'), 'mode can only be dynamic or static'
- self.mode = mode
- self.last_overflow_iter = -1
- self.scale_factor = scale_factor
- self.scale_window = scale_window
-
- def has_overflow(self, params):
- """Check if params contain overflow."""
- if self.mode != 'dynamic':
- return False
- for p in params:
- if p.grad is not None and LossScaler._has_inf_or_nan(p.grad.data):
- return True
- return False
-
- @staticmethod
- def _has_inf_or_nan(x):
- """Check if ``x`` contains inf or NaN values."""
- try:
- cpu_sum = float(x.float().sum())
- except RuntimeError as instance:
- if 'value cannot be converted' not in instance.args[0]:
- raise
- return True
- else:
- if cpu_sum == float('inf') or cpu_sum == -float('inf') \
- or cpu_sum != cpu_sum:
- return True
- return False
-
- def update_scale(self, overflow):
- """update the current loss scale value when overflow happens."""
- if self.mode != 'dynamic':
- return
- if overflow:
- self.cur_scale = max(self.cur_scale / self.scale_factor, 1)
- self.last_overflow_iter = self.cur_iter
- else:
- if (self.cur_iter - self.last_overflow_iter) % \
- self.scale_window == 0:
- self.cur_scale *= self.scale_factor
- self.cur_iter += 1
-
- def state_dict(self):
- """Returns the state of the scaler as a :class:`dict`."""
- return dict(
- cur_scale=self.cur_scale,
- cur_iter=self.cur_iter,
- mode=self.mode,
- last_overflow_iter=self.last_overflow_iter,
- scale_factor=self.scale_factor,
- scale_window=self.scale_window)
-
- def load_state_dict(self, state_dict):
- """Loads the loss_scaler state dict.
-
- Args:
- state_dict (dict): scaler state.
- """
- self.cur_scale = state_dict['cur_scale']
- self.cur_iter = state_dict['cur_iter']
- self.mode = state_dict['mode']
- self.last_overflow_iter = state_dict['last_overflow_iter']
- self.scale_factor = state_dict['scale_factor']
- self.scale_window = state_dict['scale_window']
-
- @property
- def loss_scale(self):
- return self.cur_scale
diff --git a/spaces/Synthia/ChatGal/rwkv_tokenizer.py b/spaces/Synthia/ChatGal/rwkv_tokenizer.py
deleted file mode 100644
index b879889dd201b807155b39ba82226e4ba5f5ef82..0000000000000000000000000000000000000000
--- a/spaces/Synthia/ChatGal/rwkv_tokenizer.py
+++ /dev/null
@@ -1,103 +0,0 @@
-########################################################################################################
-# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
-########################################################################################################
-
-class TRIE:
- __slots__ = tuple("ch,to,values,front".split(","))
- to:list
- values:set
- def __init__(self, front=None, ch=None):
- self.ch = ch
- self.to = [None for ch in range(256)]
- self.values = set()
- self.front = front
-
- def __repr__(self):
- fr = self
- ret = []
- while(fr!=None):
- if(fr.ch!=None):
- ret.append(fr.ch)
- fr = fr.front
- return ""%(ret[::-1], self.values)
-
- def add(self, key:bytes, idx:int=0, val=None):
- if(idx == len(key)):
- if(val is None):
- val = key
- self.values.add(val)
- return self
- ch = key[idx]
- if(self.to[ch] is None):
- self.to[ch] = TRIE(front=self, ch=ch)
- return self.to[ch].add(key, idx=idx+1, val=val)
-
- def find_longest(self, key:bytes, idx:int=0):
- u:TRIE = self
- ch:int = key[idx]
-
- while(u.to[ch] is not None):
- u = u.to[ch]
- idx += 1
- if(u.values):
- ret = idx, u, u.values
- if(idx==len(key)):
- break
- ch = key[idx]
- return ret
-
-class TRIE_TOKENIZER():
- def __init__(self, file_name):
- self.idx2token = {}
- sorted = [] # must be already sorted
- with open(file_name, "r", encoding="utf-8") as f:
- lines = f.readlines()
- for l in lines:
- idx = int(l[:l.index(' ')])
- x = eval(l[l.index(' '):l.rindex(' ')])
- x = x.encode("utf-8") if isinstance(x, str) else x
- assert isinstance(x, bytes)
- assert len(x) == int(l[l.rindex(' '):])
- sorted += [x]
- self.idx2token[idx] = x
-
- self.token2idx = {}
- for k,v in self.idx2token.items():
- self.token2idx[v] = int(k)
-
- self.root = TRIE()
- for t, i in self.token2idx.items():
- _ = self.root.add(t, val=(t, i))
-
- def encodeBytes(self, src:bytes):
- idx:int = 0
- tokens = []
- while (idx < len(src)):
- _idx:int = idx
- idx, _, values = self.root.find_longest(src, idx)
- assert(idx != _idx)
- _, token = next(iter(values))
- tokens.append(token)
- return tokens
-
- def decodeBytes(self, tokens):
- return b''.join(map(lambda i: self.idx2token[i], tokens))
-
- def encode(self, src):
- return self.encodeBytes(src.encode("utf-8"))
-
- def decode(self, tokens):
- try:
- return self.decodeBytes(tokens).decode('utf-8')
- except:
- return '\ufffd' # bad utf-8
-
- def printTokens(self, tokens):
- for i in tokens:
- s = self.idx2token[i]
- try:
- s = s.decode('utf-8')
- except:
- pass
- print(f'{repr(s)}{i}', end=' ')
- print()
\ No newline at end of file
diff --git a/spaces/TRI-ML/risk_biased_prediction/risk_biased/models/__init__.py b/spaces/TRI-ML/risk_biased_prediction/risk_biased/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/TSjB/QM_RU_translator/README.md b/spaces/TSjB/QM_RU_translator/README.md
deleted file mode 100644
index fe648e8d58f8044215c49812f06dca8dbb5398c3..0000000000000000000000000000000000000000
--- a/spaces/TSjB/QM_RU_translator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: QM RU Translator
-emoji: ⚡
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/TSjB/QM_RU_translator/app.py b/spaces/TSjB/QM_RU_translator/app.py
deleted file mode 100644
index 011bd9fe62cac15c3631d3d3d62130ce491d38fd..0000000000000000000000000000000000000000
--- a/spaces/TSjB/QM_RU_translator/app.py
+++ /dev/null
@@ -1,392 +0,0 @@
-import gradio as gr
-import re
-import torch
-from transformers import AutoModelForSeq2SeqLM, NllbTokenizer
-# model_ru_qm_path = 'TSjB/mbart-large-52-ru-qm-v1'
-# model_qm_ru_path = 'TSjB/mbart-large-52-qm-ru-v1'
-MODEL_PATH = 'TSjB/NLLB-201-600M-QM-V1'
-
-# 2. Models
-#tokenizer_ru_qm = MBart50Tokenizer.from_pretrained(model_ru_qm_path)
-#tokenizer_qm_ru = MBart50Tokenizer.from_pretrained(model_qm_ru_path)
-#model_ru_qm = MBartForConditionalGeneration.from_pretrained(model_ru_qm_path)
-#model_qm_ru = MBartForConditionalGeneration.from_pretrained(model_qm_ru_path)
-tokenizer = NllbTokenizer.from_pretrained(MODEL_PATH)
-model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_PATH)
-
-# 3. Fix tokenizer
-def fixTokenizer(tokenizer, new_lang='krc_Cyrl'):
- """
- Add a new language token to the tokenizer vocabulary
- (this should be done each time after its initialization)
- """
- old_len = len(tokenizer) - int(new_lang in tokenizer.added_tokens_encoder)
- tokenizer.lang_code_to_id[new_lang] = old_len-1
- tokenizer.id_to_lang_code[old_len-1] = new_lang
- # always move "mask" to the last position
- tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset
-
- tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)
- tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}
- if new_lang not in tokenizer._additional_special_tokens:
- tokenizer._additional_special_tokens.append(new_lang)
- # clear the added token encoder; otherwise a new token may end up there by mistake
- tokenizer.added_tokens_encoder = {}
- tokenizer.added_tokens_decoder = {}
-
-fixTokenizer(tokenizer)
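-
-# Illustrative sanity check (not part of the app logic): after the fix the new language
-# code should resolve to a dedicated token id at the end of the vocabulary.
-# print(tokenizer.convert_tokens_to_ids('krc_Cyrl'), len(tokenizer))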
-
-# 4. Change letters
-
-def fromModel(str, dialect = "qrc"):
- if dialect == "qrc":
- str = str.replace("тюйюл", "тюл")
- str = str.replace("Тюйюл", "Тюл")
- str = str.replace("уку", "гылын qуш")
- str = str.replace("Уку", "Гылын qуш")
- str = str.replace("хораз", "гугурукку")
- str = str.replace("Хораз", "Гугурукку")
- str = str.replace("юзмез", "qум")
- str = str.replace("Юзмез", "Qум")
- str = str.replace("jиля", "jыла")
- str = str.replace("Jиля", "Jыла")
- str = str.replace("ярабий", "арабин")
- str = str.replace("арабий", "арабин")
- str = str.replace("Ярабий", "Арабин")
- str = str.replace("Арабий", "Арабин")
- str = str.replace("нтта", "нтда")
- str = str.replace("ртте", "ртде")
- str = str.replace("jамауат", "jамаgат")
- str = str.replace("jамаwат", "jамаgат")
- str = str.replace("Jамауат", "Jамаgат")
- str = str.replace("Jамаwат", "Jамаgат")
- str = str.replace("шуёх", "шох")
- str = str.replace("Шуёх", "Шох")
- str = str.replace("шёндю", "бусаgат")
- str = str.replace("Шёндю", "Бусаgат")
- str = str.replace("уgай", "оgай")
- str = str.replace("Уgай", "Оgай")
- # str = str.replace("терк", "тез")
- str = str.replace("саnа", "сенnе")
- str = str.replace("сеnе", "сенnе")
- str = str.replace("Саnа", "Сенnе")
- str = str.replace("Сеnе", "Сенnе")
- str = str.replace("маnа", "менnе")
- str = str.replace("меnе", "менnе")
- str = str.replace("Маnа", "Менnе")
- str = str.replace("Меnе", "Менnе")
- str = str.replace("аяq jол", "jахтана")
- str = str.replace("Аяq jол", "Jахтана")
- str = str.replace("сыbат", "сыфат")
- str = str.replace("Сыbат", "Сыфат")
- str = str.replace("b", "б")
- str = str.replace("q", "къ")
- str = str.replace("Q", "Къ")
- str = str.replace("g", "гъ")
- str = str.replace("G", "Гъ")
- str = str.replace("j", "дж")
- str = str.replace("J", "Дж")
- str = str.replace("w", "ў")
- str = str.replace("W", "Ў")
- str = str.replace("n", "нг")
- str = str.replace("N", "Нг")
- elif dialect == "hlm":
- str = str.replace("тюл", "тюйюл")
- str = str.replace("Тюл", "Тюйюл")
- str = str.replace("гылын qуш", "уку")
- str = str.replace("Гылын qуш", "Уку")
- str = str.replace("гугурукку", "хораз")
- str = str.replace("Гугурукку", "Хораз")
- str = str.replace("qум", "юзмез")
- str = str.replace("Qум", "Юзмез")
- str = str.replace("jыла", "jиля")
- str = str.replace("Jыла", "Jиля")
- str = str.replace("арабин", "ярабий")
- str = str.replace("арабий", "ярабий")
- str = str.replace("Арабин", "Ярабий")
- str = str.replace("Арабий", "Ярабий")
- str = str.replace("нтда", "нтта")
- str = str.replace("ртде", "ртте")
- str = str.replace("jамаgат", "jамаwат")
- str = str.replace("Jамаgат", "Jамаwат")
- str = str.replace("шох", "шуёх")
- str = str.replace("Шох", "Шуёх")
- str = str.replace("бусаgат", "шёндю")
- str = str.replace("Бусаgат", "Шёндю")
- str = str.replace("оgай", "уgай")
- str = str.replace("Оgай", "Уgай")
- str = str.replace("тез", "терк")
- str = str.replace("сенnе", "саnа")
- str = str.replace("сеnе", "саnа")
- str = str.replace("Сенnе", "Саnа")
- str = str.replace("Сеnе", "Саnа")
- str = str.replace("менnе", "маnа")
- str = str.replace("меnе", "маnа")
- str = str.replace("Менnе", "Маnа")
- str = str.replace("Меnе", "Маnа")
- str = str.replace("jахтана", "аяq jол")
- str = str.replace("Jахтана", "аяq jол")
- str = str.replace("хо", "хаw")
- str = str.replace("Хо", "Хаw")
- str = str.replace("сыbат", "сыфат")
- str = str.replace("Сыbат", "Сыфат")
- str = str.replace("b", "п")
- str = str.replace("q", "къ")
- str = str.replace("Q", "Къ")
- str = str.replace("g", "гъ")
- str = str.replace("G", "Гъ")
- str = str.replace("j", "ж")
- str = str.replace("J", "Ж")
- str = str.replace("w", "ў")
- str = str.replace("W", "Ў")
- str = str.replace("n", "нг")
- str = str.replace("N", "Нг")
- elif dialect == "mqr":
- str = str.replace("тюл", "тюйюл")
- str = str.replace("Тюл", "Тюйюл")
- str = str.replace("гылын qуш", "уку")
- str = str.replace("Гылын qуш", "Уку")
- str = str.replace("гугурукку", "хораз")
- str = str.replace("Гугурукку", "Хораз")
- str = str.replace("qум", "юзмез")
- str = str.replace("Qум", "Юзмез")
- str = str.replace("jыла", "jиля")
- str = str.replace("Jыла", "Jиля")
- str = str.replace("арабин", "ярабий")
- str = str.replace("арабий", "ярабий")
- str = str.replace("Арабин", "Ярабий")
- str = str.replace("Арабий", "Ярабий")
- str = str.replace("нтда", "нтта")
- str = str.replace("ртде", "ртте")
- str = str.replace("jамаgат", "jамаwат")
- str = str.replace("Jамаgат", "Jамаwат")
- str = str.replace("шох", "шуёх")
- str = str.replace("Шох", "Шуёх")
- str = str.replace("бусаgат", "шёндю")
- str = str.replace("Бусаgат", "Шёндю")
- str = str.replace("оgай", "уgай")
- str = str.replace("Оgай", "Уgай")
- str = str.replace("тез", "терк")
- str = str.replace("сенnе", "саnа")
- str = str.replace("сеnе", "саnа")
- str = str.replace("Сенnе", "Саnа")
- str = str.replace("Сеnе", "Саnа")
- str = str.replace("менnе", "маnа")
- str = str.replace("меnе", "маnа")
- str = str.replace("Менnе", "Маnа")
- str = str.replace("Меnе", "Маnа")
- str = str.replace("jахтана", "аяq jол")
- str = str.replace("Jахтана", "аяq jол")
- str = str.replace("хо", "хаw")
- str = str.replace("Хо", "Хаw")
- str = str.replace("сыbат", "сыфат")
- str = str.replace("Сыbат", "Сыфат")
- str = str.replace("b", "п")
- str = str.replace("q", "къ")
- str = str.replace("Q", "Къ")
- str = str.replace("g", "гъ")
- str = str.replace("G", "Гъ")
- str = str.replace("j", "з")
- str = str.replace("J", "З")
- str = str.replace("w", "ў")
- str = str.replace("W", "Ў")
- str = str.replace("n", "нг")
- str = str.replace("N", "Нг")
- str = str.replace("ч", "ц")
- str = str.replace("Ч", "Ц")
- str = str.replace("п", "ф")
- str = str.replace("П", "Ф")
- str = str.replace("къ|гъ", "х")
- return str
-
-
-def toModel(str):
- str = str.replace("дж", "j")
- str = str.replace("Дж", "J")
- str = str.replace("ДЖ", "J")
- str = str.replace("ж", "j")
- str = str.replace("Ж", "J")
- str = str.replace("себеп", "себеb")
- str = str.replace("себеб", "себеb")
- str = str.replace("Себеп", "Себеb")
- str = str.replace("Себеб", "Себеb")
- str = str.replace("тюйюл", "тюл")
- str = str.replace("Тюйюл", "Тюл")
- str = str.replace("уку", "гылын qуш")
- str = str.replace("Уку", "Гылын qуш")
- str = str.replace("хораз", "гугурукку")
- str = str.replace("Хораз", "Гугурукку")
- str = str.replace("юзмез", "qум")
- str = str.replace("Юзмез", "Qум")
- str = str.replace("арап", "араb")
- str = str.replace("араб", "араb")
- str = str.replace("Арап", "Араb")
- str = str.replace("Араб", "Араb")
- str = str.replace("jиля", "jыла")
- str = str.replace("jыла", "jыла")
- str = str.replace("jыла", "jыла")
- str = str.replace("Jиля", "Jыла")
- str = str.replace("Jыла", "Jыла")
- str = str.replace("Jыла", "Jыла")
- str = str.replace("ярабий", "арабин")
- str = str.replace("арабий", "арабин")
- str = str.replace("Ярабий", "Арабин")
- str = str.replace("Арабий", "Арабин")
- str = str.replace("нтта", "нтда")
- str = str.replace("ртте", "ртде")
- str = str.replace("jамагъат", "jамаgат")
- str = str.replace("jамауат", "jамаgат")
- str = str.replace("jамагъат", "jамаgат")
- str = str.replace("jамауат", "jамаgат")
- str = str.replace("Jамагъат", "Jамаgат")
- str = str.replace("Jамауат", "Jамаgат")
- str = str.replace("Jамагъат", "Jамаgат")
- str = str.replace("Jамаўат", "Jамаgат")
- str = str.replace("шуёх", "шох")
- str = str.replace("Шуёх", "Шох")
- str = str.replace("шёндю", "бусаgат")
- str = str.replace("бусагъат", "бусаgат")
- str = str.replace("Шёндю", "Бусаgат")
- str = str.replace("Бусагъат", "Бусаgат")
- str = str.replace("угъай", "оgай")
- str = str.replace("огъай", "оgай")
- str = str.replace("Угъай", "Оgай")
- str = str.replace("Огъай", "Оgай")
- # str = str.replace("терк", "тез")
- # str = str.replace("терк", "тез")
- str = str.replace("санга", "сенnе")
- str = str.replace("сенге", "сенnе")
- str = str.replace("сеннге", "сенnе")
- str = str.replace("Санга", "Сенnе")
- str = str.replace("Сеннге", "Сенnе")
- str = str.replace("Сенге", "Сенnе")
- str = str.replace("манга", "менnе")
- str = str.replace("меннге", "менnе")
- str = str.replace("менге", "менnе")
- str = str.replace("Манга", "Менnе")
- str = str.replace("Меннге", "Менnе")
- str = str.replace("Менге", "Менnе")
- str = str.replace("аякъ jол", "jахтана")
- str = str.replace("аякъ jол", "jахтана")
- str = str.replace("jахтана", "jахтана")
- str = str.replace("jахтана", "jахтана")
- str = str.replace("Аякъ jол", "Jахтана")
- str = str.replace("Аякъ jол", "Jахтана")
- str = str.replace("Jахтана", "Jахтана")
- str = str.replace("Jахтана", "Jахтана")
- str = str.replace("къамж", "qамыzh")
- str = str.replace("къамыж", "qамыzh")
- str = str.replace("Къамж", "Qамыzh")
- str = str.replace("Къамыж", "Qамыzh")
- str = str.replace("къымыж", "qымыzh")
- str = str.replace("къымыж", "qымыzh")
- str = str.replace("Къымыж", "Qымыzh")
- str = str.replace("Къымыж", "Qымыzh")
- str = str.replace("хау", "хо")
- str = str.replace("хаў", "хо")
- str = str.replace("Хау", "Хо")
- str = str.replace("Хаў", "Хо")
- str = str.replace("уа", "wa")
- str = str.replace("ўа", "wa")
- str = str.replace("Уа", "Wa")
- str = str.replace("Ўа", "Wa")
- str = str.replace("п", "b")
- str = str.replace("б", "b")
- str = str.replace("къ", "q")
- str = str.replace("Къ", "Q")
- str = str.replace("КЪ", "Q")
- str = str.replace("гъ", "g")
- str = str.replace("Гъ", "G")
- str = str.replace("ГЪ", "G")
- str = str.replace("ц", "ч")
- str = str.replace("Ц", "Ч")
- str = str.replace("ф", "п")
- str = str.replace("сыпат", "сыфат")
- str = str.replace("Сыпат", "Сыфат")
- str = str.replace("Ф", "П")
- str = str.replace("(?<=[аыоуэеиёюя])у(?=[аыоуэеиёюя])|(?<=[аыоуэеиёюя])ў(?=[аыоуэеиёюя])|(?<=[АЫОУЭЕИЁЮЯ])у(?=[АЫОУЭЕИЁЮЯ])|(?<=[АЫОУЭЕИЁЮЯ])ў(?=[АЫОУЭЕИЁЮЯ])", "w")
- str = str.replace("(?<=[аыоуэеиёюя])у|(?<=[аыоуэеиёюя])ў|(?<=[АЫОУЭЕИЁЮЯ])у|(?<=[АЫОУЭЕИЁЮЯ])ў", "w")
- # str = str.replace("у(?=[аыоуэеиёюя])|ў(?=[аыоуэеиёюя])|у(?=[АЫОУЭЕИЁЮЯ])|ў(?=[АЫОУЭЕИЁЮЯ])", "w")
- # str = str.replace("У(?=[аыоуэеиёюя])|Ў(?=[аыоуэеиёюя])|У(?=[АЫОУЭЕИЁЮЯ])|Ў(?=[АЫОУЭЕИЁЮЯ])", "W")
- str = str.replace("zh", "ж")
- str = str.replace("нг", "n")
- str = str.replace("Нг", " N")
- str = str.replace("НГ", " N")
- return str
-
-
-
-
-# 4. Translate function
-
-
-
-#def translatePy(text, model, tokenizer, src='ru_RU', trg='qm_XX', max_length='auto', num_beams=3, repetition_penalty=5.0, train_mode=False, n_out=None, **kwargs):
-# tokenizer.src_lang = src
-# tokenizer.tgt_lang = trg
-# encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
-# if max_length == 'auto':
-# max_length = int(32 + 1.5 * encoded.input_ids.shape[1])
-# if train_mode:
-# model.train()
-# else:
-# model.eval()
-# generated_tokens = model.generate(
-# **encoded.to(model.device),
-# forced_bos_token_id=tokenizer.lang_code_to_id[trg],
-# max_length=max_length,
-# num_beams=num_beams,
-# repetition_penalty=repetition_penalty,
-# # early_stopping=True,
-# num_return_sequences=n_out or 1,
-# **kwargs
-# )
-# out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
-# if isinstance(text, str) and n_out is None:
-# return out[0]
-# return out
-
-
-
-def translatePy(text, src_lang='rus_Cyrl', tgt_lang='krc_Cyrl',
- a=32, b=3, max_input_length=1024, num_beams=3, **kwargs
-):
- """Turn a text or a list of texts into a list of translations"""
- tokenizer.src_lang = src_lang
- tokenizer.tgt_lang = tgt_lang
- inputs = tokenizer(
- text, return_tensors='pt', padding=True, truncation=True,
- max_length=max_input_length
- )
- model.eval() # turn off training mode
- result = model.generate(
- **inputs.to(model.device),
- forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
- max_new_tokens=int(a + b * inputs.input_ids.shape[1]),
- num_beams=num_beams, **kwargs
- )
- return tokenizer.batch_decode(result, skip_special_tokens=True)[0]
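-
-# The generation budget grows with the input length: with the defaults a=32 and b=3,
-# a 20-token source sentence allows up to 32 + 3 * 20 = 92 new tokens.
-# Illustrative call (the sentence is an arbitrary example, not from this repo):
-# translatePy("Салам!", src_lang='krc_Cyrl', tgt_lang='rus_Cyrl')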
-
-# 5. Translate
-def transl(text, til, change_letters = True):
- str = ''
- if til == "Къарачай-Малкъар":
- if change_letters == True:
- str = translatePy(toModel(text), src_lang = 'krc_Cyrl', tgt_lang='rus_Cyrl')
- else:
- str = translatePy(text, src_lang = 'krc_Cyrl', tgt_lang='rus_Cyrl')
- elif til == "Русский":
- if change_letters == True:
- str = translatePy(text, src_lang = 'rus_Cyrl', tgt_lang='krc_Cyrl')
- str = fromModel(str)
- else:
- str = translatePy(text, src_lang = 'rus_Cyrl', tgt_lang='krc_Cyrl')
-
- return str
-
-demo = gr.Interface(
- fn=transl,
- inputs=[gr.Textbox(lines=1, placeholder="Your sentence here...", label = "input"), gr.Radio(
- ["Къарачай-Малкъар", "Русский"], label="Language", value = "Русский"), gr.Checkbox(label="Change letter", info="It's for inner using", value = True)],
- outputs="text"
-)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/transformer_prediction_interface.py b/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/transformer_prediction_interface.py
deleted file mode 100644
index 54f29c4f0fe537c74fa12650593aaed2c5468ab7..0000000000000000000000000000000000000000
--- a/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/transformer_prediction_interface.py
+++ /dev/null
@@ -1,357 +0,0 @@
-import torch
-import random
-
-from torch.utils.checkpoint import checkpoint
-
-from utils import normalize_data, to_ranking_low_mem, remove_outliers
-from priors.utils import normalize_by_used_features_f
-from utils import NOP
-
-from sklearn.preprocessing import PowerTransformer, QuantileTransformer, RobustScaler
-
-from notebook_utils import CustomUnpickler
-
-import numpy as np
-from sklearn.base import BaseEstimator, ClassifierMixin
-from sklearn.utils.validation import check_X_y, check_array, check_is_fitted
-from sklearn.utils.multiclass import check_classification_targets
-from sklearn.utils import column_or_1d
-from pathlib import Path
-from model_builder import load_model
-import os
-
-def load_model_workflow(i, e, add_name, base_path, device='cpu', eval_addition=''):
- """
- Workflow for loading a model and setting appropriate parameters for diffable hparam tuning.
-
- :param i:
- :param e:
- :param eval_positions_valid:
- :param add_name:
- :param base_path:
- :param device:
- :param eval_addition:
- :return:
- """
- def check_file(e):
- model_file = f'models_diff/prior_diff_real_checkpoint{add_name}_n_{i}_epoch_{e}.cpkt'
- model_path = os.path.join(base_path, model_file)
- # print('Evaluate ', model_path)
- results_file = os.path.join(base_path,
- f'models_diff/prior_diff_real_results{add_name}_n_{i}_epoch_{e}_{eval_addition}.pkl')
- if not Path(model_path).is_file(): # or Path(results_file).is_file():
- return None, None, None
- return model_file, model_path, results_file
-
- model_file = None
- if e == -1:
- for e_ in range(100, -1, -1):
- model_file_, model_path_, results_file_ = check_file(e_)
- if model_file_ is not None:
- e = e_
- model_file, model_path, results_file = model_file_, model_path_, results_file_
- break
- else:
- model_file, model_path, results_file = check_file(e)
-
- if model_file is None:
- print('No checkpoint found')
- return None
-
- print(f'Loading {model_file}')
-
- model, c = load_model(base_path, model_file, device, eval_positions=[], verbose=False)
-
- return model, c, results_file
-
-
-class TabPFNClassifier(BaseEstimator, ClassifierMixin):
-
- def __init__(self, device='cpu', base_path='.'):
- # Model file specification (Model name, Epoch)
- model_string = ''
- i, e = '8x_lr0.0003', -1
-
- # File which contains result of hyperparameter tuning run: style (i.e. hyperparameters) and a dataframe with results.
- style_file = 'prior_tuning_result.pkl'
-
- model, c, results_file = load_model_workflow(i, e, add_name=model_string, base_path=base_path, device=device,
- eval_addition='')
- style, temperature = self.load_result_minimal(style_file, i, e, base_path=base_path)
-
- self.device = device
- self.base_path = base_path
- self.model = model
- self.c = c
- self.style = style
- self.temperature = temperature
-
- self.max_num_features = self.c['num_features']
- self.max_num_classes = self.c['max_num_classes']
-
- def load_result_minimal(self, path, i, e, base_path='.'):
- with open(os.path.join(base_path,path), 'rb') as output:
- _, _, _, style, temperature, optimization_route = CustomUnpickler(output).load()
-
- return style, temperature
-
- def fit(self, X, y):
- # Check that X and y have correct shape
- X, y = check_X_y(X, y)
- y = self._validate_targets(y)
-
- self.X_ = X
- self.y_ = y
-
- if X.shape[1] > self.max_num_features:
- raise ValueError("The number of features for this classifier is restricted to ", self.max_num_features)
- if len(np.unique(y)) > self.max_num_classes:
- raise ValueError("The number of classes for this classifier is restricted to ", self.max_num_classes)
-
- # Return the classifier
- return self
-
- def _validate_targets(self, y):
- y_ = column_or_1d(y, warn=True)
- check_classification_targets(y)
- cls, y = np.unique(y_, return_inverse=True)
- if len(cls) < 2:
- raise ValueError(
- "The number of classes has to be greater than one; got %d class"
- % len(cls)
- )
-
- self.classes_ = cls
-
- return np.asarray(y, dtype=np.float64, order="C")
-
- def predict_proba(self, X):
- # Check is fit had been called
- check_is_fitted(self)
-
- # Input validation
- X = check_array(X)
-
- X_full = np.concatenate([self.X_, X], axis=0)
- X_full = torch.tensor(X_full, device=self.device).float().unsqueeze(1)
- y_full = np.concatenate([self.y_, self.y_[0] + np.zeros_like(X[:, 0])], axis=0)
- y_full = torch.tensor(y_full, device=self.device).float().unsqueeze(1)
-
- eval_pos = self.X_.shape[0]
-
- prediction = transformer_predict(self.model[2], X_full, y_full, eval_pos,
- device=self.device,
- style=self.style,
- inference_mode=True,
- N_ensemble_configurations=10,
- softmax_temperature=self.temperature
- , **get_params_from_config(self.c))
- prediction_ = prediction.squeeze(0)
-
- return prediction_.detach().cpu().numpy()
-
- def predict(self, X, return_winning_probability=False):
- p = self.predict_proba(X)
- y = np.argmax(p, axis=-1)  # reuse the probabilities computed above instead of running the model twice
- y = self.classes_.take(np.asarray(y, dtype=np.intp))
- if return_winning_probability:
- return y, p.max(axis=-1)
- return y
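-
-# Illustrative usage sketch: X_train, y_train and X_test are assumed to be small numpy
-# arrays, and base_path is assumed to contain the checkpoint referenced in __init__.
-# clf = TabPFNClassifier(device='cpu', base_path='.')
-# clf.fit(X_train, y_train)
-# proba = clf.predict_proba(X_test)
-# labels = clf.predict(X_test)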
-
-def transformer_predict(model, eval_xs, eval_ys, eval_position,
- device='cpu',
- max_features=100,
- style=None,
- inference_mode=False,
- num_classes=2,
- extend_features=True,
- normalize_to_ranking=False,
- softmax_temperature=0.0,
- multiclass_decoder='permutation',
- preprocess_transform='mix',
- categorical_feats=[],
- feature_shift_decoder=True,
- N_ensemble_configurations=10,
- average_logits=True,
- normalize_with_sqrt=False, **kwargs):
- """
-
- :param model:
- :param eval_xs:
- :param eval_ys: should be classes that are 0-indexed and every class until num_classes-1 is present
- :param eval_position:
- :param rescale_features:
- :param device:
- :param max_features:
- :param style:
- :param inference_mode:
- :param num_classes:
- :param extend_features:
- :param normalize_to_ranking:
- :param softmax_temperature:
- :param multiclass_decoder:
- :param preprocess_transform:
- :param categorical_feats:
- :param feature_shift_decoder:
- :param N_ensemble_configurations:
- :param average_logits:
- :param normalize_with_sqrt:
- :param metric_used:
- :return:
- """
- num_classes = len(torch.unique(eval_ys))
-
- def predict(eval_xs, eval_ys, used_style, softmax_temperature, return_logits):
- # Initialize results array size S, B, Classes
-
- inference_mode_call = torch.inference_mode() if inference_mode else NOP()
- with inference_mode_call:
- output = model(
- (used_style.repeat(eval_xs.shape[1], 1) if used_style is not None else None, eval_xs, eval_ys.float()),
- single_eval_pos=eval_position)[:, :, 0:num_classes]
-
- output = output[:, :, 0:num_classes] / torch.exp(softmax_temperature)
- if not return_logits:
- output = torch.nn.functional.softmax(output, dim=-1)
- #else:
- # output[:, :, 1] = model((style.repeat(eval_xs.shape[1], 1) if style is not None else None, eval_xs, eval_ys.float()),
- # single_eval_pos=eval_position)
-
- # output[:, :, 1] = torch.sigmoid(output[:, :, 1]).squeeze(-1)
- # output[:, :, 0] = 1 - output[:, :, 1]
-
- #print('RESULTS', eval_ys.shape, torch.unique(eval_ys, return_counts=True), output.mean(axis=0))
-
- return output
-
- def preprocess_input(eval_xs, preprocess_transform):
- import warnings
-
- if eval_xs.shape[1] > 1:
- raise Exception("Transforms only allow one batch dim - TODO")
- if preprocess_transform != 'none':
- if preprocess_transform == 'power' or preprocess_transform == 'power_all':
- pt = PowerTransformer(standardize=True)
- elif preprocess_transform == 'quantile' or preprocess_transform == 'quantile_all':
- pt = QuantileTransformer(output_distribution='normal')
- elif preprocess_transform == 'robust' or preprocess_transform == 'robust_all':
- pt = RobustScaler(unit_variance=True)
-
- # eval_xs, eval_ys = normalize_data(eval_xs), normalize_data(eval_ys)
- eval_xs = normalize_data(eval_xs)
-
- # Removing empty features
- eval_xs = eval_xs[:, 0, :].cpu().numpy()
- sel = [len(np.unique(eval_xs[0:eval_ys.shape[0], col])) > 1 for col in range(eval_xs.shape[1])]
- eval_xs = np.array(eval_xs[:, sel])
-
- warnings.simplefilter('error')
- if preprocess_transform != 'none':
- feats = set(range(eval_xs.shape[1])) if 'all' in preprocess_transform else set(
- range(eval_xs.shape[1])) - set(categorical_feats)
- for col in feats:
- try:
- pt.fit(eval_xs[0:eval_ys.shape[0], col:col + 1])
- trans = pt.transform(eval_xs[:, col:col + 1])
- # print(scipy.stats.spearmanr(trans[~np.isnan(eval_xs[:, col:col+1])], eval_xs[:, col:col+1][~np.isnan(eval_xs[:, col:col+1])]))
- eval_xs[:, col:col + 1] = trans
- except:
- pass
- warnings.simplefilter('default')
-
- eval_xs = torch.tensor(eval_xs).float().unsqueeze(1).to(device)
-
- # eval_xs = normalize_data(eval_xs)
-
- # TODO: Caution, there is information leakage when to_ranking is used; we should not use it
- eval_xs = remove_outliers(eval_xs) if not normalize_to_ranking else normalize_data(to_ranking_low_mem(eval_xs))
-
- # Rescale X
- eval_xs = normalize_by_used_features_f(eval_xs, eval_xs.shape[-1], max_features,
- normalize_with_sqrt=normalize_with_sqrt)
- return eval_xs.detach()
-
- eval_xs, eval_ys = eval_xs.to(device), eval_ys.to(device)
- eval_ys = eval_ys[:eval_position]
-
- model.to(device)
- style = style.to(device)
-
- model.eval()
-
- import itertools
- style = style.unsqueeze(0) if len(style.shape) == 1 else style
- num_styles = style.shape[0]
- styles_configurations = range(0, num_styles)
- preprocess_transform_configurations = [preprocess_transform if i % 2 == 0 else 'none' for i in range(0, num_styles)]
- if preprocess_transform == 'mix':
- def get_preprocess(i):
- if i == 0:
- return 'power_all'
- if i == 1:
- return 'robust_all'
- if i == 2:
- return 'none'
- preprocess_transform_configurations = [get_preprocess(i) for i in range(0, num_styles)]
- styles_configurations = zip(styles_configurations, preprocess_transform_configurations)
-
- feature_shift_configurations = range(0, eval_xs.shape[2]) if feature_shift_decoder else [0]
- class_shift_configurations = range(0, len(torch.unique(eval_ys))) if multiclass_decoder == 'permutation' else [0]
-
- ensemble_configurations = list(itertools.product(styles_configurations, feature_shift_configurations, class_shift_configurations))
- random.shuffle(ensemble_configurations)
- ensemble_configurations = ensemble_configurations[0:N_ensemble_configurations]
-
- output = None
-
- eval_xs_transformed = {}
- for ensemble_configuration in ensemble_configurations:
- (styles_configuration, preprocess_transform_configuration), feature_shift_configuration, class_shift_configuration = ensemble_configuration
-
- style_ = style[styles_configuration:styles_configuration+1, :]
- softmax_temperature_ = softmax_temperature[styles_configuration]
-
- eval_xs_, eval_ys_ = eval_xs.clone(), eval_ys.clone()
-
- if preprocess_transform_configuration in eval_xs_transformed:
- eval_xs_ = eval_xs_transformed[preprocess_transform_configuration].clone()  # key by the actual configuration, not the literal string
- else:
- eval_xs_ = preprocess_input(eval_xs_, preprocess_transform=preprocess_transform_configuration)
- eval_xs_transformed[preprocess_transform_configuration] = eval_xs_
-
- eval_ys_ = ((eval_ys_ + class_shift_configuration) % num_classes).float()
-
- eval_xs_ = torch.cat([eval_xs_[..., feature_shift_configuration:],eval_xs_[..., :feature_shift_configuration]],dim=-1)
-
- # Extend X
- if extend_features:
- eval_xs_ = torch.cat(
- [eval_xs_,
- torch.zeros((eval_xs_.shape[0], eval_xs_.shape[1], max_features - eval_xs_.shape[2])).to(device)], -1)
-
- #preprocess_transform_ = preprocess_transform if styles_configuration % 2 == 0 else 'none'
- import warnings
- with warnings.catch_warnings():
- warnings.filterwarnings("ignore", message="None of the inputs have requires_grad=True. Gradients will be None")
- output_ = checkpoint(predict, eval_xs_, eval_ys_, style_, softmax_temperature_, True)
- output_ = torch.cat([output_[..., class_shift_configuration:],output_[..., :class_shift_configuration]],dim=-1)
-
- #output_ = predict(eval_xs, eval_ys, style_, preprocess_transform_)
- if not average_logits:
- output_ = torch.nn.functional.softmax(output_, dim=-1)
- output = output_ if output is None else output + output_
-
- output = output / len(ensemble_configurations)
- if average_logits:
- output = torch.nn.functional.softmax(output, dim=-1)
-
- output = torch.transpose(output, 0, 1)
-
- return output
-
-def get_params_from_config(c):
- return {'max_features': c['num_features']
- , 'rescale_features': c["normalize_by_used_features"]
- , 'normalize_to_ranking': c["normalize_to_ranking"]
- , 'normalize_with_sqrt': c.get("normalize_with_sqrt", False)
- }
\ No newline at end of file
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/cache.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/cache.py
deleted file mode 100644
index a81a23985198d2eaa3c25ad1f77924f0fcdb037b..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/cache.py
+++ /dev/null
@@ -1,69 +0,0 @@
-"""HTTP cache implementation.
-"""
-
-import os
-from contextlib import contextmanager
-from typing import Generator, Optional
-
-from pip._vendor.cachecontrol.cache import BaseCache
-from pip._vendor.cachecontrol.caches import FileCache
-from pip._vendor.requests.models import Response
-
-from pip._internal.utils.filesystem import adjacent_tmp_file, replace
-from pip._internal.utils.misc import ensure_dir
-
-
-def is_from_cache(response: Response) -> bool:
- return getattr(response, "from_cache", False)
-
-
-@contextmanager
-def suppressed_cache_errors() -> Generator[None, None, None]:
- """If we can't access the cache then we can just skip caching and process
- requests as if caching wasn't enabled.
- """
- try:
- yield
- except OSError:
- pass
-
-
-class SafeFileCache(BaseCache):
- """
- A file based cache which is safe to use even when the target directory may
- not be accessible or writable.
- """
-
- def __init__(self, directory: str) -> None:
- assert directory is not None, "Cache directory must not be None."
- super().__init__()
- self.directory = directory
-
- def _get_cache_path(self, name: str) -> str:
- # From cachecontrol.caches.file_cache.FileCache._fn, brought into our
- # class for backwards-compatibility and to avoid using a non-public
- # method.
- hashed = FileCache.encode(name)
- parts = list(hashed[:5]) + [hashed]
- return os.path.join(self.directory, *parts)
-
- def get(self, key: str) -> Optional[bytes]:
- path = self._get_cache_path(key)
- with suppressed_cache_errors():
- with open(path, "rb") as f:
- return f.read()
-
- def set(self, key: str, value: bytes, expires: Optional[int] = None) -> None:
- path = self._get_cache_path(key)
- with suppressed_cache_errors():
- ensure_dir(os.path.dirname(path))
-
- with adjacent_tmp_file(path) as f:
- f.write(value)
-
- replace(f.name, path)
-
- def delete(self, key: str) -> None:
- path = self._get_cache_path(key)
- with suppressed_cache_errors():
- os.remove(path)
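-
-
-# Illustrative usage (the directory is an arbitrary example): read and write errors are
-# swallowed by suppressed_cache_errors(), so get() may simply return None.
-# cache = SafeFileCache("/tmp/pip-http-cache")
-# cache.set("https://pypi.org/simple/", b"cached response body")
-# cache.get("https://pypi.org/simple/")  # -> b"cached response body", or None on failure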
diff --git a/spaces/Tatiana2u1/Tatiana/Dockerfile b/spaces/Tatiana2u1/Tatiana/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/Tatiana2u1/Tatiana/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/TechShark20/handwespeak/app.py b/spaces/TechShark20/handwespeak/app.py
deleted file mode 100644
index fbfa5fc18521af5fa504b6910cc4a7736fc06bd8..0000000000000000000000000000000000000000
--- a/spaces/TechShark20/handwespeak/app.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import copy
-
-import torch
-import numpy as np
-import pickle
-import gradio as gr
-from skeleton_extractor import obtain_pose_data
-from body_normalization import normalize_single_dict as normalize_single_body_dict, BODY_IDENTIFIERS
-from hand_normalization import normalize_single_dict as normalize_single_hand_dict, HAND_IDENTIFIERS
-
-
-model = torch.load("checkpoint_t_7.pth", map_location=torch.device('cpu'))
-model.train(False)
-
-with open('labelkey.pkl', 'rb') as f:
- my_object = pickle.load(f)
-HAND_IDENTIFIERS = [id + "_Left" for id in HAND_IDENTIFIERS] + [id + "_Right" for id in HAND_IDENTIFIERS]
-GLOSS=[i for i in my_object.keys()]
-device = torch.device("cpu")
-if torch.cuda.is_available():
- device = torch.device("cuda")
-
-
-def tensor_to_dictionary(landmarks_tensor: torch.Tensor) -> dict:
-
- data_array = landmarks_tensor.numpy()
- output = {}
-
- for landmark_index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS):
- output[identifier] = data_array[:, landmark_index]
-
- return output
-
-
-def dictionary_to_tensor(landmarks_dict: dict) -> torch.Tensor:
-
- output = np.empty(shape=(len(landmarks_dict["leftEar"]), len(BODY_IDENTIFIERS + HAND_IDENTIFIERS), 2))
-
- for landmark_index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS):
- output[:, landmark_index, 0] = [frame[0] for frame in landmarks_dict[identifier]]
- output[:, landmark_index, 1] = [frame[1] for frame in landmarks_dict[identifier]]
-
- return torch.from_numpy(output)
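-
-# Note: the two helpers above are inverses over the (num_frames, num_landmarks, 2) layout,
-# so dictionary_to_tensor(tensor_to_dictionary(t)) rebuilds t; greet() below relies on this
-# round trip when normalizing the per-landmark time series.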
-
-
-def greet(label, video0, video1):
-
- if label == "Webcam":
- video = video0
-
- elif label == "Video":
- video = video1
-
- elif label == "X":
- return {"A": 0.8, "B": 0.1, "C": 0.1}
-
- else:
- return {}
-
- data = obtain_pose_data(video)
-
- depth_map = np.empty(shape=(len(data.data_hub["nose_X"]), len(BODY_IDENTIFIERS + HAND_IDENTIFIERS), 2))
-
- for index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS):
- depth_map[:, index, 0] = data.data_hub[identifier + "_X"]
- depth_map[:, index, 1] = data.data_hub[identifier + "_Y"]
-
- depth_map = torch.from_numpy(np.copy(depth_map))
-
- depth_map = tensor_to_dictionary(depth_map)
-
- keys = copy.copy(list(depth_map.keys()))
- for key in keys:
- data = depth_map[key]
- del depth_map[key]
- depth_map[key.replace("_Left", "_0").replace("_Right", "_1")] = data
-
- depth_map = normalize_single_body_dict(depth_map)
- depth_map = normalize_single_hand_dict(depth_map)
-
- keys = copy.copy(list(depth_map.keys()))
- for key in keys:
- data = depth_map[key]
- del depth_map[key]
- depth_map[key.replace("_0", "_Left").replace("_1", "_Right")] = data
-
- depth_map = dictionary_to_tensor(depth_map)
-
- depth_map = depth_map - 0.5
-
- inputs = depth_map.squeeze(0).to(device)
- outputs = model(inputs).expand(1, -1, -1)
- results = torch.nn.functional.softmax(outputs, dim=2).detach().numpy()[0, 0]
-
- results = {GLOSS[i]: float(results[i]) for i in range(len(GLOSS))}  # one probability per known gloss rather than a hard-coded 59
-
- return results
-
-
-label = gr.outputs.Label(num_top_classes=5, label="Top class probabilities")
-demo = gr.Interface(fn=greet, inputs=[gr.Dropdown(["Webcam", "Video"], label="Please select the input type:", type="value"), gr.Video(source="webcam", label="Webcam recording", type="mp4"), gr.Video(source="upload", label="Video upload", type="mp4")], outputs=label,
- title="🤟 Hands We speak",
- description="""This repository is devoloped , The implementation utilises 2 state of the art works
-### Citations
-I am thankful to the authors of the following datasets and models:
-- **WACV2022** - Original SPOTER paper - [Paper](https://openaccess.thecvf.com/content/WACV2022W/HADCV/papers/Bohacek_Sign_Pose-Based_Transformer_for_Word-Level_Sign_Language_Recognition_WACVW_2022_paper.pdf), [Code](https://github.com/matyasbohacek/spoter)
-- **INCLUDE DATASET** INCLUDE: A Large Scale Dataset for Indian Sign Language Recognition [reference ](https://dl.acm.org/doi/10.1145/3394171.3413528)
-### How to sign?
-The model wrapped in this demo was trained on the [INCLUDE dataset adjectives](https://zenodo.org/record/4010759#.ZEDjSXZBzIU), so it only knows selected ISL (Indo-Pakistani Sign Language) vocabulary. Try it yourself: learn a few signs from [here](https://www.youtube.com/watch?v=bIkHfFlu4VU), for example *ugly*, *short*, or *tall*, and have them recognized using the webcam capture below. Have fun!
-> The demo can analyze webcam recordings or your uploaded videos. Before you hit Submit, **don't forget to select the input source in the dropdown first**.""",
- article="this space is a work of samar jain ",
- css="""
- @font-face {
- font-family: Graphik;
- font-weight: regular;
- src: url("https://www.signlanguagerecognition.com/supplementary/GraphikRegular.otf") format("opentype");
- }
- @font-face {
- font-family: Graphik;
- font-weight: bold;
- src: url("https://www.signlanguagerecognition.com/supplementary/GraphikBold.otf") format("opentype");
- }
- @font-face {
- font-family: MonumentExpanded;
- font-weight: regular;
- src: url("https://www.signlanguagerecognition.com/supplementary/MonumentExtended-Regular.otf") format("opentype");
- }
- @font-face {
- font-family: MonumentExpanded;
- font-weight: bold;
- src: url("https://www.signlanguagerecognition.com/supplementary/MonumentExtended-Bold.otf") format("opentype");
- }
- html {
- font-family: "Graphik";
- }
- h1 {
- font-family: "MonumentExpanded";
- }
- #12 {
- - background-image: linear-gradient(to left, #61D836, #6CB346) !important;
- background-color: #61D836 !important;
- }
- #12:hover {
- - background-image: linear-gradient(to left, #61D836, #6CB346) !important;
- background-color: #6CB346 !important;
- border: 0 !important;
- border-color: 0 !important;
- }
- .dark .gr-button-primary {
- --tw-gradient-from: #61D836;
- --tw-gradient-to: #6CB346;
- border: 0 !important;
- border-color: 0 !important;
- }
- .dark .gr-button-primary:hover {
- --tw-gradient-from: #64A642;
- --tw-gradient-to: #58933B;
- border: 0 !important;
- border-color: 0 !important;
- }
- """,
- cache_examples=True
- )
-
-demo.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/__init__.py b/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/TornikeO/dreambooth-training/convertosd.py b/spaces/TornikeO/dreambooth-training/convertosd.py
deleted file mode 100644
index e4bec6cbe894dd74b24f633cc66346d687d3f802..0000000000000000000000000000000000000000
--- a/spaces/TornikeO/dreambooth-training/convertosd.py
+++ /dev/null
@@ -1,226 +0,0 @@
-# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint.
-# *Only* converts the UNet, VAE, and Text Encoder.
-# Does not convert optimizer state or any other thing.
-# Written by jachiam
-
-import argparse
-import os.path as osp
-
-import torch
-import gc
-
-# =================#
-# UNet Conversion #
-# =================#
-
-unet_conversion_map = [
- # (stable-diffusion, HF Diffusers)
- ("time_embed.0.weight", "time_embedding.linear_1.weight"),
- ("time_embed.0.bias", "time_embedding.linear_1.bias"),
- ("time_embed.2.weight", "time_embedding.linear_2.weight"),
- ("time_embed.2.bias", "time_embedding.linear_2.bias"),
- ("input_blocks.0.0.weight", "conv_in.weight"),
- ("input_blocks.0.0.bias", "conv_in.bias"),
- ("out.0.weight", "conv_norm_out.weight"),
- ("out.0.bias", "conv_norm_out.bias"),
- ("out.2.weight", "conv_out.weight"),
- ("out.2.bias", "conv_out.bias"),
-]
-
-unet_conversion_map_resnet = [
- # (stable-diffusion, HF Diffusers)
- ("in_layers.0", "norm1"),
- ("in_layers.2", "conv1"),
- ("out_layers.0", "norm2"),
- ("out_layers.3", "conv2"),
- ("emb_layers.1", "time_emb_proj"),
- ("skip_connection", "conv_shortcut"),
-]
-
-unet_conversion_map_layer = []
-# hardcoded number of downblocks and resnets/attentions...
-# would need smarter logic for other networks.
-for i in range(4):
- # loop over downblocks/upblocks
-
- for j in range(2):
- # loop over resnets/attentions for downblocks
- hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}."
- sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0."
- unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix))
-
- if i < 3:
- # no attention layers in down_blocks.3
- hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}."
- sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1."
- unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix))
-
- for j in range(3):
- # loop over resnets/attentions for upblocks
- hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}."
- sd_up_res_prefix = f"output_blocks.{3*i + j}.0."
- unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix))
-
- if i > 0:
- # no attention layers in up_blocks.0
- hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}."
- sd_up_atn_prefix = f"output_blocks.{3*i + j}.1."
- unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix))
-
- if i < 3:
- # no downsample in down_blocks.3
- hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv."
- sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op."
- unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix))
-
- # no upsample in up_blocks.3
- hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
- sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}."
- unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix))
-
-hf_mid_atn_prefix = "mid_block.attentions.0."
-sd_mid_atn_prefix = "middle_block.1."
-unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix))
-
-for j in range(2):
- hf_mid_res_prefix = f"mid_block.resnets.{j}."
- sd_mid_res_prefix = f"middle_block.{2*j}."
- unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix))
-
-
-def convert_unet_state_dict(unet_state_dict):
- # buyer beware: this is a *brittle* function,
- # and correct output requires that all of these pieces interact in
- # the exact order in which I have arranged them.
- mapping = {k: k for k in unet_state_dict.keys()}
- for sd_name, hf_name in unet_conversion_map:
- mapping[hf_name] = sd_name
- for k, v in mapping.items():
- if "resnets" in k:
- for sd_part, hf_part in unet_conversion_map_resnet:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- for k, v in mapping.items():
- for sd_part, hf_part in unet_conversion_map_layer:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()}
- return new_state_dict
-
-
-# ================#
-# VAE Conversion #
-# ================#
-
-vae_conversion_map = [
- # (stable-diffusion, HF Diffusers)
- ("nin_shortcut", "conv_shortcut"),
- ("norm_out", "conv_norm_out"),
- ("mid.attn_1.", "mid_block.attentions.0."),
-]
-
-for i in range(4):
- # down_blocks have two resnets
- for j in range(2):
- hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}."
- sd_down_prefix = f"encoder.down.{i}.block.{j}."
- vae_conversion_map.append((sd_down_prefix, hf_down_prefix))
-
- if i < 3:
- hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0."
- sd_downsample_prefix = f"down.{i}.downsample."
- vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix))
-
- hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
- sd_upsample_prefix = f"up.{3-i}.upsample."
- vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix))
-
- # up_blocks have three resnets
- # also, up blocks in hf are numbered in reverse from sd
- for j in range(3):
- hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}."
- sd_up_prefix = f"decoder.up.{3-i}.block.{j}."
- vae_conversion_map.append((sd_up_prefix, hf_up_prefix))
-
-# this part accounts for mid blocks in both the encoder and the decoder
-for i in range(2):
- hf_mid_res_prefix = f"mid_block.resnets.{i}."
- sd_mid_res_prefix = f"mid.block_{i+1}."
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix))
-
-
-vae_conversion_map_attn = [
- # (stable-diffusion, HF Diffusers)
- ("norm.", "group_norm."),
- ("q.", "query."),
- ("k.", "key."),
- ("v.", "value."),
- ("proj_out.", "proj_attn."),
-]
-
-
-def reshape_weight_for_sd(w):
- # convert HF linear weights to SD conv2d weights
- return w.reshape(*w.shape, 1, 1)
-
-
-def convert_vae_state_dict(vae_state_dict):
- mapping = {k: k for k in vae_state_dict.keys()}
- for k, v in mapping.items():
- for sd_part, hf_part in vae_conversion_map:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- for k, v in mapping.items():
- if "attentions" in k:
- for sd_part, hf_part in vae_conversion_map_attn:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()}
- weights_to_convert = ["q", "k", "v", "proj_out"]
- print("[1;32mConverting to CKPT ...")
- for k, v in new_state_dict.items():
- for weight_name in weights_to_convert:
- if f"mid.attn_1.{weight_name}.weight" in k:
- new_state_dict[k] = reshape_weight_for_sd(v)
- return new_state_dict
-
-
-# =========================#
-# Text Encoder Conversion #
-# =========================#
-# pretty much a no-op
-
-
-def convert_text_enc_state_dict(text_enc_dict):
- return text_enc_dict
-
-
-def convert(model_path, checkpoint_path):
- unet_path = osp.join(model_path, "unet", "diffusion_pytorch_model.bin")
- vae_path = osp.join(model_path, "vae", "diffusion_pytorch_model.bin")
- text_enc_path = osp.join(model_path, "text_encoder", "pytorch_model.bin")
-
- # Convert the UNet model
- unet_state_dict = torch.load(unet_path, map_location='cpu')
- unet_state_dict = convert_unet_state_dict(unet_state_dict)
- unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()}
-
- # Convert the VAE model
- vae_state_dict = torch.load(vae_path, map_location='cpu')
- vae_state_dict = convert_vae_state_dict(vae_state_dict)
- vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()}
-
- # Convert the text encoder model
- text_enc_dict = torch.load(text_enc_path, map_location='cpu')
- text_enc_dict = convert_text_enc_state_dict(text_enc_dict)
- text_enc_dict = {"cond_stage_model.transformer." + k: v for k, v in text_enc_dict.items()}
-
- # Put together new checkpoint
- state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict}
-
- state_dict = {k:v.half() for k,v in state_dict.items()}
- state_dict = {"state_dict": state_dict}
- torch.save(state_dict, checkpoint_path)
- del state_dict, text_enc_dict, vae_state_dict, unet_state_dict
- torch.cuda.empty_cache()
- gc.collect()
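The deleted script is driven entirely by `convert(model_path, checkpoint_path)`. A hedged usage sketch follows; both paths are placeholders, and the model directory is assumed to contain the `unet/`, `vae/`, and `text_encoder/` weight files that `convert()` reads.

```python
# Illustrative only: both paths below are placeholders. model_dir must hold
# unet/diffusion_pytorch_model.bin, vae/diffusion_pytorch_model.bin and
# text_encoder/pytorch_model.bin, as expected by convert().
from convertosd import convert

model_dir = "./my-dreambooth-output"       # hypothetical Diffusers pipeline folder
ckpt_path = "./my-dreambooth-output.ckpt"  # Stable Diffusion checkpoint to write

convert(model_dir, ckpt_path)              # writes a half-precision .ckpt
```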
diff --git a/spaces/TushDeMort/yolo/utils/datasets.py b/spaces/TushDeMort/yolo/utils/datasets.py
deleted file mode 100644
index 5fe4f7bcc28a91e83313c5372029928d0b8c0fd5..0000000000000000000000000000000000000000
--- a/spaces/TushDeMort/yolo/utils/datasets.py
+++ /dev/null
@@ -1,1320 +0,0 @@
-# Dataset utils and dataloaders
-
-import glob
-import logging
-import math
-import os
-import random
-import shutil
-import time
-from itertools import repeat
-from multiprocessing.pool import ThreadPool
-from pathlib import Path
-from threading import Thread
-
-import cv2
-import numpy as np
-import torch
-import torch.nn.functional as F
-from PIL import Image, ExifTags
-from torch.utils.data import Dataset
-from tqdm import tqdm
-
-import pickle
-from copy import deepcopy
-#from pycocotools import mask as maskUtils
-from torchvision.utils import save_image
-from torchvision.ops import roi_pool, roi_align, ps_roi_pool, ps_roi_align
-
-from utils.general import check_requirements, xyxy2xywh, xywh2xyxy, xywhn2xyxy, xyn2xy, segment2box, segments2boxes, \
- resample_segments, clean_str
-from utils.torch_utils import torch_distributed_zero_first
-
-# Parameters
-help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data'
-img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng', 'webp', 'mpo'] # acceptable image suffixes
-vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes
-logger = logging.getLogger(__name__)
-
-# Get orientation exif tag
-for orientation in ExifTags.TAGS.keys():
- if ExifTags.TAGS[orientation] == 'Orientation':
- break
-
-
-def get_hash(files):
- # Returns a single hash value of a list of files
- return sum(os.path.getsize(f) for f in files if os.path.isfile(f))
-
-
-def exif_size(img):
- # Returns exif-corrected PIL size
- s = img.size # (width, height)
- try:
- rotation = dict(img._getexif().items())[orientation]
- if rotation == 6: # rotation 270
- s = (s[1], s[0])
- elif rotation == 8: # rotation 90
- s = (s[1], s[0])
- except Exception:
- pass
-
- return s
-
-
-def create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False,
- rank=-1, world_size=1, workers=8, image_weights=False, quad=False, prefix=''):
- # Make sure only the first process in DDP process the dataset first, and the following others can use the cache
- with torch_distributed_zero_first(rank):
- dataset = LoadImagesAndLabels(path, imgsz, batch_size,
- augment=augment, # augment images
- hyp=hyp, # augmentation hyperparameters
- rect=rect, # rectangular training
- cache_images=cache,
- single_cls=opt.single_cls,
- stride=int(stride),
- pad=pad,
- image_weights=image_weights,
- prefix=prefix)
-
- batch_size = min(batch_size, len(dataset))
- nw = min([os.cpu_count() // world_size, batch_size if batch_size > 1 else 0, workers]) # number of workers
- sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None
- loader = torch.utils.data.DataLoader if image_weights else InfiniteDataLoader
- # Use torch.utils.data.DataLoader() if dataset.properties will update during training else InfiniteDataLoader()
- dataloader = loader(dataset,
- batch_size=batch_size,
- num_workers=nw,
- sampler=sampler,
- pin_memory=True,
- collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn)
- return dataloader, dataset
-
-
-class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader):
- """ Dataloader that reuses workers
-
- Uses same syntax as vanilla DataLoader
- """
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
- self.iterator = super().__iter__()
-
- def __len__(self):
- return len(self.batch_sampler.sampler)
-
- def __iter__(self):
- for i in range(len(self)):
- yield next(self.iterator)
-
-
-class _RepeatSampler(object):
- """ Sampler that repeats forever
-
- Args:
- sampler (Sampler)
- """
-
- def __init__(self, sampler):
- self.sampler = sampler
-
- def __iter__(self):
- while True:
- yield from iter(self.sampler)
-
-
-class LoadImages: # for inference
- def __init__(self, path, img_size=640, stride=32):
- p = str(Path(path).absolute()) # os-agnostic absolute path
- if '*' in p:
- files = sorted(glob.glob(p, recursive=True)) # glob
- elif os.path.isdir(p):
- files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir
- elif os.path.isfile(p):
- files = [p] # files
- else:
- raise Exception(f'ERROR: {p} does not exist')
-
- images = [x for x in files if x.split('.')[-1].lower() in img_formats]
- videos = [x for x in files if x.split('.')[-1].lower() in vid_formats]
- ni, nv = len(images), len(videos)
-
- self.img_size = img_size
- self.stride = stride
- self.files = images + videos
- self.nf = ni + nv # number of files
- self.video_flag = [False] * ni + [True] * nv
- self.mode = 'image'
- if any(videos):
- self.new_video(videos[0]) # new video
- else:
- self.cap = None
- assert self.nf > 0, f'No images or videos found in {p}. ' \
- f'Supported formats are:\nimages: {img_formats}\nvideos: {vid_formats}'
-
- def __iter__(self):
- self.count = 0
- return self
-
- def __next__(self):
- if self.count == self.nf:
- raise StopIteration
- path = self.files[self.count]
-
- if self.video_flag[self.count]:
- # Read video
- self.mode = 'video'
- ret_val, img0 = self.cap.read()
- if not ret_val:
- self.count += 1
- self.cap.release()
- if self.count == self.nf: # last video
- raise StopIteration
- else:
- path = self.files[self.count]
- self.new_video(path)
- ret_val, img0 = self.cap.read()
-
- self.frame += 1
- print(f'video {self.count + 1}/{self.nf} ({self.frame}/{self.nframes}) {path}: ', end='')
-
- else:
- # Read image
- self.count += 1
- img0 = cv2.imread(path) # BGR
- assert img0 is not None, 'Image Not Found ' + path
- #print(f'image {self.count}/{self.nf} {path}: ', end='')
-
- # Padded resize
- img = letterbox(img0, self.img_size, stride=self.stride)[0]
-
- # Convert
- img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
- img = np.ascontiguousarray(img)
-
- return path, img, img0, self.cap
-
- def new_video(self, path):
- self.frame = 0
- self.cap = cv2.VideoCapture(path)
- self.nframes = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT))
-
- def __len__(self):
- return self.nf # number of files
-
-
-class LoadWebcam: # for inference
- def __init__(self, pipe='0', img_size=640, stride=32):
- self.img_size = img_size
- self.stride = stride
-
- if pipe.isnumeric():
- pipe = eval(pipe) # local camera
- # pipe = 'rtsp://192.168.1.64/1' # IP camera
- # pipe = 'rtsp://username:password@192.168.1.64/1' # IP camera with login
- # pipe = 'http://wmccpinetop.axiscam.net/mjpg/video.mjpg' # IP golf camera
-
- self.pipe = pipe
- self.cap = cv2.VideoCapture(pipe) # video capture object
- self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 3) # set buffer size
-
- def __iter__(self):
- self.count = -1
- return self
-
- def __next__(self):
- self.count += 1
- if cv2.waitKey(1) == ord('q'): # q to quit
- self.cap.release()
- cv2.destroyAllWindows()
- raise StopIteration
-
- # Read frame
- if self.pipe == 0: # local camera
- ret_val, img0 = self.cap.read()
- img0 = cv2.flip(img0, 1) # flip left-right
- else: # IP camera
- n = 0
- while True:
- n += 1
- self.cap.grab()
- if n % 30 == 0: # skip frames
- ret_val, img0 = self.cap.retrieve()
- if ret_val:
- break
-
- # Print
- assert ret_val, f'Camera Error {self.pipe}'
- img_path = 'webcam.jpg'
- print(f'webcam {self.count}: ', end='')
-
- # Padded resize
- img = letterbox(img0, self.img_size, stride=self.stride)[0]
-
- # Convert
- img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
- img = np.ascontiguousarray(img)
-
- return img_path, img, img0, None
-
- def __len__(self):
- return 0
-
-
-class LoadStreams: # multiple IP or RTSP cameras
- def __init__(self, sources='streams.txt', img_size=640, stride=32):
- self.mode = 'stream'
- self.img_size = img_size
- self.stride = stride
-
- if os.path.isfile(sources):
- with open(sources, 'r') as f:
- sources = [x.strip() for x in f.read().strip().splitlines() if len(x.strip())]
- else:
- sources = [sources]
-
- n = len(sources)
- self.imgs = [None] * n
- self.sources = [clean_str(x) for x in sources] # clean source names for later
- for i, s in enumerate(sources):
- # Start the thread to read frames from the video stream
- print(f'{i + 1}/{n}: {s}... ', end='')
- url = eval(s) if s.isnumeric() else s
- if 'youtube.com/' in str(url) or 'youtu.be/' in str(url): # if source is YouTube video
- check_requirements(('pafy', 'youtube_dl'))
- import pafy
- url = pafy.new(url).getbest(preftype="mp4").url
- cap = cv2.VideoCapture(url)
- assert cap.isOpened(), f'Failed to open {s}'
- w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
- h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
- self.fps = cap.get(cv2.CAP_PROP_FPS) % 100
-
- _, self.imgs[i] = cap.read() # guarantee first frame
- thread = Thread(target=self.update, args=([i, cap]), daemon=True)
- print(f' success ({w}x{h} at {self.fps:.2f} FPS).')
- thread.start()
- print('') # newline
-
- # check for common shapes
- s = np.stack([letterbox(x, self.img_size, stride=self.stride)[0].shape for x in self.imgs], 0) # shapes
- self.rect = np.unique(s, axis=0).shape[0] == 1 # rect inference if all shapes equal
- if not self.rect:
- print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.')
-
- def update(self, index, cap):
- # Read next stream frame in a daemon thread
- n = 0
- while cap.isOpened():
- n += 1
- # _, self.imgs[index] = cap.read()
- cap.grab()
- if n == 4: # read every 4th frame
- success, im = cap.retrieve()
- self.imgs[index] = im if success else self.imgs[index] * 0
- n = 0
- time.sleep(1 / self.fps) # wait time
-
- def __iter__(self):
- self.count = -1
- return self
-
- def __next__(self):
- self.count += 1
- img0 = self.imgs.copy()
- if cv2.waitKey(1) == ord('q'): # q to quit
- cv2.destroyAllWindows()
- raise StopIteration
-
- # Letterbox
- img = [letterbox(x, self.img_size, auto=self.rect, stride=self.stride)[0] for x in img0]
-
- # Stack
- img = np.stack(img, 0)
-
- # Convert
- img = img[:, :, :, ::-1].transpose(0, 3, 1, 2) # BGR to RGB, to bsx3x416x416
- img = np.ascontiguousarray(img)
-
- return self.sources, img, img0, None
-
- def __len__(self):
- return 0 # 1E12 frames = 32 streams at 30 FPS for 30 years
-
-
-def img2label_paths(img_paths):
- # Define label paths as a function of image paths
- sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings
- return ['txt'.join(x.replace(sa, sb, 1).rsplit(x.split('.')[-1], 1)) for x in img_paths]
-
-
-class LoadImagesAndLabels(Dataset): # for training/testing
- def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False,
- cache_images=False, single_cls=False, stride=32, pad=0.0, prefix=''):
- self.img_size = img_size
- self.augment = augment
- self.hyp = hyp
- self.image_weights = image_weights
- self.rect = False if image_weights else rect
- self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training)
- self.mosaic_border = [-img_size // 2, -img_size // 2]
- self.stride = stride
- self.path = path
- #self.albumentations = Albumentations() if augment else None
-
- try:
- f = [] # image files
- for p in path if isinstance(path, list) else [path]:
- p = Path(p) # os-agnostic
- if p.is_dir(): # dir
- f += glob.glob(str(p / '**' / '*.*'), recursive=True)
- # f = list(p.rglob('**/*.*')) # pathlib
- elif p.is_file(): # file
- with open(p, 'r') as t:
- t = t.read().strip().splitlines()
- parent = str(p.parent) + os.sep
- f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path
- # f += [p.parent / x.lstrip(os.sep) for x in t] # local to global path (pathlib)
- else:
- raise Exception(f'{prefix}{p} does not exist')
- self.img_files = sorted([x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in img_formats])
- # self.img_files = sorted([x for x in f if x.suffix[1:].lower() in img_formats]) # pathlib
- assert self.img_files, f'{prefix}No images found'
- except Exception as e:
- raise Exception(f'{prefix}Error loading data from {path}: {e}\nSee {help_url}')
-
- # Check cache
- self.label_files = img2label_paths(self.img_files) # labels
- cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache') # cached labels
- if cache_path.is_file():
- cache, exists = torch.load(cache_path), True # load
- #if cache['hash'] != get_hash(self.label_files + self.img_files) or 'version' not in cache: # changed
- # cache, exists = self.cache_labels(cache_path, prefix), False # re-cache
- else:
- cache, exists = self.cache_labels(cache_path, prefix), False # cache
-
- # Display cache
- nf, nm, ne, nc, n = cache.pop('results') # found, missing, empty, corrupted, total
- if exists:
- d = f"Scanning '{cache_path}' images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted"
- tqdm(None, desc=prefix + d, total=n, initial=n) # display cache results
- assert nf > 0 or not augment, f'{prefix}No labels in {cache_path}. Can not train without labels. See {help_url}'
-
- # Read cache
- cache.pop('hash') # remove hash
- cache.pop('version') # remove version
- labels, shapes, self.segments = zip(*cache.values())
- self.labels = list(labels)
- self.shapes = np.array(shapes, dtype=np.float64)
- self.img_files = list(cache.keys()) # update
- self.label_files = img2label_paths(cache.keys()) # update
- if single_cls:
- for x in self.labels:
- x[:, 0] = 0
-
- n = len(shapes) # number of images
- bi = np.floor(np.arange(n) / batch_size).astype(int) # batch index
- nb = bi[-1] + 1 # number of batches
- self.batch = bi # batch index of image
- self.n = n
- self.indices = range(n)
-
- # Rectangular Training
- if self.rect:
- # Sort by aspect ratio
- s = self.shapes # wh
- ar = s[:, 1] / s[:, 0] # aspect ratio
- irect = ar.argsort()
- self.img_files = [self.img_files[i] for i in irect]
- self.label_files = [self.label_files[i] for i in irect]
- self.labels = [self.labels[i] for i in irect]
- self.shapes = s[irect] # wh
- ar = ar[irect]
-
- # Set training image shapes
- shapes = [[1, 1]] * nb
- for i in range(nb):
- ari = ar[bi == i]
- mini, maxi = ari.min(), ari.max()
- if maxi < 1:
- shapes[i] = [maxi, 1]
- elif mini > 1:
- shapes[i] = [1, 1 / mini]
-
- self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(int) * stride
-
- # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM)
- self.imgs = [None] * n
- if cache_images:
- if cache_images == 'disk':
- self.im_cache_dir = Path(Path(self.img_files[0]).parent.as_posix() + '_npy')
- self.img_npy = [self.im_cache_dir / Path(f).with_suffix('.npy').name for f in self.img_files]
- self.im_cache_dir.mkdir(parents=True, exist_ok=True)
- gb = 0 # Gigabytes of cached images
- self.img_hw0, self.img_hw = [None] * n, [None] * n
- results = ThreadPool(8).imap(lambda x: load_image(*x), zip(repeat(self), range(n)))
- pbar = tqdm(enumerate(results), total=n)
- for i, x in pbar:
- if cache_images == 'disk':
- if not self.img_npy[i].exists():
- np.save(self.img_npy[i].as_posix(), x[0])
- gb += self.img_npy[i].stat().st_size
- else:
- self.imgs[i], self.img_hw0[i], self.img_hw[i] = x
- gb += self.imgs[i].nbytes
- pbar.desc = f'{prefix}Caching images ({gb / 1E9:.1f}GB)'
- pbar.close()
-
- def cache_labels(self, path=Path('./labels.cache'), prefix=''):
- # Cache dataset labels, check images and read shapes
- x = {} # dict
- nm, nf, ne, nc = 0, 0, 0, 0 # number missing, found, empty, duplicate
- pbar = tqdm(zip(self.img_files, self.label_files), desc='Scanning images', total=len(self.img_files))
- for i, (im_file, lb_file) in enumerate(pbar):
- try:
- # verify images
- im = Image.open(im_file)
- im.verify() # PIL verify
- shape = exif_size(im) # image size
- segments = [] # instance segments
- assert (shape[0] > 9) & (shape[1] > 9), f'image size {shape} <10 pixels'
- assert im.format.lower() in img_formats, f'invalid image format {im.format}'
-
- # verify labels
- if os.path.isfile(lb_file):
- nf += 1 # label found
- with open(lb_file, 'r') as f:
- l = [x.split() for x in f.read().strip().splitlines()]
- if any([len(x) > 8 for x in l]): # is segment
- classes = np.array([x[0] for x in l], dtype=np.float32)
- segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in l] # (cls, xy1...)
- l = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1) # (cls, xywh)
- l = np.array(l, dtype=np.float32)
- if len(l):
- assert l.shape[1] == 5, 'labels require 5 columns each'
- assert (l >= 0).all(), 'negative labels'
- assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels'
- assert np.unique(l, axis=0).shape[0] == l.shape[0], 'duplicate labels'
- else:
- ne += 1 # label empty
- l = np.zeros((0, 5), dtype=np.float32)
- else:
- nm += 1 # label missing
- l = np.zeros((0, 5), dtype=np.float32)
- x[im_file] = [l, shape, segments]
- except Exception as e:
- nc += 1
- print(f'{prefix}WARNING: Ignoring corrupted image and/or label {im_file}: {e}')
-
- pbar.desc = f"{prefix}Scanning '{path.parent / path.stem}' images and labels... " \
- f"{nf} found, {nm} missing, {ne} empty, {nc} corrupted"
- pbar.close()
-
- if nf == 0:
- print(f'{prefix}WARNING: No labels found in {path}. See {help_url}')
-
- x['hash'] = get_hash(self.label_files + self.img_files)
- x['results'] = nf, nm, ne, nc, i + 1
- x['version'] = 0.1 # cache version
- torch.save(x, path) # save for next time
- logging.info(f'{prefix}New cache created: {path}')
- return x
-
- def __len__(self):
- return len(self.img_files)
-
- # def __iter__(self):
- # self.count = -1
- # print('ran dataset iter')
- # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF)
- # return self
-
- def __getitem__(self, index):
- index = self.indices[index] # linear, shuffled, or image_weights
-
- hyp = self.hyp
- mosaic = self.mosaic and random.random() < hyp['mosaic']
- if mosaic:
- # Load mosaic
- if random.random() < 0.8:
- img, labels = load_mosaic(self, index)
- else:
- img, labels = load_mosaic9(self, index)
- shapes = None
-
- # MixUp https://arxiv.org/pdf/1710.09412.pdf
- if random.random() < hyp['mixup']:
- if random.random() < 0.8:
- img2, labels2 = load_mosaic(self, random.randint(0, len(self.labels) - 1))
- else:
- img2, labels2 = load_mosaic9(self, random.randint(0, len(self.labels) - 1))
- r = np.random.beta(8.0, 8.0) # mixup ratio, alpha=beta=8.0
- img = (img * r + img2 * (1 - r)).astype(np.uint8)
- labels = np.concatenate((labels, labels2), 0)
-
- else:
- # Load image
- img, (h0, w0), (h, w) = load_image(self, index)
-
- # Letterbox
- shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape
- img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment)
- shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling
-
- labels = self.labels[index].copy()
- if labels.size: # normalized xywh to pixel xyxy format
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1])
-
- if self.augment:
- # Augment imagespace
- if not mosaic:
- img, labels = random_perspective(img, labels,
- degrees=hyp['degrees'],
- translate=hyp['translate'],
- scale=hyp['scale'],
- shear=hyp['shear'],
- perspective=hyp['perspective'])
-
-
- #img, labels = self.albumentations(img, labels)
-
- # Augment colorspace
- augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v'])
-
- # Apply cutouts
- # if random.random() < 0.9:
- # labels = cutout(img, labels)
-
- if random.random() < hyp['paste_in']:
- sample_labels, sample_images, sample_masks = [], [], []
- while len(sample_labels) < 30:
- sample_labels_, sample_images_, sample_masks_ = load_samples(self, random.randint(0, len(self.labels) - 1))
- sample_labels += sample_labels_
- sample_images += sample_images_
- sample_masks += sample_masks_
- #print(len(sample_labels))
- if len(sample_labels) == 0:
- break
- labels = pastein(img, labels, sample_labels, sample_images, sample_masks)
-
- nL = len(labels) # number of labels
- if nL:
- labels[:, 1:5] = xyxy2xywh(labels[:, 1:5]) # convert xyxy to xywh
- labels[:, [2, 4]] /= img.shape[0] # normalized height 0-1
- labels[:, [1, 3]] /= img.shape[1] # normalized width 0-1
-
- if self.augment:
- # flip up-down
- if random.random() < hyp['flipud']:
- img = np.flipud(img)
- if nL:
- labels[:, 2] = 1 - labels[:, 2]
-
- # flip left-right
- if random.random() < hyp['fliplr']:
- img = np.fliplr(img)
- if nL:
- labels[:, 1] = 1 - labels[:, 1]
-
- labels_out = torch.zeros((nL, 6))
- if nL:
- labels_out[:, 1:] = torch.from_numpy(labels)
-
- # Convert
- img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
- img = np.ascontiguousarray(img)
-
- return torch.from_numpy(img), labels_out, self.img_files[index], shapes
-
- @staticmethod
- def collate_fn(batch):
- img, label, path, shapes = zip(*batch) # transposed
- for i, l in enumerate(label):
- l[:, 0] = i # add target image index for build_targets()
- return torch.stack(img, 0), torch.cat(label, 0), path, shapes
-
- @staticmethod
- def collate_fn4(batch):
- img, label, path, shapes = zip(*batch) # transposed
- n = len(shapes) // 4
- img4, label4, path4, shapes4 = [], [], path[:n], shapes[:n]
-
- ho = torch.tensor([[0., 0, 0, 1, 0, 0]])
- wo = torch.tensor([[0., 0, 1, 0, 0, 0]])
- s = torch.tensor([[1, 1, .5, .5, .5, .5]]) # scale
- for i in range(n): # zidane torch.zeros(16,3,720,1280) # BCHW
- i *= 4
- if random.random() < 0.5:
- im = F.interpolate(img[i].unsqueeze(0).float(), scale_factor=2., mode='bilinear', align_corners=False)[
- 0].type(img[i].type())
- l = label[i]
- else:
- im = torch.cat((torch.cat((img[i], img[i + 1]), 1), torch.cat((img[i + 2], img[i + 3]), 1)), 2)
- l = torch.cat((label[i], label[i + 1] + ho, label[i + 2] + wo, label[i + 3] + ho + wo), 0) * s
- img4.append(im)
- label4.append(l)
-
- for i, l in enumerate(label4):
- l[:, 0] = i # add target image index for build_targets()
-
- return torch.stack(img4, 0), torch.cat(label4, 0), path4, shapes4
-
-
-# Ancillary functions --------------------------------------------------------------------------------------------------
-def load_image(self, index):
- # loads 1 image from dataset, returns img, original hw, resized hw
- img = self.imgs[index]
- if img is None: # not cached
- path = self.img_files[index]
- img = cv2.imread(path) # BGR
- assert img is not None, 'Image Not Found ' + path
- h0, w0 = img.shape[:2] # orig hw
- r = self.img_size / max(h0, w0) # resize image to img_size
- if r != 1: # always resize down, only resize up if training with augmentation
- interp = cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR
- img = cv2.resize(img, (int(w0 * r), int(h0 * r)), interpolation=interp)
- return img, (h0, w0), img.shape[:2] # img, hw_original, hw_resized
- else:
- return self.imgs[index], self.img_hw0[index], self.img_hw[index] # img, hw_original, hw_resized
-
-
-def augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5):
- r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains
- hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))
- dtype = img.dtype # uint8
-
- x = np.arange(0, 256, dtype=np.int16)
- lut_hue = ((x * r[0]) % 180).astype(dtype)
- lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)
- lut_val = np.clip(x * r[2], 0, 255).astype(dtype)
-
- img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))).astype(dtype)
- cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img) # no return needed
-
-
-def hist_equalize(img, clahe=True, bgr=False):
- # Equalize histogram on BGR image 'img' with img.shape(n,m,3) and range 0-255
- yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV)
- if clahe:
- c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
- yuv[:, :, 0] = c.apply(yuv[:, :, 0])
- else:
- yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram
- return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB
-
-
-def load_mosaic(self, index):
- # loads images in a 4-mosaic
-
- labels4, segments4 = [], []
- s = self.img_size
- yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y
- indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices
- for i, index in enumerate(indices):
- # Load image
- img, _, (h, w) = load_image(self, index)
-
- # place img in img4
- if i == 0: # top left
- img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles
- x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image)
- elif i == 1: # top right
- x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
- x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
- elif i == 2: # bottom left
- x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)
- elif i == 3: # bottom right
- x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
-
- img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
- padw = x1a - x1b
- padh = y1a - y1b
-
- # Labels
- labels, segments = self.labels[index].copy(), self.segments[index].copy()
- if labels.size:
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format
- segments = [xyn2xy(x, w, h, padw, padh) for x in segments]
- labels4.append(labels)
- segments4.extend(segments)
-
- # Concat/clip labels
- labels4 = np.concatenate(labels4, 0)
- for x in (labels4[:, 1:], *segments4):
- np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective()
- # img4, labels4 = replicate(img4, labels4) # replicate
-
- # Augment
- #img4, labels4, segments4 = remove_background(img4, labels4, segments4)
- #sample_segments(img4, labels4, segments4, probability=self.hyp['copy_paste'])
- img4, labels4, segments4 = copy_paste(img4, labels4, segments4, probability=self.hyp['copy_paste'])
- img4, labels4 = random_perspective(img4, labels4, segments4,
- degrees=self.hyp['degrees'],
- translate=self.hyp['translate'],
- scale=self.hyp['scale'],
- shear=self.hyp['shear'],
- perspective=self.hyp['perspective'],
- border=self.mosaic_border) # border to remove
-
- return img4, labels4
-
-
-def load_mosaic9(self, index):
- # loads images in a 9-mosaic
-
- labels9, segments9 = [], []
- s = self.img_size
- indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices
- for i, index in enumerate(indices):
- # Load image
- img, _, (h, w) = load_image(self, index)
-
- # place img in img9
- if i == 0: # center
- img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 9 tiles
- h0, w0 = h, w
- c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates
- elif i == 1: # top
- c = s, s - h, s + w, s
- elif i == 2: # top right
- c = s + wp, s - h, s + wp + w, s
- elif i == 3: # right
- c = s + w0, s, s + w0 + w, s + h
- elif i == 4: # bottom right
- c = s + w0, s + hp, s + w0 + w, s + hp + h
- elif i == 5: # bottom
- c = s + w0 - w, s + h0, s + w0, s + h0 + h
- elif i == 6: # bottom left
- c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h
- elif i == 7: # left
- c = s - w, s + h0 - h, s, s + h0
- elif i == 8: # top left
- c = s - w, s + h0 - hp - h, s, s + h0 - hp
-
- padx, pady = c[:2]
- x1, y1, x2, y2 = [max(x, 0) for x in c] # allocate coords
-
- # Labels
- labels, segments = self.labels[index].copy(), self.segments[index].copy()
- if labels.size:
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format
- segments = [xyn2xy(x, w, h, padx, pady) for x in segments]
- labels9.append(labels)
- segments9.extend(segments)
-
- # Image
- img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:] # img9[ymin:ymax, xmin:xmax]
- hp, wp = h, w # height, width previous
-
- # Offset
- yc, xc = [int(random.uniform(0, s)) for _ in self.mosaic_border] # mosaic center x, y
- img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s]
-
- # Concat/clip labels
- labels9 = np.concatenate(labels9, 0)
- labels9[:, [1, 3]] -= xc
- labels9[:, [2, 4]] -= yc
- c = np.array([xc, yc]) # centers
- segments9 = [x - c for x in segments9]
-
- for x in (labels9[:, 1:], *segments9):
- np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective()
- # img9, labels9 = replicate(img9, labels9) # replicate
-
- # Augment
- #img9, labels9, segments9 = remove_background(img9, labels9, segments9)
- img9, labels9, segments9 = copy_paste(img9, labels9, segments9, probability=self.hyp['copy_paste'])
- img9, labels9 = random_perspective(img9, labels9, segments9,
- degrees=self.hyp['degrees'],
- translate=self.hyp['translate'],
- scale=self.hyp['scale'],
- shear=self.hyp['shear'],
- perspective=self.hyp['perspective'],
- border=self.mosaic_border) # border to remove
-
- return img9, labels9
-
-
-def load_samples(self, index):
- # loads images in a 4-mosaic
-
- labels4, segments4 = [], []
- s = self.img_size
- yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y
- indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices
- for i, index in enumerate(indices):
- # Load image
- img, _, (h, w) = load_image(self, index)
-
- # place img in img4
- if i == 0: # top left
- img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles
- x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image)
- elif i == 1: # top right
- x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
- x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
- elif i == 2: # bottom left
- x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)
- elif i == 3: # bottom right
- x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
-
- img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
- padw = x1a - x1b
- padh = y1a - y1b
-
- # Labels
- labels, segments = self.labels[index].copy(), self.segments[index].copy()
- if labels.size:
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format
- segments = [xyn2xy(x, w, h, padw, padh) for x in segments]
- labels4.append(labels)
- segments4.extend(segments)
-
- # Concat/clip labels
- labels4 = np.concatenate(labels4, 0)
- for x in (labels4[:, 1:], *segments4):
- np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective()
- # img4, labels4 = replicate(img4, labels4) # replicate
-
- # Augment
- #img4, labels4, segments4 = remove_background(img4, labels4, segments4)
- sample_labels, sample_images, sample_masks = sample_segments(img4, labels4, segments4, probability=0.5)
-
- return sample_labels, sample_images, sample_masks
-
-
-def copy_paste(img, labels, segments, probability=0.5):
- # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy)
- n = len(segments)
- if probability and n:
- h, w, c = img.shape # height, width, channels
- im_new = np.zeros(img.shape, np.uint8)
- for j in random.sample(range(n), k=round(probability * n)):
- l, s = labels[j], segments[j]
- box = w - l[3], l[2], w - l[1], l[4]
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
- if (ioa < 0.30).all(): # allow 30% obscuration of existing labels
- labels = np.concatenate((labels, [[l[0], *box]]), 0)
- segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1))
- cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)
-
- result = cv2.bitwise_and(src1=img, src2=im_new)
- result = cv2.flip(result, 1) # augment segments (flip left-right)
- i = result > 0 # pixels to replace
- # i[:, :] = result.max(2).reshape(h, w, 1) # act over ch
- img[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug
-
- return img, labels, segments
-
-
-def remove_background(img, labels, segments):
- # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy)
- n = len(segments)
- h, w, c = img.shape # height, width, channels
- im_new = np.zeros(img.shape, np.uint8)
- img_new = np.ones(img.shape, np.uint8) * 114
- for j in range(n):
- cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)
-
- result = cv2.bitwise_and(src1=img, src2=im_new)
-
- i = result > 0 # pixels to replace
- img_new[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug
-
- return img_new, labels, segments
-
-
-def sample_segments(img, labels, segments, probability=0.5):
- # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy)
- n = len(segments)
- sample_labels = []
- sample_images = []
- sample_masks = []
- if probability and n:
- h, w, c = img.shape # height, width, channels
- for j in random.sample(range(n), k=round(probability * n)):
- l, s = labels[j], segments[j]
- box = l[1].astype(int).clip(0,w-1), l[2].astype(int).clip(0,h-1), l[3].astype(int).clip(0,w-1), l[4].astype(int).clip(0,h-1)
-
- #print(box)
- if (box[2] <= box[0]) or (box[3] <= box[1]):
- continue
-
- sample_labels.append(l[0])
-
- mask = np.zeros(img.shape, np.uint8)
-
- cv2.drawContours(mask, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)
- sample_masks.append(mask[box[1]:box[3],box[0]:box[2],:])
-
- result = cv2.bitwise_and(src1=img, src2=mask)
- i = result > 0 # pixels to replace
- mask[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug
- #print(box)
- sample_images.append(mask[box[1]:box[3],box[0]:box[2],:])
-
- return sample_labels, sample_images, sample_masks
-
-
-def replicate(img, labels):
- # Replicate labels
- h, w = img.shape[:2]
- boxes = labels[:, 1:].astype(int)
- x1, y1, x2, y2 = boxes.T
- s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels)
- for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices
- x1b, y1b, x2b, y2b = boxes[i]
- bh, bw = y2b - y1b, x2b - x1b
- yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y
- x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh]
- img[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
- labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0)
-
- return img, labels
-
-
-def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32):
- # Resize and pad image while meeting stride-multiple constraints
- shape = img.shape[:2] # current shape [height, width]
- if isinstance(new_shape, int):
- new_shape = (new_shape, new_shape)
-
- # Scale ratio (new / old)
- r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
- if not scaleup: # only scale down, do not scale up (for better test mAP)
- r = min(r, 1.0)
-
- # Compute padding
- ratio = r, r # width, height ratios
- new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
- dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding
- if auto: # minimum rectangle
- dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding
- elif scaleFill: # stretch
- dw, dh = 0.0, 0.0
- new_unpad = (new_shape[1], new_shape[0])
- ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios
-
- dw /= 2 # divide padding into 2 sides
- dh /= 2
-
- if shape[::-1] != new_unpad: # resize
- img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
- top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
- left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
- img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border
- return img, ratio, (dw, dh)
-
-
-def random_perspective(img, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0,
- border=(0, 0)):
- # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10))
- # targets = [cls, xyxy]
-
- height = img.shape[0] + border[0] * 2 # shape(h,w,c)
- width = img.shape[1] + border[1] * 2
-
- # Center
- C = np.eye(3)
- C[0, 2] = -img.shape[1] / 2 # x translation (pixels)
- C[1, 2] = -img.shape[0] / 2 # y translation (pixels)
-
- # Perspective
- P = np.eye(3)
- P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y)
- P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x)
-
- # Rotation and Scale
- R = np.eye(3)
- a = random.uniform(-degrees, degrees)
- # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations
- s = random.uniform(1 - scale, 1.1 + scale)
- # s = 2 ** random.uniform(-scale, scale)
- R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)
-
- # Shear
- S = np.eye(3)
- S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg)
- S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg)
-
- # Translation
- T = np.eye(3)
- T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels)
- T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels)
-
- # Combined rotation matrix
- M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT
- if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed
- if perspective:
- img = cv2.warpPerspective(img, M, dsize=(width, height), borderValue=(114, 114, 114))
- else: # affine
- img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114))
-
- # Visualize
- # import matplotlib.pyplot as plt
- # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
- # ax[0].imshow(img[:, :, ::-1]) # base
- # ax[1].imshow(img2[:, :, ::-1]) # warped
-
- # Transform label coordinates
- n = len(targets)
- if n:
- use_segments = any(x.any() for x in segments)
- new = np.zeros((n, 4))
- if use_segments: # warp segments
- segments = resample_segments(segments) # upsample
- for i, segment in enumerate(segments):
- xy = np.ones((len(segment), 3))
- xy[:, :2] = segment
- xy = xy @ M.T # transform
- xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or affine
-
- # clip
- new[i] = segment2box(xy, width, height)
-
- else: # warp boxes
- xy = np.ones((n * 4, 3))
- xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1
- xy = xy @ M.T # transform
- xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine
-
- # create new boxes
- x = xy[:, [0, 2, 4, 6]]
- y = xy[:, [1, 3, 5, 7]]
- new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
-
- # clip
- new[:, [0, 2]] = new[:, [0, 2]].clip(0, width)
- new[:, [1, 3]] = new[:, [1, 3]].clip(0, height)
-
- # filter candidates
- i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10)
- targets = targets[i]
- targets[:, 1:5] = new[i]
-
- return img, targets
-
-
-def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n)
- # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio
- w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
- w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
- ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio
- return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates
-
-
-def bbox_ioa(box1, box2):
- # Returns the intersection over box2 area given box1, box2. box1 is 4, box2 is nx4. boxes are x1y1x2y2
- box2 = box2.transpose()
-
- # Get the coordinates of bounding boxes
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
-
- # Intersection area
- inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \
- (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0)
-
- # box2 area
- box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + 1e-16
-
- # Intersection over box2 area
- return inter_area / box2_area
-
-
-def cutout(image, labels):
- # Applies image cutout augmentation https://arxiv.org/abs/1708.04552
- h, w = image.shape[:2]
-
- # create random masks
- scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction
- for s in scales:
- mask_h = random.randint(1, int(h * s))
- mask_w = random.randint(1, int(w * s))
-
- # box
- xmin = max(0, random.randint(0, w) - mask_w // 2)
- ymin = max(0, random.randint(0, h) - mask_h // 2)
- xmax = min(w, xmin + mask_w)
- ymax = min(h, ymin + mask_h)
-
- # apply random color mask
- image[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)]
-
- # return unobscured labels
- if len(labels) and s > 0.03:
- box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
- labels = labels[ioa < 0.60] # remove >60% obscured labels
-
- return labels
-
-
-def pastein(image, labels, sample_labels, sample_images, sample_masks):
- # Paste-in augmentation: pastes sampled object crops into random regions, combining cutout-style box selection (https://arxiv.org/abs/1708.04552) with copy-paste augmentation (https://arxiv.org/abs/2012.07177)
- h, w = image.shape[:2]
-
- # create random masks
- scales = [0.75] * 2 + [0.5] * 4 + [0.25] * 4 + [0.125] * 4 + [0.0625] * 6 # image size fraction
- for s in scales:
- if random.random() < 0.2:
- continue
- mask_h = random.randint(1, int(h * s))
- mask_w = random.randint(1, int(w * s))
-
- # box
- xmin = max(0, random.randint(0, w) - mask_w // 2)
- ymin = max(0, random.randint(0, h) - mask_h // 2)
- xmax = min(w, xmin + mask_w)
- ymax = min(h, ymin + mask_h)
-
- box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
- if len(labels):
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
- else:
- ioa = np.zeros(1)
-
- if (ioa < 0.30).all() and len(sample_labels) and (xmax > xmin+20) and (ymax > ymin+20): # allow 30% obscuration of existing labels
- sel_ind = random.randint(0, len(sample_labels)-1)
- #print(len(sample_labels))
- #print(sel_ind)
- #print((xmax-xmin, ymax-ymin))
- #print(image[ymin:ymax, xmin:xmax].shape)
- #print([[sample_labels[sel_ind], *box]])
- #print(labels.shape)
- hs, ws, cs = sample_images[sel_ind].shape
- r_scale = min((ymax-ymin)/hs, (xmax-xmin)/ws)
- r_w = int(ws*r_scale)
- r_h = int(hs*r_scale)
-
- if (r_w > 10) and (r_h > 10):
- r_mask = cv2.resize(sample_masks[sel_ind], (r_w, r_h))
- r_image = cv2.resize(sample_images[sel_ind], (r_w, r_h))
- temp_crop = image[ymin:ymin+r_h, xmin:xmin+r_w]
- m_ind = r_mask > 0
- if m_ind.astype(np.int32).sum() > 60:
- temp_crop[m_ind] = r_image[m_ind]
- #print(sample_labels[sel_ind])
- #print(sample_images[sel_ind].shape)
- #print(temp_crop.shape)
- box = np.array([xmin, ymin, xmin+r_w, ymin+r_h], dtype=np.float32)
- if len(labels):
- labels = np.concatenate((labels, [[sample_labels[sel_ind], *box]]), 0)
- else:
- labels = np.array([[sample_labels[sel_ind], *box]])
-
- image[ymin:ymin+r_h, xmin:xmin+r_w] = temp_crop
-
- return labels
-
-class Albumentations:
- # YOLOv5 Albumentations class (optional, only used if package is installed)
- def __init__(self):
- self.transform = None
- import albumentations as A
-
- self.transform = A.Compose([
- A.CLAHE(p=0.01),
- A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.01),
- A.RandomGamma(gamma_limit=[80, 120], p=0.01),
- A.Blur(p=0.01),
- A.MedianBlur(p=0.01),
- A.ToGray(p=0.01),
- A.ImageCompression(quality_lower=75, p=0.01),],
- bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels']))
-
- #logging.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p))
-
- def __call__(self, im, labels, p=1.0):
- if self.transform and random.random() < p:
- new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed
- im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])])
- return im, labels
-
-
-def create_folder(path='./new'):
- # Create folder
- if os.path.exists(path):
- shutil.rmtree(path) # delete output folder
- os.makedirs(path) # make new output folder
-
-
-def flatten_recursive(path='../coco'):
- # Flatten a recursive directory by bringing all files to top level
- new_path = Path(path + '_flat')
- create_folder(new_path)
- for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)):
- shutil.copyfile(file, new_path / Path(file).name)
-
-
-def extract_boxes(path='../coco/'): # from utils.datasets import *; extract_boxes('../coco128')
- # Convert detection dataset into classification dataset, with one directory per class
-
- path = Path(path) # images dir
- shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing
- files = list(path.rglob('*.*'))
- n = len(files) # number of files
- for im_file in tqdm(files, total=n):
- if im_file.suffix[1:] in img_formats:
- # image
- im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB
- h, w = im.shape[:2]
-
- # labels
- lb_file = Path(img2label_paths([str(im_file)])[0])
- if Path(lb_file).exists():
- with open(lb_file, 'r') as f:
- lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels
-
- for j, x in enumerate(lb):
- c = int(x[0]) # class
- f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename
- if not f.parent.is_dir():
- f.parent.mkdir(parents=True)
-
- b = x[1:] * [w, h, w, h] # box
- # b[2:] = b[2:].max() # rectangle to square
- b[2:] = b[2:] * 1.2 + 3 # pad
- b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(int)
-
- b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image
- b[[1, 3]] = np.clip(b[[1, 3]], 0, h)
- assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}'
-
-
-def autosplit(path='../coco', weights=(0.9, 0.1, 0.0), annotated_only=False):
- """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files
- Usage: from utils.datasets import *; autosplit('../coco')
- Arguments
- path: Path to images directory
- weights: Train, val, test weights (list)
- annotated_only: Only use images with an annotated txt file
- """
- path = Path(path) # images dir
- files = sum([list(path.rglob(f"*.{img_ext}")) for img_ext in img_formats], []) # image files only
- n = len(files) # number of files
- indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split
-
- txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files
- [(path / x).unlink() for x in txt if (path / x).exists()] # remove existing
-
- print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only)
- for i, img in tqdm(zip(indices, files), total=n):
- if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label
- with open(path / txt[i], 'a') as f:
- f.write(str(img) + '\n') # add image to txt file
-
-
-def load_segmentations(self, index):
- key = '/work/handsomejw66/coco17/' + self.img_files[index]
- #print(key)
- # /work/handsomejw66/coco17/
- return self.segs[key]
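Taken together, `LoadImages` and `letterbox` implement the usual YOLO inference preprocessing: an aspect-preserving resize, padding to a stride multiple, and BGR HWC to RGB CHW conversion. A short usage sketch, with the source directory as a placeholder:

```python
# Illustrative inference-time use of the deleted loader; "./samples" is a placeholder path.
import torch
from utils.datasets import LoadImages

dataset = LoadImages("./samples", img_size=640, stride=32)
for path, img, img0, cap in dataset:
    img = torch.from_numpy(img).float() / 255.0  # loader already returns RGB, CHW, uint8
    img = img.unsqueeze(0)                       # add batch dim -> 1x3xHxW
    # img is ready for a detection model; img0 is the untouched original frame
    print(path, tuple(img.shape), img0.shape)
```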
diff --git a/spaces/VIPLab/Track-Anything/tracker/util/tensor_util.py b/spaces/VIPLab/Track-Anything/tracker/util/tensor_util.py
deleted file mode 100644
index 05189d38e2b0b0d1d08bd7804b8e43418d6da637..0000000000000000000000000000000000000000
--- a/spaces/VIPLab/Track-Anything/tracker/util/tensor_util.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import torch.nn.functional as F
-
-
-def compute_tensor_iu(seg, gt):
- intersection = (seg & gt).float().sum()
- union = (seg | gt).float().sum()
-
- return intersection, union
-
-def compute_tensor_iou(seg, gt):
- intersection, union = compute_tensor_iu(seg, gt)
- iou = (intersection + 1e-6) / (union + 1e-6)
-
- return iou
-
-# STM
-def pad_divide_by(in_img, d):
- h, w = in_img.shape[-2:]
-
- if h % d > 0:
- new_h = h + d - h % d
- else:
- new_h = h
- if w % d > 0:
- new_w = w + d - w % d
- else:
- new_w = w
- lh, uh = int((new_h-h) / 2), int(new_h-h) - int((new_h-h) / 2)
- lw, uw = int((new_w-w) / 2), int(new_w-w) - int((new_w-w) / 2)
- pad_array = (int(lw), int(uw), int(lh), int(uh))
- out = F.pad(in_img, pad_array)
- return out, pad_array
-
-def unpad(img, pad):
- if len(img.shape) == 4:
- if pad[2]+pad[3] > 0:
- img = img[:,:,pad[2]:-pad[3],:]
- if pad[0]+pad[1] > 0:
- img = img[:,:,:,pad[0]:-pad[1]]
- elif len(img.shape) == 3:
- if pad[2]+pad[3] > 0:
- img = img[:,pad[2]:-pad[3],:]
- if pad[0]+pad[1] > 0:
- img = img[:,:,pad[0]:-pad[1]]
- else:
- raise NotImplementedError
- return img
\ No newline at end of file
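A quick round-trip sketch of `pad_divide_by` and `unpad`, assuming the module is importable as `tracker.util.tensor_util` from the Space root; the tensor shape and divisor are arbitrary examples.

```python
# Pads H and W up to multiples of 16, then strips the padding again.
import torch
from tracker.util.tensor_util import pad_divide_by, unpad

x = torch.randn(1, 3, 250, 333)    # arbitrary example shape
padded, pad = pad_divide_by(x, 16)
assert padded.shape[-2] % 16 == 0 and padded.shape[-1] % 16 == 0
restored = unpad(padded, pad)
assert restored.shape == x.shape
```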
diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/save.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/save.py
deleted file mode 100644
index f7175555296b859086cc2c753888bdfe21cb502e..0000000000000000000000000000000000000000
--- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/deoldify/save.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from fastai.basic_train import Learner, LearnerCallback
-from fastai.vision.gan import GANLearner
-
-
-class GANSaveCallback(LearnerCallback):
- """A `LearnerCallback` that saves history of metrics while training `learn` into CSV `filename`."""
-
- def __init__(
- self,
- learn: GANLearner,
- learn_gen: Learner,
- filename: str,
- save_iters: int = 1000,
- ):
- super().__init__(learn)
- self.learn_gen = learn_gen
- self.filename = filename
- self.save_iters = save_iters
-
- def on_batch_end(self, iteration: int, epoch: int, **kwargs) -> None:
- if iteration == 0:
- return
-
- if iteration % self.save_iters == 0:
- self._save_gen_learner(iteration=iteration, epoch=epoch)
-
- def _save_gen_learner(self, iteration: int, epoch: int):
- filename = '{}_{}_{}'.format(self.filename, epoch, iteration)
- self.learn_gen.save(filename)
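A hedged sketch of how a callback like this is typically attached in fastai v1. The learner objects below are placeholders assumed to have been built elsewhere; this is not the DeOldify training code itself.

```python
# Assumes `gan_learner` (a GANLearner) and `gen_learner` (the generator Learner)
# already exist; both names are placeholders.
from deoldify.save import GANSaveCallback

checkpoint_cb = GANSaveCallback(gan_learner, gen_learner, filename='ColorizeGen', save_iters=1000)
gan_learner.fit(1, lr=1e-4, callbacks=[checkpoint_cb])  # writes ColorizeGen_<epoch>_<iteration> periodically
```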
diff --git a/spaces/XuebaoDingZhen/YOLOv50.0.1/CONTRIBUTING.md b/spaces/XuebaoDingZhen/YOLOv50.0.1/CONTRIBUTING.md
deleted file mode 100644
index 95d88b9830d68f3bdcd621144a774c32f19a700e..0000000000000000000000000000000000000000
--- a/spaces/XuebaoDingZhen/YOLOv50.0.1/CONTRIBUTING.md
+++ /dev/null
@@ -1,93 +0,0 @@
-## Contributing to YOLOv5 🚀
-
-We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible, whether it's:
-
-- Reporting a bug
-- Discussing the current state of the code
-- Submitting a fix
-- Proposing a new feature
-- Becoming a maintainer
-
-YOLOv5 works so well due to our combined community effort, and for every small improvement you contribute you will be
-helping push the frontiers of what's possible in AI 😃!
-
-## Submitting a Pull Request (PR) 🛠️
-
-Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps:
-
-### 1. Select File to Update
-
-Select `requirements.txt` to update by clicking on it in GitHub.
-
-
-
-### 2. Click 'Edit this file'
-
-The button is in the top-right corner.
-
-
-
-### 3. Make Changes
-
-Change the `matplotlib` version from `3.2.2` to `3.3`.
-
-
-
-### 4. Preview Changes and Submit PR
-
-Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch**
-for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose
-changes** button. All done, your PR is now submitted to YOLOv5 for review and approval 😃!
-
-
-
-### PR recommendations
-
-To allow your work to be integrated as seamlessly as possible, we advise you to:
-
-- ✅ Verify your PR is **up-to-date** with `ultralytics/yolov5` `master` branch. If your PR is behind you can update
- your code by clicking the 'Update branch' button or by running `git pull` and `git merge master` locally.
-
-
-
-- ✅ Verify all YOLOv5 Continuous Integration (CI) **checks are passing**.
-
-
-
-- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase
- but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee
-
-## Submitting a Bug Report 🐛
-
-If you spot a problem with YOLOv5 please submit a Bug Report!
-
-For us to start investigating a possible problem we need to be able to reproduce it ourselves first. We've created a few
-short guidelines below to help users provide what we need to get started.
-
-When asking a question, people will be better able to provide help if you provide **code** that they can easily
-understand and use to **reproduce** the problem. This is referred to by community members as creating
-a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/). Your code that reproduces
-the problem should be:
-
-- ✅ **Minimal** – Use as little code as possible that still produces the same problem
-- ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself
-- ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem
-
-In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code
-should be:
-
-- ✅ **Current** – Verify that your code is up-to-date with the current
- GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new
- copy to ensure your problem has not already been resolved by previous commits.
-- ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this
- repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.
-
-If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛
-**Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and provide
-a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/) to help us better
-understand and diagnose your problem.
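-
-For reference, a minimal reproducible example is often only a few lines. The sketch below is illustrative
-(the model name and image URL are placeholders, not an official template); substitute the code and data that
-trigger your problem:
-
-```python
-import torch
-
-# Load a pretrained model from the current master branch (illustrative)
-model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
-
-# Run inference on a sample input; replace with the input that triggers your problem
-results = model('https://ultralytics.com/images/zidane.jpg')
-results.print()  # the unexpected output or error would appear here
-```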
-
-## License
-
-By contributing, you agree that your contributions will be licensed under
-the [AGPL-3.0 license](https://choosealicense.com/licenses/agpl-3.0/)
diff --git a/spaces/XzJosh/Azuma-Bert-VITS2/mel_processing.py b/spaces/XzJosh/Azuma-Bert-VITS2/mel_processing.py
deleted file mode 100644
index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Azuma-Bert-VITS2/mel_processing.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import math
-import os
-import random
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torch.utils.data
-import numpy as np
-import librosa
-import librosa.util as librosa_util
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
diff --git a/spaces/YaeMiko2005/Yae_Miko_voice_jp/app.py b/spaces/YaeMiko2005/Yae_Miko_voice_jp/app.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/register_coco.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/register_coco.py
deleted file mode 100644
index e564438d5bf016bcdbb65b4bbdc215d79f579f8a..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/register_coco.py
+++ /dev/null
@@ -1,3 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .coco import register_coco_instances # noqa
-from .coco_panoptic import register_coco_panoptic_separated # noqa
diff --git a/spaces/YuhangDeng123/Whisper-online/app.py b/spaces/YuhangDeng123/Whisper-online/app.py
deleted file mode 100644
index 631a5c38b75e4d2fb89c415ad7275a0686b9f4ef..0000000000000000000000000000000000000000
--- a/spaces/YuhangDeng123/Whisper-online/app.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from transformers import pipeline
-import gradio as gr
-import time
-
-pipe = pipeline(model="YuhangDeng123/whisper-small-hi")
-
-def transcribe(audio, state=""):
- text = pipe(audio)["text"]
- state += text + " "
- return state, state
-
-gr.Interface(
- title="Whisper-Small Online Cantonese Recognition",
- fn=transcribe,
- inputs=[
- gr.Audio(source="microphone", type="filepath", streaming=True),
- "state"
- ],
- outputs=[
- "textbox",
- "state"
- ],
- live=True).launch()
\ No newline at end of file
diff --git a/spaces/Yunshansongbai/SVC-Nahida/modules/losses.py b/spaces/Yunshansongbai/SVC-Nahida/modules/losses.py
deleted file mode 100644
index 93f9523df3e326825c9130b5bbcced39c4a5be3f..0000000000000000000000000000000000000000
--- a/spaces/Yunshansongbai/SVC-Nahida/modules/losses.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import paddle
-from paddle.nn import functional as F
-
-import modules.commons as commons
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.astype('float32').detach()
- gl = gl.astype('float32')
- loss += paddle.mean(paddle.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.astype('float32')
- dg = dg.astype('float32')
- r_loss = paddle.mean((1-dr)**2)
- g_loss = paddle.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.astype('float32')
- l = paddle.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.astype('float32')
- logs_q = logs_q.astype('float32')
- m_p = m_p.astype('float32')
- logs_p = logs_p.astype('float32')
- z_mask = z_mask.astype('float32')
- #print(logs_p)
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * paddle.exp(-2. * logs_p)
- kl = paddle.sum(kl * z_mask)
- l = kl / paddle.sum(z_mask)
- return l
diff --git a/spaces/ZenXir/FreeVC/speaker_encoder/compute_embed.py b/spaces/ZenXir/FreeVC/speaker_encoder/compute_embed.py
deleted file mode 100644
index 2fee33db0168f40efc42145c06fa62016e3e008e..0000000000000000000000000000000000000000
--- a/spaces/ZenXir/FreeVC/speaker_encoder/compute_embed.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from speaker_encoder import inference as encoder
-from multiprocessing.pool import Pool
-from functools import partial
-from pathlib import Path
-# from utils import logmmse
-from tqdm import tqdm
-import numpy as np
-# import librosa
-
-
-def embed_utterance(fpaths, encoder_model_fpath):
- if not encoder.is_loaded():
- encoder.load_model(encoder_model_fpath)
-
- # Compute the speaker embedding of the utterance
- wav_fpath, embed_fpath = fpaths
- wav = np.load(wav_fpath)
- wav = encoder.preprocess_wav(wav)
- embed = encoder.embed_utterance(wav)
- np.save(embed_fpath, embed, allow_pickle=False)
-
-
-def create_embeddings(outdir_root: Path, wav_dir: Path, encoder_model_fpath: Path, n_processes: int):
-
- wav_dir = outdir_root.joinpath("audio")
- metadata_fpath = outdir_root.joinpath("train.txt")
- assert wav_dir.exists() and metadata_fpath.exists()
- embed_dir = outdir_root.joinpath("embeds")
- embed_dir.mkdir(exist_ok=True)
-
- # Gather the input wave filepath and the target output embed filepath
- with metadata_fpath.open("r") as metadata_file:
- metadata = [line.split("|") for line in metadata_file]
- fpaths = [(wav_dir.joinpath(m[0]), embed_dir.joinpath(m[2])) for m in metadata]
-
- # TODO: improve on the multiprocessing, it's terrible. Disk I/O is the bottleneck here.
- # Embed the utterances in separate threads
- func = partial(embed_utterance, encoder_model_fpath=encoder_model_fpath)
- job = Pool(n_processes).imap(func, fpaths)
- list(tqdm(job, "Embedding", len(fpaths), unit="utterances"))
\ No newline at end of file
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/losses/focal_loss.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/losses/focal_loss.py
deleted file mode 100644
index 493907c6984d532175e0351daf2eafe4b9ff0256..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/losses/focal_loss.py
+++ /dev/null
@@ -1,181 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.ops import sigmoid_focal_loss as _sigmoid_focal_loss
-
-from ..builder import LOSSES
-from .utils import weight_reduce_loss
-
-
-# This method is only for debugging
-def py_sigmoid_focal_loss(pred,
- target,
- weight=None,
- gamma=2.0,
- alpha=0.25,
- reduction='mean',
- avg_factor=None):
- """PyTorch version of `Focal Loss `_.
-
- Args:
- pred (torch.Tensor): The prediction with shape (N, C), C is the
- number of classes
- target (torch.Tensor): The learning label of the prediction.
- weight (torch.Tensor, optional): Sample-wise loss weight.
- gamma (float, optional): The gamma for calculating the modulating
- factor. Defaults to 2.0.
- alpha (float, optional): A balanced form for Focal Loss.
- Defaults to 0.25.
- reduction (str, optional): The method used to reduce the loss into
- a scalar. Defaults to 'mean'.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- """
- pred_sigmoid = pred.sigmoid()
- target = target.type_as(pred)
- pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target)
- focal_weight = (alpha * target + (1 - alpha) *
- (1 - target)) * pt.pow(gamma)
- loss = F.binary_cross_entropy_with_logits(
- pred, target, reduction='none') * focal_weight
- if weight is not None:
- if weight.shape != loss.shape:
- if weight.size(0) == loss.size(0):
- # For most cases, weight is of shape (num_priors, ),
- # which means it does not have the second axis num_class
- weight = weight.view(-1, 1)
- else:
- # Sometimes, weight per anchor per class is also needed. e.g.
- # in FSAF. But it may be flattened of shape
- # (num_priors x num_class, ), while loss is still of shape
- # (num_priors, num_class).
- assert weight.numel() == loss.numel()
- weight = weight.view(loss.size(0), -1)
- assert weight.ndim == loss.ndim
- loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
- return loss
-
-
-def sigmoid_focal_loss(pred,
- target,
- weight=None,
- gamma=2.0,
- alpha=0.25,
- reduction='mean',
- avg_factor=None):
- r"""A warpper of cuda version `Focal Loss
- `_.
-
- Args:
- pred (torch.Tensor): The prediction with shape (N, C), C is the number
- of classes.
- target (torch.Tensor): The learning label of the prediction.
- weight (torch.Tensor, optional): Sample-wise loss weight.
- gamma (float, optional): The gamma for calculating the modulating
- factor. Defaults to 2.0.
- alpha (float, optional): A balanced form for Focal Loss.
- Defaults to 0.25.
- reduction (str, optional): The method used to reduce the loss into
- a scalar. Defaults to 'mean'. Options are "none", "mean" and "sum".
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- """
- # Function.apply does not accept keyword arguments, so the decorator
- # "weighted_loss" is not applicable
- loss = _sigmoid_focal_loss(pred.contiguous(), target, gamma, alpha, None,
- 'none')
- if weight is not None:
- if weight.shape != loss.shape:
- if weight.size(0) == loss.size(0):
- # For most cases, weight is of shape (num_priors, ),
- # which means it does not have the second axis num_class
- weight = weight.view(-1, 1)
- else:
- # Sometimes, weight per anchor per class is also needed. e.g.
- # in FSAF. But it may be flattened of shape
- # (num_priors x num_class, ), while loss is still of shape
- # (num_priors, num_class).
- assert weight.numel() == loss.numel()
- weight = weight.view(loss.size(0), -1)
- assert weight.ndim == loss.ndim
- loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
- return loss
-
-
-@LOSSES.register_module()
-class FocalLoss(nn.Module):
-
- def __init__(self,
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- reduction='mean',
- loss_weight=1.0):
- """`Focal Loss `_
-
- Args:
- use_sigmoid (bool, optional): Whether the prediction uses
- sigmoid rather than softmax. Defaults to True.
- gamma (float, optional): The gamma for calculating the modulating
- factor. Defaults to 2.0.
- alpha (float, optional): A balanced form for Focal Loss.
- Defaults to 0.25.
- reduction (str, optional): The method used to reduce the loss into
- a scalar. Defaults to 'mean'. Options are "none", "mean" and
- "sum".
- loss_weight (float, optional): Weight of loss. Defaults to 1.0.
- """
- super(FocalLoss, self).__init__()
- assert use_sigmoid is True, 'Only sigmoid focal loss supported now.'
- self.use_sigmoid = use_sigmoid
- self.gamma = gamma
- self.alpha = alpha
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None):
- """Forward function.
-
- Args:
- pred (torch.Tensor): The prediction.
- target (torch.Tensor): The learning label of the prediction.
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Options are "none", "mean" and "sum".
-
- Returns:
- torch.Tensor: The calculated loss
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- if self.use_sigmoid:
- if torch.cuda.is_available() and pred.is_cuda:
- calculate_loss_func = sigmoid_focal_loss
- else:
- num_classes = pred.size(1)
- target = F.one_hot(target, num_classes=num_classes + 1)
- target = target[:, :num_classes]
- calculate_loss_func = py_sigmoid_focal_loss
-
- loss_cls = self.loss_weight * calculate_loss_func(
- pred,
- target,
- weight,
- gamma=self.gamma,
- alpha=self.alpha,
- reduction=reduction,
- avg_factor=avg_factor)
-
- else:
- raise NotImplementedError
- return loss_cls
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/emanet_r50-d8.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/emanet_r50-d8.py
deleted file mode 100644
index 26adcd430926de0862204a71d345f2543167f27b..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/emanet_r50-d8.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='EMAHead',
- in_channels=2048,
- in_index=3,
- channels=256,
- ema_channels=512,
- num_bases=64,
- num_stages=3,
- momentum=0.1,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/enc_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/enc_head.py
deleted file mode 100644
index d7c6b8ed6a72cf402802c828f27a3de321cc52a0..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/decode_heads/enc_head.py
+++ /dev/null
@@ -1,199 +0,0 @@
-'''
- * Copyright (c) 2023 Salesforce, Inc.
- * All rights reserved.
- * SPDX-License-Identifier: Apache License 2.0
- * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/
- * By Can Qin
- * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet
- * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala
- * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv
- * Copyright (c) OpenMMLab. All rights reserved.
-'''
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule, build_norm_layer
-
-from annotator.uniformer.mmseg.ops import Encoding, resize
-from ..builder import HEADS, build_loss
-from .decode_head import BaseDecodeHead
-
-
-class EncModule(nn.Module):
- """Encoding Module used in EncNet.
-
- Args:
- in_channels (int): Input channels.
- num_codes (int): Number of code words.
- conv_cfg (dict|None): Config of conv layers.
- norm_cfg (dict|None): Config of norm layers.
- act_cfg (dict): Config of activation layers.
- """
-
- def __init__(self, in_channels, num_codes, conv_cfg, norm_cfg, act_cfg):
- super(EncModule, self).__init__()
- self.encoding_project = ConvModule(
- in_channels,
- in_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
- # TODO: resolve this hack
- # change to 1d
- if norm_cfg is not None:
- encoding_norm_cfg = norm_cfg.copy()
- if encoding_norm_cfg['type'] in ['BN', 'IN']:
- encoding_norm_cfg['type'] += '1d'
- else:
- encoding_norm_cfg['type'] = encoding_norm_cfg['type'].replace(
- '2d', '1d')
- else:
- # fallback to BN1d
- encoding_norm_cfg = dict(type='BN1d')
- self.encoding = nn.Sequential(
- Encoding(channels=in_channels, num_codes=num_codes),
- build_norm_layer(encoding_norm_cfg, num_codes)[1],
- nn.ReLU(inplace=True))
- self.fc = nn.Sequential(
- nn.Linear(in_channels, in_channels), nn.Sigmoid())
-
- def forward(self, x):
- """Forward function."""
- encoding_projection = self.encoding_project(x)
- encoding_feat = self.encoding(encoding_projection).mean(dim=1)
- batch_size, channels, _, _ = x.size()
- gamma = self.fc(encoding_feat)
- y = gamma.view(batch_size, channels, 1, 1)
- output = F.relu_(x + x * y)
- return encoding_feat, output
-
-
-@HEADS.register_module()
-class EncHead(BaseDecodeHead):
- """Context Encoding for Semantic Segmentation.
-
- This head is the implementation of `EncNet
- <https://arxiv.org/abs/1803.08904>`_.
-
- Args:
- num_codes (int): Number of code words. Default: 32.
- use_se_loss (bool): Whether use Semantic Encoding Loss (SE-loss) to
- regularize the training. Default: True.
- add_lateral (bool): Whether use lateral connection to fuse features.
- Default: False.
- loss_se_decode (dict): Config of decode loss.
- Default: dict(type='CrossEntropyLoss', use_sigmoid=True).
- """
-
- def __init__(self,
- num_codes=32,
- use_se_loss=True,
- add_lateral=False,
- loss_se_decode=dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=0.2),
- **kwargs):
- super(EncHead, self).__init__(
- input_transform='multiple_select', **kwargs)
- self.use_se_loss = use_se_loss
- self.add_lateral = add_lateral
- self.num_codes = num_codes
- self.bottleneck = ConvModule(
- self.in_channels[-1],
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- if add_lateral:
- self.lateral_convs = nn.ModuleList()
- for in_channels in self.in_channels[:-1]: # skip the last one
- self.lateral_convs.append(
- ConvModule(
- in_channels,
- self.channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- self.fusion = ConvModule(
- len(self.in_channels) * self.channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.enc_module = EncModule(
- self.channels,
- num_codes=num_codes,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- if self.use_se_loss:
- self.loss_se_decode = build_loss(loss_se_decode)
- self.se_layer = nn.Linear(self.channels, self.num_classes)
-
- def forward(self, inputs):
- """Forward function."""
- inputs = self._transform_inputs(inputs)
- feat = self.bottleneck(inputs[-1])
- if self.add_lateral:
- laterals = [
- resize(
- lateral_conv(inputs[i]),
- size=feat.shape[2:],
- mode='bilinear',
- align_corners=self.align_corners)
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
- feat = self.fusion(torch.cat([feat, *laterals], 1))
- encode_feat, output = self.enc_module(feat)
- output = self.cls_seg(output)
- if self.use_se_loss:
- se_output = self.se_layer(encode_feat)
- return output, se_output
- else:
- return output
-
- def forward_test(self, inputs, img_metas, test_cfg):
- """Forward function for testing, ignore se_loss."""
- if self.use_se_loss:
- return self.forward(inputs)[0]
- else:
- return self.forward(inputs)
-
- @staticmethod
- def _convert_to_onehot_labels(seg_label, num_classes):
- """Convert segmentation label to onehot.
-
- Args:
- seg_label (Tensor): Segmentation label of shape (N, H, W).
- num_classes (int): Number of classes.
-
- Returns:
- Tensor: Onehot labels of shape (N, num_classes).
- """
-
- batch_size = seg_label.size(0)
- onehot_labels = seg_label.new_zeros((batch_size, num_classes))
- for i in range(batch_size):
- hist = seg_label[i].float().histc(
- bins=num_classes, min=0, max=num_classes - 1)
- onehot_labels[i] = hist > 0
- return onehot_labels
-
- def losses(self, seg_logit, seg_label):
- """Compute segmentation and semantic encoding loss."""
- seg_logit, se_seg_logit = seg_logit
- loss = dict()
- loss.update(super(EncHead, self).losses(seg_logit, seg_label))
- se_loss = self.loss_se_decode(
- se_seg_logit,
- self._convert_to_onehot_labels(seg_label, self.num_classes))
- loss['loss_se'] = se_loss
- return loss
diff --git a/spaces/aijack/hair/README.md b/spaces/aijack/hair/README.md
deleted file mode 100644
index b4bbe8194cee06b6940c3c06a9f2f68dc9eef552..0000000000000000000000000000000000000000
--- a/spaces/aijack/hair/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Hair
-emoji: 📈
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/akhaliq/PaintTransformer/train/models/base_model.py b/spaces/akhaliq/PaintTransformer/train/models/base_model.py
deleted file mode 100644
index a75506f05817df8b86e0eee5c450d10158ecee0d..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/PaintTransformer/train/models/base_model.py
+++ /dev/null
@@ -1,230 +0,0 @@
-import os
-import torch
-from collections import OrderedDict
-from abc import ABC, abstractmethod
-from . import networks
-
-
-class BaseModel(ABC):
- """This class is an abstract base class (ABC) for models.
- To create a subclass, you need to implement the following five functions:
- -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt).
- -- <set_input>: unpack data from dataset and apply preprocessing.
- -- <forward>: produce intermediate results.
- -- <optimize_parameters>: calculate losses, gradients, and update network weights.
- -- <modify_commandline_options>: (optionally) add model-specific options and set default options.
- """
-
- def __init__(self, opt):
- """Initialize the BaseModel class.
-
- Parameters:
- opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
-
- When creating your custom class, you need to implement your own initialization.
- In this function, you should first call <BaseModel.__init__(self, opt)>
- Then, you need to define four lists:
- -- self.loss_names (str list): specify the training losses that you want to plot and save.
- -- self.model_names (str list): define networks used in our training.
- -- self.visual_names (str list): specify the images that you want to display and save.
- -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example.
- """
- self.opt = opt
- self.gpu_ids = opt.gpu_ids
- self.isTrain = opt.isTrain
- self.device = torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') # get device name: CPU or GPU
- self.save_dir = os.path.join(opt.checkpoints_dir, opt.name) # save all the checkpoints to save_dir
- if opt.preprocess != 'scale_width': # with [scale_width], input images might have different sizes, which hurts the performance of cudnn.benchmark.
- torch.backends.cudnn.benchmark = True
- self.loss_names = []
- self.model_names = []
- self.visual_names = []
- self.optimizers = []
- self.image_paths = []
- self.metric = 0 # used for learning rate policy 'plateau'
-
- @staticmethod
- def modify_commandline_options(parser, is_train):
- """Add new model-specific options, and rewrite default values for existing options.
-
- Parameters:
- parser -- original option parser
- is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
-
- Returns:
- the modified parser.
- """
- return parser
-
- @abstractmethod
- def set_input(self, input):
- """Unpack input data from the dataloader and perform necessary pre-processing steps.
-
- Parameters:
- input (dict): includes the data itself and its metadata information.
- """
- pass
-
- @abstractmethod
- def forward(self):
- """Run forward pass; called by both functions and ."""
- pass
-
- @abstractmethod
- def optimize_parameters(self):
- """Calculate losses, gradients, and update network weights; called in every training iteration"""
- pass
-
- def setup(self, opt):
- """Load and print networks; create schedulers
-
- Parameters:
- opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
- """
- if self.isTrain:
- self.schedulers = [networks.get_scheduler(optimizer, opt) for optimizer in self.optimizers]
- if not self.isTrain or opt.continue_train:
- load_suffix = 'iter_%d' % opt.load_iter if opt.load_iter > 0 else opt.epoch
- self.load_networks(load_suffix)
- self.print_networks(opt.verbose)
-
- def eval(self):
- """Make models eval mode during test time"""
- for name in self.model_names:
- if isinstance(name, str):
- net = getattr(self, 'net_' + name)
- net.eval()
-
- def test(self):
- """Forward function used in test time.
-
- This function wraps <forward> function in no_grad() so we don't save intermediate steps for backprop
- It also calls <compute_visuals> to produce additional visualization results
- """
- with torch.no_grad():
- self.forward()
- self.compute_visuals()
-
- def compute_visuals(self):
- """Calculate additional output images for visdom and HTML visualization"""
- pass
-
- def get_image_paths(self):
- """ Return image paths that are used to load current data"""
- return self.image_paths
-
- def update_learning_rate(self):
- """Update learning rates for all the networks; called at the end of every epoch"""
- old_lr = self.optimizers[0].param_groups[0]['lr']
- for scheduler in self.schedulers:
- if self.opt.lr_policy == 'plateau':
- scheduler.step(self.metric)
- else:
- scheduler.step()
-
- lr = self.optimizers[0].param_groups[0]['lr']
- print('learning rate %.7f -> %.7f' % (old_lr, lr))
-
- def get_current_visuals(self):
- """Return visualization images. train.py will display these images with visdom, and save the images to a HTML"""
- visual_ret = OrderedDict()
- for name in self.visual_names:
- if isinstance(name, str):
- visual_ret[name] = getattr(self, name)
- return visual_ret
-
- def get_current_losses(self):
- """Return traning losses / errors. train.py will print out these errors on console, and save them to a file"""
- errors_ret = OrderedDict()
- for name in self.loss_names:
- if isinstance(name, str):
- errors_ret[name] = float(getattr(self, 'loss_' + name)) # float(...) works for both scalar tensor and float number
- return errors_ret
-
- def save_networks(self, epoch):
- """Save all the networks to the disk.
-
- Parameters:
- epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
- """
- for name in self.model_names:
- if isinstance(name, str):
- save_filename = '%s_net_%s.pth' % (epoch, name)
- save_path = os.path.join(self.save_dir, save_filename)
- net = getattr(self, 'net_' + name)
-
- if len(self.gpu_ids) > 0 and torch.cuda.is_available():
- torch.save(net.module.cpu().state_dict(), save_path)
- net.cuda(self.gpu_ids[0])
- else:
- torch.save(net.cpu().state_dict(), save_path)
-
- def __patch_instance_norm_state_dict(self, state_dict, module, keys, i=0):
- """Fix InstanceNorm checkpoints incompatibility (prior to 0.4)"""
- key = keys[i]
- if i + 1 == len(keys): # at the end, pointing to a parameter/buffer
- if module.__class__.__name__.startswith('InstanceNorm') and \
- (key == 'running_mean' or key == 'running_var'):
- if getattr(module, key) is None:
- state_dict.pop('.'.join(keys))
- if module.__class__.__name__.startswith('InstanceNorm') and \
- (key == 'num_batches_tracked'):
- state_dict.pop('.'.join(keys))
- else:
- self.__patch_instance_norm_state_dict(state_dict, getattr(module, key), keys, i + 1)
-
- def load_networks(self, epoch):
- """Load all the networks from the disk.
-
- Parameters:
- epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
- """
- for name in self.model_names:
- if isinstance(name, str):
- load_filename = '%s_net_%s.pth' % (epoch, name)
- load_path = os.path.join(self.save_dir, load_filename)
- net = getattr(self, 'net_' + name)
- if isinstance(net, torch.nn.DataParallel):
- net = net.module
- print('loading the model from %s' % load_path)
- # if you are using PyTorch newer than 0.4 (e.g., built from
- # GitHub source), you can remove str() on self.device
- state_dict = torch.load(load_path, map_location=str(self.device))
- if hasattr(state_dict, '_metadata'):
- del state_dict._metadata
-
- # patch InstanceNorm checkpoints prior to 0.4
- for key in list(state_dict.keys()): # need to copy keys here because we mutate in loop
- self.__patch_instance_norm_state_dict(state_dict, net, key.split('.'))
- net.load_state_dict(state_dict)
-
- def print_networks(self, verbose):
- """Print the total number of parameters in the network and (if verbose) network architecture
-
- Parameters:
- verbose (bool) -- if verbose: print the network architecture
- """
- print('---------- Networks initialized -------------')
- for name in self.model_names:
- if isinstance(name, str):
- net = getattr(self, 'net_' + name)
- num_params = 0
- for param in net.parameters():
- num_params += param.numel()
- if verbose:
- print(net)
- print('[Network %s] Total number of parameters : %.3f M' % (name, num_params / 1e6))
- print('-----------------------------------------------')
-
- def set_requires_grad(self, nets, requires_grad=False):
- """Set requies_grad=Fasle for all the networks to avoid unnecessary computations
- Parameters:
- nets (network list) -- a list of networks
- requires_grad (bool) -- whether the networks require gradients or not
- """
- if not isinstance(nets, list):
- nets = [nets]
- for net in nets:
- if net is not None:
- for param in net.parameters():
- param.requires_grad = requires_grad
diff --git a/spaces/akhaliq/kogpt/README.md b/spaces/akhaliq/kogpt/README.md
deleted file mode 100644
index fa4808990c35c0959cba0e7f116f749e2786a995..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/kogpt/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Kogpt
-emoji: 🌍
-colorFrom: red
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/misc.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/misc.py
deleted file mode 100644
index 0bf9e99af5238a37aef81159521a8c97da27a375..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/misc.py
+++ /dev/null
@@ -1,653 +0,0 @@
-# The following comment should be removed at some point in the future.
-# mypy: strict-optional=False
-
-import contextlib
-import errno
-import getpass
-import hashlib
-import io
-import logging
-import os
-import posixpath
-import shutil
-import stat
-import sys
-import urllib.parse
-from io import StringIO
-from itertools import filterfalse, tee, zip_longest
-from types import TracebackType
-from typing import (
- Any,
- BinaryIO,
- Callable,
- ContextManager,
- Iterable,
- Iterator,
- List,
- Optional,
- TextIO,
- Tuple,
- Type,
- TypeVar,
- cast,
-)
-
-from pip._vendor.tenacity import retry, stop_after_delay, wait_fixed
-
-from pip import __version__
-from pip._internal.exceptions import CommandError
-from pip._internal.locations import get_major_minor_version
-from pip._internal.utils.compat import WINDOWS
-from pip._internal.utils.virtualenv import running_under_virtualenv
-
-__all__ = [
- "rmtree",
- "display_path",
- "backup_dir",
- "ask",
- "splitext",
- "format_size",
- "is_installable_dir",
- "normalize_path",
- "renames",
- "get_prog",
- "captured_stdout",
- "ensure_dir",
- "remove_auth_from_url",
-]
-
-
-logger = logging.getLogger(__name__)
-
-T = TypeVar("T")
-ExcInfo = Tuple[Type[BaseException], BaseException, TracebackType]
-VersionInfo = Tuple[int, int, int]
-NetlocTuple = Tuple[str, Tuple[Optional[str], Optional[str]]]
-
-
-def get_pip_version() -> str:
- pip_pkg_dir = os.path.join(os.path.dirname(__file__), "..", "..")
- pip_pkg_dir = os.path.abspath(pip_pkg_dir)
-
- return "pip {} from {} (python {})".format(
- __version__,
- pip_pkg_dir,
- get_major_minor_version(),
- )
-
-
-def normalize_version_info(py_version_info: Tuple[int, ...]) -> Tuple[int, int, int]:
- """
- Convert a tuple of ints representing a Python version to one of length
- three.
-
- :param py_version_info: a tuple of ints representing a Python version,
- or None to specify no version. The tuple can have any length.
-
- :return: a tuple of length three if `py_version_info` is non-None.
- Otherwise, return `py_version_info` unchanged (i.e. None).
- """
- if len(py_version_info) < 3:
- py_version_info += (3 - len(py_version_info)) * (0,)
- elif len(py_version_info) > 3:
- py_version_info = py_version_info[:3]
-
- return cast("VersionInfo", py_version_info)
-
-
-def ensure_dir(path: str) -> None:
- """os.path.makedirs without EEXIST."""
- try:
- os.makedirs(path)
- except OSError as e:
- # Windows can raise spurious ENOTEMPTY errors. See #6426.
- if e.errno != errno.EEXIST and e.errno != errno.ENOTEMPTY:
- raise
-
-
-def get_prog() -> str:
- try:
- prog = os.path.basename(sys.argv[0])
- if prog in ("__main__.py", "-c"):
- return f"{sys.executable} -m pip"
- else:
- return prog
- except (AttributeError, TypeError, IndexError):
- pass
- return "pip"
-
-
-# Retry every half second for up to 3 seconds
-# Tenacity raises RetryError by default, explicitly raise the original exception
-@retry(reraise=True, stop=stop_after_delay(3), wait=wait_fixed(0.5))
-def rmtree(dir: str, ignore_errors: bool = False) -> None:
- shutil.rmtree(dir, ignore_errors=ignore_errors, onerror=rmtree_errorhandler)
-
-
-def rmtree_errorhandler(func: Callable[..., Any], path: str, exc_info: ExcInfo) -> None:
- """On Windows, the files in .svn are read-only, so when rmtree() tries to
- remove them, an exception is thrown. We catch that here, remove the
- read-only attribute, and hopefully continue without problems."""
- try:
- has_attr_readonly = not (os.stat(path).st_mode & stat.S_IWRITE)
- except OSError:
- # it's equivalent to os.path.exists
- return
-
- if has_attr_readonly:
- # convert to read/write
- os.chmod(path, stat.S_IWRITE)
- # use the original function to repeat the operation
- func(path)
- return
- else:
- raise
-
-
-def display_path(path: str) -> str:
- """Gives the display value for a given path, making it relative to cwd
- if possible."""
- path = os.path.normcase(os.path.abspath(path))
- if path.startswith(os.getcwd() + os.path.sep):
- path = "." + path[len(os.getcwd()) :]
- return path
-
-
-def backup_dir(dir: str, ext: str = ".bak") -> str:
- """Figure out the name of a directory to back up the given dir to
- (adding .bak, .bak2, etc)"""
- n = 1
- extension = ext
- while os.path.exists(dir + extension):
- n += 1
- extension = ext + str(n)
- return dir + extension
-
-
-def ask_path_exists(message: str, options: Iterable[str]) -> str:
- for action in os.environ.get("PIP_EXISTS_ACTION", "").split():
- if action in options:
- return action
- return ask(message, options)
-
-
-def _check_no_input(message: str) -> None:
- """Raise an error if no input is allowed."""
- if os.environ.get("PIP_NO_INPUT"):
- raise Exception(
- f"No input was expected ($PIP_NO_INPUT set); question: {message}"
- )
-
-
-def ask(message: str, options: Iterable[str]) -> str:
- """Ask the message interactively, with the given possible responses"""
- while 1:
- _check_no_input(message)
- response = input(message)
- response = response.strip().lower()
- if response not in options:
- print(
- "Your response ({!r}) was not one of the expected responses: "
- "{}".format(response, ", ".join(options))
- )
- else:
- return response
-
-
-def ask_input(message: str) -> str:
- """Ask for input interactively."""
- _check_no_input(message)
- return input(message)
-
-
-def ask_password(message: str) -> str:
- """Ask for a password interactively."""
- _check_no_input(message)
- return getpass.getpass(message)
-
-
-def strtobool(val: str) -> int:
- """Convert a string representation of truth to true (1) or false (0).
-
- True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values
- are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if
- 'val' is anything else.
- """
- val = val.lower()
- if val in ("y", "yes", "t", "true", "on", "1"):
- return 1
- elif val in ("n", "no", "f", "false", "off", "0"):
- return 0
- else:
- raise ValueError(f"invalid truth value {val!r}")
-
-
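-# Example (sketch): format_size(1234567) -> "1.2 MB"; format_size(52000) -> "52 kB" (decimal units).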
-def format_size(bytes: float) -> str:
- if bytes > 1000 * 1000:
- return "{:.1f} MB".format(bytes / 1000.0 / 1000)
- elif bytes > 10 * 1000:
- return "{} kB".format(int(bytes / 1000))
- elif bytes > 1000:
- return "{:.1f} kB".format(bytes / 1000.0)
- else:
- return "{} bytes".format(int(bytes))
-
-
-def tabulate(rows: Iterable[Iterable[Any]]) -> Tuple[List[str], List[int]]:
- """Return a list of formatted rows and a list of column sizes.
-
- For example::
-
- >>> tabulate([['foobar', 2000], [0xdeadbeef]])
- (['foobar 2000', '3735928559'], [10, 4])
- """
- rows = [tuple(map(str, row)) for row in rows]
- sizes = [max(map(len, col)) for col in zip_longest(*rows, fillvalue="")]
- table = [" ".join(map(str.ljust, row, sizes)).rstrip() for row in rows]
- return table, sizes
-
-
-def is_installable_dir(path: str) -> bool:
- """Is path is a directory containing pyproject.toml or setup.py?
-
- If pyproject.toml exists, this is a PEP 517 project. Otherwise we look for
- a legacy setuptools layout by identifying setup.py. We don't check for the
- setup.cfg because using it without setup.py is only available for PEP 517
- projects, which are already covered by the pyproject.toml check.
- """
- if not os.path.isdir(path):
- return False
- if os.path.isfile(os.path.join(path, "pyproject.toml")):
- return True
- if os.path.isfile(os.path.join(path, "setup.py")):
- return True
- return False
-
-
-def read_chunks(file: BinaryIO, size: int = io.DEFAULT_BUFFER_SIZE) -> Iterator[bytes]:
- """Yield pieces of data from a file-like object until EOF."""
- while True:
- chunk = file.read(size)
- if not chunk:
- break
- yield chunk
-
-
-def normalize_path(path: str, resolve_symlinks: bool = True) -> str:
- """
- Convert a path to its canonical, case-normalized, absolute version.
-
- """
- path = os.path.expanduser(path)
- if resolve_symlinks:
- path = os.path.realpath(path)
- else:
- path = os.path.abspath(path)
- return os.path.normcase(path)
-
-
-def splitext(path: str) -> Tuple[str, str]:
- """Like os.path.splitext, but take off .tar too"""
- base, ext = posixpath.splitext(path)
- if base.lower().endswith(".tar"):
- ext = base[-4:] + ext
- base = base[:-4]
- return base, ext
-
-
-def renames(old: str, new: str) -> None:
- """Like os.renames(), but handles renaming across devices."""
- # Implementation borrowed from os.renames().
- head, tail = os.path.split(new)
- if head and tail and not os.path.exists(head):
- os.makedirs(head)
-
- shutil.move(old, new)
-
- head, tail = os.path.split(old)
- if head and tail:
- try:
- os.removedirs(head)
- except OSError:
- pass
-
-
-def is_local(path: str) -> bool:
- """
- Return True if this is a path pip is allowed to modify.
-
- If we're in a virtualenv, sys.prefix points to the virtualenv's
- prefix; only sys.prefix is considered local.
-
- If we're not in a virtualenv, in general we can modify anything.
- However, if the OS vendor has configured distutils to install
- somewhere other than sys.prefix (which could be a subdirectory of
- sys.prefix, e.g. /usr/local), we consider sys.prefix itself nonlocal
- and the domain of the OS vendor. (In other words, everything _other
- than_ sys.prefix is considered local.)
-
- Caution: this function assumes the head of path has been normalized
- with normalize_path.
- """
-
- path = normalize_path(path)
- # Hard-coded because PyPy uses a different sys.prefix on Debian
- prefix = '/usr'
-
- if running_under_virtualenv():
- return path.startswith(normalize_path(sys.prefix))
- else:
- from pip._internal.locations import get_scheme
- from pip._internal.models.scheme import SCHEME_KEYS
- if path.startswith(prefix):
- scheme = get_scheme("")
- for key in SCHEME_KEYS:
- local_path = getattr(scheme, key)
- if path.startswith(normalize_path(local_path)):
- return True
- return False
- else:
- return True
-
-
-def write_output(msg: Any, *args: Any) -> None:
- logger.info(msg, *args)
-
-
-class StreamWrapper(StringIO):
- orig_stream: TextIO = None
-
- @classmethod
- def from_stream(cls, orig_stream: TextIO) -> "StreamWrapper":
- cls.orig_stream = orig_stream
- return cls()
-
- # compileall.compile_dir() needs stdout.encoding to print to stdout
- # https://github.com/python/mypy/issues/4125
- @property
- def encoding(self): # type: ignore
- return self.orig_stream.encoding
-
-
-@contextlib.contextmanager
-def captured_output(stream_name: str) -> Iterator[StreamWrapper]:
- """Return a context manager used by captured_stdout/stdin/stderr
- that temporarily replaces the sys stream *stream_name* with a StringIO.
-
- Taken from Lib/support/__init__.py in the CPython repo.
- """
- orig_stdout = getattr(sys, stream_name)
- setattr(sys, stream_name, StreamWrapper.from_stream(orig_stdout))
- try:
- yield getattr(sys, stream_name)
- finally:
- setattr(sys, stream_name, orig_stdout)
-
-
-def captured_stdout() -> ContextManager[StreamWrapper]:
- """Capture the output of sys.stdout:
-
- with captured_stdout() as stdout:
- print('hello')
- self.assertEqual(stdout.getvalue(), 'hello\n')
-
- Taken from Lib/support/__init__.py in the CPython repo.
- """
- return captured_output("stdout")
-
-
-def captured_stderr() -> ContextManager[StreamWrapper]:
- """
- See captured_stdout().
- """
- return captured_output("stderr")
-
-
-# Simulates an enum
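-# Example (sketch): Colors = enum("RED", "GREEN", BLUE=10) -> Colors.RED == 0, Colors.reverse_mapping[10] == "BLUE"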
-def enum(*sequential: Any, **named: Any) -> Type[Any]:
- enums = dict(zip(sequential, range(len(sequential))), **named)
- reverse = {value: key for key, value in enums.items()}
- enums["reverse_mapping"] = reverse
- return type("Enum", (), enums)
-
-
-def build_netloc(host: str, port: Optional[int]) -> str:
- """
- Build a netloc from a host-port pair
- """
- if port is None:
- return host
- if ":" in host:
- # Only wrap host with square brackets when it is IPv6
- host = f"[{host}]"
- return f"{host}:{port}"
-
-
-def build_url_from_netloc(netloc: str, scheme: str = "https") -> str:
- """
- Build a full URL from a netloc.
- """
- if netloc.count(":") >= 2 and "@" not in netloc and "[" not in netloc:
- # It must be a bare IPv6 address, so wrap it with brackets.
- netloc = f"[{netloc}]"
- return f"{scheme}://{netloc}"
-
-
-def parse_netloc(netloc: str) -> Tuple[str, Optional[int]]:
- """
- Return the host-port pair from a netloc.
- """
- url = build_url_from_netloc(netloc)
- parsed = urllib.parse.urlparse(url)
- return parsed.hostname, parsed.port
-
-
-def split_auth_from_netloc(netloc: str) -> NetlocTuple:
- """
- Parse out and remove the auth information from a netloc.
-
- Returns: (netloc, (username, password)).
- """
- if "@" not in netloc:
- return netloc, (None, None)
-
- # Split from the right because that's how urllib.parse.urlsplit()
- # behaves if more than one @ is present (which can be checked using
- # the password attribute of urlsplit()'s return value).
- auth, netloc = netloc.rsplit("@", 1)
- pw: Optional[str] = None
- if ":" in auth:
- # Split from the left because that's how urllib.parse.urlsplit()
- # behaves if more than one : is present (which again can be checked
- # using the password attribute of the return value)
- user, pw = auth.split(":", 1)
- else:
- user, pw = auth, None
-
- user = urllib.parse.unquote(user)
- if pw is not None:
- pw = urllib.parse.unquote(pw)
-
- return netloc, (user, pw)
-
-
-def redact_netloc(netloc: str) -> str:
- """
- Replace the sensitive data in a netloc with "****", if it exists.
-
- For example:
- - "user:pass@example.com" returns "user:****@example.com"
- - "accesstoken@example.com" returns "****@example.com"
- """
- netloc, (user, password) = split_auth_from_netloc(netloc)
- if user is None:
- return netloc
- if password is None:
- user = "****"
- password = ""
- else:
- user = urllib.parse.quote(user)
- password = ":****"
- return "{user}{password}@{netloc}".format(
- user=user, password=password, netloc=netloc
- )
-
-
-def _transform_url(
- url: str, transform_netloc: Callable[[str], Tuple[Any, ...]]
-) -> Tuple[str, NetlocTuple]:
- """Transform and replace netloc in a url.
-
- transform_netloc is a function taking the netloc and returning a
- tuple. The first element of this tuple is the new netloc. The
- entire tuple is returned.
-
- Returns a tuple containing the transformed url as item 0 and the
- original tuple returned by transform_netloc as item 1.
- """
- purl = urllib.parse.urlsplit(url)
- netloc_tuple = transform_netloc(purl.netloc)
- # stripped url
- url_pieces = (purl.scheme, netloc_tuple[0], purl.path, purl.query, purl.fragment)
- surl = urllib.parse.urlunsplit(url_pieces)
- return surl, cast("NetlocTuple", netloc_tuple)
-
-
-def _get_netloc(netloc: str) -> NetlocTuple:
- return split_auth_from_netloc(netloc)
-
-
-def _redact_netloc(netloc: str) -> Tuple[str]:
- return (redact_netloc(netloc),)
-
-
-def split_auth_netloc_from_url(url: str) -> Tuple[str, str, Tuple[str, str]]:
- """
- Parse a url into separate netloc, auth, and url with no auth.
-
- Returns: (url_without_auth, netloc, (username, password))
- """
- url_without_auth, (netloc, auth) = _transform_url(url, _get_netloc)
- return url_without_auth, netloc, auth
-
-
-def remove_auth_from_url(url: str) -> str:
- """Return a copy of url with 'username:password@' removed."""
- # username/pass params are passed to subversion through flags
- # and are not recognized in the url.
- return _transform_url(url, _get_netloc)[0]
-
-
-def redact_auth_from_url(url: str) -> str:
- """Replace the password in a given url with ****."""
- return _transform_url(url, _redact_netloc)[0]
-
-
-class HiddenText:
- def __init__(self, secret: str, redacted: str) -> None:
- self.secret = secret
- self.redacted = redacted
-
- def __repr__(self) -> str:
- return "".format(str(self))
-
- def __str__(self) -> str:
- return self.redacted
-
- # This is useful for testing.
- def __eq__(self, other: Any) -> bool:
- if type(self) != type(other):
- return False
-
- # The string being used for redaction doesn't also have to match,
- # just the raw, original string.
- return self.secret == other.secret
-
-
-def hide_value(value: str) -> HiddenText:
- return HiddenText(value, redacted="****")
-
-
-def hide_url(url: str) -> HiddenText:
- redacted = redact_auth_from_url(url)
- return HiddenText(url, redacted=redacted)
-
-
-def protect_pip_from_modification_on_windows(modifying_pip: bool) -> None:
- """Protection of pip.exe from modification on Windows
-
- On Windows, any operation modifying pip should be run as:
- python -m pip ...
- """
- pip_names = [
- "pip.exe",
- "pip{}.exe".format(sys.version_info[0]),
- "pip{}.{}.exe".format(*sys.version_info[:2]),
- ]
-
- # See https://github.com/pypa/pip/issues/1299 for more discussion
- should_show_use_python_msg = (
- modifying_pip and WINDOWS and os.path.basename(sys.argv[0]) in pip_names
- )
-
- if should_show_use_python_msg:
- new_command = [sys.executable, "-m", "pip"] + sys.argv[1:]
- raise CommandError(
- "To modify pip, please run the following command:\n{}".format(
- " ".join(new_command)
- )
- )
-
-
-def is_console_interactive() -> bool:
- """Is this console interactive?"""
- return sys.stdin is not None and sys.stdin.isatty()
-
-
-def hash_file(path: str, blocksize: int = 1 << 20) -> Tuple[Any, int]:
- """Return (hash, length) for path using hashlib.sha256()"""
-
- h = hashlib.sha256()
- length = 0
- with open(path, "rb") as f:
- for block in read_chunks(f, size=blocksize):
- length += len(block)
- h.update(block)
- return h, length
-
-
-def is_wheel_installed() -> bool:
- """
- Return whether the wheel package is installed.
- """
- try:
- import wheel # noqa: F401
- except ImportError:
- return False
-
- return True
-
-
-def pairwise(iterable: Iterable[Any]) -> Iterator[Tuple[Any, Any]]:
- """
- Return paired elements.
-
- For example:
- s -> (s0, s1), (s2, s3), (s4, s5), ...
- """
- iterable = iter(iterable)
- return zip_longest(iterable, iterable)
-
-
-def partition(
- pred: Callable[[T], bool],
- iterable: Iterable[T],
-) -> Tuple[Iterable[T], Iterable[T]]:
- """
- Use a predicate to partition entries into false entries and true entries,
- like
-
- partition(is_odd, range(10)) --> 0 2 4 6 8 and 1 3 5 7 9
- """
- t1, t2 = tee(iterable)
- return filterfalse(pred, t1), filter(pred, t2)
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/img.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/img.py
deleted file mode 100644
index 978559237a6273266399514500268fa45f39d89b..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/img.py
+++ /dev/null
@@ -1,641 +0,0 @@
-"""
- pygments.formatters.img
- ~~~~~~~~~~~~~~~~~~~~~~~
-
- Formatter for Pixmap output.
-
- :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import os
-import sys
-
-from pip._vendor.pygments.formatter import Formatter
-from pip._vendor.pygments.util import get_bool_opt, get_int_opt, get_list_opt, \
- get_choice_opt
-
-import subprocess
-
-# Import this carefully
-try:
- from PIL import Image, ImageDraw, ImageFont
- pil_available = True
-except ImportError:
- pil_available = False
-
-try:
- import _winreg
-except ImportError:
- try:
- import winreg as _winreg
- except ImportError:
- _winreg = None
-
-__all__ = ['ImageFormatter', 'GifImageFormatter', 'JpgImageFormatter',
- 'BmpImageFormatter']
-
-
-# For some unknown reason every font calls it something different
-STYLES = {
- 'NORMAL': ['', 'Roman', 'Book', 'Normal', 'Regular', 'Medium'],
- 'ITALIC': ['Oblique', 'Italic'],
- 'BOLD': ['Bold'],
- 'BOLDITALIC': ['Bold Oblique', 'Bold Italic'],
-}
-
-# A sane default for modern systems
-DEFAULT_FONT_NAME_NIX = 'DejaVu Sans Mono'
-DEFAULT_FONT_NAME_WIN = 'Courier New'
-DEFAULT_FONT_NAME_MAC = 'Menlo'
-
-
-class PilNotAvailable(ImportError):
- """When Python imaging library is not available"""
-
-
-class FontNotFound(Exception):
- """When there are no usable fonts specified"""
-
-
-class FontManager:
- """
- Manages a set of fonts: normal, italic, bold, etc...
- """
-
- def __init__(self, font_name, font_size=14):
- self.font_name = font_name
- self.font_size = font_size
- self.fonts = {}
- self.encoding = None
- if sys.platform.startswith('win'):
- if not font_name:
- self.font_name = DEFAULT_FONT_NAME_WIN
- self._create_win()
- elif sys.platform.startswith('darwin'):
- if not font_name:
- self.font_name = DEFAULT_FONT_NAME_MAC
- self._create_mac()
- else:
- if not font_name:
- self.font_name = DEFAULT_FONT_NAME_NIX
- self._create_nix()
-
- def _get_nix_font_path(self, name, style):
- proc = subprocess.Popen(['fc-list', "%s:style=%s" % (name, style), 'file'],
- stdout=subprocess.PIPE, stderr=None)
- stdout, _ = proc.communicate()
- if proc.returncode == 0:
- lines = stdout.splitlines()
- for line in lines:
- if line.startswith(b'Fontconfig warning:'):
- continue
- path = line.decode().strip().strip(':')
- if path:
- return path
- return None
-
- def _create_nix(self):
- for name in STYLES['NORMAL']:
- path = self._get_nix_font_path(self.font_name, name)
- if path is not None:
- self.fonts['NORMAL'] = ImageFont.truetype(path, self.font_size)
- break
- else:
- raise FontNotFound('No usable fonts named: "%s"' %
- self.font_name)
- for style in ('ITALIC', 'BOLD', 'BOLDITALIC'):
- for stylename in STYLES[style]:
- path = self._get_nix_font_path(self.font_name, stylename)
- if path is not None:
- self.fonts[style] = ImageFont.truetype(path, self.font_size)
- break
- else:
- if style == 'BOLDITALIC':
- self.fonts[style] = self.fonts['BOLD']
- else:
- self.fonts[style] = self.fonts['NORMAL']
-
- def _get_mac_font_path(self, font_map, name, style):
- return font_map.get((name + ' ' + style).strip().lower())
-
- def _create_mac(self):
- font_map = {}
- for font_dir in (os.path.join(os.getenv("HOME"), 'Library/Fonts/'),
- '/Library/Fonts/', '/System/Library/Fonts/'):
- font_map.update(
- (os.path.splitext(f)[0].lower(), os.path.join(font_dir, f))
- for f in os.listdir(font_dir)
- if f.lower().endswith(('ttf', 'ttc')))
-
- for name in STYLES['NORMAL']:
- path = self._get_mac_font_path(font_map, self.font_name, name)
- if path is not None:
- self.fonts['NORMAL'] = ImageFont.truetype(path, self.font_size)
- break
- else:
- raise FontNotFound('No usable fonts named: "%s"' %
- self.font_name)
- for style in ('ITALIC', 'BOLD', 'BOLDITALIC'):
- for stylename in STYLES[style]:
- path = self._get_mac_font_path(font_map, self.font_name, stylename)
- if path is not None:
- self.fonts[style] = ImageFont.truetype(path, self.font_size)
- break
- else:
- if style == 'BOLDITALIC':
- self.fonts[style] = self.fonts['BOLD']
- else:
- self.fonts[style] = self.fonts['NORMAL']
-
- def _lookup_win(self, key, basename, styles, fail=False):
- for suffix in ('', ' (TrueType)'):
- for style in styles:
- try:
- valname = '%s%s%s' % (basename, style and ' '+style, suffix)
- val, _ = _winreg.QueryValueEx(key, valname)
- return val
- except OSError:
- continue
- else:
- if fail:
- raise FontNotFound('Font %s (%s) not found in registry' %
- (basename, styles[0]))
- return None
-
- def _create_win(self):
- lookuperror = None
- keynames = [ (_winreg.HKEY_CURRENT_USER, r'Software\Microsoft\Windows NT\CurrentVersion\Fonts'),
- (_winreg.HKEY_CURRENT_USER, r'Software\Microsoft\Windows\CurrentVersion\Fonts'),
- (_winreg.HKEY_LOCAL_MACHINE, r'Software\Microsoft\Windows NT\CurrentVersion\Fonts'),
- (_winreg.HKEY_LOCAL_MACHINE, r'Software\Microsoft\Windows\CurrentVersion\Fonts') ]
- for keyname in keynames:
- try:
- key = _winreg.OpenKey(*keyname)
- try:
- path = self._lookup_win(key, self.font_name, STYLES['NORMAL'], True)
- self.fonts['NORMAL'] = ImageFont.truetype(path, self.font_size)
- for style in ('ITALIC', 'BOLD', 'BOLDITALIC'):
- path = self._lookup_win(key, self.font_name, STYLES[style])
- if path:
- self.fonts[style] = ImageFont.truetype(path, self.font_size)
- else:
- if style == 'BOLDITALIC':
- self.fonts[style] = self.fonts['BOLD']
- else:
- self.fonts[style] = self.fonts['NORMAL']
- return
- except FontNotFound as err:
- lookuperror = err
- finally:
- _winreg.CloseKey(key)
- except OSError:
- pass
- else:
- # If we get here, we checked all registry keys and had no luck
- # We can be in one of two situations now:
- # * All key lookups failed. In this case lookuperror is None and we
- # will raise a generic error
- # * At least one lookup failed with a FontNotFound error. In this
- # case, we will raise that as a more specific error
- if lookuperror:
- raise lookuperror
- raise FontNotFound('Can\'t open Windows font registry key')
-
- def get_char_size(self):
- """
- Get the character size.
- """
- return self.fonts['NORMAL'].getsize('M')
-
- def get_text_size(self, text):
- """
- Get the text size (width, height).
- """
- return self.fonts['NORMAL'].getsize(text)
-
- def get_font(self, bold, oblique):
- """
- Get the font based on bold and italic flags.
- """
- if bold and oblique:
- return self.fonts['BOLDITALIC']
- elif bold:
- return self.fonts['BOLD']
- elif oblique:
- return self.fonts['ITALIC']
- else:
- return self.fonts['NORMAL']
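-
- # Illustrative usage (not part of the upstream module); assumes Pillow is
- # available and that the named font is installed on the system:
- #
- # fm = FontManager('DejaVu Sans Mono', font_size=14)
- # char_w, char_h = fm.get_char_size()
- # bold_italic = fm.get_font(bold=True, oblique=True)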
-
-
-class ImageFormatter(Formatter):
- """
- Create a PNG image from source code. This uses the Python Imaging Library to
- generate a pixmap from the source code.
-
- .. versionadded:: 0.10
-
- Additional options accepted:
-
- `image_format`
- An image format to output to that is recognised by PIL, these include:
-
- * "PNG" (default)
- * "JPEG"
- * "BMP"
- * "GIF"
-
- `line_pad`
- The extra spacing (in pixels) between each line of text.
-
- Default: 2
-
- `font_name`
- The font name to be used as the base font from which others, such as
- bold and italic fonts will be generated. This really should be a
- monospace font to look sane.
-
- Default: "Courier New" on Windows, "Menlo" on Mac OS, and
- "DejaVu Sans Mono" on \\*nix
-
- `font_size`
- The font size in points to be used.
-
- Default: 14
-
- `image_pad`
- The padding, in pixels to be used at each edge of the resulting image.
-
- Default: 10
-
- `line_numbers`
- Whether line numbers should be shown: True/False
-
- Default: True
-
- `line_number_start`
- The line number of the first line.
-
- Default: 1
-
- `line_number_step`
- The step used when printing line numbers.
-
- Default: 1
-
- `line_number_bg`
- The background colour (in "#123456" format) of the line number bar, or
- None to use the style background color.
-
- Default: "#eed"
-
- `line_number_fg`
- The text color of the line numbers (in "#123456"-like format).
-
- Default: "#886"
-
- `line_number_chars`
- The number of columns of line numbers allowable in the line number
- margin.
-
- Default: 2
-
- `line_number_bold`
- Whether line numbers will be bold: True/False
-
- Default: False
-
- `line_number_italic`
- Whether line numbers will be italicized: True/False
-
- Default: False
-
- `line_number_separator`
- Whether a line will be drawn between the line number area and the
- source code area: True/False
-
- Default: True
-
- `line_number_pad`
- The horizontal padding (in pixels) between the line number margin, and
- the source code area.
-
- Default: 6
-
- `hl_lines`
- Specify a list of lines to be highlighted.
-
- .. versionadded:: 1.2
-
- Default: empty list
-
- `hl_color`
- Specify the color for highlighting lines.
-
- .. versionadded:: 1.2
-
- Default: highlight color of the selected style
- """
-
- # Required by the pygments mapper
- name = 'img'
- aliases = ['img', 'IMG', 'png']
- filenames = ['*.png']
-
- unicodeoutput = False
-
- default_image_format = 'png'
-
- def __init__(self, **options):
- """
- See the class docstring for explanation of options.
- """
- if not pil_available:
- raise PilNotAvailable(
- 'Python Imaging Library is required for this formatter')
- Formatter.__init__(self, **options)
- self.encoding = 'latin1' # let pygments.format() do the right thing
- # Read the style
- self.styles = dict(self.style)
- if self.style.background_color is None:
- self.background_color = '#fff'
- else:
- self.background_color = self.style.background_color
- # Image options
- self.image_format = get_choice_opt(
- options, 'image_format', ['png', 'jpeg', 'gif', 'bmp'],
- self.default_image_format, normcase=True)
- self.image_pad = get_int_opt(options, 'image_pad', 10)
- self.line_pad = get_int_opt(options, 'line_pad', 2)
- # The fonts
- fontsize = get_int_opt(options, 'font_size', 14)
- self.fonts = FontManager(options.get('font_name', ''), fontsize)
- self.fontw, self.fonth = self.fonts.get_char_size()
- # Line number options
- self.line_number_fg = options.get('line_number_fg', '#886')
- self.line_number_bg = options.get('line_number_bg', '#eed')
- self.line_number_chars = get_int_opt(options,
- 'line_number_chars', 2)
- self.line_number_bold = get_bool_opt(options,
- 'line_number_bold', False)
- self.line_number_italic = get_bool_opt(options,
- 'line_number_italic', False)
- self.line_number_pad = get_int_opt(options, 'line_number_pad', 6)
- self.line_numbers = get_bool_opt(options, 'line_numbers', True)
- self.line_number_separator = get_bool_opt(options,
- 'line_number_separator', True)
- self.line_number_step = get_int_opt(options, 'line_number_step', 1)
- self.line_number_start = get_int_opt(options, 'line_number_start', 1)
- if self.line_numbers:
- self.line_number_width = (self.fontw * self.line_number_chars +
- self.line_number_pad * 2)
- else:
- self.line_number_width = 0
- self.hl_lines = []
- hl_lines_str = get_list_opt(options, 'hl_lines', [])
- for line in hl_lines_str:
- try:
- self.hl_lines.append(int(line))
- except ValueError:
- pass
- self.hl_color = options.get('hl_color',
- self.style.highlight_color) or '#f90'
- self.drawables = []
-
- def get_style_defs(self, arg=''):
- raise NotImplementedError('The -S option is meaningless for the image '
- 'formatter. Use -O style= instead.')
-
- def _get_line_height(self):
- """
- Get the height of a line.
- """
- return self.fonth + self.line_pad
-
- def _get_line_y(self, lineno):
- """
- Get the Y coordinate of a line number.
- """
- return lineno * self._get_line_height() + self.image_pad
-
- def _get_char_width(self):
- """
- Get the width of a character.
- """
- return self.fontw
-
- def _get_char_x(self, linelength):
- """
- Get the X coordinate of a character position.
- """
- return linelength + self.image_pad + self.line_number_width
-
- def _get_text_pos(self, linelength, lineno):
- """
- Get the actual position for a character and line position.
- """
- return self._get_char_x(linelength), self._get_line_y(lineno)
-
- def _get_linenumber_pos(self, lineno):
- """
- Get the actual position for the start of a line number.
- """
- return (self.image_pad, self._get_line_y(lineno))
-
- def _get_text_color(self, style):
- """
- Get the correct color for the token from the style.
- """
- if style['color'] is not None:
- fill = '#' + style['color']
- else:
- fill = '#000'
- return fill
-
- def _get_text_bg_color(self, style):
- """
- Get the correct background color for the token from the style.
- """
- if style['bgcolor'] is not None:
- bg_color = '#' + style['bgcolor']
- else:
- bg_color = None
- return bg_color
-
- def _get_style_font(self, style):
- """
- Get the correct font for the style.
- """
- return self.fonts.get_font(style['bold'], style['italic'])
-
- def _get_image_size(self, maxlinelength, maxlineno):
- """
- Get the required image size.
- """
- return (self._get_char_x(maxlinelength) + self.image_pad,
- self._get_line_y(maxlineno + 0) + self.image_pad)
-
- def _draw_linenumber(self, posno, lineno):
- """
- Remember a line number drawable to paint later.
- """
- self._draw_text(
- self._get_linenumber_pos(posno),
- str(lineno).rjust(self.line_number_chars),
- font=self.fonts.get_font(self.line_number_bold,
- self.line_number_italic),
- text_fg=self.line_number_fg,
- text_bg=None,
- )
-
- def _draw_text(self, pos, text, font, text_fg, text_bg):
- """
- Remember a single drawable tuple to paint later.
- """
- self.drawables.append((pos, text, font, text_fg, text_bg))
-
- def _create_drawables(self, tokensource):
- """
- Create drawables for the token content.
- """
- lineno = charno = maxcharno = 0
- maxlinelength = linelength = 0
- for ttype, value in tokensource:
- while ttype not in self.styles:
- ttype = ttype.parent
- style = self.styles[ttype]
- # TODO: make sure tab expansion happens earlier in the chain. It
- # really ought to be done on the input, as to do it right here is
- # quite complex.
- value = value.expandtabs(4)
- lines = value.splitlines(True)
- # print lines
- for i, line in enumerate(lines):
- temp = line.rstrip('\n')
- if temp:
- self._draw_text(
- self._get_text_pos(linelength, lineno),
- temp,
- font = self._get_style_font(style),
- text_fg = self._get_text_color(style),
- text_bg = self._get_text_bg_color(style),
- )
- temp_width, temp_height = self.fonts.get_text_size(temp)
- linelength += temp_width
- maxlinelength = max(maxlinelength, linelength)
- charno += len(temp)
- maxcharno = max(maxcharno, charno)
- if line.endswith('\n'):
- # add a line for each extra line in the value
- linelength = 0
- charno = 0
- lineno += 1
- self.maxlinelength = maxlinelength
- self.maxcharno = maxcharno
- self.maxlineno = lineno
-
- def _draw_line_numbers(self):
- """
- Create drawables for the line numbers.
- """
- if not self.line_numbers:
- return
- for p in range(self.maxlineno):
- n = p + self.line_number_start
- if (n % self.line_number_step) == 0:
- self._draw_linenumber(p, n)
-
- def _paint_line_number_bg(self, im):
- """
- Paint the line number background on the image.
- """
- if not self.line_numbers:
- return
- if self.line_number_fg is None:
- return
- draw = ImageDraw.Draw(im)
- recth = im.size[-1]
- rectw = self.image_pad + self.line_number_width - self.line_number_pad
- draw.rectangle([(0, 0), (rectw, recth)],
- fill=self.line_number_bg)
- if self.line_number_separator:
- draw.line([(rectw, 0), (rectw, recth)], fill=self.line_number_fg)
- del draw
-
- def format(self, tokensource, outfile):
- """
- Format ``tokensource``, an iterable of ``(tokentype, tokenstring)``
- tuples and write it into ``outfile``.
-
- This implementation calculates where it should draw each token on the
- pixmap, then calculates the required pixmap size and draws the items.
- """
- self._create_drawables(tokensource)
- self._draw_line_numbers()
- im = Image.new(
- 'RGB',
- self._get_image_size(self.maxlinelength, self.maxlineno),
- self.background_color
- )
- self._paint_line_number_bg(im)
- draw = ImageDraw.Draw(im)
- # Highlight
- if self.hl_lines:
- x = self.image_pad + self.line_number_width - self.line_number_pad + 1
- recth = self._get_line_height()
- rectw = im.size[0] - x
- for linenumber in self.hl_lines:
- y = self._get_line_y(linenumber - 1)
- draw.rectangle([(x, y), (x + rectw, y + recth)],
- fill=self.hl_color)
- for pos, value, font, text_fg, text_bg in self.drawables:
- if text_bg:
- text_size = draw.textsize(text=value, font=font)
- draw.rectangle([pos[0], pos[1], pos[0] + text_size[0], pos[1] + text_size[1]], fill=text_bg)
- draw.text(pos, value, font=font, fill=text_fg)
- im.save(outfile, self.image_format.upper())
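-
- # Illustrative usage (not part of the upstream module); assumes Pillow is
- # installed, and "out.png" is just a placeholder output path:
- #
- # from pip._vendor.pygments import highlight
- # from pip._vendor.pygments.lexers import PythonLexer
- # with open("out.png", "wb") as f:
- #     highlight('print("hello")', PythonLexer(), ImageFormatter(line_numbers=True), f)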
-
-
-# Add one formatter per format, so that the "-f gif" option gives the correct result
-# when used in pygmentize.
-
-class GifImageFormatter(ImageFormatter):
- """
- Create a GIF image from source code. This uses the Python Imaging Library to
- generate a pixmap from the source code.
-
- .. versionadded:: 1.0
- """
-
- name = 'img_gif'
- aliases = ['gif']
- filenames = ['*.gif']
- default_image_format = 'gif'
-
-
-class JpgImageFormatter(ImageFormatter):
- """
- Create a JPEG image from source code. This uses the Python Imaging Library to
- generate a pixmap from the source code.
-
- .. versionadded:: 1.0
- """
-
- name = 'img_jpg'
- aliases = ['jpg', 'jpeg']
- filenames = ['*.jpg']
- default_image_format = 'jpeg'
-
-
-class BmpImageFormatter(ImageFormatter):
- """
- Create a bitmap image from source code. This uses the Python Imaging Library to
- generate a pixmap from the source code.
-
- .. versionadded:: 1.0
- """
-
- name = 'img_bmp'
- aliases = ['bmp', 'bitmap']
- filenames = ['*.bmp']
- default_image_format = 'bmp'
diff --git a/spaces/ali-ghamdan/gfp-Gans/tests/test_stylegan2_clean_arch.py b/spaces/ali-ghamdan/gfp-Gans/tests/test_stylegan2_clean_arch.py
deleted file mode 100644
index 78bb920e73ce28cfec9ea89a4339cc5b87981b47..0000000000000000000000000000000000000000
--- a/spaces/ali-ghamdan/gfp-Gans/tests/test_stylegan2_clean_arch.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import torch
-
-from gfpgan.archs.stylegan2_clean_arch import StyleGAN2GeneratorClean
-
-
-def test_stylegan2generatorclean():
- """Test arch: StyleGAN2GeneratorClean."""
-
- # model init and forward (gpu)
- if torch.cuda.is_available():
- net = StyleGAN2GeneratorClean(
- out_size=32, num_style_feat=512, num_mlp=8, channel_multiplier=1, narrow=0.5).cuda().eval()
- style = torch.rand((1, 512), dtype=torch.float32).cuda()
- output = net([style], input_is_latent=False)
- assert output[0].shape == (1, 3, 32, 32)
- assert output[1] is None
-
- # -------------------- with return_latents ----------------------- #
- output = net([style], input_is_latent=True, return_latents=True)
- assert output[0].shape == (1, 3, 32, 32)
- assert len(output[1]) == 1
- # check latent
- assert output[1][0].shape == (8, 512)
-
- # -------------------- with randomize_noise = False ----------------------- #
- output = net([style], randomize_noise=False)
- assert output[0].shape == (1, 3, 32, 32)
- assert output[1] is None
-
- # -------------------- with truncation = 0.5 and mixing----------------------- #
- output = net([style, style], truncation=0.5, truncation_latent=style)
- assert output[0].shape == (1, 3, 32, 32)
- assert output[1] is None
-
- # ------------------ test make_noise ----------------------- #
- out = net.make_noise()
- assert len(out) == 7
- assert out[0].shape == (1, 1, 4, 4)
- assert out[1].shape == (1, 1, 8, 8)
- assert out[2].shape == (1, 1, 8, 8)
- assert out[3].shape == (1, 1, 16, 16)
- assert out[4].shape == (1, 1, 16, 16)
- assert out[5].shape == (1, 1, 32, 32)
- assert out[6].shape == (1, 1, 32, 32)
-
- # ------------------ test get_latent ----------------------- #
- out = net.get_latent(style)
- assert out.shape == (1, 512)
-
- # ------------------ test mean_latent ----------------------- #
- out = net.mean_latent(2)
- assert out.shape == (1, 512)
diff --git a/spaces/allknowingroger/Image-Models-Test149/app.py b/spaces/allknowingroger/Image-Models-Test149/app.py
deleted file mode 100644
index 77522ffc536c474245fe273cc4de9986fc306122..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test149/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import gradio as gr
-# import os
-# import sys
-# from pathlib import Path
-import time
-
-models =[
- "DavideTHU/lora-trained-xl",
- "Kalyani03/mountain-valley-with-greenary-and-a-parrot",
- "DSP-31/volley-ball-rgv",
- "datascientistjohn/sdxl-self",
- "harikeshava1223/art-khk",
- "digiplay/CleanLinearMix_nsfw",
- "smjain/kishor",
- "Yntec/BeenYou",
- "mussso/lora-trained-xl",
-]
-
-
-model_functions = {}
-model_idx = 1
-for model_path in models:
- try:
- model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False)
- except Exception as error:
- def the_fn(txt):
- return None
- model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"])
- model_idx+=1
-
-
-def send_it_idx(idx):
- def send_it_fn(prompt):
- output = (model_functions.get(idx) or model_functions.get(1))(prompt)
- return output
- return send_it_fn
-
-def get_prompts(prompt_text):
- return prompt_text
-
-def clear_it(val):
- if int(val) != 0:
- val = 0
- else:
- val = 0
- pass
- return val
-
-def all_task_end(cnt,t_stamp):
- to = t_stamp + 60
- et = time.time()
- if et > to and t_stamp != 0:
- d = gr.update(value=0)
- tog = gr.update(value=1)
- #print(f'to: {to} et: {et}')
- else:
- if cnt != 0:
- d = gr.update(value=et)
- else:
- d = gr.update(value=0)
- tog = gr.update(value=0)
- #print (f'passing: to: {to} et: {et}')
- pass
- return d, tog
-
-def all_task_start():
- print("\n\n\n\n\n\n\n")
- t = time.gmtime()
- t_stamp = time.time()
- current_time = time.strftime("%H:%M:%S", t)
- return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0)
-
-def clear_fn():
- nn = len(models)
- return tuple([None, *[None for _ in range(nn)]])
-
-
-
-with gr.Blocks(title="SD Models") as my_interface:
- with gr.Column(scale=12):
- # with gr.Row():
- # gr.Markdown("""- Primary prompt: what you want to draw (English words, e.g. a cat; adding English commas improves results; click the Improve button to refine it)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""")
- with gr.Row():
- with gr.Row(scale=6):
- primary_prompt=gr.Textbox(label="Prompt", value="")
- # real_prompt=gr.Textbox(label="Real prompt")
- with gr.Row(scale=6):
- # improve_prompts_btn=gr.Button("Improve")
- with gr.Row():
- run=gr.Button("Run",variant="primary")
- clear_btn=gr.Button("Clear")
- with gr.Row():
- sd_outputs = {}
- model_idx = 1
- for model_path in models:
- with gr.Column(scale=3, min_width=320):
- with gr.Box():
- sd_outputs[model_idx] = gr.Image(label=model_path)
- pass
- model_idx += 1
- pass
- pass
-
- with gr.Row(visible=False):
- start_box=gr.Number(interactive=False)
- end_box=gr.Number(interactive=False)
- tog_box=gr.Textbox(value=0,interactive=False)
-
- start_box.change(
- all_task_end,
- [start_box, end_box],
- [start_box, tog_box],
- every=1,
- show_progress=False)
-
- primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box])
- run.click(all_task_start, None, [start_box, end_box, tog_box])
- runs_dict = {}
- model_idx = 1
- for model_path in models:
- runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]])
- model_idx += 1
- pass
- pass
-
- # improve_prompts_btn_clicked=improve_prompts_btn.click(
- # get_prompts,
- # inputs=[primary_prompt],
- # outputs=[primary_prompt],
- # cancels=list(runs_dict.values()))
- clear_btn.click(
- clear_fn,
- None,
- [primary_prompt, *list(sd_outputs.values())],
- cancels=[*list(runs_dict.values())])
- tog_box.change(
- clear_it,
- tog_box,
- tog_box,
- cancels=[*list(runs_dict.values())])
-
-my_interface.queue(concurrency_count=600, status_update_rate=1)
-my_interface.launch(inline=True, show_api=False)
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test73/README.md b/spaces/allknowingroger/Image-Models-Test73/README.md
deleted file mode 100644
index 4eea031a1072cb9ea2f1b0d15d427500eb6dd65e..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test73/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test72
----
-
-
\ No newline at end of file
diff --git a/spaces/almakedon/faster-whisper-webui/src/modelCache.py b/spaces/almakedon/faster-whisper-webui/src/modelCache.py
deleted file mode 100644
index 680a4b386fc37e17ed2353e72d04a646ece2c4a6..0000000000000000000000000000000000000000
--- a/spaces/almakedon/faster-whisper-webui/src/modelCache.py
+++ /dev/null
@@ -1,17 +0,0 @@
-class ModelCache:
- def __init__(self):
- self._cache = dict()
-
- def get(self, model_key: str, model_factory):
- result = self._cache.get(model_key)
-
- if result is None:
- result = model_factory()
- self._cache[model_key] = result
- return result
-
- def clear(self):
- self._cache.clear()
-
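-# Illustrative usage (not part of the original file); load_model here is a
-# hypothetical factory, only invoked on a cache miss:
-#
-# cache = ModelCache()
-# model = cache.get("whisper-small", lambda: load_model("small"))
-# same = cache.get("whisper-small", lambda: load_model("small"))  # cached instance
-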
-# A global cache of models. This is mainly used by the daemon processes to avoid loading the same model multiple times.
-GLOBAL_MODEL_CACHE = ModelCache()
\ No newline at end of file
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/pa_win_wasapi.c b/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/pa_win_wasapi.c
deleted file mode 100644
index c76f302e68c8b35c5f3fa598a438e4c5a28497f1..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/wasapi/pa_win_wasapi.c
+++ /dev/null
@@ -1,6534 +0,0 @@
-/*
- * Portable Audio I/O Library WASAPI implementation
- * Copyright (c) 2006-2010 David Viens
- * Copyright (c) 2010-2019 Dmitry Kostjuchenko
- *
- * Based on the Open Source API proposed by Ross Bencina
- * Copyright (c) 1999-2019 Ross Bencina, Phil Burk
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-/** @file
- @ingroup hostapi_src
- @brief WASAPI implementation of support for a host API.
- @note pa_wasapi currently requires minimum VC 2005, and the latest Vista SDK
-*/
-
-#include <windows.h>
-#include <stdio.h>
-#include <process.h>
-#include <assert.h>
-
-// Max device count (if defined to a non-zero value) fixes a constant maximum device count in the device list,
-// which enables the PaWasapi_UpdateDeviceList() API and makes it possible to update the WASAPI list dynamically
-#ifndef PA_WASAPI_MAX_CONST_DEVICE_COUNT
- #define PA_WASAPI_MAX_CONST_DEVICE_COUNT 0 // Force basic behavior by defining 0 if not defined by user
-#endif
-
-// Fall back from the Event to the Polling method if latency is higher than 21.33 ms, as this allows using
-// 100% of the CPU inside PA's callback.
-// Note: Some USB DAC drivers are buggy when Polling method is forced in Exclusive mode, audio output becomes
-// unstable with a lot of interruptions, therefore this define is optional. The default behavior is to
-// not change the Event mode to Polling and use the mode which user provided.
-//#define PA_WASAPI_FORCE_POLL_IF_LARGE_BUFFER
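-
-// Illustrative note (not part of the upstream file): both switches above are normally set from the
-// build system rather than by editing this source, e.g.
-//   cc -DPA_WASAPI_MAX_CONST_DEVICE_COUNT=32 -DPA_WASAPI_FORCE_POLL_IF_LARGE_BUFFER ...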
-
-//! Poll mode time slots logging.
-//#define PA_WASAPI_LOG_TIME_SLOTS
-
-// WinRT
-#if defined(WINAPI_FAMILY) && (WINAPI_FAMILY == WINAPI_FAMILY_APP)
- #define PA_WINRT
- #define INITGUID
-#endif
-
-// WASAPI
-// using adjustments for MinGW build from @mgeier/MXE
-// https://github.com/mxe/mxe/commit/f4bbc45682f021948bdaefd9fd476e2a04c4740f
-#include // must be before other Wasapi headers
-#if defined(_MSC_VER) && (_MSC_VER >= 1400) || defined(__MINGW64_VERSION_MAJOR)
- #include <avrt.h>
- #define COBJMACROS
- #include <audioclient.h>
- #include <endpointvolume.h>
- #define INITGUID // Avoid additional linkage of static libs, excessive code will be optimized out by the compiler
-#ifndef _MSC_VER
- #include <functiondiscoverykeys_devpkey.h>
-#endif
- #include <mmdeviceapi.h>
- #include <functiondiscoverykeys.h>
- #include <devicetopology.h> // Used to get IKsJackDescription interface
- #undef INITGUID
-// Visual Studio 2010 does not support the inline keyword
-#if (_MSC_VER <= 1600)
- #define inline _inline
-#endif
-#endif
-#ifndef __MWERKS__
- #include <malloc.h>
- #include <memory.h>
-#endif
-#ifndef PA_WINRT
- #include
-#endif
-
-#include "pa_util.h"
-#include "pa_allocation.h"
-#include "pa_hostapi.h"
-#include "pa_stream.h"
-#include "pa_cpuload.h"
-#include "pa_process.h"
-#include "pa_win_wasapi.h"
-#include "pa_debugprint.h"
-#include "pa_ringbuffer.h"
-#include "pa_win_coinitialize.h"
-
-#if !defined(NTDDI_VERSION) || (defined(__GNUC__) && (__GNUC__ <= 6) && !defined(__MINGW64__))
-
- #undef WINVER
- #undef _WIN32_WINNT
- #define WINVER 0x0600 // VISTA
- #define _WIN32_WINNT WINVER
-
- #ifndef WINAPI
- #define WINAPI __stdcall
- #endif
-
- #ifndef __unaligned
- #define __unaligned
- #endif
-
- #ifndef __C89_NAMELESS
- #define __C89_NAMELESS
- #endif
-
- #ifndef _AVRT_ //<< fix MinGW dummy compile by defining missing type: AVRT_PRIORITY
- typedef enum _AVRT_PRIORITY
- {
- AVRT_PRIORITY_LOW = -1,
- AVRT_PRIORITY_NORMAL,
- AVRT_PRIORITY_HIGH,
- AVRT_PRIORITY_CRITICAL
- } AVRT_PRIORITY, *PAVRT_PRIORITY;
- #endif
-
- #include // << for IID/CLSID
- #include
- #include
-
- #ifndef __LPCGUID_DEFINED__
- #define __LPCGUID_DEFINED__
- typedef const GUID *LPCGUID;
- #endif
- typedef GUID IID;
- typedef GUID CLSID;
-
- #ifndef PROPERTYKEY_DEFINED
- #define PROPERTYKEY_DEFINED
- typedef struct _tagpropertykey
- {
- GUID fmtid;
- DWORD pid;
- } PROPERTYKEY;
- #endif
-
- #ifdef __midl_proxy
- #define __MIDL_CONST
- #else
- #define __MIDL_CONST const
- #endif
-
- #ifdef WIN64
- #include
- #define FASTCALL
- #include
- #include
- #else
- typedef struct _BYTE_BLOB
- {
- unsigned long clSize;
- unsigned char abData[ 1 ];
- } BYTE_BLOB;
- typedef /* [unique] */ __RPC_unique_pointer BYTE_BLOB *UP_BYTE_BLOB;
- typedef LONGLONG REFERENCE_TIME;
- #define NONAMELESSUNION
- #endif
-
- #ifndef NT_SUCCESS
- typedef LONG NTSTATUS;
- #endif
-
- #ifndef WAVE_FORMAT_IEEE_FLOAT
- #define WAVE_FORMAT_IEEE_FLOAT 0x0003 // 32-bit floating-point
- #endif
-
- #ifndef __MINGW_EXTENSION
- #if defined(__GNUC__) || defined(__GNUG__)
- #define __MINGW_EXTENSION __extension__
- #else
- #define __MINGW_EXTENSION
- #endif
- #endif
-
- #include
- #include