diff --git a/spaces/101-5/Bing-New/README.md b/spaces/101-5/Bing-New/README.md
deleted file mode 100644
index 97c78293ada614367db98a7b0b0f06736eda7024..0000000000000000000000000000000000000000
--- a/spaces/101-5/Bing-New/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Bing New
-emoji: ⚡
-colorFrom: red
-colorTo: indigo
-sdk: docker
-pinned: false
-license: mit
-app_port: 8080
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/101-5/gpt4free/g4f/Provider/Providers/ChatgptAi.py b/spaces/101-5/gpt4free/g4f/Provider/Providers/ChatgptAi.py
deleted file mode 100644
index 00d4cf6f6bfb6435de9978900829662b26f12047..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/g4f/Provider/Providers/ChatgptAi.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import os
-import requests, re
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://chatgpt.ai/gpt-4/'
-model = ['gpt-4']
-supports_stream = False
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- chat = ''
- for message in messages:
- chat += '%s: %s\n' % (message['role'], message['content'])
- chat += 'assistant: '
-
- response = requests.get('https://chatgpt.ai/gpt-4/')
-
- nonce, post_id, _, bot_id = re.findall(r'data-nonce="(.*)"\n data-post-id="(.*)"\n data-url="(.*)"\n data-bot-id="(.*)"\n data-width', response.text)[0]
-
- headers = {
- 'authority': 'chatgpt.ai',
- 'accept': '*/*',
- 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
- 'cache-control': 'no-cache',
- 'origin': 'https://chatgpt.ai',
- 'pragma': 'no-cache',
- 'referer': 'https://chatgpt.ai/gpt-4/',
- 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-fetch-dest': 'empty',
- 'sec-fetch-mode': 'cors',
- 'sec-fetch-site': 'same-origin',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
- }
- data = {
- '_wpnonce': nonce,
- 'post_id': post_id,
- 'url': 'https://chatgpt.ai/gpt-4',
- 'action': 'wpaicg_chat_shortcode_message',
- 'message': chat,
- 'bot_id': bot_id
- }
-
- response = requests.post('https://chatgpt.ai/wp-admin/admin-ajax.php',
- headers=headers, data=data)
-
- yield (response.json()['data'])
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
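Aside from the request logic, the deleted provider's final `params` line is a compact introspection trick: it rebuilds a human-readable signature from the function's own annotations. A standalone sketch of the same pattern (the function body is stubbed out here):

```python
from typing import get_type_hints

def _create_completion(model: str, messages: list, stream: bool, **kwargs):
    """Stub with the same signature as the deleted provider's function."""
    yield ""

# Positional parameter names in declaration order (co_argcount excludes
# **kwargs), then each parameter's annotated type.
argcount = _create_completion.__code__.co_argcount
names = _create_completion.__code__.co_varnames[:argcount]
hints = get_type_hints(_create_completion)
params = "(%s)" % ", ".join(f"{n}: {hints[n].__name__}" for n in names)
print(params)  # (model: str, messages: list, stream: bool)
```

This is why every g4f provider advertises its supported keyword types in a uniform string without hand-writing it.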
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack High Qualitya Movie Download.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack High Qualitya Movie Download.md
deleted file mode 100644
index 629c72aa9f0afc7e6c5760e860369cdfa3ebc330..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack High Qualitya Movie Download.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
Cracka is a 2020 TV movie directed by Dale Resteghini that depicts a present-day white supremacist who is thrust back in time to a world where African Americans rule and white people are enslaved. The movie has been criticized for its violent and provocative portrayal of racial reversal and slavery, and has sparked controversy and backlash among viewers and critics alike.
-DOWNLOAD →→→ https://byltly.com/2uKzKW
If you are curious about this movie and want to watch it online, you might be wondering how to download Cracka legally and safely. In this article, we will show you some of the options available for Cracka movie download and streaming, as well as some of the risks and challenges involved.
-Cracka is not available on any of the major streaming platforms like Netflix, Hulu, Amazon Prime Video, or Disney+. The movie was originally planned to be released on a new streaming service called Vyre Network, but it was later removed due to technical issues and negative feedback.
-As of now, the main official way to watch Cracka online is through Google Play Movies & TV, where you can rent the movie for $3.99 or buy it for $9.99. It is also available on Amazon Prime Video, where renting costs $4.99 and buying costs $9.99. However, these options are only available in certain regions, such as the United States, Canada, Australia, and New Zealand.
-If you are looking for other ways to download Cracka movie online, you might come across some unofficial websites that claim to offer free or cheap downloads of the movie. However, these websites are illegal and risky, as they may contain malware, viruses, or spyware that can harm your device or compromise your personal information. Moreover, downloading or streaming pirated content is a violation of copyright laws and can result in legal consequences.
-If you want to watch Cracka movie safely and legally, we recommend that you use a reputable and licensed streaming service that offers the movie for rent or purchase. You can also use a VPN (virtual private network) service to access geo-restricted content from different regions. A VPN can help you bypass censorship and protect your online privacy and security by encrypting your data and hiding your IP address.
-However, before you watch Cracka movie online, you should be aware that this movie is not suitable for everyone. The movie contains graphic scenes of violence, torture, rape, and racism that can be disturbing and offensive to some viewers. The movie also has a low rating of 3.7 out of 10 on IMDb and a score of just 13 out of 100 on TMDb, indicating that most people who watched it did not enjoy it or appreciate its message.
-Therefore, if you decide to watch Cracka movie online, you should do so at your own discretion and with caution. You should also be prepared for the possibility of being disappointed or disgusted by the movie's content and quality.
If you are looking for a fun and exciting MMORPG that combines Greek mythology, epic battles, and diverse gameplay, then you might want to check out Godswar Online. This game offers a lot of features and events that will keep you entertained and challenged. One of these events is the godswar auto race, which is a fast-paced and competitive race that rewards you with valuable items and reputation. In this article, we will give you a brief overview of Godswar Online, explain what godswar auto race is and how to participate in it, and show you how to use a godswar auto race hack to gain an edge over your opponents.
-DOWNLOAD — https://byltly.com/2uKyUW
Godswar Online is a free-to-play MMORPG that was released in 2009 by IGG. The game is set in ancient Greece, where you can choose to join either Athens or Sparta as your faction. You can also choose from four classes: warrior, champion, mage, or priest. Each class has its own skills, strengths, and weaknesses, and you can customize your character with various equipment, mounts, pets, and titles.
-The game features a rich and immersive world that is based on Greek mythology. You can explore different regions, such as Olympus, Crete, Troy, and Athens, and encounter various gods, heroes, monsters, and NPCs. You can also interact with other players through chat, trade, guilds, parties, and alliances. The game also offers a variety of quests, dungeons, bosses, PvP modes, and events that will challenge your skills and strategy.
-The four classes in Godswar Online are warrior, champion, mage, and priest. Each class has its own role and function in the game. Warriors are melee fighters that can deal high damage and tank enemies. Champions are also melee fighters that can deal high damage and stun enemies. Mages are ranged spellcasters that can deal high damage and control enemies. Priests are healers that can heal allies and buff them.
-The game offers a lot of activities and events that you can participate in to earn rewards and have fun. Some of these activities are:
-Godswar auto race is one of the events that you can join in Godswar Online. It is a race that involves running from one point to another in a map, while avoiding obstacles and enemies. The race is held every day at 10:00, 14:00, and 18:00 server time. You can join the race by talking to the NPC Hermes in Athens or Sparta.
-The auto race event lasts for 10 minutes, and you can run as many times as you want within that time. The more times you run, the more points you earn. The points can be exchanged for various rewards, such as gold, silver, bronze medals, reputation, experience, and items. The items include mounts, pets, equipment, gems, potions, and more. You can also get a chance to win a lucky draw prize, such as a rare mount or pet.
-To participate in the auto race, you need to meet some requirements and follow some rules. The requirements are:
-The rules are:
-godswar auto race tips and tricks
-how to win godswar auto race every time
-best gear for godswar auto race
-godswar auto race rewards and achievements
-godswar auto race guide and walkthrough
-godswar auto race cheats and hacks
-godswar auto race gameplay and review
-godswar auto race download and install
-godswar auto race online and offline mode
-godswar auto race latest updates and news
-godswar auto race forum and community
-godswar auto race support and feedback
-godswar auto race vs other racing games
-godswar auto race system requirements and compatibility
-godswar auto race free trial and subscription
-godswar auto race best practices and strategies
-godswar auto race beginner and advanced level
-godswar auto race fun and challenging features
-godswar auto race pros and cons
-godswar auto race ratings and testimonials
-godswar auto race history and development
-godswar auto race characters and vehicles
-godswar auto race customization and personalization
-godswar auto race modes and missions
-godswar auto race leaderboards and rankings
-godswar auto race tournaments and events
-godswar auto race coupons and discounts
-godswar auto race referral and affiliate program
-godswar auto race faq and troubleshooting
-godswar auto race comparison and analysis
-godswar auto race statistics and data
-godswar auto race secrets and Easter eggs
-godswar auto race screenshots and videos
-godswar auto race soundtracks and music
-godswar auto race themes and genres
-godswar auto race inspiration and influences
-godswar auto race alternatives and competitors
-godswar auto race merchandise and accessories
-godswar auto race fan art and memes
-godswar auto race trivia and facts
-how to play godswar auto race on pc or mobile device
-how to improve your skills in godswar auto race
-how to unlock new content in godswar auto race
-how to earn more coins in godswar auto race
-how to join a clan in godswar auto race
-how to chat with other players in godswar auto race
-how to report a bug or issue in godswar auto race
-how to request a feature or suggestion in godswar auto race
-how to contact the developers of godswar auto race
-how to leave a review for godswar auto race
The auto race is not easy, as you will face many challenges and competitors. However, there are some tips and tricks that can help you win the race. Here are some of them:
-If you want to have an unfair advantage over your opponents in the auto race, you might want to use a godswar auto race hack. This is a tool that can help you run faster, smoother, and safer in the race. However, before you use it, you should be aware of the risks and consequences of using hacks.
-Using hacks is against the rules and policies of Godswar Online. If you are caught using hacks, you might face some penalties, such as:
-Therefore, use hacks at your own risk and discretion. We are not responsible for any damage or loss that may occur from using hacks.
-If you still want to use a godswar auto race hack, here are the steps on how to download and install it:
-The hack will automatically run for you in the auto race event. It will avoid obstacles and enemies, use shortcuts and hidden paths, and reach the end point within 5 minutes. It will also talk to Hermes before and after the race to get your points. The hack has some features that you can customize, such as:
| Feature | Description |
|---|---|
| Speed | You can adjust the speed of your movement from 1x to 10x. |
| Invisible | You can make yourself invisible to other players and enemies. |
| No Damage | You can make yourself immune to any damage or stun from enemies or traps. |
| No Collision | You can make yourself pass through any obstacle or wall without stopping. |
You can enable or disable these features by clicking on the check boxes. You can also use hotkeys to activate or deactivate them.
-Godswar auto race is a fun and rewarding event that you can join in Godswar Online. It is a race that tests your speed, skill, and strategy. You can earn points and exchange them for various rewards, such as gold, medals, reputation, experience, and items. You can also use a godswar auto race hack to make your race easier and faster. However, you should be careful and responsible when using hacks, as they can get you banned or penalized. We hope this article has given you some useful information and tips on godswar auto race. If you want to learn more about Godswar Online and its other features and events, you can visit the official website or watch some videos on YouTube. Thank you for reading and happy racing!
-Here are some sources and references that we used for this article:
-Here are some frequently asked questions that you might have about godswar auto race:
-Download File — https://imgfil.com/2uy0pu
Download Zip ✶ https://imgfil.com/2uy1fJ
Download Zip ===== https://imgfil.com/2uy1PM
If you are a fan of fast-paced multiplayer games with colorful graphics and quirky characters, you have probably heard of Brawl Stars. This game from Supercell, the makers of Clash of Clans and Clash Royale, has been a huge hit since its global launch in December 2018. With over 100 million downloads on Google Play Store and millions of active players worldwide, Brawl Stars is one of the most popular mobile games right now.
-Download Zip 🗹 https://urlin.us/2uT2gX
But what if you want to play the latest version of Brawl Stars before it is officially released on your region? Or what if you have a device that is not compatible with the game from the Play Store? Or what if you just want to have more control over your game files and settings? In that case, you might want to download and install an APK file of Brawl Stars.
-An APK file is a package that contains all the files and data needed to run an Android app. By downloading an APK file, you can bypass the restrictions of the Play Store and install apps that are not available in your region or device. You can also update your apps faster and enjoy new features before they are rolled out to everyone else.
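Since an APK is just a ZIP archive under the hood, the description above is easy to verify with the Python standard library; a small sketch (the file name is illustrative):

```python
import zipfile

def apk_entries(path: str) -> list[str]:
    """List the files packaged inside an APK (an APK is a ZIP archive)."""
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()

# Every valid APK contains at least a manifest, e.g.:
# "AndroidManifest.xml" in apk_entries("some-app.apk")
```

This is also a quick sanity check that a downloaded file really is an APK and not something mislabeled.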
-In this article, we will tell you everything you need to know about Brawl Stars 49.181 APK, the latest version of the game as of June 2023. We will show you how to download and install it on your Android device, how to play it, and what are the new features and improvements that it brings. We will also give you some tips and tricks to help you become a better brawler and win more matches.
-Downloading and installing Brawl Stars 49.181 APK is easy and straightforward. Just follow these steps:
-Tips and warnings:
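One general precaution worth adding to the tips above: if the download site publishes a checksum, verify the APK against it before installing. A minimal sketch in Python (the file name and checksum variable are illustrative):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Hash the file in chunks so a large APK never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the value published alongside the download, e.g.:
# assert sha256_of("brawl-stars-49.181.apk") == expected_checksum
```

A mismatch means the file was corrupted in transit or tampered with, and should not be installed.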
-Brawl Stars is a fun and addictive game that lets you compete with other players in various modes and arenas. You can choose from over 40 different brawlers, each with their own unique skills and abilities, and customize them with skins and gadgets. You can also join a club and chat with other players, or create your own club and invite your friends.
-Here is an overview of the game modes, features, and characters that you can enjoy in Brawl Stars 49.181 APK:
-Brawl Stars has six main game modes that you can play solo or with a team:
-brawl stars 49.181 apk download free
-brawl stars 49.181 apk mod unlimited gems
-brawl stars 49.181 apk latest version
-brawl stars 49.181 apk android
-brawl stars 49.181 apk update
-brawl stars 49.181 apk obb
-brawl stars 49.181 apk hack
-brawl stars 49.181 apk xapk
-brawl stars 49.181 apk for pc
-brawl stars 49.181 apk offline
-brawl stars 49.181 apk no root
-brawl stars 49.181 apk mirror
-brawl stars 49.181 apk pure
-brawl stars 49.181 apk revdl
-brawl stars 49.181 apk rexdl
-brawl stars 49.181 apk uptodown
-brawl stars 49.181 apk apkpure
-brawl stars 49.181 apk apkmirror
-brawl stars 49.181 apk happymod
-brawl stars 49.181 apk an1
-brawl stars 49.181 apk android oyun club
-brawl stars 49.181 apk andropalace
-brawl stars 49.181 apk blackmod
-brawl stars 49.181 apk bluestacks
-brawl stars 49.181 apk by lenov.ru
-brawl stars 49.181 apk club
-brawl stars 49.181 apk cracked
-brawl stars 49.181 apk data
-brawl stars 49.181 apk download for android
-brawl stars 49.181 apk download link
-brawl stars 49.181 apk download modded games.com
-brawl stars 49.181 apk download uptodown.com
-brawl stars 49.181 apk file download
-brawl stars 49.181 apk fileplanet.com
-brawl stars 49.181 apk free gems and coins generator online tool no human verification no survey no offers no root no jailbreak required works on all devices ios android pc mac windows phone tablet laptop desktop etc.
-brawl stars 49.181 apk full unlocked all brawlers skins gadgets star powers maps modes events quests rewards trophies etc.
-brawl stars 49.181 apk game guardian script hack cheat engine mod menu god mode unlimited ammo health speed damage auto aim auto fire auto win etc.
-brawl stars 49.181 apk google play store link install now enjoy the best multiplayer online battle arena game ever made by supercell the creators of clash of clans clash royale hay day boom beach etc.
-brawl stars 49.181 apk how to install guide step by step tutorial with screenshots video instructions tips tricks faqs troubleshooting help support contact us feedback suggestions etc.
-brawl stars 49.181 apk ios iphone ipad ipod touch compatible compatible with all ios versions and devices jailbreak not required no cydia no appvalley no tweakbox no tutuapp no panda helper no ignition etc.
Besides these modes, there are also special events that rotate every week, such as:
-Brawl Stars 49.181 APK has many features that make it more fun and exciting, such as:
-Brawl Stars has over 40 different characters, or brawlers, that you can unlock and play with. Each brawler has their own unique personality, appearance, voice, and abilities. You can also customize them with different skins and gadgets. Here is a table that shows the basic information of each brawler:
| Name | Type | Rarity | Attack | Super | Gadget | Star Power |
|---|---|---|---|---|---|---|
| Shelly | Fighter | Starter | Buckshot: Fires a burst of shells that deal more damage at close range. | Super Shell: Fires a powerful blast that knocks back enemies and destroys obstacles. | Fast Forward: Dashes forward a short distance. | Shell Shock: Enemies hit by Super Shell are slowed down for 3 seconds. Band-Aid: When Shelly falls below 40% health, she instantly heals for 1800 health. Recharges in 20 seconds. |
| Nita | Fighter | Trophy Road (10) | Rupture: Fires a shockwave that pierces through enemies and deals damage. | Overbearing: Summons a big baby bear that attacks nearby enemies. | Faux Fur: Nita and her bear gain a 25% shield for 3 seconds. | Bear With Me: Nita recovers 800 health whenever her bear hits an enemy, and vice versa. Hyper Bear: Nita's bear attacks 60% faster. |
| Colt | Sharpshooter | Trophy Road (60) | Six-Shooters: Fires a burst of six bullets that deal damage. | Bullet Train: Fires a long range barrage of 12 bullets that pierce through enemies and destroy obstacles. | Speedloader: Colt reloads two ammo instantly. | Slick Boots: Colt moves 10% faster. Magnum Special: Colt's attack range and bullet speed are increased by 11%. |
Brawl Stars is a game that offers endless fun and excitement for players of all ages and preferences. Whether you want to play solo or with your friends, whether you want to compete or cooperate, whether you want to be strategic or spontaneous, Brawl Stars has something for you. With Brawl Stars 49.181 APK, you can enjoy the latest version of the game with new features and improvements. You can download and install it easily on your Android device and start brawling right away. Just remember to be careful and responsible when downloading APK files, and to respect the game developer's terms and conditions.
-We hope this article has helped you learn more about Brawl Stars 49.181 APK and how to play it. If you have any questions or comments, feel free to leave them below. We would love to hear from you. And if you liked this article, please share it with your friends who might also enjoy Brawl Stars. Happy brawling!
-Here are some frequently asked questions about Brawl Stars 49.181 APK:
-Brawl Stars 49.181 APK is safe to download and install as long as you get it from a trusted source that scans it for viruses and malware. However, downloading APK files may expose your device to security risks, so make sure you have a reliable antivirus app on your device and only download APK files from reputable websites.
-No, you do not need to uninstall the previous version of Brawl Stars before installing the new one. The APK file will overwrite the existing data and settings, so you do not need to delete anything. However, you may want to back up your progress before installing the APK file, just in case something goes wrong.
-Yes, you can play Brawl Stars 49.181 APK with your friends who have different versions of the game, as long as they are not too far apart. For example, you can play with your friends who have version 49.180 or 49.182, but not with those who have version 48.200 or 50.100. This is because the game developer may introduce changes or fixes that affect the gameplay or compatibility of different versions.
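The compatibility rule sketched in that answer (nearby builds on the same major version work together) can be written as a tiny helper; note this is inferred from the article's examples, not an official Supercell rule:

```python
def same_release_line(a: str, b: str) -> bool:
    """Heuristic: builds interoperate when their major version matches.

    Inferred from the examples in the text (49.180/49.182 are fine,
    48.200 and 50.100 are not); not an official compatibility rule.
    """
    return a.split(".")[0] == b.split(".")[0]

print(same_release_line("49.181", "49.180"))  # True
print(same_release_line("49.181", "48.200"))  # False
```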
-The system requirements for Brawl Stars 49.181 APK are the same as the official version of the game from the Play Store. You need an Android device that has at least 2 GB of RAM and runs on Android 4.3 or higher. You also need a stable internet connection and enough storage space to download and install the APK file.
-If you have any issues or feedback regarding Brawl Stars, you can contact the developer of the game through their official channels, such as:
-If you are a fan of hip hop and rap music, you might have heard of Aires Willis, a member of the Young Family group from Angola. He recently released his solo mixtape titled Willis, which features nine tracks with different collaborations and styles. In this article, I will show you how to download Aires Willis mixtape for free and legally, and also give you some information about the artist and his project.
-Download Zip ⚹ https://urlin.us/2uT29u
Aires Willis is a young rapper from Luanda, Angola, who is part of the Young Family group, along with Lil Boy, Lil Mac, Okenio M, Young K and Deivly. He started his musical career in 2017 and has since participated in several songs and projects with his group and other artists. Some of his most popular songs are Codeme, Conversa Chata, Guetão and Banzelo.
-Willis is the name of the first solo mixtape by Aires Willis, which was released on February 1st, 2023. The mixtape contains nine tracks with different themes and vibes, ranging from trap to afrobeat. The mixtape also features guest appearances from other rappers such as Altifridi from Mobbers, Lil Drizzy, Yankema and Kess from NZ Gang. The mixtape was produced by various beatmakers such as Edgar Songz, Lil Mac Beats, Lil Boy Beats and others.
-There are many reasons why you should download Willis Mixtape if you are a fan of hip hop and rap music. Here are some of them:
-download aires willis mixtape 2023
-download aires willis codeme mp3
-download aires willis conversa chata feat young k and okenio m
-download aires willis mais perto da morte song
-download aires willis antes da 2k feat lil boy and lil mac
-download aires willis summer party feat kess and deivly
-download aires willis guetao feat lil drizzy and lil mac
-download aires willis banzelo feat altifridi
-download aires willis vampiras de lisboa skit
-download aires willis ciclones feat yankema and okenio m
-download aires willis ep zip file
-download aires willis full album online
-download aires willis latest songs 2023
-download aires willis hip hop rap music
-download aires willis young family member
-download aires willis jox musik website
-download aires willis portal moz news blog
-download aires willis sonangol muzik site
-download aires willis new scientist magazine
-download aires willis the sun newspaper
-download aires willis yahoo news article
-download aires willis free mp3 music
-download aires willis high quality audio
-download aires willis 320 kbps bitrate
-download aires willis fast and easy
-download aires willis direct link anonfiles
-download aires willis tracklist and playlist
-download aires willis lyrics and chords
-download aires willis video and cover art
-download aires willis review and rating
-download aires willis stream and listen online
-download aires willis spotify and apple music
-download aires willis soundcloud and youtube
-download aires willis instagram and facebook
-download aires willis twitter and tiktok
Downloading Willis Mixtape is very easy and fast. You just need to follow these simple steps:
| Track number | Title | Featuring | Genre |
|---|---|---|---|
| 1 | Codeme | None | Trap |
| 2 | Conversa Chata | Young K and Okenio M | Afrobeat |
| 3 | Mais Perto da Morte | None | Rap |
| 4 | Antes da 2k | Lil Boy and Lil Mac | Trap |
| 5 | Guetão | Altifridi | Drill |
| 6 | Banzelo | Lil Drizzy and Yankema | Dancehall |
| 7 | Meu Mundo | Kess | Rap |
| 8 | Meu Lugar | None | Rap |
| 9 | Willis | None | Rap |
In conclusion, Aires Willis is a talented rapper from Angola who has released his first solo mixtape called Willis. The mixtape showcases his versatility and creativity, as he explores different genres and topics in his songs. The mixtape is available for free download on various websites, and you can follow the steps in this article to get it on your device. If you like hip hop and rap music, you should definitely check out Aires Willis's mixtape and support his career.
-You can listen to Aires Willis's mixtape online on platforms such as YouTube, SoundCloud and Audiomack. You can also find the links to these platforms on his Instagram page.
-You can contact Aires Willis through his social media accounts, such as Instagram, Twitter and Facebook. You can also send him an email at aireswillis@gmail.com.
-Aires Willis's mixtape has received positive reviews from critics and fans alike. Some of the comments are:
-Aires Willis has cited rappers such as Drake, Lil Wayne, Kendrick Lamar, J. Cole, NGA, Prodígio and Monsta as influences. He also listens to other genres of music, such as R&B, pop, rock and reggae.
-Aires Willis has stated that he plans to continue working on his music and releasing more songs and projects. He also hopes to collaborate with more artists, both local and international, and perform live shows for his fans. He also wants to expand his fan base and reach more people with his music.
If you are looking for a budget-friendly, multi-band, handheld radio that can cover 2 meter, 1.25 meter, 70 centimeter, and air band frequencies, you might want to check out the Abbree AR-730. This radio has a lot of features and functions that make it a versatile device for amateur radio enthusiasts, hobbyists, and professionals alike. However, to get the most out of your radio, you need to program it according to your needs and preferences. In this article, we will show you how to download and install the software for programming your Abbree AR-730, how to connect your radio to your computer, and how to program your radio using the software. By following these steps, you will be able to customize your radio and enjoy its full potential.
-DOWNLOAD › https://jinyurl.com/2uNRlN
The Abbree AR-730 is a multi-band, handheld transceiver that can operate on VHF, UHF, and air band frequencies. It has a dual display, dual standby, dual PTT, and dual receiver function that allows you to monitor two channels simultaneously. It also has a wireless copy frequency function that lets you clone another radio's settings without using a cable. It supports NOAA weather channel receive, FM radio receive, DTMF encode and decode, CTCSS/DCS encode and decode, VOX function, keypad lock, scan function, squelch level adjustment, battery save mode, and more. It has a high-capacity 2200mAh Li-ion battery that can last up to 12 hours of continuous use. It comes with a Type-C charging cable that can charge the radio faster and more conveniently. It also has a sturdy and durable design that can withstand harsh environments.
-Programming your Abbree AR-730 is necessary if you want to use it for different purposes and scenarios. For example, you might want to program different memory channels for different repeaters or frequencies that you frequently use or want to access quickly. You might also want to program different settings for different functions or modes of operation, such as tone mode, power level, bandwidth, offset direction, offset frequency, etc. Programming your radio also allows you to customize it according to your personal preferences, such as display color, backlight time, beep tone, etc. Programming your radio can enhance your communication experience and make your radio more efficient and convenient.
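To make "memory channel", "offset direction", and "offset frequency" concrete: a repeater channel stores a receive frequency plus an offset, from which the transmit frequency follows. A minimal sketch; the field names are illustrative and are not the APS-AR730 file format:

```python
from dataclasses import dataclass

@dataclass
class MemoryChannel:
    name: str
    rx_mhz: float      # receive (repeater output) frequency
    offset_mhz: float  # offset magnitude; 0.6 MHz is common on 2 m, 5 MHz on 70 cm
    offset_dir: str    # "+", "-", or "" for simplex

    def tx_mhz(self) -> float:
        """Transmit frequency implied by the offset settings."""
        if self.offset_dir == "+":
            return round(self.rx_mhz + self.offset_mhz, 4)
        if self.offset_dir == "-":
            return round(self.rx_mhz - self.offset_mhz, 4)
        return self.rx_mhz  # simplex: transmit where you listen

channel = MemoryChannel("Local repeater", 146.94, 0.6, "-")
print(channel.tx_mhz())  # 146.34
```

Programming software is essentially a friendlier editor for a table of records like this, plus the per-channel tone and power settings.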
-To program your Abbree AR-730, you need software that can communicate with the radio and edit its configuration. There are two main sources for it: the official website of Abbree Electronic Co., Ltd., or third-party vendors that offer compatible software. The official website is [1](https://www.abbree.cn/download/), where you can find downloads for different radio models, including the AR-730. The software for the AR-730 is called APS-AR730 Programming Software, and you can download it for free by clicking on the link under the "ABBREE" category. Alternatively, you can get compatible programming software from other vendors, such as RT Systems Inc., which sells its own APS-AR730 Programming Software. Both programs are compatible with Windows operating systems and have similar features and functions, though the RT Systems software may offer advantages such as easier installation, better customer support, and more frequent updates. Choose the software that suits your needs and preferences best.
-Once you have decided which software to use, you need to download and install it on your computer. Here are the steps to do so:
-If you choose to use the software from the official website of Abbree Electronic Co., Ltd., follow these steps:
-If you choose to use the software from RT Systems Inc., follow these steps:
After downloading the software, you need to install it on your computer. The installation process may vary depending on which software you use, but generally, you need to follow these steps:
-Congratulations! You have successfully downloaded and installed the software for programming your Abbree AR-730. Now, you are ready to connect your radio to your computer and start programming it.
-To program your Abbree AR-730 using the software, you need to connect your radio to your computer using a cable. There are two types of cables that you can use: a USB cable or a programming cable. Here are the steps to connect your radio using either cable:
-If you want to use a USB cable, follow these steps:
-If you want to use a programming cable, follow these steps:
-After connecting your radio to your computer with either cable, check the COM port settings on your computer. The COM port is the communication port through which your computer and radio exchange data; the port number your computer reports (for example, in Device Manager on Windows) must match the port selected in the software. Here are the steps to check the COM port settings:
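The COM-port check above can be sketched in a few lines of Python. This is illustrative only: port detection is stubbed out with a hard-coded list, since the real list would come from the operating system (for example via a serial library such as pyserial's `serial.tools.list_ports`, or by reading Device Manager on Windows).

```python
def detected_ports():
    # Stub: pretend the OS reports these serial ports.
    # On a real machine, query the OS instead of hard-coding.
    return ["COM1", "COM3"]

def check_com_port(configured_port):
    """Return True if the port selected in the software exists on this system."""
    ports = detected_ports()
    if configured_port in ports:
        print(f"{configured_port} found; software and radio can communicate.")
        return True
    print(f"{configured_port} not found. Available ports: {', '.join(ports)}")
    return False

check_com_port("COM3")  # matches the stubbed list
check_com_port("COM7")  # not present, so pick another port in the software
```

If the configured port is missing, either change the port in the software's settings or re-seat the cable so the OS re-detects it.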
-Congratulations! You have successfully connected your radio to your computer and set up the COM port settings. Now, you are ready to program your radio using the software.
-To program your Abbree AR-730 using the software, you need to follow three main steps: read the current configuration from the radio, edit the memory channels and other settings, and write the new configuration to the radio. Here are the steps to do so:
-Before you start editing the configuration of your radio, you need to read the current configuration from the radio and load it into the software. This will allow you to see what settings are already programmed on your radio and avoid overwriting them by mistake. Here are the steps to read the current configuration from the radio:
-After reading the current configuration from the radio, you can start editing the memory channels and other settings according to your needs and preferences. You can add, delete, modify, or copy memory channels, as well as change other settings, such as power level, bandwidth, offset direction, offset frequency, etc. Here are some examples of how to edit the memory channels and other settings:
-You can edit as many memory channels and other settings as you want. You can also use the "Import" and "Export" functions to import or export data from or to a CSV file. You can also use the "Print" function to print out the configuration of your radio.
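The "Import" and "Export" functions mentioned above work on plain CSV, so channel lists can be prepared in any spreadsheet. The sketch below round-trips a small channel table through CSV using only the standard library; the column names and channel fields are assumptions for illustration, not the actual APS-AR730 CSV layout.

```python
import csv
import io

# Hypothetical channel layout; the real software's CSV columns may differ.
FIELDS = ["channel", "name", "rx_freq_mhz", "offset_mhz", "power"]

channels = [
    {"channel": 1, "name": "Local", "rx_freq_mhz": 446.0,
     "offset_mhz": 0.0, "power": "High"},
    {"channel": 2, "name": "Repeater", "rx_freq_mhz": 145.65,
     "offset_mhz": -0.6, "power": "Low"},
]

def export_channels(chans):
    """Write memory channels to CSV text, as an 'Export' might."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(chans)
    return buf.getvalue()

def import_channels(text):
    """Read channels back from CSV text, restoring numeric types."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        row["channel"] = int(row["channel"])
        row["rx_freq_mhz"] = float(row["rx_freq_mhz"])
        row["offset_mhz"] = float(row["offset_mhz"])
        rows.append(row)
    return rows

round_trip = import_channels(export_channels(channels))
print(round_trip[1]["name"])
```

Editing the CSV externally and re-importing it is often faster than clicking through channels one by one in the software's grid.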
-After editing the memory channels and other settings, you need to write the new configuration to the radio and save it. This will overwrite the previous configuration on your radio and apply the changes that you have made. Here are the steps to write the new configuration to the radio:
-Congratulations! You have successfully programmed your Abbree AR-730 using the software. Now, you can turn on your radio and test its functions and performance.
-In this article, we have shown you how to download and install the software for programming your Abbree AR-730, how to connect your radio to your computer, and how to program your radio using the software. By following these steps, you will be able to customize your radio and enjoy its full potential. Programming your radio can enhance your communication experience and make your radio more efficient and convenient.
-Here are the main points that we have covered in this article:
-Here are some tips and tricks that can help you program your Abbree AR-730 better and easier:
-Here are some frequently asked questions and answers about programming your Abbree AR-730:
-If you are a fan of soccer games, you probably have heard of FIFA Mobile, the official mobile game of FIFA, the world's governing body of soccer. FIFA Mobile lets you build your ultimate team of soccer stars, compete in various modes, and experience the thrill of the beautiful game on your phone.
But did you know that there is a modded version of FIFA Mobile that is exclusive to Japan? It's called FIFA Mobile Japan mod apk, and it offers unique features and benefits that you won't find in the original game. In this article, we will explain what FIFA Mobile Japan mod apk is, what its main features are, how to download and install it, and share some tips and tricks to make the most of it. Let's get started!
-FIFA Mobile Japan mod apk is a modified version of FIFA Mobile that is developed by NEXON Co., Ltd., a Japanese gaming company. It is only available in Japan, but you can download it from third-party sources if you want to try it out.
-FIFA Mobile Japan mod apk has some differences from the original game, such as:
If you are looking for a new way to enjoy soccer on your phone, FIFA Mobile Japan mod apk might be a good option for you. You can create your own team using real clubs and players from Japan and Asia, and enjoy a variety of content such as online competitions and simulation leagues.
-FIFA Mobile Japan mod apk has many features that make it stand out from the original game. Here are some of them:
-FIFA Mobile Japan mod apk is the only licensed FIFA World Cup 2022 mobile game where you can replay the official tournament brackets with any of the 32 qualified nations. You can also rewrite history and take control of 15 non-qualified nations that didn't make it to the World Cup. You can play in authentic World Cup stadiums (Al Bayt and Lusail), wear official World Cup kits and badges, use the official match ball, and listen to localized World Cup commentary. You can also participate in live events that correspond with the real-world tournament throughout the soccer season.
-FIFA Mobile Japan mod apk has a special event called Fearless 23, where you can get players who contributed to their league or Champions League/Europa League/Europa Conference League victory in the previous season. These players have boosted stats and skills that reflect their performance in those competitions. You can also get exclusive rewards such as kits, badges, coins, gems, etc. by completing various challenges in this event.
-FIFA Mobile Japan mod apk introduces a new class of players called Eternal Legend. These are legendary players who have made history in soccer, such as Zidane, Beckham, Ronaldo, Maldini, etc. You can get these players by exchanging tokens earned from live events or by buying them from the market. These players have no OVR limit and can be trained indefinitely. You can also upgrade their skills and abilities by using skill boost items. You can create your dream team of soccer legends with Eternal Legend players.
-FIFA Mobile Japan mod apk improves the passing system by adding new ways to pass the ball. Some of the new passing options are:
-You can also control the direction, power, and curve of your passes by using gestures on the screen. You can swipe, tap, drag, or flick to make different types of passes. You can also use buttons to make quick passes or long passes. The advanced passing system gives you more freedom and creativity in your gameplay.
-If you want to try FIFA Mobile Japan mod apk, you will need to download it from a third-party source, since it is not available on the official app stores. Here are the steps to download and install FIFA Mobile Japan mod apk:
-Note: You may need to use a VPN app to change your location to Japan in order to play FIFA Mobile Japan mod apk. You may also need to update the game regularly from the same website where you downloaded it.
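For readers comfortable with the command line, the sideloading steps can be expressed as `adb` commands. The helper below only builds the command lists rather than running them, and everything in it is a placeholder: the file names, and especially the package id `jp.example.fifamobile`, are hypothetical and depend on the actual build you download.

```python
APK = "fifa-mobile-japan.apk"          # placeholder file name
OBB = "main.obb"                       # placeholder OBB data file
PACKAGE = "jp.example.fifamobile"      # hypothetical package id

def sideload_commands(apk, obb, package):
    """Return the adb commands to run, in order: install, then push OBB data.

    Note the canonical OBB path uses lowercase 'obb': /sdcard/Android/obb/.
    """
    return [
        ["adb", "install", "-r", apk],
        ["adb", "push", obb, f"/sdcard/Android/obb/{package}/"],
    ]

for cmd in sideload_commands(APK, OBB, PACKAGE):
    print(" ".join(cmd))
```

You would need USB debugging enabled on the phone and "install from unknown sources" allowed for this to work; installing directly on the device by tapping the downloaded apk is the simpler route for most players.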
-To help you get started with FIFA Mobile Japan mod apk, here are some tips and tricks that you can use:
-One of the most important aspects of FIFA Mobile Japan mod apk is building your ultimate team. You can choose from hundreds of clubs and players from Japan and Asia, as well as from other regions. You can also get special players from events, modes, or the market. However, you should not just focus on getting the highest-rated players, but also on creating a balanced team that suits your playstyle and formation. You should consider factors such as chemistry, skills, positions, roles, etc. when building your team.
-Another way to improve your team is by training your players. You can use training items or coins to increase the OVR (overall rating) of your players. You can also use skill boost items to enhance their skills and abilities. Training your players will make them stronger, faster, and more effective on the pitch. However, you should be careful not to overtrain your players, as this will increase their contract cost and reduce their stamina.
-FIFA Mobile Japan mod apk offers a variety of modes that you can play, such as:
-Playing different modes will help you improve your skills, test your strategies, and have fun with other players.
-FIFA Mobile Japan mod apk is a great alternative to FIFA Mobile if you want to experience a different take on mobile soccer. It has more licensed teams, players, and leagues from Japan and Asia; more live events and tournaments that reflect the real-world soccer season in the region; more exclusive content and rewards; and graphics, gameplay, and controls optimized for mobile devices. Download and install it from a third-party source, use the tips above to build a balanced team, train your players, and try the different modes, and it will keep you entertained for hours.
-Here are some frequently asked questions and answers about FIFA Mobile Japan mod apk:
-FIFA Mobile Japan mod apk is generally safe to use, as long as you download it from a reliable website and scan it for viruses before installing it. However, you should be aware that using a modded version of FIFA Mobile may violate the terms of service of the original game and may result in your account being banned or suspended. You should also be careful not to share your personal or financial information with any third-party sources or apps.
-Yes, you can play FIFA Mobile Japan mod apk with your friends, as long as they also have the same version of the game installed on their devices. You can invite them to join your league, play friendly matches, or compete in online modes. You can also chat with them in the game and share your progress and achievements.
-There are several ways to get more coins and gems in FIFA Mobile Japan mod apk, such as:
-You should spend your coins and gems wisely on things that will improve your team and gameplay, such as training items, skill boost items, players, etc.
-To update FIFA Mobile Japan mod apk, you will need to download the latest version of the game from the same website where you downloaded it before. You will also need to download the latest OBB data file and extract it to the Android/OBB folder on your device. You should always backup your game data before updating to avoid losing your progress and settings.
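The update flow above boils down to two checks: is the downloaded build actually newer, and is the existing OBB data backed up first? The sketch below shows both; the version format and paths are assumptions, and the backup helper only builds the `adb` command rather than running it.

```python
def needs_update(installed, downloaded):
    """Compare dotted version strings numerically, e.g. '9.1.02' < '10.0.04'."""
    def as_tuple(v):
        return tuple(int(part) for part in v.split("."))
    return as_tuple(downloaded) > as_tuple(installed)

def backup_commands(package, dest="fifa_backup"):
    """adb command to copy the game's OBB folder off the device first."""
    return [["adb", "pull", f"/sdcard/Android/obb/{package}", dest]]

print(needs_update("9.1.02", "10.0.04"))   # newer build, update needed
print(needs_update("10.0.04", "10.0.04"))  # same version, skip
```

Comparing versions numerically matters because a plain string comparison would rank "10.0.04" below "9.1.02".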
-If you are looking for some alternatives to FIFA Mobile Japan mod apk, you can try these games:
-[Arxiv] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer | Github Repo
- """ - ) - - submit_button.click(fn=inference, inputs=input_video, outputs=label) - example_videos.click(fn=set_example_video, inputs=example_videos, outputs=example_videos.components) - -demo.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_r50_fpn_4x4_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_r50_fpn_4x4_2x_coco.py deleted file mode 100644 index 68ce4d250ac673a274d1458963eb02614e4f5f98..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_r50_fpn_4x4_2x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './fovea_r50_fpn_4x4_1x_coco.py' -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_480_960_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_480_960_coco.py deleted file mode 100644 index 2a47c60b6312a4b1ae1a7c9b20ada89608568df0..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_gn_2x_ms_480_960_coco.py +++ /dev/null @@ -1,71 +0,0 @@ -_base_ = [ - '../_base_/models/retinanet_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py' -] -# model settings -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='torchvision://resnet101', - backbone=dict(depth=101), - bbox_head=dict( - _delete_=True, - type='SABLRetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - 
ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - norm_cfg=norm_cfg, - bbox_coder=dict( - type='BucketingBBoxCoder', num_buckets=14, scale_factor=3.0), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.5), - loss_bbox_reg=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.5)), - # training and testing settings - train_cfg=dict( - assigner=dict( - type='ApproxMaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0.0, - ignore_iof_thr=-1), - allowed_border=-1, - pos_weight=-1, - debug=False)) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 480), (1333, 960)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -data = dict(train=dict(pipeline=train_pipeline)) -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/cascade_rpn_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/cascade_rpn_head.py deleted file mode 100644 index e32ee461951e685fb44a461033293159e3439717..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/cascade_rpn_head.py +++ /dev/null @@ -1,784 +0,0 @@ -from __future__ import division -import copy -import warnings - -import torch -import torch.nn as nn -from mmcv import ConfigDict -from mmcv.cnn import normal_init -from mmcv.ops import DeformConv2d, batched_nms - -from 
mmdet.core import (RegionAssigner, build_assigner, build_sampler, - images_to_levels, multi_apply) -from ..builder import HEADS, build_head -from .base_dense_head import BaseDenseHead -from .rpn_head import RPNHead - - -class AdaptiveConv(nn.Module): - """AdaptiveConv used to adapt the sampling location with the anchors. - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the convolution - kernel_size (int or tuple): Size of the conv kernel. Default: 3 - stride (int or tuple, optional): Stride of the convolution. Default: 1 - padding (int or tuple, optional): Zero-padding added to both sides of - the input. Default: 1 - dilation (int or tuple, optional): Spacing between kernel elements. - Default: 3 - groups (int, optional): Number of blocked connections from input - channels to output channels. Default: 1 - bias (bool, optional): If set True, adds a learnable bias to the - output. Default: False. - type (str, optional): Type of adaptive conv, can be either 'offset' - (arbitrary anchors) or 'dilation' (uniform anchor). - Default: 'dilation'. 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - dilation=3, - groups=1, - bias=False, - type='dilation'): - super(AdaptiveConv, self).__init__() - assert type in ['offset', 'dilation'] - self.adapt_type = type - - assert kernel_size == 3, 'Adaptive conv only supports kernels 3' - if self.adapt_type == 'offset': - assert stride == 1 and padding == 1 and groups == 1, \ - 'Adaptive conv offset mode only supports padding: {1}, ' \ - f'stride: {1}, groups: {1}' - self.conv = DeformConv2d( - in_channels, - out_channels, - kernel_size, - padding=padding, - stride=stride, - groups=groups, - bias=bias) - else: - self.conv = nn.Conv2d( - in_channels, - out_channels, - kernel_size, - padding=dilation, - dilation=dilation) - - def init_weights(self): - """Init weights.""" - normal_init(self.conv, std=0.01) - - def forward(self, x, offset): - """Forward function.""" - if self.adapt_type == 'offset': - N, _, H, W = x.shape - assert offset is not None - assert H * W == offset.shape[1] - # reshape [N, NA, 18] to (N, 18, H, W) - offset = offset.permute(0, 2, 1).reshape(N, -1, H, W) - offset = offset.contiguous() - x = self.conv(x, offset) - else: - assert offset is None - x = self.conv(x) - return x - - -@HEADS.register_module() -class StageCascadeRPNHead(RPNHead): - """Stage of CascadeRPNHead. - - Args: - in_channels (int): Number of channels in the input feature map. - anchor_generator (dict): anchor generator config. - adapt_cfg (dict): adaptation config. - bridged_feature (bool, optional): whether update rpn feature. - Default: False. - with_cls (bool, optional): wheather use classification branch. - Default: True. - sampling (bool, optional): wheather use sampling. Default: True. 
- """ - - def __init__(self, - in_channels, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[1.0], - strides=[4, 8, 16, 32, 64]), - adapt_cfg=dict(type='dilation', dilation=3), - bridged_feature=False, - with_cls=True, - sampling=True, - **kwargs): - self.with_cls = with_cls - self.anchor_strides = anchor_generator['strides'] - self.anchor_scales = anchor_generator['scales'] - self.bridged_feature = bridged_feature - self.adapt_cfg = adapt_cfg - super(StageCascadeRPNHead, self).__init__( - in_channels, anchor_generator=anchor_generator, **kwargs) - - # override sampling and sampler - self.sampling = sampling - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - def _init_layers(self): - """Init layers of a CascadeRPN stage.""" - self.rpn_conv = AdaptiveConv(self.in_channels, self.feat_channels, - **self.adapt_cfg) - if self.with_cls: - self.rpn_cls = nn.Conv2d(self.feat_channels, - self.num_anchors * self.cls_out_channels, - 1) - self.rpn_reg = nn.Conv2d(self.feat_channels, self.num_anchors * 4, 1) - self.relu = nn.ReLU(inplace=True) - - def init_weights(self): - """Init weights of a CascadeRPN stage.""" - self.rpn_conv.init_weights() - normal_init(self.rpn_reg, std=0.01) - if self.with_cls: - normal_init(self.rpn_cls, std=0.01) - - def forward_single(self, x, offset): - """Forward function of single scale.""" - bridged_x = x - x = self.relu(self.rpn_conv(x, offset)) - if self.bridged_feature: - bridged_x = x # update feature - cls_score = self.rpn_cls(x) if self.with_cls else None - bbox_pred = self.rpn_reg(x) - return bridged_x, cls_score, bbox_pred - - def forward(self, feats, offset_list=None): - """Forward function.""" - if offset_list is None: - offset_list 
= [None for _ in range(len(feats))] - return multi_apply(self.forward_single, feats, offset_list) - - def _region_targets_single(self, - anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - featmap_sizes, - label_channels=1): - """Get anchor targets based on region for single level.""" - assign_result = self.assigner.assign( - anchors, - valid_flags, - gt_bboxes, - img_meta, - featmap_sizes, - self.anchor_scales[0], - self.anchor_strides, - gt_bboxes_ignore=gt_bboxes_ignore, - gt_labels=None, - allowed_border=self.train_cfg.allowed_border) - flat_anchors = torch.cat(anchors) - sampling_result = self.sampler.sample(assign_result, flat_anchors, - gt_bboxes) - - num_anchors = flat_anchors.shape[0] - bbox_targets = torch.zeros_like(flat_anchors) - bbox_weights = torch.zeros_like(flat_anchors) - labels = flat_anchors.new_zeros(num_anchors, dtype=torch.long) - label_weights = flat_anchors.new_zeros(num_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - labels[pos_inds] = 1 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds) - - def region_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - featmap_sizes, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """See 
:func:`StageCascadeRPNHead.get_targets`.""" - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights, - pos_inds_list, neg_inds_list) = multi_apply( - self._region_targets_single, - anchor_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - featmap_sizes=featmap_sizes, - label_channels=label_channels) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - featmap_sizes, - gt_bboxes_ignore=None, - label_channels=1): - """Compute regression and classification targets for anchors. - - Args: - anchor_list (list[list]): Multi level anchors of each image. - valid_flag_list (list[list]): Multi level valid flags of each - image. - gt_bboxes (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. 
- featmap_sizes (list[Tensor]): Feature mapsize each level - gt_bboxes_ignore (list[Tensor]): Ignore bboxes of each images - label_channels (int): Channel of label. - - Returns: - cls_reg_targets (tuple) - """ - if isinstance(self.assigner, RegionAssigner): - cls_reg_targets = self.region_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - featmap_sizes, - gt_bboxes_ignore_list=gt_bboxes_ignore, - label_channels=label_channels) - else: - cls_reg_targets = super(StageCascadeRPNHead, self).get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - label_channels=label_channels) - return cls_reg_targets - - def anchor_offset(self, anchor_list, anchor_strides, featmap_sizes): - """ Get offest for deformable conv based on anchor shape - NOTE: currently support deformable kernel_size=3 and dilation=1 - - Args: - anchor_list (list[list[tensor])): [NI, NLVL, NA, 4] list of - multi-level anchors - anchor_strides (list[int]): anchor stride of each level - - Returns: - offset_list (list[tensor]): [NLVL, NA, 2, 18]: offset of DeformConv - kernel. 
- """ - - def _shape_offset(anchors, stride, ks=3, dilation=1): - # currently support kernel_size=3 and dilation=1 - assert ks == 3 and dilation == 1 - pad = (ks - 1) // 2 - idx = torch.arange(-pad, pad + 1, dtype=dtype, device=device) - yy, xx = torch.meshgrid(idx, idx) # return order matters - xx = xx.reshape(-1) - yy = yy.reshape(-1) - w = (anchors[:, 2] - anchors[:, 0]) / stride - h = (anchors[:, 3] - anchors[:, 1]) / stride - w = w / (ks - 1) - dilation - h = h / (ks - 1) - dilation - offset_x = w[:, None] * xx # (NA, ks**2) - offset_y = h[:, None] * yy # (NA, ks**2) - return offset_x, offset_y - - def _ctr_offset(anchors, stride, featmap_size): - feat_h, feat_w = featmap_size - assert len(anchors) == feat_h * feat_w - - x = (anchors[:, 0] + anchors[:, 2]) * 0.5 - y = (anchors[:, 1] + anchors[:, 3]) * 0.5 - # compute centers on feature map - x = x / stride - y = y / stride - # compute predefine centers - xx = torch.arange(0, feat_w, device=anchors.device) - yy = torch.arange(0, feat_h, device=anchors.device) - yy, xx = torch.meshgrid(yy, xx) - xx = xx.reshape(-1).type_as(x) - yy = yy.reshape(-1).type_as(y) - - offset_x = x - xx # (NA, ) - offset_y = y - yy # (NA, ) - return offset_x, offset_y - - num_imgs = len(anchor_list) - num_lvls = len(anchor_list[0]) - dtype = anchor_list[0][0].dtype - device = anchor_list[0][0].device - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - - offset_list = [] - for i in range(num_imgs): - mlvl_offset = [] - for lvl in range(num_lvls): - c_offset_x, c_offset_y = _ctr_offset(anchor_list[i][lvl], - anchor_strides[lvl], - featmap_sizes[lvl]) - s_offset_x, s_offset_y = _shape_offset(anchor_list[i][lvl], - anchor_strides[lvl]) - - # offset = ctr_offset + shape_offset - offset_x = s_offset_x + c_offset_x[:, None] - offset_y = s_offset_y + c_offset_y[:, None] - - # offset order (y0, x0, y1, x2, .., y8, x8, y9, x9) - offset = torch.stack([offset_y, offset_x], dim=-1) - offset = offset.reshape(offset.size(0), -1) # 
[NA, 2*ks**2] - mlvl_offset.append(offset) - offset_list.append(torch.cat(mlvl_offset)) # [totalNA, 2*ks**2] - offset_list = images_to_levels(offset_list, num_level_anchors) - return offset_list - - def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Loss function on single scale.""" - # classification loss - if self.with_cls: - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - # regression loss - bbox_targets = bbox_targets.reshape(-1, 4) - bbox_weights = bbox_weights.reshape(-1, 4) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - anchors = anchors.reshape(-1, 4) - bbox_pred = self.bbox_coder.decode(anchors, bbox_pred) - loss_reg = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - if self.with_cls: - return loss_cls, loss_reg - return None, loss_reg - - def loss(self, - anchor_list, - valid_flag_list, - cls_scores, - bbox_preds, - gt_bboxes, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - anchor_list (list[list]): Multi level anchors of each image. - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. 
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in bbox_preds] - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - featmap_sizes, - gt_bboxes_ignore=gt_bboxes_ignore, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - if self.sampling: - num_total_samples = num_total_pos + num_total_neg - else: - # 200 is hard-coded average factor, - # which follows guided anchoring. - num_total_samples = sum([label.numel() - for label in labels_list]) / 200.0 - - # change per image, per level anchor_list to per_level, per_image - mlvl_anchor_list = list(zip(*anchor_list)) - # concat mlvl_anchor_list - mlvl_anchor_list = [ - torch.cat(anchors, dim=0) for anchors in mlvl_anchor_list - ] - - losses = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - mlvl_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - if self.with_cls: - return dict(loss_rpn_cls=losses[0], loss_rpn_reg=losses[1]) - return dict(loss_rpn_reg=losses[1]) - - def get_bboxes(self, - anchor_list, - cls_scores, - bbox_preds, - img_metas, - cfg, - rescale=False): - """Get proposal predict.""" - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = 
img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score_list, bbox_pred_list, - anchor_list[img_id], img_shape, - scale_factor, cfg, rescale) - result_list.append(proposals) - return result_list - - def refine_bboxes(self, anchor_list, bbox_preds, img_metas): - """Refine bboxes through stages.""" - num_levels = len(bbox_preds) - new_anchor_list = [] - for img_id in range(len(img_metas)): - mlvl_anchors = [] - for i in range(num_levels): - bbox_pred = bbox_preds[i][img_id].detach() - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - img_shape = img_metas[img_id]['img_shape'] - bboxes = self.bbox_coder.decode(anchor_list[img_id][i], - bbox_pred, img_shape) - mlvl_anchors.append(bboxes) - new_anchor_list.append(mlvl_anchors) - return new_anchor_list - - # TODO: temporary plan - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_anchors, - img_shape, - scale_factor, - cfg, - rescale=False): - """Transform outputs for a single batch item into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (num_anchors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (num_anchors * 4, H, W). - mlvl_anchors (list[Tensor]): Box reference for each scale level - with shape (num_total_anchors, 4). - img_shape (tuple[int]): Shape of the input image, - (height, width, 3). - scale_factor (ndarray): Scale factor of the image arange as - (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - - Returns: - Tensor: Labeled boxes have the shape of (n,5), where the - first 4 columns are bounding box positions - (tl_x, tl_y, br_x, br_y) and the 5-th column is a score - between 0 and 1. 
- """ - cfg = self.test_cfg if cfg is None else cfg - cfg = copy.deepcopy(cfg) - # bboxes from different level should be independent during NMS, - # level_ids are used as labels for batched NMS to separate them - level_ids = [] - mlvl_scores = [] - mlvl_bbox_preds = [] - mlvl_valid_anchors = [] - for idx in range(len(cls_scores)): - rpn_cls_score = cls_scores[idx] - rpn_bbox_pred = bbox_preds[idx] - assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:] - rpn_cls_score = rpn_cls_score.permute(1, 2, 0) - if self.use_sigmoid_cls: - rpn_cls_score = rpn_cls_score.reshape(-1) - scores = rpn_cls_score.sigmoid() - else: - rpn_cls_score = rpn_cls_score.reshape(-1, 2) - # We set FG labels to [0, num_class-1] and BG label to - # num_class in RPN head since mmdet v2.5, which is unified to - # be consistent with other head since mmdet v2.0. In mmdet v2.0 - # to v2.4 we keep BG label as 0 and FG label as 1 in rpn head. - scores = rpn_cls_score.softmax(dim=1)[:, 0] - rpn_bbox_pred = rpn_bbox_pred.permute(1, 2, 0).reshape(-1, 4) - anchors = mlvl_anchors[idx] - if cfg.nms_pre > 0 and scores.shape[0] > cfg.nms_pre: - # sort is faster than topk - # _, topk_inds = scores.topk(cfg.nms_pre) - if torch.onnx.is_in_onnx_export(): - # sort op will be converted to TopK in onnx - # and k<=3480 in TensorRT - _, topk_inds = scores.topk(cfg.nms_pre) - scores = scores[topk_inds] - else: - ranked_scores, rank_inds = scores.sort(descending=True) - topk_inds = rank_inds[:cfg.nms_pre] - scores = ranked_scores[:cfg.nms_pre] - rpn_bbox_pred = rpn_bbox_pred[topk_inds, :] - anchors = anchors[topk_inds, :] - mlvl_scores.append(scores) - mlvl_bbox_preds.append(rpn_bbox_pred) - mlvl_valid_anchors.append(anchors) - level_ids.append( - scores.new_full((scores.size(0), ), idx, dtype=torch.long)) - - scores = torch.cat(mlvl_scores) - anchors = torch.cat(mlvl_valid_anchors) - rpn_bbox_pred = torch.cat(mlvl_bbox_preds) - proposals = self.bbox_coder.decode( - anchors, rpn_bbox_pred, max_shape=img_shape) - 
ids = torch.cat(level_ids) - - # Skip nonzero op while exporting to ONNX - if cfg.min_bbox_size > 0 and (not torch.onnx.is_in_onnx_export()): - w = proposals[:, 2] - proposals[:, 0] - h = proposals[:, 3] - proposals[:, 1] - valid_inds = torch.nonzero( - (w >= cfg.min_bbox_size) - & (h >= cfg.min_bbox_size), - as_tuple=False).squeeze() - if valid_inds.sum().item() != len(proposals): - proposals = proposals[valid_inds, :] - scores = scores[valid_inds] - ids = ids[valid_inds] - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You ' \ - f'set max_num and ' \ - f'max_per_img at the same time, but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - 'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set' \ - f' iou_threshold in nms and ' \ - f'nms_thr at the same time, but get' \ - f' {cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the nms_thr ' \ - f'which will be deprecated.' - - dets, keep = batched_nms(proposals, scores, ids, cfg.nms) - return dets[:cfg.max_per_img] - - -@HEADS.register_module() -class CascadeRPNHead(BaseDenseHead): - """The CascadeRPNHead will predict more accurate region proposals, which is - required for two-stage detectors (such as Fast/Faster R-CNN). CascadeRPN - consists of a sequence of RPNStage to progressively improve the accuracy of - the detected proposals. 
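`batched_nms` keeps proposals from different FPN levels independent by using the concatenated `level_ids` as its `idxs` argument: each box is shifted by `id * (max_coordinate + 1)` so boxes with different ids can never overlap, and a single plain NMS then runs over the shifted boxes. A hedged pure-Python sketch of that trick, using a simple greedy NMS rather than the torchvision kernel:

```python
def iou(a, b):
    # intersection-over-union of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def batched_nms(boxes, scores, ids, iou_thr):
    # offset every box by id * (max_coord + 1) so boxes with different
    # ids are disjoint, then run one greedy NMS over the shifted boxes
    max_coord = max(c for b in boxes for c in b) + 1.0
    shifted = [tuple(c + i * max_coord for c in b) for b, i in zip(boxes, ids)]
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(shifted[i], shifted[j]) <= iou_thr for j in keep):
            keep.append(i)
    return keep
```

Two identical boxes survive together when they carry different level ids, but one suppresses the other when the ids match.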
- - More details can be found in ``https://arxiv.org/abs/1909.06720``. - - Args: - num_stages (int): number of CascadeRPN stages. - stages (list[dict]): list of configs to build the stages. - train_cfg (list[dict]): list of configs at training time each stage. - test_cfg (dict): config at testing time. - """ - - def __init__(self, num_stages, stages, train_cfg, test_cfg): - super(CascadeRPNHead, self).__init__() - assert num_stages == len(stages) - self.num_stages = num_stages - self.stages = nn.ModuleList() - for i in range(len(stages)): - train_cfg_i = train_cfg[i] if train_cfg is not None else None - stages[i].update(train_cfg=train_cfg_i) - stages[i].update(test_cfg=test_cfg) - self.stages.append(build_head(stages[i])) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def init_weights(self): - """Init weight of CascadeRPN.""" - for i in range(self.num_stages): - self.stages[i].init_weights() - - def loss(self): - """loss() is implemented in StageCascadeRPNHead.""" - pass - - def get_bboxes(self): - """get_bboxes() is implemented in StageCascadeRPNHead.""" - pass - - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None): - """Forward train function.""" - assert gt_labels is None, 'RPN does not require gt_labels' - - featmap_sizes = [featmap.size()[-2:] for featmap in x] - device = x[0].device - anchor_list, valid_flag_list = self.stages[0].get_anchors( - featmap_sizes, img_metas, device=device) - - losses = dict() - - for i in range(self.num_stages): - stage = self.stages[i] - - if stage.adapt_cfg['type'] == 'offset': - offset_list = stage.anchor_offset(anchor_list, - stage.anchor_strides, - featmap_sizes) - else: - offset_list = None - x, cls_score, bbox_pred = stage(x, offset_list) - rpn_loss_inputs = (anchor_list, valid_flag_list, cls_score, - bbox_pred, gt_bboxes, img_metas) - stage_loss = stage.loss(*rpn_loss_inputs) - for name, value in stage_loss.items(): - 
losses['s{}.{}'.format(i, name)] = value - - # refine boxes - if i < self.num_stages - 1: - anchor_list = stage.refine_bboxes(anchor_list, bbox_pred, - img_metas) - if proposal_cfg is None: - return losses - else: - proposal_list = self.stages[-1].get_bboxes(anchor_list, cls_score, - bbox_pred, img_metas, - self.test_cfg) - return losses, proposal_list - - def simple_test_rpn(self, x, img_metas): - """Simple forward test function.""" - featmap_sizes = [featmap.size()[-2:] for featmap in x] - device = x[0].device - anchor_list, _ = self.stages[0].get_anchors( - featmap_sizes, img_metas, device=device) - - for i in range(self.num_stages): - stage = self.stages[i] - if stage.adapt_cfg['type'] == 'offset': - offset_list = stage.anchor_offset(anchor_list, - stage.anchor_strides, - featmap_sizes) - else: - offset_list = None - x, cls_score, bbox_pred = stage(x, offset_list) - if i < self.num_stages - 1: - anchor_list = stage.refine_bboxes(anchor_list, bbox_pred, - img_metas) - - proposal_list = self.stages[-1].get_bboxes(anchor_list, cls_score, - bbox_pred, img_metas, - self.test_cfg) - return proposal_list - - def aug_test_rpn(self, x, img_metas): - """Augmented forward test function.""" - raise NotImplementedError diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/rpn.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/rpn.py deleted file mode 100644 index 1a77294549d1c3dc7821063c3f3d08bb331fbe59..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/rpn.py +++ /dev/null @@ -1,154 +0,0 @@ -import mmcv -from mmcv.image import tensor2imgs - -from mmdet.core import bbox_mapping -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class RPN(BaseDetector): - """Implementation of Region Proposal Network.""" - - def __init__(self, - backbone, - neck, - rpn_head, - train_cfg, - test_cfg, - 
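The stage loop in `CascadeRPNHead.forward_train` above merges each stage's loss dict under an `s{i}.` prefix so losses from different stages stay distinguishable in training logs. The merging step in isolation:

```python
def prefix_stage_losses(stage_losses):
    # merge per-stage loss dicts under 's{i}.' keys, mirroring
    # losses['s{}.{}'.format(i, name)] = value in the stage loop
    merged = {}
    for i, losses in enumerate(stage_losses):
        for name, value in losses.items():
            merged['s{}.{}'.format(i, name)] = value
    return merged
```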
pretrained=None): - super(RPN, self).__init__() - self.backbone = build_backbone(backbone) - self.neck = build_neck(neck) if neck is not None else None - rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None - rpn_head.update(train_cfg=rpn_train_cfg) - rpn_head.update(test_cfg=test_cfg.rpn) - self.rpn_head = build_head(rpn_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.init_weights(pretrained=pretrained) - - def init_weights(self, pretrained=None): - """Initialize the weights in detector. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - super(RPN, self).init_weights(pretrained) - self.backbone.init_weights(pretrained=pretrained) - if self.with_neck: - self.neck.init_weights() - self.rpn_head.init_weights() - - def extract_feat(self, img): - """Extract features. - - Args: - img (torch.Tensor): Image tensor with shape (n, c, h ,w). - - Returns: - list[torch.Tensor]: Multi-level features that may have - different resolutions. - """ - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Dummy forward function.""" - x = self.extract_feat(img) - rpn_outs = self.rpn_head(x) - return rpn_outs - - def forward_train(self, - img, - img_metas, - gt_bboxes=None, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. 
- - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - if (isinstance(self.train_cfg.rpn, dict) - and self.train_cfg.rpn.get('debug', False)): - self.rpn_head.debug_imgs = tensor2imgs(img) - - x = self.extract_feat(img) - losses = self.rpn_head.forward_train(x, img_metas, gt_bboxes, None, - gt_bboxes_ignore) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[np.ndarray]: proposals - """ - x = self.extract_feat(img) - proposal_list = self.rpn_head.simple_test_rpn(x, img_metas) - if rescale: - for proposals, meta in zip(proposal_list, img_metas): - proposals[:, :4] /= proposals.new_tensor(meta['scale_factor']) - - return [proposal.cpu().numpy() for proposal in proposal_list] - - def aug_test(self, imgs, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[np.ndarray]: proposals - """ - proposal_list = self.rpn_head.aug_test_rpn( - self.extract_feats(imgs), img_metas) - if not rescale: - for proposals, img_meta in zip(proposal_list, img_metas[0]): - img_shape = img_meta['img_shape'] - scale_factor = img_meta['scale_factor'] - flip = img_meta['flip'] - flip_direction = img_meta['flip_direction'] - proposals[:, :4] = bbox_mapping(proposals[:, :4], img_shape, - scale_factor, flip, - flip_direction) - return [proposal.cpu().numpy() for proposal in proposal_list] - - def show_result(self, data, result, top_k=20, **kwargs): - """Show RPN proposals on the image. - - Args: - data (str or np.ndarray): Image filename or loaded image. 
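`simple_test` above maps proposals back to the original image by dividing the first four columns by the meta's `scale_factor`, which is laid out as `(w_scale, h_scale, w_scale, h_scale)`. A scalar sketch of that rescaling for a single `(x1, y1, x2, y2, score)` row:

```python
def rescale_proposal(proposal, scale_factor):
    # divide box coordinates by (w_scale, h_scale, w_scale, h_scale)
    # to map a proposal from the resized image back to the original;
    # the score column is left untouched
    x1, y1, x2, y2, score = proposal
    ws, hs = scale_factor[0], scale_factor[1]
    return (x1 / ws, y1 / hs, x2 / ws, y2 / hs, score)
```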
- result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - top_k (int): Plot the first k bboxes only - if set positive. Default: 20 - - Returns: - np.ndarray: The image with bboxes drawn on it. - """ - mmcv.imshow_bboxes(data, result, top_k=top_k) diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/model.py b/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/model.py deleted file mode 100644 index 5677da7ec2cebaa44c9328ece4873359f459426a..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/model.py +++ /dev/null @@ -1,935 +0,0 @@ -""" CLAP Model - -Adapted from CLIP: https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. -Adapted to the Audio Task. -""" - -from collections import OrderedDict -from dataclasses import dataclass -from email.mime import audio -from typing import Tuple, Union, Callable, Optional - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - -from .timm_model import TimmModel -import logging -from .utils import freeze_batch_norm_2d - -from .pann_model import create_pann_model -from .htsat import create_htsat_model -from transformers import BertModel, RobertaModel, BartModel, RobertaConfig -from transformers.tokenization_utils_base import BatchEncoding - - -class MLPLayers(nn.Module): - def __init__(self, units=[512, 512, 512], nonlin=nn.ReLU(), dropout=0.1): - super(MLPLayers, self).__init__() - self.nonlin = nonlin - self.dropout = dropout - - sequence = [] - for u0, u1 in zip(units[:-1], units[1:]): - sequence.append(nn.Linear(u0, u1)) - sequence.append(self.nonlin) - sequence.append(nn.Dropout(self.dropout)) - sequence = sequence[:-2] - - self.sequential = nn.Sequential(*sequence) - - def forward(self, X): - X = self.sequential(X) - return X - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1): - super().__init__() - - # all conv 
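`MLPLayers` above appends a `(Linear, nonlinearity, Dropout)` triple for every consecutive pair of unit sizes and then slices off the last two entries, so the final layer is a bare linear map with no trailing activation or dropout. A pure-Python sketch of just that layer-spec construction (`mlp_layer_spec` is an illustrative helper, not part of the file):

```python
def mlp_layer_spec(units, nonlin="relu", dropout=0.1):
    # build the (linear, nonlin, dropout) pattern per unit pair, then
    # drop the trailing nonlin + dropout, as `sequence[:-2]` does
    seq = []
    for u0, u1 in zip(units[:-1], units[1:]):
        seq += [("linear", u0, u1), ("nonlin", nonlin), ("dropout", dropout)]
    return seq[:-2]
```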
layers have stride 1. an avgpool is performed after the second convolution when stride > 1 - self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - - self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - - self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity() - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - - self.relu = nn.ReLU(inplace=True) - self.downsample = None - self.stride = stride - - if stride > 1 or inplanes != planes * Bottleneck.expansion: - # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1 - self.downsample = nn.Sequential( - OrderedDict( - [ - ("-1", nn.AvgPool2d(stride)), - ( - "0", - nn.Conv2d( - inplanes, - planes * self.expansion, - 1, - stride=1, - bias=False, - ), - ), - ("1", nn.BatchNorm2d(planes * self.expansion)), - ] - ) - ) - - def forward(self, x: torch.Tensor): - identity = x - - out = self.relu(self.bn1(self.conv1(x))) - out = self.relu(self.bn2(self.conv2(out))) - out = self.avgpool(out) - out = self.bn3(self.conv3(out)) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - return out - - -class AttentionPool2d(nn.Module): - def __init__( - self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None - ): - super().__init__() - self.positional_embedding = nn.Parameter( - torch.randn(spacial_dim**2 + 1, embed_dim) / embed_dim**0.5 - ) - self.k_proj = nn.Linear(embed_dim, embed_dim) - self.q_proj = nn.Linear(embed_dim, embed_dim) - self.v_proj = nn.Linear(embed_dim, embed_dim) - self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim) - self.num_heads = num_heads - - def forward(self, x): - x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute( - 2, 0, 1 - ) # NCHW -> (HW)NC - x = torch.cat([x.mean(dim=0, 
keepdim=True), x], dim=0) # (HW+1)NC - x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC - x, _ = F.multi_head_attention_forward( - query=x, - key=x, - value=x, - embed_dim_to_check=x.shape[-1], - num_heads=self.num_heads, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - in_proj_weight=None, - in_proj_bias=torch.cat( - [self.q_proj.bias, self.k_proj.bias, self.v_proj.bias] - ), - bias_k=None, - bias_v=None, - add_zero_attn=False, - dropout_p=0, - out_proj_weight=self.c_proj.weight, - out_proj_bias=self.c_proj.bias, - use_separate_proj_weight=True, - training=self.training, - need_weights=False, - ) - - return x[0] - - -class ModifiedResNet(nn.Module): - """ - A ResNet class that is similar to torchvision's but contains the following changes: - - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool. - - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 - - The final pooling layer is a QKV attention instead of an average pool - """ - - def __init__(self, layers, output_dim, heads, image_size=224, width=64): - super().__init__() - self.output_dim = output_dim - self.image_size = image_size - - # the 3-layer stem - self.conv1 = nn.Conv2d( - 3, width // 2, kernel_size=3, stride=2, padding=1, bias=False - ) - self.bn1 = nn.BatchNorm2d(width // 2) - self.conv2 = nn.Conv2d( - width // 2, width // 2, kernel_size=3, padding=1, bias=False - ) - self.bn2 = nn.BatchNorm2d(width // 2) - self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(width) - self.avgpool = nn.AvgPool2d(2) - self.relu = nn.ReLU(inplace=True) - - # residual layers - self._inplanes = width # this is a *mutable* variable used during construction - self.layer1 = self._make_layer(width, layers[0]) - self.layer2 = self._make_layer(width * 2, layers[1], stride=2) - self.layer3 = 
self._make_layer(width * 4, layers[2], stride=2) - self.layer4 = self._make_layer(width * 8, layers[3], stride=2) - - embed_dim = width * 32 # the ResNet feature dimension - self.attnpool = AttentionPool2d(image_size // 32, embed_dim, heads, output_dim) - - self.init_parameters() - - def _make_layer(self, planes, blocks, stride=1): - layers = [Bottleneck(self._inplanes, planes, stride)] - - self._inplanes = planes * Bottleneck.expansion - for _ in range(1, blocks): - layers.append(Bottleneck(self._inplanes, planes)) - - return nn.Sequential(*layers) - - def init_parameters(self): - if self.attnpool is not None: - std = self.attnpool.c_proj.in_features**-0.5 - nn.init.normal_(self.attnpool.q_proj.weight, std=std) - nn.init.normal_(self.attnpool.k_proj.weight, std=std) - nn.init.normal_(self.attnpool.v_proj.weight, std=std) - nn.init.normal_(self.attnpool.c_proj.weight, std=std) - - for resnet_block in [self.layer1, self.layer2, self.layer3, self.layer4]: - for name, param in resnet_block.named_parameters(): - if name.endswith("bn3.weight"): - nn.init.zeros_(param) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - assert ( - unlocked_groups == 0 - ), "partial locking not currently supported for this model" - for param in self.parameters(): - param.requires_grad = False - if freeze_bn_stats: - freeze_batch_norm_2d(self) - - def stem(self, x): - for conv, bn in [ - (self.conv1, self.bn1), - (self.conv2, self.bn2), - (self.conv3, self.bn3), - ]: - x = self.relu(bn(conv(x))) - x = self.avgpool(x) - return x - - def forward(self, x): - x = self.stem(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.attnpool(x) - - return x - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) - return x.to(orig_type) - - -class QuickGELU(nn.Module): - # 
NOTE This is slower than nn.GELU or nn.SiLU and uses more GPU memory - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, act_layer: Callable = nn.GELU): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential( - OrderedDict( - [ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", act_layer()), - ("c_proj", nn.Linear(d_model * 4, d_model)), - ] - ) - ) - self.ln_2 = LayerNorm(d_model) - - def attention(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - return self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0] - - def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - x = x + self.attention(self.ln_1(x), attn_mask=attn_mask) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__( - self, width: int, layers: int, heads: int, act_layer: Callable = nn.GELU - ): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.ModuleList( - [ - ResidualAttentionBlock(width, heads, act_layer=act_layer) - for _ in range(layers) - ] - ) - - def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - for r in self.resblocks: - x = r(x, attn_mask=attn_mask) - return x - - -class VisualTransformer(nn.Module): - def __init__( - self, - image_size: int, - patch_size: int, - width: int, - layers: int, - heads: int, - output_dim: int, - act_layer: Callable = nn.GELU, - ): - super().__init__() - self.image_size = image_size - self.output_dim = output_dim - self.conv1 = nn.Conv2d( - in_channels=3, - out_channels=width, - kernel_size=patch_size, - stride=patch_size, - bias=False, - ) - - scale = width**-0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter( - scale * torch.randn((image_size // 
patch_size) ** 2 + 1, width) - ) - self.ln_pre = LayerNorm(width) - - self.text_branch = Transformer(width, layers, heads, act_layer=act_layer) - - self.ln_post = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - assert ( - unlocked_groups == 0 - ), "partial locking not currently supported for this model" - for param in self.parameters(): - param.requires_grad = False - - def forward(self, x: torch.Tensor): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - x = torch.cat( - [ - self.class_embedding.to(x.dtype) - + torch.zeros( - x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device - ), - x, - ], - dim=1, - ) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - x = self.text_branch(x) - x = x.permute(1, 0, 2) # LND -> NLD - - x = self.ln_post(x[:, 0, :]) - - if self.proj is not None: - x = x @ self.proj - - return x - - -@dataclass -class CLAPVisionCfg: - layers: Union[Tuple[int, int, int, int], int] = 12 - width: int = 768 - patch_size: int = 16 - image_size: Union[Tuple[int, int], int] = 224 - timm_model_name: str = ( - None # a valid model name overrides layers, width, patch_size - ) - timm_model_pretrained: bool = ( - False # use (imagenet) pretrained weights for named model - ) - timm_pool: str = ( - "avg" # feature pooling for timm model ('abs_attn', 'rot_attn', 'avg', '') - ) - timm_proj: str = ( - "linear" # linear projection for timm model output ('linear', 'mlp', '') - ) - - -# Audio Config Class -@dataclass -class CLAPAudioCfp: - model_type: str = "PANN" - model_name: str = "Cnn14" - sample_rate: int = 48000 - # Param - audio_length: int = 1024 - window_size: int = 1024 - hop_size: int = 1024 - fmin: int = 50 - fmax: int = 
14000 - class_num: int = 527 - mel_bins: int = 64 - clip_samples: int = 480000 - - -@dataclass -class CLAPTextCfg: - context_length: int - vocab_size: int - width: int - heads: int - layers: int - model_type: str - - -class CLAP(nn.Module): - def __init__( - self, - embed_dim: int, - audio_cfg: CLAPAudioCfp, - text_cfg: CLAPTextCfg, - quick_gelu: bool = False, - enable_fusion: bool = False, - fusion_type: str = "None", - joint_embed_shape: int = 512, - mlp_act: str = "relu", - ): - super().__init__() - if isinstance(audio_cfg, dict): - audio_cfg = CLAPAudioCfp(**audio_cfg) - if isinstance(text_cfg, dict): - text_cfg = CLAPTextCfg(**text_cfg) - - self.audio_cfg = audio_cfg - self.text_cfg = text_cfg - self.enable_fusion = enable_fusion - self.fusion_type = fusion_type - self.joint_embed_shape = joint_embed_shape - self.mlp_act = mlp_act - - self.context_length = text_cfg.context_length - - # OpenAI models are pretrained w/ QuickGELU but native nn.GELU is both faster and more - # memory efficient in recent PyTorch releases (>= 1.10). - # NOTE: timm models always use native GELU regardless of quick_gelu flag. 
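The `quick_gelu` flag discussed above selects `QuickGELU`, OpenAI CLIP's sigmoid approximation `x * sigmoid(1.702 * x)`, which closely tracks the exact Gaussian-CDF GELU (though, as the class comment notes, it is slower than `nn.GELU` in recent PyTorch). A scalar comparison of the two:

```python
import math

def quick_gelu(x):
    # sigmoid approximation used by OpenAI CLIP: x * sigmoid(1.702 * x)
    return x / (1.0 + math.exp(-1.702 * x))

def exact_gelu(x):
    # exact GELU: x * Phi(x), with Phi the standard normal CDF
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Around typical pre-activation magnitudes the two agree to within a few thousandths, which is why checkpoints trained with one can usually tolerate the other only if the flag is matched.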
- act_layer = QuickGELU if quick_gelu else nn.GELU - - if mlp_act == "relu": - mlp_act_layer = nn.ReLU() - elif mlp_act == "gelu": - mlp_act_layer = nn.GELU() - else: - raise NotImplementedError - - # audio branch - # audio branch parameters - if audio_cfg.model_type == "PANN": - self.audio_branch = create_pann_model(audio_cfg, enable_fusion, fusion_type) - elif audio_cfg.model_type == "HTSAT": - self.audio_branch = create_htsat_model( - audio_cfg, enable_fusion, fusion_type - ) - else: - logging.error(f"Model config for {audio_cfg.model_type} not found") - raise RuntimeError(f"Model config for {audio_cfg.model_type} not found.") - - # text branch - # text branch parameters - if text_cfg.model_type == "transformer": - self.text_branch = Transformer( - width=text_cfg.width, - layers=text_cfg.layers, - heads=text_cfg.heads, - act_layer=act_layer, - ) - self.vocab_size = text_cfg.vocab_size - self.token_embedding = nn.Embedding(text_cfg.vocab_size, text_cfg.width) - self.positional_embedding = nn.Parameter( - torch.empty(self.context_length, text_cfg.width) - ) - self.ln_final = LayerNorm(text_cfg.width) - self.text_transform = MLPLayers( - units=[ - self.joint_embed_shape, - self.joint_embed_shape, - self.joint_embed_shape, - ], - dropout=0.1, - ) - self.text_projection = nn.Sequential( - nn.Linear(text_cfg.width, self.joint_embed_shape), - mlp_act_layer, - nn.Linear(self.joint_embed_shape, self.joint_embed_shape), - ) - elif text_cfg.model_type == "bert": - self.text_branch = BertModel.from_pretrained("bert-base-uncased") - self.text_transform = MLPLayers( - units=[ - self.joint_embed_shape, - self.joint_embed_shape, - self.joint_embed_shape, - ], - dropout=0.1, - ) - self.text_projection = nn.Sequential( - nn.Linear(768, self.joint_embed_shape), - mlp_act_layer, - nn.Linear(self.joint_embed_shape, self.joint_embed_shape), - ) - elif text_cfg.model_type == "roberta": - self.text_branch = RobertaModel.from_pretrained("roberta-base") - - self.text_transform = 
MLPLayers( - units=[ - self.joint_embed_shape, - self.joint_embed_shape, - self.joint_embed_shape, - ], - dropout=0.1, - ) - self.text_projection = nn.Sequential( - nn.Linear(768, self.joint_embed_shape), - mlp_act_layer, - nn.Linear(self.joint_embed_shape, self.joint_embed_shape), - ) - elif text_cfg.model_type == "bart": - self.text_branch = BartModel.from_pretrained("facebook/bart-base") - self.text_transform = MLPLayers( - units=[ - self.joint_embed_shape, - self.joint_embed_shape, - self.joint_embed_shape, - ], - dropout=0.1, - ) - self.text_projection = nn.Sequential( - nn.Linear(768, self.joint_embed_shape), - mlp_act_layer, - nn.Linear(self.joint_embed_shape, self.joint_embed_shape), - ) - else: - logging.error(f"Model config for {text_cfg.model_type} not found") - raise RuntimeError(f"Model config for {text_cfg.model_type} not found.") - self.text_branch_type = text_cfg.model_type - # text branch parameters - - # audio branch parameters - self.audio_transform = MLPLayers( - units=[ - self.joint_embed_shape, - self.joint_embed_shape, - self.joint_embed_shape, - ], - dropout=0.1, - ) - - # below here is text branch parameters - - # ============================================================================================================ - self.audio_projection = nn.Sequential( - nn.Linear(embed_dim, self.joint_embed_shape), - mlp_act_layer, - nn.Linear(self.joint_embed_shape, self.joint_embed_shape), - ) - - self.logit_scale_a = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - self.logit_scale_t = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - self.register_buffer("attn_mask", self.build_attention_mask(), persistent=False) - - self.init_text_branch_parameters() - - def init_text_branch_parameters(self): - if self.text_branch_type == "transformer": - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - proj_std = (self.text_branch.width**-0.5) * ( - (2 * self.text_branch.layers) ** -0.5 - ) - 
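`init_text_branch_parameters` above uses GPT-2-style scaled initialization for the `transformer` text branch: attention weights shrink with width, and residual projections additionally shrink with depth so the residual stream's variance stays controlled. The three standard deviations in isolation (`init_stds` is an illustrative helper):

```python
def init_stds(width, layers):
    # scaled-init stds used for the text transformer:
    #   attn_std: 1/sqrt(width) for in-projections
    #   fc_std:   1/sqrt(2*width) for the MLP expansion
    #   proj_std: attn_std additionally scaled by 1/sqrt(2*layers)
    #             for residual out-projections
    proj_std = (width ** -0.5) * ((2 * layers) ** -0.5)
    attn_std = width ** -0.5
    fc_std = (2 * width) ** -0.5
    return proj_std, attn_std, fc_std
```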
attn_std = self.text_branch.width**-0.5 - fc_std = (2 * self.text_branch.width) ** -0.5 - for block in self.text_branch.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - if self.text_branch_type == "bert" or self.text_branch_type == "roberta": - width = self.text_branch.embeddings.word_embeddings.weight.shape[-1] - elif self.text_branch_type == "bart": - width = self.text_branch.shared.weight.shape[-1] - else: - width = self.text_branch.width - nn.init.constant_(self.logit_scale_a, np.log(1 / 0.07)) - nn.init.constant_(self.logit_scale_t, np.log(1 / 0.07)) - - # deprecated - # if hasattr(self.visual, 'init_parameters'): - # self.visual.init_parameters() - - # if self.text_projection is not None: - # nn.init.normal_(self.text_projection, std=width**-0.5) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - def encode_audio(self, audio, device): - return self.audio_branch( - audio, mixup_lambda=None, device=device - ) # mix lambda needs to add - - # def list_of_dict_of_tensor2dict_of_tensor(self, x, device): - # tmp = {} - # for k in x[0].keys(): - # tmp[k] = [] - # for i in range(len(x)): - # tmp[k].append(x[i][k][:77]) - # for k in x[0].keys(): - # tmp[k] = torch.tensor(tmp[k]).to(device=device, non_blocking=True) - # return tmp - - def encode_text(self, text, device): - if self.text_branch_type == "transformer": - text = text.to(device=device, non_blocking=True) - x = self.token_embedding(text) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding - x = x.permute(1, 0, 2) # NLD -> LND 
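`build_attention_mask` above creates an additive causal mask: zeros on and below the diagonal, `-inf` strictly above it, so each text token can attend only to itself and earlier positions (PyTorch attention adds the mask to the logits before the softmax). A pure-Python equivalent of `mask.fill_(float('-inf'))` followed by `mask.triu_(1)`:

```python
def build_attention_mask(n):
    # additive causal mask: 0.0 on/below the diagonal, -inf above it
    neg_inf = float("-inf")
    return [[0.0 if j <= i else neg_inf for j in range(n)] for i in range(n)]
```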
- x = self.text_branch(x, attn_mask=self.attn_mask) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = self.text_projection(x[torch.arange(x.shape[0]), text.argmax(dim=-1)]) - elif self.text_branch_type == "bert": - # text = self.list_of_dict_of_tensor2dict_of_tensor(text, device) - # text = BatchEncoding(text) - x = self.text_branch( - input_ids=text["input_ids"].to(device=device, non_blocking=True), - attention_mask=text["attention_mask"].to( - device=device, non_blocking=True - ), - token_type_ids=text["token_type_ids"].to( - device=device, non_blocking=True - ), - )["pooler_output"] - x = self.text_projection(x) - elif self.text_branch_type == "roberta": - x = self.text_branch( - input_ids=text["input_ids"].to(device=device, non_blocking=True), - attention_mask=text["attention_mask"].to( - device=device, non_blocking=True - ), - )["pooler_output"] - x = self.text_projection(x) - elif self.text_branch_type == "bart": - x = torch.mean( - self.text_branch( - input_ids=text["input_ids"].to(device=device, non_blocking=True), - attention_mask=text["attention_mask"].to( - device=device, non_blocking=True - ), - )["encoder_last_hidden_state"], - axis=1, - ) - x = self.text_projection(x) - else: - logging.error(f"Model type {self.text_branch_type} not found") - raise RuntimeError(f"Model type {self.text_branch_type} not found.") - return x - - def forward(self, audio, text, device=None): - """Forward audio and text into the CLAP - - Parameters - ---------- - audio: torch.Tensor (batch_size, audio_length) - the time-domain audio input / the batch of mel_spec and longer list. 
- text: torch.Tensor () // need to add - the text token input - """ - if device is None: - if audio is not None: - device = audio.device - elif text is not None: - device = text.device - if audio is None and text is None: - # a hack to get the logit scale - return self.logit_scale_a.exp(), self.logit_scale_t.exp() - elif audio is None: - return self.encode_text(text, device=device) - elif text is None: - return self.audio_projection( - self.encode_audio(audio, device=device)["embedding"] - ) - audio_features = self.audio_projection( - self.encode_audio(audio, device=device)["embedding"] - ) - audio_features = F.normalize(audio_features, dim=-1) - - text_features = self.encode_text(text, device=device) - # print("text_features", text_features) - # print("text_features.shape", text_features.shape) - # print("text_features.type", type(text_features)) - text_features = F.normalize(text_features, dim=-1) - - audio_features_mlp = self.audio_transform(audio_features) - text_features_mlp = self.text_transform(text_features) - # Four outputs: audio features (basic & MLP), text features (basic & MLP) - return ( - audio_features, - text_features, - audio_features_mlp, - text_features_mlp, - self.logit_scale_a.exp(), - self.logit_scale_t.exp(), - ) - - def get_logit_scale(self): - return self.logit_scale_a.exp(), self.logit_scale_t.exp() - - def get_text_embedding(self, data): - """Get the text embedding from the model - - Parameters - ---------- - data: torch.Tensor - a tensor of text embedding - - Returns - ---------- - text_embed: torch.Tensor - a tensor of text_embeds (N, D) - - """ - device = next(self.parameters()).device - for k in data: - data[k] = data[k].to(device) - text_embeds = self.encode_text(data, device=device) - text_embeds = F.normalize(text_embeds, dim=-1) - - return text_embeds - - def get_audio_embedding(self, data): - """Get the audio embedding from the model - - Parameters - ---------- - data: a list of dict - the audio input dict list from 
'get_audio_feature' method
-
-        Returns
-        ----------
-        audio_embed: torch.Tensor
-            a tensor of audio_embeds (N, D)
-
-        """
-        device = next(self.parameters()).device
-        input_dict = {}
-        keys = data[0].keys()
-        for k in keys:
-            input_dict[k] = torch.cat([d[k].unsqueeze(0) for d in data], dim=0).to(
-                device
-            )
-
-        audio_embeds = self.audio_projection(
-            self.encode_audio(input_dict, device=device)["embedding"]
-        )
-        audio_embeds = F.normalize(audio_embeds, dim=-1)
-
-        return audio_embeds
-
-    def audio_infer(self, audio, hopsize=None, device=None):
-        """Forward one audio and produce the audio embedding
-
-        Parameters
-        ----------
-        audio: (audio_length)
-            the time-domain audio input, notice that it must be only one input
-        hopsize: int
-            the overlap hopsize as the sliding window
-
-        Returns
-        ----------
-        output_dict: {
-            key: [n, (embedding_shape)] if "HTS-AT"
-            or
-            key: [(embedding_shape)] if "PANN"
-        }
-            the list of key values of the audio branch
-
-        """
-
-        assert not self.training, "the inference mode must be run at eval stage"
-        output_dict = {}
-        # NOTE: the original code left `key` undefined here (a NameError at
-        # runtime); "embedding" matches the key used everywhere else on the
-        # output of encode_audio and is assumed to be the intended one.
-        key = "embedding"
-        # PANN
-        if self.audio_cfg.model_type == "PANN":
-            audio_input = audio.unsqueeze(dim=0)
-            output_dict[key] = self.encode_audio(audio_input, device=device)[
-                key
-            ].squeeze(dim=0)
-        elif self.audio_cfg.model_type == "HTSAT":
-            # repeat
-            audio_len = len(audio)
-            k = self.audio_cfg.clip_samples // audio_len
-            if k > 1:
-                audio = audio.repeat(k)
-                audio_len = len(audio)
-
-            if hopsize is None:
-                # default to non-overlapping windows; the original
-                # `min(hopsize, audio_len)` on a None hopsize raises TypeError
-                hopsize = self.audio_cfg.clip_samples
-            hopsize = min(hopsize, audio_len)
-
-            if audio_len > self.audio_cfg.clip_samples:
-                audio_input = [
-                    audio[pos : pos + self.audio_cfg.clip_samples].clone()
-                    for pos in range(
-                        0, audio_len - self.audio_cfg.clip_samples, hopsize
-                    )
-                ]
-                audio_input.append(audio[-self.audio_cfg.clip_samples :].clone())
-                audio_input = torch.stack(audio_input)
-                output_dict[key] = self.encode_audio(audio_input, device=device)[key]
-            else:
-                audio_input = audio.unsqueeze(dim=0)
-                output_dict[key] = self.encode_audio(audio_input,
device=device)[ - key - ].squeeze(dim=0) - - return output_dict - - -def convert_weights_to_fp16(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - if isinstance(l, nn.MultiheadAttention): - for attr in [ - *[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], - "in_proj_bias", - "bias_k", - "bias_v", - ]: - tensor = getattr(l, attr) - if tensor is not None: - tensor.data = tensor.data.half() - - for name in ["text_projection", "proj"]: - if hasattr(l, name): - attr = getattr(l, name) - if attr is not None: - attr.data = attr.data.half() - - model.apply(_convert_weights_to_fp16) - - -# Ignore the state dict of the vision part -def build_model_from_openai_state_dict( - state_dict: dict, model_cfg, enable_fusion: bool = False, fusion_type: str = "None" -): - - embed_dim = model_cfg["embed_dim"] - audio_cfg = model_cfg["audio_cfg"] - text_cfg = model_cfg["text_cfg"] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len( - set( - k.split(".")[2] - for k in state_dict - if k.startswith(f"transformer.resblocks") - ) - ) - - audio_cfg = CLAPAudioCfp(**audio_cfg) - text_cfg = CLAPTextCfg(**text_cfg) - - model = CLAP( - embed_dim, - audio_cfg=audio_cfg, - text_cfg=text_cfg, - quick_gelu=True, # OpenAI models were trained with QuickGELU - enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - state_dict["logit_scale_a"] = state_dict["logit_scale"] - state_dict["logit_scale_t"] = state_dict["logit_scale"] - pop_keys = list(state_dict.keys())[::] - # pop the visual branch saved weights - for key in pop_keys: - if key.startswith("visual."): - 
state_dict.pop(key, None) - - for key in ["logit_scale", "input_resolution", "context_length", "vocab_size"]: - state_dict.pop(key, None) - - # not use fp16 - # convert_weights_to_fp16(model) - model.load_state_dict(state_dict, strict=False) - return model.eval() - - -def trace_model(model, batch_size=256, device=torch.device("cpu")): - model.eval() - audio_length = model.audio_cfg.audio_length - example_audio = torch.ones((batch_size, audio_length), device=device) - example_text = torch.zeros( - (batch_size, model.context_length), dtype=torch.int, device=device - ) - model = torch.jit.trace_module( - model, - inputs=dict( - forward=(example_audio, example_text), - encode_text=(example_text,), - encode_image=(example_audio,), - ), - ) - model.audio_cfg.audio_length = audio_length # Question: what does this do? - return model diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/lr_scheduler.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/lr_scheduler.py deleted file mode 100644 index 8803e87b9e60cffdbe048c97c282d353191ae4c8..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/solver/lr_scheduler.py +++ /dev/null @@ -1,238 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import math -from bisect import bisect_right -from typing import List -import torch -from fvcore.common.param_scheduler import ( - CompositeParamScheduler, - ConstantParamScheduler, - LinearParamScheduler, - ParamScheduler, -) - -logger = logging.getLogger(__name__) - - -class WarmupParamScheduler(CompositeParamScheduler): - """ - Add an initial warmup stage to another scheduler. 
- """ - - def __init__( - self, - scheduler: ParamScheduler, - warmup_factor: float, - warmup_length: float, - warmup_method: str = "linear", - ): - """ - Args: - scheduler: warmup will be added at the beginning of this scheduler - warmup_factor: the factor w.r.t the initial value of ``scheduler``, e.g. 0.001 - warmup_length: the relative length (in [0, 1]) of warmup steps w.r.t the entire - training, e.g. 0.01 - warmup_method: one of "linear" or "constant" - """ - end_value = scheduler(warmup_length) # the value to reach when warmup ends - start_value = warmup_factor * scheduler(0.0) - if warmup_method == "constant": - warmup = ConstantParamScheduler(start_value) - elif warmup_method == "linear": - warmup = LinearParamScheduler(start_value, end_value) - else: - raise ValueError("Unknown warmup method: {}".format(warmup_method)) - super().__init__( - [warmup, scheduler], - interval_scaling=["rescaled", "fixed"], - lengths=[warmup_length, 1 - warmup_length], - ) - - -class LRMultiplier(torch.optim.lr_scheduler._LRScheduler): - """ - A LRScheduler which uses fvcore :class:`ParamScheduler` to multiply the - learning rate of each param in the optimizer. - Every step, the learning rate of each parameter becomes its initial value - multiplied by the output of the given :class:`ParamScheduler`. - - The absolute learning rate value of each parameter can be different. - This scheduler can be used as long as the relative scale among them do - not change during training. - - Examples: - :: - LRMultiplier( - opt, - WarmupParamScheduler( - MultiStepParamScheduler( - [1, 0.1, 0.01], - milestones=[60000, 80000], - num_updates=90000, - ), 0.001, 100 / 90000 - ), - max_iter=90000 - ) - """ - - # NOTES: in the most general case, every LR can use its own scheduler. - # Supporting this requires interaction with the optimizer when its parameter - # group is initialized. 
For example, classyvision implements its own optimizer - # that allows different schedulers for every parameter group. - # To avoid this complexity, we use this class to support the most common cases - # where the relative scale among all LRs stay unchanged during training. In this - # case we only need a total of one scheduler that defines the relative LR multiplier. - - def __init__( - self, - optimizer: torch.optim.Optimizer, - multiplier: ParamScheduler, - max_iter: int, - last_iter: int = -1, - ): - """ - Args: - optimizer, last_iter: See ``torch.optim.lr_scheduler._LRScheduler``. - ``last_iter`` is the same as ``last_epoch``. - multiplier: a fvcore ParamScheduler that defines the multiplier on - every LR of the optimizer - max_iter: the total number of training iterations - """ - if not isinstance(multiplier, ParamScheduler): - raise ValueError( - "_LRMultiplier(multiplier=) must be an instance of fvcore " - f"ParamScheduler. Got {multiplier} instead." - ) - self._multiplier = multiplier - self._max_iter = max_iter - super().__init__(optimizer, last_epoch=last_iter) - - def state_dict(self): - # fvcore schedulers are stateless. Only keep pytorch scheduler states - return {"base_lrs": self.base_lrs, "last_epoch": self.last_epoch} - - def get_lr(self) -> List[float]: - multiplier = self._multiplier(self.last_epoch / self._max_iter) - return [base_lr * multiplier for base_lr in self.base_lrs] - - -""" -Content below is no longer needed! -""" - -# NOTE: PyTorch's LR scheduler interface uses names that assume the LR changes -# only on epoch boundaries. We typically use iteration based schedules instead. -# As a result, "epoch" (e.g., as in self.last_epoch) should be understood to mean -# "iteration" instead. - -# FIXME: ideally this would be achieved with a CombinedLRScheduler, separating -# MultiStepLR with WarmupLR but the current LRScheduler design doesn't allow it. 
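The warmup behavior described in the comments above — a multiplier that ramps the base learning rate up over a warmup window and equals 1 afterwards — can be sketched as a standalone function. This is a minimal illustration mirroring the logic of `_get_warmup_factor_at_iter` below, not the fvcore `ParamScheduler` API itself:

```python
def get_warmup_factor(method: str, it: int, warmup_iters: int, warmup_factor: float) -> float:
    """Warmup multiplier at iteration `it` (linear or constant warmup)."""
    if it >= warmup_iters:
        return 1.0  # warmup finished: no scaling
    if method == "constant":
        return warmup_factor
    if method == "linear":
        alpha = it / warmup_iters  # ramps 0 -> 1 across the warmup window
        return warmup_factor * (1 - alpha) + alpha
    raise ValueError(f"Unknown warmup method: {method}")


# Multiplying a base LR by the factor reproduces the warmup ramp:
base_lr = 0.1
ramp = [base_lr * get_warmup_factor("linear", it, 1000, 0.001) for it in (0, 500, 1000)]
# starts near base_lr * warmup_factor, reaches base_lr once it >= warmup_iters
```

An `LRMultiplier`-style scheduler then only needs to multiply each parameter group's base LR by such a factor every step.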
-
-
-class WarmupMultiStepLR(torch.optim.lr_scheduler._LRScheduler):
-    def __init__(
-        self,
-        optimizer: torch.optim.Optimizer,
-        milestones: List[int],
-        gamma: float = 0.1,
-        warmup_factor: float = 0.001,
-        warmup_iters: int = 1000,
-        warmup_method: str = "linear",
-        last_epoch: int = -1,
-    ):
-        logger.warning(
-            "WarmupMultiStepLR is deprecated! Use LRMultiplier with fvcore ParamScheduler instead!"
-        )
-        if not list(milestones) == sorted(milestones):
-            raise ValueError(
-                "Milestones should be a list of increasing integers. Got {}".format(milestones)
-            )
-        self.milestones = milestones
-        self.gamma = gamma
-        self.warmup_factor = warmup_factor
-        self.warmup_iters = warmup_iters
-        self.warmup_method = warmup_method
-        super().__init__(optimizer, last_epoch)
-
-    def get_lr(self) -> List[float]:
-        warmup_factor = _get_warmup_factor_at_iter(
-            self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor
-        )
-        return [
-            base_lr * warmup_factor * self.gamma ** bisect_right(self.milestones, self.last_epoch)
-            for base_lr in self.base_lrs
-        ]
-
-    def _compute_values(self) -> List[float]:
-        # The new interface
-        return self.get_lr()
-
-
-class WarmupCosineLR(torch.optim.lr_scheduler._LRScheduler):
-    def __init__(
-        self,
-        optimizer: torch.optim.Optimizer,
-        max_iters: int,
-        warmup_factor: float = 0.001,
-        warmup_iters: int = 1000,
-        warmup_method: str = "linear",
-        last_epoch: int = -1,
-    ):
-        logger.warning(
-            "WarmupCosineLR is deprecated! Use LRMultiplier with fvcore ParamScheduler instead!"
-        )
-        self.max_iters = max_iters
-        self.warmup_factor = warmup_factor
-        self.warmup_iters = warmup_iters
-        self.warmup_method = warmup_method
-        super().__init__(optimizer, last_epoch)
-
-    def get_lr(self) -> List[float]:
-        warmup_factor = _get_warmup_factor_at_iter(
-            self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor
-        )
-        # Different definitions of half-cosine with warmup are possible.
For - # simplicity we multiply the standard half-cosine schedule by the warmup - # factor. An alternative is to start the period of the cosine at warmup_iters - # instead of at 0. In the case that warmup_iters << max_iters the two are - # very close to each other. - return [ - base_lr - * warmup_factor - * 0.5 - * (1.0 + math.cos(math.pi * self.last_epoch / self.max_iters)) - for base_lr in self.base_lrs - ] - - def _compute_values(self) -> List[float]: - # The new interface - return self.get_lr() - - -def _get_warmup_factor_at_iter( - method: str, iter: int, warmup_iters: int, warmup_factor: float -) -> float: - """ - Return the learning rate warmup factor at a specific iteration. - See :paper:`ImageNet in 1h` for more details. - - Args: - method (str): warmup method; either "constant" or "linear". - iter (int): iteration at which to calculate the warmup factor. - warmup_iters (int): the number of warmup iterations. - warmup_factor (float): the base warmup factor (the meaning changes according - to the method used). - - Returns: - float: the effective warmup factor at the given iteration. - """ - if iter >= warmup_iters: - return 1.0 - - if method == "constant": - return warmup_factor - elif method == "linear": - alpha = iter / warmup_iters - return warmup_factor * (1 - alpha) + alpha - else: - raise ValueError("Unknown warmup method: {}".format(method)) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/_static/css/custom.css b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/_static/css/custom.css deleted file mode 100644 index 6c511764cf4c1d55a227619a98e5ba6578619ad7..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/_static/css/custom.css +++ /dev/null @@ -1,30 +0,0 @@ -/* - * Copyright (c) Facebook, Inc. and its affiliates. 
- * some extra css to make markdown look similar between github/sphinx
- */
-
-/*
- * Below is for install.md:
- */
-.rst-content code {
-  white-space: pre;
-  border: 0px;
-}
-
-.rst-content th {
-  border: 1px solid #e1e4e5;
-}
-
-.rst-content th p {
-  /* otherwise will be default 24px for regular paragraph */
-  margin-bottom: 0px;
-}
-
-.rst-content .line-block {
-  /* otherwise will be 24px */
-  margin-bottom: 0px;
-}
-
-div.section > details {
-  padding-bottom: 1em;
-}
diff --git a/spaces/BREWDAcademy/Brewd-Diffusion/app.py b/spaces/BREWDAcademy/Brewd-Diffusion/app.py
deleted file mode 100644
index 38b90ecbb4cf516bad6c738b580b2dc932b6b6b2..0000000000000000000000000000000000000000
--- a/spaces/BREWDAcademy/Brewd-Diffusion/app.py
+++ /dev/null
@@ -1,391 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-import random
-
-import gradio as gr
-import numpy as np
-import PIL.Image
-import torch
-# DiffusionPipeline is needed for the refiner path below; the original import
-# line omitted it, which raises a NameError when ENABLE_REFINER is set
-from diffusers import AutoencoderKL, DiffusionPipeline, StableDiffusionXLPipeline
-import uuid
-
-DESCRIPTION = '''# BREWD Stable Diffusion: SSD-1B
-'''
-if not torch.cuda.is_available():
-    DESCRIPTION += "\nRunning on CPU 🥶 This demo does not work on CPU.
" - -MAX_SEED = np.iinfo(np.int32).max -CACHE_EXAMPLES = torch.cuda.is_available() and os.getenv("CACHE_EXAMPLES", "1") == "1" -MAX_IMAGE_SIZE = int(os.getenv("MAX_IMAGE_SIZE", "1024")) -USE_TORCH_COMPILE = os.getenv("USE_TORCH_COMPILE", "1") == "1" -ENABLE_CPU_OFFLOAD = os.getenv("ENABLE_CPU_OFFLOAD", "0") == "1" -ENABLE_REFINER = os.getenv("ENABLE_REFINER", "0") == "1" - -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - -style_list = [ - { - "name": "(No style)", - "prompt": "{prompt}", - "negative_prompt": "", - }, - { - "name": "Cinematic", - "prompt": "cinematic still {prompt} . emotional, harmonious, vignette, highly detailed, high budget, bokeh, cinemascope, moody, epic, gorgeous, film grain, grainy", - "negative_prompt": "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured", - }, - { - "name": "Photographic", - "prompt": "cinematic photo {prompt} . 35mm photograph, film, bokeh, professional, 4k, highly detailed", - "negative_prompt": "drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly", - }, - { - "name": "Anime", - "prompt": "anime artwork {prompt} . anime style, key visual, vibrant, studio anime, highly detailed", - "negative_prompt": "photo, deformed, black and white, realism, disfigured, low contrast", - }, - { - "name": "Manga", - "prompt": "manga style {prompt} . vibrant, high-energy, detailed, iconic, Japanese comic style", - "negative_prompt": "ugly, deformed, noisy, blurry, low contrast, realism, photorealistic, Western comic style", - }, - { - "name": "Digital Art", - "prompt": "concept art {prompt} . digital artwork, illustrative, painterly, matte painting, highly detailed", - "negative_prompt": "photo, photorealistic, realism, ugly", - }, - { - "name": "Pixel art", - "prompt": "pixel-art {prompt} . 
low-res, blocky, pixel art style, 8-bit graphics", - "negative_prompt": "sloppy, messy, blurry, noisy, highly detailed, ultra textured, photo, realistic", - }, - { - "name": "Fantasy art", - "prompt": "ethereal fantasy concept art of {prompt} . magnificent, celestial, ethereal, painterly, epic, majestic, magical, fantasy art, cover art, dreamy", - "negative_prompt": "photographic, realistic, realism, 35mm film, dslr, cropped, frame, text, deformed, glitch, noise, noisy, off-center, deformed, cross-eyed, closed eyes, bad anatomy, ugly, disfigured, sloppy, duplicate, mutated, black and white", - }, - { - "name": "Neonpunk", - "prompt": "neonpunk style {prompt} . cyberpunk, vaporwave, neon, vibes, vibrant, stunningly beautiful, crisp, detailed, sleek, ultramodern, magenta highlights, dark purple shadows, high contrast, cinematic, ultra detailed, intricate, professional", - "negative_prompt": "painting, drawing, illustration, glitch, deformed, mutated, cross-eyed, ugly, disfigured", - }, - { - "name": "3D Model", - "prompt": "professional 3d model {prompt} . 
octane render, highly detailed, volumetric, dramatic lighting", - "negative_prompt": "ugly, deformed, noisy, low poly, blurry, painting", - }, -] - -styles = {k["name"]: (k["prompt"], k["negative_prompt"]) for k in style_list} -STYLE_NAMES = list(styles.keys()) -DEFAULT_STYLE_NAME = "Cinematic" - - -def apply_style(style_name: str, positive: str, negative: str = "") -> Tuple[str, str]: - p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME]) - if not negative: - negative = "" - return p.replace("{prompt}", positive), n + negative - - -if torch.cuda.is_available(): - vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) - pipe = StableDiffusionXLPipeline.from_pretrained( - "segmind/SSD-1B", - vae=vae, - torch_dtype=torch.float16, - use_safetensors=True, - variant="fp16", - ) - if ENABLE_REFINER: - refiner = DiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-xl-refiner-1.0", - vae=vae, - torch_dtype=torch.float16, - use_safetensors=True, - variant="fp16", - ) - - if ENABLE_CPU_OFFLOAD: - pipe.enable_model_cpu_offload() - if ENABLE_REFINER: - refiner.enable_model_cpu_offload() - else: - pipe.to(device) - if ENABLE_REFINER: - refiner.to(device) - print("Loaded on Device!") - - if USE_TORCH_COMPILE: - pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) - if ENABLE_REFINER: - refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) - print("Model Compiled!") - -def save_image(img): - unique_name = str(uuid.uuid4()) + '.png' - img.save(unique_name) - return unique_name - -def randomize_seed_fn(seed: int, randomize_seed: bool) -> int: - if randomize_seed: - seed = random.randint(0, MAX_SEED) - return seed - -def generate( - prompt: str, - negative_prompt: str = "", - style: str = DEFAULT_STYLE_NAME, - prompt_2: str = "", - negative_prompt_2: str = "", - use_negative_prompt: bool = False, - use_prompt_2: bool = False, - use_negative_prompt_2: bool = False, - seed: 
int = 0, - width: int = 1024, - height: int = 1024, - guidance_scale_base: float = 5.0, - guidance_scale_refiner: float = 5.0, - num_inference_steps_base: int = 25, - num_inference_steps_refiner: int = 25, - apply_refiner: bool = False, - randomize_seed: bool = False, - progress = gr.Progress(track_tqdm=True) -): - seed = randomize_seed_fn(seed, randomize_seed) - generator = torch.Generator().manual_seed(seed) - - if not use_negative_prompt: - negative_prompt = None # type: ignore - if not use_prompt_2: - prompt_2 = None # type: ignore - if not use_negative_prompt_2: - negative_prompt_2 = None # type: ignore - prompt, negative_prompt = apply_style(style, prompt, negative_prompt) - if not apply_refiner: - image = pipe( - prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - width=width, - height=height, - guidance_scale=guidance_scale_base, - num_inference_steps=num_inference_steps_base, - generator=generator, - output_type="pil", - ).images[0] - else: - latents = pipe( - prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - width=width, - height=height, - guidance_scale=guidance_scale_base, - num_inference_steps=num_inference_steps_base, - generator=generator, - output_type="latent", - ).images - image = refiner( - prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - guidance_scale=guidance_scale_refiner, - num_inference_steps=num_inference_steps_refiner, - image=latents, - generator=generator, - ).images[0] - - image_path = save_image(image) - print(image_path) - return [image_path], seed - -examples = [ - '3D digital art of a playful squirrel with oversized glasses reading a book, surrounded by autumn leaves, serene, natural background', - 'A fluffy bunny wearing a flower crown, hopping through a vibrant meadow, with a soft, colorful, and peaceful scenery', - 'Professional portrait photo of 
a whimsical owl wearing a detective hat, perched on a branch, investigating the forest mysteries, under the moonlight', - 'A curious fox exploring a quaint, rustic village, with cobblestone streets and flower-laden cottages, under the soft glow of dawn', - 'A serene lake reflecting the whimsical dance of butterflies, surrounded by blossoming flowers, as the sun casts a gentle, golden glow', - 'Cinematic still of a gentle deer prancing through an enchanted forest, with fairy lights illuminating the path, creating a magical, peaceful ambiance' -] - - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1", - ) - with gr.Group(): - with gr.Row(): - prompt = gr.Text( - label="Prompt", - show_label=False, - max_lines=1, - placeholder="Enter your prompt", - container=False, - ) - run_button = gr.Button("Run", scale=0) - result = gr.Gallery(label="Result", columns=1, show_label=False) - with gr.Accordion("Advanced options", open=False): - with gr.Row(): - use_negative_prompt = gr.Checkbox(label="Use negative prompt", value=False) - use_prompt_2 = gr.Checkbox(label="Use prompt 2", value=False) - use_negative_prompt_2 = gr.Checkbox(label="Use negative prompt 2", value=False) - style_selection = gr.Radio( - show_label=True, container=True, interactive=True, - choices=STYLE_NAMES, - value=DEFAULT_STYLE_NAME, - label='Image Style' - ) - negative_prompt = gr.Text( - label="Negative prompt", - max_lines=1, - placeholder="Enter a negative prompt", - visible=False, - ) - prompt_2 = gr.Text( - label="Prompt 2", - max_lines=1, - placeholder="Enter your prompt", - visible=False, - ) - negative_prompt_2 = gr.Text( - label="Negative prompt 2", - max_lines=1, - placeholder="Enter a negative prompt", - visible=False, - ) - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=MAX_SEED, - step=1, - value=0, - ) - randomize_seed 
= gr.Checkbox(label="Randomize seed", value=True) - with gr.Row(visible=False): - width = gr.Slider( - label="Width", - minimum=256, - maximum=MAX_IMAGE_SIZE, - step=32, - value=1024, - ) - height = gr.Slider( - label="Height", - minimum=256, - maximum=MAX_IMAGE_SIZE, - step=32, - value=1024, - ) - apply_refiner = gr.Checkbox(label="Apply refiner", value=False, visible=ENABLE_REFINER) - with gr.Row(): - guidance_scale_base = gr.Slider( - label="Guidance scale for base", - minimum=1, - maximum=20, - step=0.1, - value=9.0, - ) - num_inference_steps_base = gr.Slider( - label="Number of inference steps for base", - minimum=10, - maximum=100, - step=1, - value=25, - ) - with gr.Row(visible=False) as refiner_params: - guidance_scale_refiner = gr.Slider( - label="Guidance scale for refiner", - minimum=1, - maximum=20, - step=0.1, - value=5.0, - ) - num_inference_steps_refiner = gr.Slider( - label="Number of inference steps for refiner", - minimum=10, - maximum=100, - step=1, - value=25, - ) - - gr.Examples( - examples=examples, - inputs=prompt, - outputs=[result, seed], - fn=generate, - cache_examples=CACHE_EXAMPLES, - ) - - use_negative_prompt.change( - fn=lambda x: gr.update(visible=x), - inputs=use_negative_prompt, - outputs=negative_prompt, - queue=False, - api_name=False, - ) - use_prompt_2.change( - fn=lambda x: gr.update(visible=x), - inputs=use_prompt_2, - outputs=prompt_2, - queue=False, - api_name=False, - ) - use_negative_prompt_2.change( - fn=lambda x: gr.update(visible=x), - inputs=use_negative_prompt_2, - outputs=negative_prompt_2, - queue=False, - api_name=False, - ) - apply_refiner.change( - fn=lambda x: gr.update(visible=x), - inputs=apply_refiner, - outputs=refiner_params, - queue=False, - api_name=False, - ) - - gr.on( - triggers=[ - prompt.submit, - negative_prompt.submit, - prompt_2.submit, - negative_prompt_2.submit, - run_button.click, - ], - fn=generate, - inputs=[ - prompt, - negative_prompt, - style_selection, - prompt_2, - negative_prompt_2, - 
use_negative_prompt,
-                use_prompt_2,
-                use_negative_prompt_2,
-                seed,
-                width,
-                height,
-                guidance_scale_base,
-                guidance_scale_refiner,
-                num_inference_steps_base,
-                num_inference_steps_refiner,
-                apply_refiner,
-                randomize_seed
-            ],
-            outputs=[result, seed],
-            api_name="run",
-        )
-
-if __name__ == "__main__":
-    demo.queue(max_size=20).launch()
\ No newline at end of file
diff --git a/spaces/BartPoint/VoiceChange_Beta/infer_pack/commons.py b/spaces/BartPoint/VoiceChange_Beta/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/BartPoint/VoiceChange_Beta/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
-    classname = m.__class__.__name__
-    if classname.find("Conv") != -1:
-        m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
-    return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
-    l = pad_shape[::-1]
-    pad_shape = [item for sublist in l for item in sublist]
-    return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
-    """KL(P||Q)"""
-    kl = (logs_q - logs_p) - 0.5
-    kl += (
-        0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
-    )
-    return kl
-
-
-def rand_gumbel(shape):
-    """Sample from the Gumbel distribution, protect from overflows."""
-    uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
-    return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
-    g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
-    return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
-    ret = torch.zeros_like(x[:, :, :segment_size])
-    for i in range(x.size(0)):
-        idx_str = ids_str[i]
-        idx_end = idx_str + segment_size
-        ret[i] = x[i, :, idx_str:idx_end]
-    return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
-    ret = torch.zeros_like(x[:, :segment_size])
-    for i in range(x.size(0)):
-        idx_str = ids_str[i]
-        idx_end = idx_str + segment_size
-        ret[i] = x[i, idx_str:idx_end]
-    return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
-    b, d, t = x.size()
-    if x_lengths is None:
-        x_lengths = t
-    ids_str_max = x_lengths - segment_size + 1
-    ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
-    ret = slice_segments(x, ids_str, segment_size)
-    return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
-    position = torch.arange(length, dtype=torch.float)
-    num_timescales = channels // 2
-    log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
-        num_timescales - 1
-    )
-    inv_timescales = min_timescale * torch.exp(
-        torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
-    )
-    scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
-    signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
-    signal = F.pad(signal, [0, 0, 0, channels % 2])
-    signal = signal.view(1, channels, length)
-    return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
-    b, channels, length = x.size()
-    signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
-    return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
-    b, channels, length = x.size()
-    signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
-    return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
-    mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
-    return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
-    n_channels_int = n_channels[0]
-    in_act = input_a + input_b
-    t_act = torch.tanh(in_act[:, :n_channels_int, :])
-    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
-    acts = t_act * s_act
-    return acts
-
-
-def convert_pad_shape(pad_shape):
-    l = pad_shape[::-1]
-    pad_shape = [item for sublist in l for item in sublist]
-    return pad_shape
-
-
-def shift_1d(x):
-    x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
-    return x
-
-
-def sequence_mask(length, max_length=None):
-    if max_length is None:
-        max_length = length.max()
-    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
-    return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
-    """
-    duration: [b, 1, t_x]
-    mask: [b, 1, t_y, t_x]
-    """
-    device = duration.device
-
-    b, _, t_y, t_x = mask.shape
-    cum_duration = torch.cumsum(duration, -1)
-
-    cum_duration_flat = cum_duration.view(b * t_x)
-    path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
-    path = path.view(b, t_x, t_y)
-    path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
-    path = path.unsqueeze(1).transpose(2, 3) * mask
-    return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
-    if isinstance(parameters, torch.Tensor):
-        parameters = [parameters]
-    parameters = list(filter(lambda p: p.grad is not None, parameters))
-    norm_type = float(norm_type)
-    if clip_value is not None:
-        clip_value = float(clip_value)
-
-    total_norm = 0
-    for p in parameters:
-        param_norm = p.grad.data.norm(norm_type)
-        total_norm += param_norm.item() ** norm_type
-        if clip_value is not None:
-            p.grad.data.clamp_(min=-clip_value, max=clip_value)
-    total_norm = total_norm ** (1.0 / norm_type)
-    return total_norm
diff --git a/spaces/Benson/text-generation/Examples/Amanda El Aventurero Juego Completo Descargar Gratis Pc.md b/spaces/Benson/text-generation/Examples/Amanda El Aventurero Juego Completo Descargar Gratis Pc.md
deleted file mode 100644
index 308f9c619ab0356aca6803c9668774af7094352d..0000000000000000000000000000000000000000
--- 
a/spaces/Benson/text-generation/Examples/Amanda El Aventurero Juego Completo Descargar Gratis Pc.md
+++ /dev/null
@@ -1,60 +0,0 @@
-
-If you are looking for an exciting and fun horror game that will keep you hooked and entertained, you should definitely check out Amanda the Adventurer. This game is a masterpiece of creepy escape-room-style puzzles, unsettling animations, and interactive storytelling. In this article, we will tell you everything you need to know about Amanda the Adventurer and how you can download the full game for free on your PC.
-Amanda the Adventurer is a horror game developed by MANGLEDmaw Games and published by DreadXP. It was released on April 25, 2023, on Steam. The game is inspired by classic '90s CGI cartoons, but with a dark and twisted spin.
-Download Zip 🔗 https://bltlly.com/2v6M7w
The game follows Riley Park, who inherits their aunt Kate's house and finds a collection of VHS tapes in the attic. The tapes appear to be episodes of an early-2000s children's cartoon called Amanda the Adventurer, starring a girl named Amanda and her best friend Wooly the Sheep. Riley decides to watch the tapes, but soon realizes that something is very wrong: Amanda and Wooly seem to be communicating directly with Riley through the television, and they have sinister plans in store.
-The game is a short but intense single-player horror experience that combines animated tapes with escape-room-style puzzles. The player has to watch the tapes and follow Amanda's instructions while searching for clues and solving riddles hidden in the tapes. The game has multiple endings depending on the player's choices and actions.
-Amanda the Adventurer is not just another horror game. It is a unique and original game that offers many benefits for players who love this genre.
-Playing an escape-room-style horror game can improve cognitive skills such as memory, attention, problem-solving, creativity, and logic. It can also strengthen emotional skills such as stress management, resilience, empathy, and courage. In addition, it can give you a sense of accomplishment, satisfaction, and fun.
-The game has many challenges and puzzles that will test your skills and logic. You will have to pay attention to every detail in the tapes, find hidden objects, crack codes, decipher symbols, manipulate objects, and more. You will also have to deal with Amanda's demands, threats, tricks, and surprises. The game is not easy, but it is rewarding.
-The game delivers an immersive, interactive experience that keeps you on edge. You will feel like part of the story, as Amanda and Wooly speak to you and react to your actions. Realistic sounds and graphics create a creepy atmosphere. The game will make you feel scared, curious, amused, and surprised.
-If you are interested in playing Amanda the Adventurer, you may be wondering how to download the full game for free on your PC. Well, we have good news for you: there is a legal and safe way to get the game from Steam without paying anything.
-Once you have a Steam key for Amanda the Adventurer, you can follow these steps to install and run the game on your computer:
-
-To enjoy the game to the fullest, you may want to optimize its performance and settings. Here are some tips and tricks that can help:
-If you are a fan of basketball and video games, you may be wondering how to download NBA 2K21 on Android. NBA 2K21 is the latest installment in the popular NBA 2K series, offering realistic graphics, gameplay, and features for basketball enthusiasts. In this article, we will show you how to download NBA 2K21 on Android, along with some tips and tricks for enjoying the game.
-Download File ✓ https://bltlly.com/2v6JxX
NBA 2K21 is a basketball simulation game developed by Visual Concepts and published by 2K Sports. It is the 22nd edition of the NBA 2K franchise, which is based on the National Basketball Association (NBA). NBA 2K21 was released on September 4, 2020 for Microsoft Windows, PlayStation 4, Xbox One, Nintendo Switch, and Stadia, and on November 10, 2020 for PlayStation 5 and Xbox Series X/S. It is also available for mobile devices, including Android and iOS.
-NBA 2K21 offers a variety of features that make it one of the best basketball games on the market. Some of these features are:
-To download NBA 2K21 on Android, you need a compatible device that meets the following requirements:
-Now that you know what NBA 2K21 is and what it offers, let's see how to download it on Android. These are the steps you need to follow:
-The first thing you need to do is check whether your device is compatible with NBA 2K21. You can do this by visiting the NBA 2K Mobile app page on the Google Play Store and checking the compatibility section. If your device is compatible, you will see a green check mark next to it; if it is not, you will see a red cross mark. Alternatively, you can use the Device Compatibility Checker app to scan your device and see whether it meets the requirements for NBA 2K21.
-
-The next thing you need to do is download the NBA 2K Mobile app from the Google Play Store. This app is the official mobile version of NBA 2K21, which lets you play the game on your Android device. To download the app, follow these steps:
-The final step is to choose your favorite NBA team and start playing the game. You can choose any of the 30 NBA teams, such as the Los Angeles Lakers, Brooklyn Nets, Golden State Warriors, Milwaukee Bucks, and more. To choose your team and start playing, follow these steps:
-NBA 2K21 is a fun and challenging game that requires skill, strategy, and practice. To help you improve your game and enjoy it more, here are some tips and tricks you can use:
-To improve your skills and performance in NBA 2K21, you need to master the basics of basketball, such as shooting, passing, dribbling, defending, rebounding, and more. You can do this by playing the tutorial mode, practicing in different modes, watching videos and guides online, and learning from other players. Here are some specific tips you can use:
-To customize your player and team in NBA 2K21, you need to use the various options and modes that let you create and edit your own character, team, logo, jersey, court, and more. You can do this by accessing the following modes and options:
-To earn rewards and coins in NBA 2K21, you need to play the game and complete various tasks and challenges. Rewards and coins are useful for unlocking and upgrading different items and features in the game. You can earn rewards and coins by doing the following:
-To join tournaments and events in NBA 2K21, you need to be online with a stable Internet connection. Tournaments and events are special competitions that let you play against other players online and win exclusive rewards and prizes. You can join tournaments and events by doing the following:
-NBA 2K21 is an amazing basketball game that you can download and play on your Android device. It offers realistic graphics, gameplay, and features that will make you feel like a real NBA player. To download NBA 2K21 on Android, you need to follow these steps:
-You can also use these tips and tricks to improve your game and enjoy it more:
-We hope this article has helped you learn how to download NBA 2K21 on Android. If you have any questions or comments, feel free to leave a comment below. Thanks for reading!
-Here are some frequently asked questions about NBA 2K21 on Android:
-
-
----
-
-
-# We were lost in a Foreign Land
-
-
-While you are here I will leave you with one of my favorite songs LMAO.
-
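Among the helpers in the deleted `infer_pack/commons.py` above, `sequence_mask` builds a boolean padding mask from per-sequence lengths. As a rough illustration of its semantics only (a sketch: plain Python lists stand in for the original torch tensors, and no torch install is assumed), it behaves like:

```python
def sequence_mask(lengths, max_length=None):
    # Mirror of the deleted torch helper's semantics: row i is True for the
    # first lengths[i] positions and False for the padded remainder.
    if max_length is None:
        max_length = max(lengths)
    return [[pos < length for pos in range(max_length)] for length in lengths]

# Two sequences of lengths 2 and 4, padded to the length of the longer one.
print(sequence_mask([2, 4]))
# → [[True, True, False, False], [True, True, True, True]]
```

The torch version is vectorized (`torch.arange(max_length).unsqueeze(0) < length.unsqueeze(1)`), but the row-by-row comparison above computes the same mask.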